Search Results

Search found 22725 results on 909 pages for 'case sensitivity'.


  • Use your iPhone or iPod Touch as a Boxee Remote

    - by DigitalGeekery
    Are you a Boxee user looking for a remote control solution? Well, you might not need to look any further than your pocket. The free Boxee Remote App turns your iPhone or iPod Touch into a simple and easy-to-use Boxee remote. The Boxee Remote App works over WiFi, so there is no need to buy or install additional hardware on your PC. Plus, you don’t even need to be within line of sight for it to work.

    Using the Boxee Remote App

    Download the free Boxee Remote App from the App Store and install it on your iPhone or iPod Touch. See the download link below. Next, make sure you have Boxee running on your PC. Select the Boxee icon to open the App. The first time you log in you’ll be greeted by an introduction screen that explains the two modes. Click Continue.

    When opened in “Button” mode, you’ll be presented with 4 directional buttons, an “OK” button, and a back arrow button that works like the Esc key does in Boxee. Button mode performs just like a normal remote: touching the directional buttons moves your on-screen selection right, left, up, and down, and tapping the OK button opens or selects an item.

    To enter “Gesture” mode, tap the Gesture button along the top of the screen. Gesture mode works like a touch pad or trackball on a laptop: you drag the Boxee icon with your thumb or finger across the screen to move around within Boxee. The icon turns red while being dragged or touched. Simply tap the icon to select.

    The Settings button allows you to manually add or delete a host computer, or adjust the sensitivity of the controls. If you need to enter text, such as logon credentials for an App, the on-screen keyboard will pop up. While watching a video you’ll have on-screen Stop and Pause buttons along with a volume slider.

    The Boxee Remote App is simple and easy to use. As long as you can connect via WiFi, you can use it to control any instance of Boxee running on any computer on your network. Download the Boxee Remote App

    Read the article

  • Zend framework "invalid controller" error message

    - by stef
    When I load the homepage of a ZF site I get a message saying "Invalid controller specified (error)" where "error" seems to be the name of the controller. In my bootstrap.php I have the snippet below, where I added the print statement in the catch():

        // Dispatch the request using the front controller.
        try {
            $frontController->dispatch();
        } catch (Exception $exception) {
            print $exception;
            exit;
            exit($exception->getMessage());
        }

    This prints out:

        exception 'Zend_Controller_Dispatcher_Exception' with message 'Invalid controller specified (error)' in /www/common/ZendFramework/library16/Zend/Controller/Dispatcher/Standard.php:241
        Stack trace:
        #0 /www/common/ZendFramework/library16/Zend/Controller/Front.php(934): Zend_Controller_Dispatcher_Standard->dispatch(Object(Zend_Controller_Request_Http), Object(Zend_Controller_Response_Http))
        #1 /www/site.com/htdocs/application/bootstrap.php(38): Zend_Controller_Front->dispatch()
        #2 /www/site.com/htdocs/public/index.php(16): require('/www/site....')
        #3 {main}

    Can anyone make sense of what is going on? I have a hunch it has something to do with case sensitivity and the naming conventions of ZF. The initial "invalid controller" message comes from the snippet below:

        print $request->getControllerName();
        if (!$this->isDispatchable($request)) {
            $controller = $request->getControllerName();
            if (!$this->getParam('useDefaultControllerAlways') && !empty($controller)) {
                require_once 'Zend/Controller/Dispatcher/Exception.php';
                throw new Zend_Controller_Dispatcher_Exception('Invalid controller specified (' . $request->getControllerName() . ')');
            }
            $className = $this->getDefaultControllerClass($request);
        }

    This all prints out "indexerrorInvalid controller specified (error)", so it looks like it's trying to load the index controller first, cannot, and then has a problem loading the error controller. Could it just be that the path to the controller files is wrong?

    Read the article

  • attachment_fu and RMagick

    - by trobrock
    After finally getting RMagick installed on my Mac I have set up attachment_fu according to the tutorial here: http://clarkware.com/cgi/blosxom/2007/02/24#FileUploadFu. When I try to upload a file via the upload form I get around 80 messages like these:

        /Library/Ruby/Gems/1.8/gems/rmagick-2.13.1/lib/RMagick.rb:44: warning: already initialized constant PercentGeometry
        /Library/Ruby/Gems/1.8/gems/rmagick-2.13.1/lib/RMagick.rb:45: warning: already initialized constant AspectGeometry
        /Library/Ruby/Gems/1.8/gems/rmagick-2.13.1/lib/RMagick.rb:46: warning: already initialized constant LessGeometry
        /Library/Ruby/Gems/1.8/gems/rmagick-2.13.1/lib/RMagick.rb:47: warning: already initialized constant GreaterGeometry

    I did some searching and found that this problem can arise when you require RMagick twice in an application using different casing for the require statement: http://work.rowanhick.com/2007/12/19/require-rmagick-and-case-sensitivity/. I am not requiring it myself, but I was thinking that, with the config.gem "rmagick" line in my environment.rb file, Rails might be requiring it. After the form submits it gives me a validation error of:

        Content type is not included in the list

    I have checked the source for attachment_fu and found image/png in the list of content types, so I don't believe that is the proper error message: http://github.com/technoweenie/attachment_fu/blob/master/lib/technoweenie/attachment_fu.rb. Does anyone have any ideas on how I can get this to work?

    Read the article

  • How to draw a line inside a scatter plot

    - by ruffy
    I can't believe that this is so complicated, but I have tried and googled for a while now. I just want to analyse my scatter plot with a few graphical features. For starters, I want to simply add a line. So, I have a few (4) points, and I want to add a line to the plot, like in this one: http://en.wikipedia.org/wiki/File:ROC_space-2.png

    Now, this won't work. And frankly, the documentation-examples-gallery combo and content of matplotlib is a bad source for information. My code is based upon a simple scatter plot from the gallery:

        import numpy as np
        import matplotlib.pyplot as plt
        # x, y and names are defined elsewhere

        # definitions for the axes
        left, width = 0.1, 0.85    # 0.65
        bottom, height = 0.1, 0.85 # 0.65
        bottom_h = left_h = left + width + 0.02

        rect_scatter = [left, bottom, width, height]

        # start with a rectangular Figure
        fig = plt.figure(1, figsize=(8, 8))
        axScatter = plt.axes(rect_scatter)

        # the scatter plot:
        p1 = axScatter.scatter(x[0], y[0], c='blue', s=70)
        p2 = axScatter.scatter(x[1], y[1], c='green', s=70)
        p3 = axScatter.scatter(x[2], y[2], c='red', s=70)
        p4 = axScatter.scatter(x[3], y[3], c='yellow', s=70)
        p5 = axScatter.plot([1, 2, 3], "r--")

        plt.legend([p1, p2, p3, p4, p5],
                   [names[0], names[1], names[2], names[3], "Random guess"],
                   loc=2)

        # now determine nice limits by hand:
        binwidth = 0.25
        xymax = np.max([np.max(np.fabs(x)), np.max(np.fabs(y))])
        lim = (int(xymax / binwidth) + 1) * binwidth

        axScatter.set_xlim((-lim, lim))
        axScatter.set_ylim((-lim, lim))

        xText = axScatter.set_xlabel('FPR / Specificity')
        yText = axScatter.set_ylabel('TPR / Sensitivity')

        bins = np.arange(-lim, lim + binwidth, binwidth)

        plt.show()

    Everything works except p5, which is supposed to be a line. Now how is this supposed to work? What's good practice here?
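
    For what it's worth, a hedged sketch of the usual fix (not from the original post): Axes.plot() returns a list of Line2D artists, while scatter() returns a single artist, so the result needs unpacking before it goes into the legend; and with a single data argument, plot([1, 2, 3]) uses the indices 0-2 as x values, so a ROC-style diagonal needs explicit x and y values:

        import matplotlib.pyplot as plt

        fig, ax = plt.subplots(figsize=(8, 8))
        ax.scatter([0.2, 0.4], [0.6, 0.8], c='blue', s=70)  # stand-in data points
        # Unpack the one-element list with a trailing comma so the legend
        # receives a Line2D artist rather than a list:
        p5, = ax.plot([0, 1], [0, 1], "r--")  # diagonal "random guess" line
        ax.legend([p5], ["Random guess"], loc=2)
        plt.show()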

    Read the article

  • Correct method to search for AD user by email address from .NET

    - by BrianLy
    I'm having some issues with code that is intended to find a user in Active Directory by searching on their email address. I have tried two methods, but I'm finding that the FindOne() method will sometimes return no results. If I look up the user in the GAL in Outlook, I see the SMTP email address listed. My end goal is to confirm that the user exists in AD. I only have the email address as search criteria, so there is no way to use first or last name.

    Method 1: using the mail property:

        DirectorySearcher search = new DirectorySearcher(entry);
        search.Filter = "(mail=" + email + ")";
        search.PropertiesToLoad.Add("mail");
        SearchResult result = search.FindOne();

    Method 2: using the proxyAddresses property:

        DirectorySearcher search = new DirectorySearcher(entry);
        search.Filter = "(proxyAddresses=SMTP:" + email + ")"; // I've also tried with =smtp:
        search.PropertiesToLoad.Add("mail");
        SearchResult result = search.FindOne();

    I've tried changing the case of the email address input, but it still does not return a result. Is there a problem here with case sensitivity? If so, what is the best way to resolve it?

    Read the article

  • Are local variables in Fortran 77 static or stack dynamic?

    - by mm2887
    For my programming languages class, one homework problem asks:

        Are local variables in FORTRAN static or stack dynamic? Are local variables that are INITIALIZED to a default value static or stack dynamic? Show me some code with an explanation to back up your answer. Hint: The easiest way to check this is to have your program test the history sensitivity of a subprogram. Look at what happens when you initialize the local variable to a value and when you don't. You may need to call more than one subprogram to lock in your answer with confidence.

    I wrote a subroutine that does the following:
    - declare a variable
    - print the variable
    - initialize the variable to a value
    - print the variable again

    Each successive call to the subroutine prints out the same random value for the variable when it is uninitialized, and then it prints out the initialized value. What is this random value when the variable is uninitialized? Does this mean Fortran uses the same memory location for each call to the subroutine, or does it dynamically create space and initialize the variable randomly? My second subroutine also creates a variable, but then calls the first subroutine. The result is the same, except the random number printed for the uninitialized variable is different. I am very confused. Please help! Thank you so much.

    Read the article

  • I deleted a full set of columns in a save, but have the original larger sheet. Can I get that back?

    - by Ben Henley
    I have an original sheet that has over 39,000 lines in it. I knocked it down to the 1,800 lines that I want to import into my database. However, dumb#$$ that I am, I selected only the visible cells and killed like 10 columns that I need. Is there a way to compare to the original sheet using a specific column, i.e. SKU, and pull the data from the original to put back in the missing columns, or do I have to just re-edit the whole thing down again? Please help, as this takes a good day or two to minimize. Any and all help is much appreciated. Below is the column list on the edited sheet vs. the original sheet.

    Edited sheet: SKU, DESCRIPTION, VENDOR PART #, RETAIL UNIT CONVERSION, RETAIL U/M, RETAIL DEPARTMENT, VENDOR NAME, SELL PACK QUANTITY, BREAK SELL PACK FLAG, BLISH VENDOR #, FINE LINE CLASS, ITEM ACTION FLAG, PRIMARY UPC, STOCK U/M, WEIGHT, LENGTH, WIDTH, HEIGHT, SHIP-VIA EXCLUSION, HAZARDOUS CODE, PRICE, SUGGESTED RETAIL

    Original sheet: SKU, DESCRIPTION, RETAIL UNIT CONVERSION, RETAIL U/M, RETAIL DEPARTMENT, VENDOR NAME, SELL PACK QUANTITY, BREAK SELL PACK FLAG, FINE LINE CLASS, HAZARDOUS CODE, PRICE, SUGGESTED RETAIL, RETAIL SENSITIVITY CODE, 2ND UPC CODE, 3RD UPC CODE, 4TH UPC CODE, HEADLINE, BULLET #1, BULLET #2, BULLET #3, BULLET #4, BULLET #5, BULLET #6, BULLET #7, BULLET #8, BULLET #9, SIZE, COLOR, CASE QUANTITY, PRODUCT LINE

    Thanks, Ben
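
    Since the poster asks "is there a way": inside Excel itself a VLOOKUP keyed on the SKU column can pull each missing field across from the original sheet. If a script is easier, here is a hedged sketch in Python with pandas (file names are hypothetical; both sheets are assumed to have a header row):

        import pandas as pd

        original = pd.read_excel("original.xlsx")  # ~39,000 rows, all columns
        edited = pd.read_excel("edited.xlsx")      # ~1,800 rows, columns missing

        # Join on SKU to pull back only the columns that were deleted.
        missing_cols = [c for c in original.columns if c not in edited.columns]
        restored = edited.merge(original[["SKU"] + missing_cols],
                                on="SKU", how="left")
        restored.to_excel("restored.xlsx", index=False)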

    Read the article

  • How does this code break the Law of Demeter?

    - by Dave Jarvis
    The following code breaks the Law of Demeter:

        public class Student extends Person {
            private Grades grades;

            public Student() {
            }

            /** Must never return null; throw an appropriately named exception, instead. */
            private synchronized Grades getGrades() throws GradesException {
                if (this.grades == null) {
                    this.grades = createGrades();
                }
                return this.grades;
            }

            /** Create a new instance of grades for this student. */
            protected Grades createGrades() throws GradesException {
                // Reads the grades from the database, if needed.
                return new Grades();
            }

            /** Answers if this student was graded by a teacher with the given name. */
            public boolean isTeacher(int year, String name) throws GradesException, TeacherException {
                // The method only knows about Teacher instances.
                return getTeacher(year).nameEquals(name);
            }

            private Grades getGradesForYear(int year) throws GradesException {
                // The method only knows about Grades instances.
                return getGrades().getForYear(year);
            }

            private Teacher getTeacher(int year) throws GradesException, TeacherException {
                // This method knows about Grades and Teacher instances. A mistake?
                return getGradesForYear(year).getTeacher();
            }
        }

        public class Teacher extends Person {
            public Teacher() {
            }

            /**
             * This method will take into consideration first name,
             * last name, middle initial, case sensitivity, and
             * eventually it could answer true to wild cards and
             * regular expressions.
             */
            public boolean nameEquals(String name) {
                return getName().equalsIgnoreCase(name);
            }

            /** Never returns null. */
            private synchronized String getName() {
                if (this.name == null) {
                    this.name = ""; // was "this.name == \"\";" -- assignment clearly intended
                }
                return this.name;
            }
        }

    Questions:
    1. How is the LoD broken?
    2. Where is the code breaking the LoD?
    3. How should the code be written to uphold the LoD?

    Read the article

  • php user authentication libraries / frameworks ... what are the options?

    - by es11
    I am using PHP and the CodeIgniter framework for a project I am working on, and require a user login/authentication system. For now I'd rather not use SSL (it might be overkill, and the fact that I am using shared hosting discourages this). I have considered using OpenID but decided that since my target audience is generally not technical, it might scare users away (not to mention that it requires mirroring of login information etc.). I know that I could write hash-based authentication (such as SHA-1) since there is no sensitive data being passed (I'd compare the level of sensitivity to that of Stack Overflow). That being said, before making a custom solution, it would be nice to know if there are any good libraries or packages out there that you have used to provide semi-secure authentication? I am new to CodeIgniter, but something that integrates well with it would be preferable. Any ideas? (I'm open to criticism on my approach and open to suggestions as to why I might be crazy not to just use SSL.) Thanks in advance.

    Update: I've looked into some of the suggestions. I am curious to try out Zend_Auth since it seems well supported and well built. Does anyone have experience with using Zend_Auth in CodeIgniter (is it too bulky?), and do you have a good reference on integrating it with CI? I do not need any complex authentication schemes, just a simple login/logout/password-management authorization system. Also, DX Auth seems interesting, but I am worried that it is too buggy. Has anybody else had success with it? I realized that I would also like to manage guest users (i.e. users that do not login/register) in a similar way to Stack Overflow, so any suggestions that have this functionality would be great.

    Read the article

  • MySQL FULLTEXT aggravation

    - by southof40
    Hi - I'm having problems with case sensitivity in MySQL FULLTEXT searches. I've just followed the FULLTEXT example in the MySQL documentation at http://dev.mysql.com/doc/refman/5.1/en/fulltext-boolean.html. I'll post it here for ease of reference:

        CREATE TABLE articles (
            id INT UNSIGNED AUTO_INCREMENT NOT NULL PRIMARY KEY,
            title VARCHAR(200),
            body TEXT,
            FULLTEXT (title, body)
        );

        INSERT INTO articles (title, body) VALUES
            ('MySQL Tutorial', 'DBMS stands for DataBase ...'),
            ('How To Use MySQL Well', 'After you went through a ...'),
            ('Optimizing MySQL', 'In this tutorial we will show ...'),
            ('1001 MySQL Tricks', '1. Never run mysqld as root. 2. ...'),
            ('MySQL vs. YourSQL', 'In the following database comparison ...'),
            ('MySQL Security', 'When configured properly, MySQL ...');

        SELECT * FROM articles
        WHERE MATCH (title, body) AGAINST ('database' IN NATURAL LANGUAGE MODE);

    My problem is that the example shows that SELECT returning the first and fifth rows ('..DataBase..' and '..database..'), but I only get one row ('database')! The example doesn't mention what collation the table in the example had, but I have ended up with latin1_general_cs on the title and body columns of my example table. My version of MySQL is 5.1.39-log and the connection collation is utf8_unicode_ci. I'd be really grateful if someone could suggest why my experience differs from the example in the manual! I'd be grateful for any advice.
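
    An editor's aside, not part of the original question: FULLTEXT matching follows the collation of the indexed columns, so a case-sensitive collation such as latin1_general_cs makes matches case-sensitive - which is exactly the difference from the manual's example. Switching the columns to a case-insensitive collation should restore the documented behaviour. A sketch, wrapped in Python with hypothetical connection details:

        import mysql.connector  # assumes the mysql-connector-python package

        conn = mysql.connector.connect(user="root", password="secret",
                                       database="test")
        cur = conn.cursor()
        # A *_ci collation makes FULLTEXT matching case-insensitive again;
        # the ALTER rebuilds the FULLTEXT index as part of the table copy.
        cur.execute("ALTER TABLE articles"
                    " MODIFY title VARCHAR(200) COLLATE latin1_swedish_ci,"
                    " MODIFY body TEXT COLLATE latin1_swedish_ci")
        cur.execute("SELECT title FROM articles WHERE MATCH (title, body)"
                    " AGAINST ('database' IN NATURAL LANGUAGE MODE)")
        print(cur.fetchall())  # expect both the 'DataBase' and 'database' rows
        conn.close()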

    Read the article

  • Unobtrusive, self-hosted comments function to put onto existing web pages

    - by Pekka
    I am building a new site which will consist of a mix of dynamic and static pages. I would like to add commenting functionality to those pages with as little work as possible. I'm curious as to whether such a solution exists in PHP. The ideal set of features would be:

    - Completely independent from the surrounding page/site: PHP code gets dropped into the page, a page ID is added, done.
    - Simple "write a comment" form.
    - Comments for each page are displayed using a PHP function.
    - Nice, clean output of <ul><li>... that can be styled by the surrounding site.
    - Optional Captcha.
    - Optional Gravatar sensitivity.
    - Minimalistic administration area to moderate/delete comments; no ACL; can protect it using .htaccess.

    The ideal integration would be like this:

        <?php show_comments("my_page_name"); ?>

    This would 1. display a form to add a new comment that gets automatically associated with my_page_name; and 2. display all comments that were made through this form using this ID. Does anybody know a solution like this?

    Bounty: I am setting up a bounty because while there were some good suggestions, they all point to external services. I'm really curious to see whether there isn't anything self-hosted around. If this doesn't exist yet, it sure would be great to see as an Open Source project.

    Read the article

  • Previous layer not showing

    - by meep
    Hello, I have a menu which can fold out and show some content based on the menu item. See demo: http://arcticbusinessnetwork.com.web18.curanetserver.dk/home.aspx

    If I hover the menu it works great, but if I click on "Raw Material", I cannot access the PREVIOUS tab (Infrastructure), while the others fade in fine. How can this be? This is the JavaScript:

        <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3/jquery.min.js"></script>
        <script type="text/javascript" src="/js/jquery.hoverIntent.min.js"></script>
        <script type="text/javascript">
            $(document).ready(function () {
                function megaHoverOver() {
                    $(this).find(".menu_content").stop().fadeTo('fast', 1).show();
                }

                function megaHoverOut() {
                    $(this).find(".menu_content").stop().fadeTo('fast', 0, function () {
                        $(this).hide();
                    });
                }

                var config = {
                    sensitivity: 2,
                    interval: 50,
                    over: megaHoverOver,
                    timeout: 300,
                    out: megaHoverOut
                };

                $("#menu ul li").not(".parenttocurrent").not(".current").find(".menu_content").css({ 'opacity': '0' });
                $("#menu ul li").not(".parenttocurrent").not(".current").hoverIntent(config);
            });
        </script>

    Read the article

  • Login as SYS user to Oracle 11g from .NET

    - by Jens Bannmann
    Using the Oracle Data Provider for .NET, my application connects to the database using the privileged SYS user. The connection string is as follows:

        Data Source=MyTnsName;User ID=sys;Password=MySysPassword;DBA Privilege=SYSDBA

    This works fine with Oracle 10, but Oracle 11 keeps complaining about an invalid username or password. I verified that the password is correct - other apps work fine with the same credentials. Note that for regular users (without the DBA Privilege part), connecting to Oracle 11 works perfectly. So, what's wrong?

    Update: This is not an issue with case sensitivity - when constructing the connection string, the password case is not altered by my code, and the password works fine with other, non-.NET applications. I suspect that this might be caused by the Oracle 10 client I'm using to connect to the Oracle 11 database. Oracle states that the client is upward-compatible, the only drawback being that you cannot use some new features of the database. However, SYSDBA connections clearly are not a new Oracle 11 feature, and - again - a non-.NET app (Keeptool Hora) can connect using the same setup. Any other ideas?

    Update 2: The problem persists when using an Oracle 11 client :-(

    Read the article

  • PHP/MySQL time zone migration

    - by El Yobo
    I have an application that currently stores timestamps in MySQL DATETIME and TIMESTAMP values. However, the application needs to be able to accept data from users in multiple time zones and show the timestamps in the time zone of other users. As such, this is how I plan to amend the application; I would appreciate any suggestions to improve the approach.

    Database modifications:
    - All TIMESTAMPs will be converted to DATETIME values; this is to ensure consistency in approach and to avoid having MySQL try to do clever things and convert time zones (I want to keep the conversion in PHP, as it involves less modification to the application, and will be more portable when I eventually manage to escape from MySQL).
    - All DATETIME values will be adjusted to convert them to UTC time (currently all in Australian EST).

    Query modifications:
    - All usage of NOW() to be replaced with UTC_TIMESTAMP() in queries, triggers, functions, etc.

    Application modifications:
    - The application must store the user's time zone and preferred date format (e.g. US vs. the rest of the world).
    - All timestamps will be converted according to the user settings before being displayed.
    - All input timestamps will be converted to UTC according to the user settings before being input.

    Additional notes: converting formats will be done at the application level for several main reasons:
    - The approach to converting time zones varies from DB to DB, so handling it there would be non-portable (and I really hope to be migrating away from MySQL some time in the not-too-distant future).
    - MySQL TIMESTAMPs have limited ranges for the permitted dates (~1970 to ~2038).
    - MySQL TIMESTAMPs have other undesirable attributes, including bizarre auto-update behaviour (if not carefully disabled) and sensitivity to the server zone settings (and I suspect I might screw these up when I migrate to Amazon later in the year).

    Is there anything that I'm missing here, or does anyone have better suggestions for the approach?
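
    The store-UTC, convert-at-the-edges pattern described above is language-agnostic; here is a minimal sketch of the two conversions the application layer has to perform (in Python rather than the poster's PHP, purely for illustration; function names and formats are illustrative only):

        from datetime import datetime, timezone
        from zoneinfo import ZoneInfo  # Python 3.9+

        def to_utc_for_storage(local_str: str, user_tz: str) -> datetime:
            # Parse a user-entered local timestamp and normalize it to UTC.
            local = datetime.strptime(local_str, "%Y-%m-%d %H:%M:%S")
            return local.replace(tzinfo=ZoneInfo(user_tz)).astimezone(timezone.utc)

        def to_local_for_display(utc_dt: datetime, user_tz: str) -> datetime:
            # Convert a stored UTC timestamp into the viewing user's zone.
            return utc_dt.replace(tzinfo=timezone.utc).astimezone(ZoneInfo(user_tz))

        # A Sydney user enters a time; a London user views it.
        stored = to_utc_for_storage("2010-03-01 09:30:00", "Australia/Sydney")
        print(to_local_for_display(stored, "Europe/London"))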

    Read the article

  • How to Implement Rich Document Editor for iPhone

    - by benjismith
    I'm just getting started on a new iPhone/iPad development project, and I need to display a document with rich styled text (potentially with embedded images). The user will touch the document, dragging to highlight individual words or multiline text spans. When the text is highlighted, a context menu will appear, letting them change the color of highlighting or add margin notes (or other various bits of structured metadata). If you're familiar with adding comments to a Word document (or annotating a PDF), then this is the same sort of thing. But in my case, the typical user will spend many many hours within the app, adding thousands (in some cases, tens of thousands) of small annotations to the central document. All of those bits of metadata will be stored locally awaiting synchronization with a remote web service. I've read other pieces of advice, where developers suggest creating a UIWebView control and passing it an HTML string. But that seems kind of clunky, especially with all the context-sensitivity that I want to include. Anyhow, I'm brand new to iPhone development and Objective-C, though I have ten years of software development experience, using a variety of languages on many different platforms, so I'm not worried about getting my hands dirty writing new functionality from scratch. But if anyone has experience building a similar kind of component, I'm interested in hearing strategies for enabling that kind of rich document markup and annotation.

    Read the article

  • Set up two IP addresses with one gateway?

    - by Ahmed
    I would like to ask if it is possible to set up two static IPs from the same subnet through one gateway - and how, if it is? What I am interested in is described here: Routing for multiple uplinks/providers. But in my case I have two IP addresses from one provider, both on the same subnet, and of course I have internet access on both. I have two NICs, but I don't mind going with one if that makes it possible. Any thought is appreciated!

    Read the article

  • creating a contact listener with sprites from different classes

    - by wilM
    I've been trying to set up a contact listener that creates a joint on contact between two sprites, each of which has its own class. Both sprite classes inherit from NSObject and are initialized in their parent layer (the init method of HelloWorldLayer.mm). It is quite straightforward when everything is in the same class, but in a case like this, where the sprites have their own classes, how can it be done? Please help, I've been at it for days.

    Read the article

  • Creating a dynamic, extensible C# Expando Object

    - by Rick Strahl
    I love dynamic functionality in a strongly typed language because it offers us the best of both worlds. In C# (or any of the main .NET languages) we now have the dynamic type that provides a host of dynamic features for the static C# language. One place where I've found dynamic to be incredibly useful is in building extensible types or types that expose traditionally non-object data (like dictionaries) in easier to use and more readable syntax. I wrote about a couple of these for accessing old school ADO.NET DataRows and DataReaders more easily for example. These classes are dynamic wrappers that provide easier syntax and auto-type conversions which greatly simplifies code clutter and increases clarity in existing code. ExpandoObject in .NET 4.0 Another great use case for dynamic objects is the ability to create extensible objects - objects that start out with a set of static members and then can add additional properties and even methods dynamically. The .NET 4.0 framework actually includes an ExpandoObject class which provides a very dynamic object that allows you to add properties and methods on the fly and then access them again. For example with ExpandoObject you can do stuff like this:dynamic expand = new ExpandoObject(); expand.Name = "Rick"; expand.HelloWorld = (Func<string, string>) ((string name) => { return "Hello " + name; }); Console.WriteLine(expand.Name); Console.WriteLine(expand.HelloWorld("Dufus")); Internally ExpandoObject uses a Dictionary like structure and interface to store properties and methods and then allows you to add and access properties and methods easily. As cool as ExpandoObject is it has a few shortcomings too: It's a sealed type so you can't use it as a base class It only works off 'properties' in the internal Dictionary - you can't expose existing type data It doesn't serialize to XML or with DataContractSerializer/DataContractJsonSerializer Expando - A truly extensible Object ExpandoObject is nice if you just need a dynamic container for a dictionary like structure. However, if you want to build an extensible object that starts out with a set of strongly typed properties and then allows you to extend it, ExpandoObject does not work because it's a sealed class that can't be inherited. I started thinking about this very scenario for one of my applications I'm building for a customer. In this system we are connecting to various different user stores. Each user store has the same basic requirements for username, password, name etc. But then each store also has a number of extended properties that is available to each application. In the real world scenario the data is loaded from the database in a data reader and the known properties are assigned from the known fields in the database. All unknown fields are then 'added' to the expando object dynamically. In the past I've done this very thing with a separate property - Properties - just like I do for this class. But the property and dictionary syntax is not ideal and tedious to work with. I started thinking about how to represent these extra property structures. One way certainly would be to add a Dictionary, or an ExpandoObject to hold all those extra properties. 
But wouldn't it be nice if the application could actually extend an existing object that looks something like this as you can with the Expando object:public class User : Westwind.Utilities.Dynamic.Expando { public string Email { get; set; } public string Password { get; set; } public string Name { get; set; } public bool Active { get; set; } public DateTime? ExpiresOn { get; set; } } and then simply start extending the properties of this object dynamically? Using the Expando object I describe later you can now do the following:[TestMethod] public void UserExampleTest() { var user = new User(); // Set strongly typed properties user.Email = "[email protected]"; user.Password = "nonya123"; user.Name = "Rickochet"; user.Active = true; // Now add dynamic properties dynamic duser = user; duser.Entered = DateTime.Now; duser.Accesses = 1; // you can also add dynamic props via indexer user["NickName"] = "AntiSocialX"; duser["WebSite"] = "http://www.west-wind.com/weblog"; // Access strong type through dynamic ref Assert.AreEqual(user.Name,duser.Name); // Access strong type through indexer Assert.AreEqual(user.Password,user["Password"]); // access dyanmically added value through indexer Assert.AreEqual(duser.Entered,user["Entered"]); // access index added value through dynamic Assert.AreEqual(user["NickName"],duser.NickName); // loop through all properties dynamic AND strong type properties (true) foreach (var prop in user.GetProperties(true)) { object val = prop.Value; if (val == null) val = "null"; Console.WriteLine(prop.Key + ": " + val.ToString()); } } As you can see this code somewhat blurs the line between a static and dynamic type. You start with a strongly typed object that has a fixed set of properties. You can then cast the object to dynamic (as I discussed in my last post) and add additional properties to the object. You can also use an indexer to add dynamic properties to the object. To access the strongly typed properties you can use either the strongly typed instance, the indexer or the dynamic cast of the object. Personally I think it's kinda cool to have an easy way to access strongly typed properties by string which can make some data scenarios much easier. To access the 'dynamically added' properties you can use either the indexer on the strongly typed object, or property syntax on the dynamic cast. Using the dynamic type allows all three modes to work on both strongly typed and dynamic properties. Finally you can iterate over all properties, both dynamic and strongly typed if you chose. Lots of flexibility. Note also that by default the Expando object works against the (this) instance meaning it extends the current object. You can also pass in a separate instance to the constructor in which case that object will be used to iterate over to find properties rather than this. Using this approach provides some really interesting functionality when use the dynamic type. To use this we have to add an explicit constructor to the Expando subclass:public class User : Westwind.Utilities.Dynamic.Expando { public string Email { get; set; } public string Password { get; set; } public string Name { get; set; } public bool Active { get; set; } public DateTime? ExpiresOn { get; set; } public User() : base() { } // only required if you want to mix in seperate instance public User(object instance) : base(instance) { } } to allow the instance to be passed. 
When you do you can now do:[TestMethod] public void ExpandoMixinTest() { // have Expando work on Addresses var user = new User( new Address() ); // cast to dynamicAccessToPropertyTest dynamic duser = user; // Set strongly typed properties duser.Email = "[email protected]"; user.Password = "nonya123"; // Set properties on address object duser.Address = "32 Kaiea"; //duser.Phone = "808-123-2131"; // set dynamic properties duser.NonExistantProperty = "This works too"; // shows default value Address.Phone value Console.WriteLine(duser.Phone); } Using the dynamic cast in this case allows you to access *three* different 'objects': The strong type properties, the dynamically added properties in the dictionary and the properties of the instance passed in! Effectively this gives you a way to simulate multiple inheritance (which is scary - so be very careful with this, but you can do it). How Expando works Behind the scenes Expando is a DynamicObject subclass as I discussed in my last post. By implementing a few of DynamicObject's methods you can basically create a type that can trap 'property missing' and 'method missing' operations. When you access a non-existant property a known method is fired that our code can intercept and provide a value for. Internally Expando uses a custom dictionary implementation to hold the dynamic properties you might add to your expandable object. Let's look at code first. The code for the Expando type is straight forward and given what it provides relatively short. Here it is.using System; using System.Collections.Generic; using System.Linq; using System.Dynamic; using System.Reflection; namespace Westwind.Utilities.Dynamic { /// <summary> /// Class that provides extensible properties and methods. This /// dynamic object stores 'extra' properties in a dictionary or /// checks the actual properties of the instance. /// /// This means you can subclass this expando and retrieve either /// native properties or properties from values in the dictionary. /// /// This type allows you three ways to access its properties: /// /// Directly: any explicitly declared properties are accessible /// Dynamic: dynamic cast allows access to dictionary and native properties/methods /// Dictionary: Any of the extended properties are accessible via IDictionary interface /// </summary> [Serializable] public class Expando : DynamicObject, IDynamicMetaObjectProvider { /// <summary> /// Instance of object passed in /// </summary> object Instance; /// <summary> /// Cached type of the instance /// </summary> Type InstanceType; PropertyInfo[] InstancePropertyInfo { get { if (_InstancePropertyInfo == null && Instance != null) _InstancePropertyInfo = Instance.GetType().GetProperties(BindingFlags.Instance | BindingFlags.Public | BindingFlags.DeclaredOnly); return _InstancePropertyInfo; } } PropertyInfo[] _InstancePropertyInfo; /// <summary> /// String Dictionary that contains the extra dynamic values /// stored on this object/instance /// </summary> /// <remarks>Using PropertyBag to support XML Serialization of the dictionary</remarks> public PropertyBag Properties = new PropertyBag(); //public Dictionary<string,object> Properties = new Dictionary<string, object>(); /// <summary> /// This constructor just works off the internal dictionary and any /// public properties of this object. /// /// Note you can subclass Expando. /// </summary> public Expando() { Initialize(this); } /// <summary> /// Allows passing in an existing instance variable to 'extend'. 
/// </summary> /// <remarks> /// You can pass in null here if you don't want to /// check native properties and only check the Dictionary! /// </remarks> /// <param name="instance"></param> public Expando(object instance) { Initialize(instance); } protected virtual void Initialize(object instance) { Instance = instance; if (instance != null) InstanceType = instance.GetType(); } /// <summary> /// Try to retrieve a member by name first from instance properties /// followed by the collection entries. /// </summary> /// <param name="binder"></param> /// <param name="result"></param> /// <returns></returns> public override bool TryGetMember(GetMemberBinder binder, out object result) { result = null; // first check the Properties collection for member if (Properties.Keys.Contains(binder.Name)) { result = Properties[binder.Name]; return true; } // Next check for Public properties via Reflection if (Instance != null) { try { return GetProperty(Instance, binder.Name, out result); } catch { } } // failed to retrieve a property result = null; return false; } /// <summary> /// Property setter implementation tries to retrieve value from instance /// first then into this object /// </summary> /// <param name="binder"></param> /// <param name="value"></param> /// <returns></returns> public override bool TrySetMember(SetMemberBinder binder, object value) { // first check to see if there's a native property to set if (Instance != null) { try { bool result = SetProperty(Instance, binder.Name, value); if (result) return true; } catch { } } // no match - set or add to dictionary Properties[binder.Name] = value; return true; } /// <summary> /// Dynamic invocation method. Currently allows only for Reflection based /// operation (no ability to add methods dynamically). /// </summary> /// <param name="binder"></param> /// <param name="args"></param> /// <param name="result"></param> /// <returns></returns> public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result) { if (Instance != null) { try { // check instance passed in for methods to invoke if (InvokeMethod(Instance, binder.Name, args, out result)) return true; } catch { } } result = null; return false; } /// <summary> /// Reflection Helper method to retrieve a property /// </summary> /// <param name="instance"></param> /// <param name="name"></param> /// <param name="result"></param> /// <returns></returns> protected bool GetProperty(object instance, string name, out object result) { if (instance == null) instance = this; var miArray = InstanceType.GetMember(name, BindingFlags.Public | BindingFlags.GetProperty | BindingFlags.Instance); if (miArray != null && miArray.Length > 0) { var mi = miArray[0]; if (mi.MemberType == MemberTypes.Property) { result = ((PropertyInfo)mi).GetValue(instance,null); return true; } } result = null; return false; } /// <summary> /// Reflection helper method to set a property value /// </summary> /// <param name="instance"></param> /// <param name="name"></param> /// <param name="value"></param> /// <returns></returns> protected bool SetProperty(object instance, string name, object value) { if (instance == null) instance = this; var miArray = InstanceType.GetMember(name, BindingFlags.Public | BindingFlags.SetProperty | BindingFlags.Instance); if (miArray != null && miArray.Length > 0) { var mi = miArray[0]; if (mi.MemberType == MemberTypes.Property) { ((PropertyInfo)mi).SetValue(Instance, value, null); return true; } } return false; } /// <summary> /// Reflection helper method to invoke a 
method /// </summary> /// <param name="instance"></param> /// <param name="name"></param> /// <param name="args"></param> /// <param name="result"></param> /// <returns></returns> protected bool InvokeMethod(object instance, string name, object[] args, out object result) { if (instance == null) instance = this; // Look at the instanceType var miArray = InstanceType.GetMember(name, BindingFlags.InvokeMethod | BindingFlags.Public | BindingFlags.Instance); if (miArray != null && miArray.Length > 0) { var mi = miArray[0] as MethodInfo; result = mi.Invoke(Instance, args); return true; } result = null; return false; } /// <summary> /// Convenience method that provides a string Indexer /// to the Properties collection AND the strongly typed /// properties of the object by name. /// /// // dynamic /// exp["Address"] = "112 nowhere lane"; /// // strong /// var name = exp["StronglyTypedProperty"] as string; /// </summary> /// <remarks> /// The getter checks the Properties dictionary first /// then looks in PropertyInfo for properties. /// The setter checks the instance properties before /// checking the Properties dictionary. /// </remarks> /// <param name="key"></param> /// /// <returns></returns> public object this[string key] { get { try { // try to get from properties collection first return Properties[key]; } catch (KeyNotFoundException ex) { // try reflection on instanceType object result = null; if (GetProperty(Instance, key, out result)) return result; // nope doesn't exist throw; } } set { if (Properties.ContainsKey(key)) { Properties[key] = value; return; } // check instance for existance of type first var miArray = InstanceType.GetMember(key, BindingFlags.Public | BindingFlags.GetProperty); if (miArray != null && miArray.Length > 0) SetProperty(Instance, key, value); else Properties[key] = value; } } /// <summary> /// Returns and the properties of /// </summary> /// <param name="includeProperties"></param> /// <returns></returns> public IEnumerable<KeyValuePair<string,object>> GetProperties(bool includeInstanceProperties = false) { if (includeInstanceProperties && Instance != null) { foreach (var prop in this.InstancePropertyInfo) yield return new KeyValuePair<string, object>(prop.Name, prop.GetValue(Instance, null)); } foreach (var key in this.Properties.Keys) yield return new KeyValuePair<string, object>(key, this.Properties[key]); } /// <summary> /// Checks whether a property exists in the Property collection /// or as a property on the instance /// </summary> /// <param name="item"></param> /// <returns></returns> public bool Contains(KeyValuePair<string, object> item, bool includeInstanceProperties = false) { bool res = Properties.ContainsKey(item.Key); if (res) return true; if (includeInstanceProperties && Instance != null) { foreach (var prop in this.InstancePropertyInfo) { if (prop.Name == item.Key) return true; } } return false; } } } Although the Expando class supports an indexer, it doesn't actually implement IDictionary or even IEnumerable. It only provides the indexer and Contains() and GetProperties() methods, that work against the Properties dictionary AND the internal instance. The reason for not implementing IDictionary is that a) it doesn't add much value since you can access the Properties dictionary directly and that b) I wanted to keep the interface to class very lean so that it can serve as an entity type if desired. 
Implementing these IDictionary (or even IEnumerable) causes LINQ extension methods to pop up on the type which obscures the property interface and would only confuse the purpose of the type. IDictionary and IEnumerable are also problematic for XML and JSON Serialization - the XML Serializer doesn't serialize IDictionary<string,object>, nor does the DataContractSerializer. The JavaScriptSerializer does serialize, but it treats the entire object like a dictionary and doesn't serialize the strongly typed properties of the type, only the dictionary values which is also not desirable. Hence the decision to stick with only implementing the indexer to support the user["CustomProperty"] functionality and leaving iteration functions to the publicly exposed Properties dictionary. Note that the Dictionary used here is a custom PropertyBag class I created to allow for serialization to work. One important aspect for my apps is that whatever custom properties get added they have to be accessible to AJAX clients since the particular app I'm working on is a SIngle Page Web app where most of the Web access is through JSON AJAX calls. PropertyBag can serialize to XML and one way serialize to JSON using the JavaScript serializer (not the DCS serializers though). The key components that make Expando work in this code are the Properties Dictionary and the TryGetMember() and TrySetMember() methods. The Properties collection is public so if you choose you can explicitly access the collection to get better performance or to manipulate the members in internal code (like loading up dynamic values form a database). Notice that TryGetMember() and TrySetMember() both work against the dictionary AND the internal instance to retrieve and set properties. This means that user["Name"] works against native properties of the object as does user["Name"] = "RogaDugDog". What's your Use Case? This is still an early prototype but I've plugged it into one of my customer's applications and so far it's working very well. The key features for me were the ability to easily extend the type with values coming from a database and exposing those values in a nice and easy to use manner. I'm also finding that using this type of object for ViewModels works very well to add custom properties to view models. I suspect there will be lots of uses for this - I've been using the extra dictionary approach to extensibility for years - using a dynamic type to make the syntax cleaner is just a bonus here. What can you think of to use this for? Resources Source Code and Tests (GitHub) Also integrated in Westwind.Utilities of the West Wind Web Toolkit West Wind Utilities NuGet© Rick Strahl, West Wind Technologies, 2005-2012Posted in CSharp  .NET  Dynamic Types   Tweet !function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src="//platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs"); (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();

    Read the article

  • This task is currently locked by a running workflow and cannot be edited. Limitation to both Nintex and SPD workflow

    - by ybbest
    Note: this post is from the Nintex Forum (the source link is at the end). These limitations apply to both SharePoint Designer workflows and Nintex workflows, as Nintex uses the SharePoint workflow engine. The common cause that I experience is that the 'parent' workflow is generating more than one task at once. This is common, as you can have multiple approvers for a certain approval process. You could also have a workflow running when the task is created; a common scenario is that you would like to set a custom column value in your approval task. For me this is a huge limitation; as a Nintex lover I really hope Nintex could solve this problem with Microsoft going forward.

    Introduction

    "This task is currently locked by a running workflow and cannot be edited" is a common message that is seen when an error occurs while the SharePoint workflow engine is processing a task item associated with a workflow. When a workflow processes a task normally, the following sequence of events is expected to occur:

    1. The process begins.
    2. The workflow places a 'lock' on the task so nothing else can change the values while the workflow is processing.
    3. The workflow processes the task.
    4. The lock is released when the task processing is finished.

    When the message is encountered, it usually indicates that an error occurred between steps 2 and 4. As a result, the lock is never released. Therefore, the 'task locked' message is not an error itself, rather a symptom of another error - the 'task locked' message does not indicate what went wrong. In most cases, once this message is encountered, the workflow cannot be made to continue and must be terminated and started again. The following is a guide that can help troubleshoot the cause of these messages. Some initial observations to narrow down the potential causes are:

    - Is the error consistent or intermittent? When the error is consistent, it will happen every time the workflow is run. When it is intermittent, it may happen regularly, but not every time.
    - Does the error occur the first time the user tries to respond to a task, or do they respond, notice the workflow does not continue, and see the error when they respond again? If the message is present when the user first responds to the task, the issue would have occurred when the task was created. Otherwise, it would have occurred when the user attempted to respond to the task.

    Causes

    Modifying the task list: a cause of this error appearing consistently the first time a user tries to respond to a task is a modification to the default task list schema. For example, changing the 'Assigned to' field in a task list to be a multiple selection will cause the behaviour.

    Deleting the workflow task then restoring it from the Recycle Bin: if you start a workflow, delete the workflow task, then restore it from the Recycle Bin in SharePoint, the workflow will fail with the 'task locked' error. This is confirmed behaviour whether using a SharePoint Designer or a Nintex workflow. You will need to terminate the workflow and start it again.

    Parallel simultaneous responses: a cause of this error appearing inconsistently is multiple users responding to tasks in parallel at the same time. In this scenario, one task will complete correctly and the other will not process. When the user tries again, the 'task locked' message will display. Nintex included a workaround for this issue in build 11000. In build 11000 and later, one of the users will receive a message on the task form when they attempt to respond, stating that they need to try again in a few moments.

    Additional processing on the task: a cause of this error appearing both consistently and inconsistently is having an additional system running on the items in the task list. Some examples include: a workflow running on the task list, an event receiver running on the task list, or another automated process querying and updating workflow tasks. Note: this Microsoft help article (http://office.microsoft.com/en-us/sharepointdesigner/HA102376561033.aspx#5) explains creating a workflow that runs on the task list to update a field on the task. Our experience shows that this causes the 'task locked' issues when the 'parent' workflow is generating more than one task at once.

    Isolated system error: if the error is a rare event, or a 'one off' event, then an isolated system error may have occurred. For example, if there is a database connectivity issue while the workflow is processing the task response, the task will lock. In this case, the user will respond to a task but the workflow will not continue. When they respond again, the 'task locked' message will display. In this case, there will be an error in the SharePoint ULS logs at the time that the user originally responded.

    Temporary delay while the workflow processes: if the workflow is taking a long time to process after a user submits a task, they may notice and try to respond to the task again. They will see the 'task locked' error, but after a number of attempts (or after waiting some time) the task response page eventually indicates the task has been responded to. In this case, nothing actually went wrong, and the error message gives an accurate indication of what is happening - the workflow temporarily locked the task while it was processing. This scenario may occur in a very large workflow, or after the SharePoint application pool has just started.

    Modifying the task via a web service with an invalid URL: if the Nintex Workflow web service is used to respond to or delegate a task, the site context part of the URL must be a valid alternate access mapping (AAM) URL. For example, if you access the web service via the IP address of the SharePoint server, and the IP address is not a valid AAM, the task can become locked.

    The workflow has become stuck without any apparent errors: this behaviour can occur as a result of a bug in the SharePoint 2010 workflow engine. If you do not have the August 2010 Cumulative Update (or later) for SharePoint, and your workflow uses delays, "Flexi task", "State machine", "Task Reminder" actions or variables, you could be affected. Check the SharePoint 2010 updates site here: http://technet.microsoft.com/en-us/sharepoint/ff800847. The October CU is recommended: http://support.microsoft.com/kb/2553031. The fix is described as "Consider the following scenario. You add a Delay activity to a workflow. Then, you set the duration for the Delay activity. You deploy the workflow in SharePoint Foundation 2010. In this scenario, the workflow is not resumed after the duration of the Delay activity". If you find this is occurring in your environment, install the October CU, terminate all the running workflows affected and run them afresh.

    Investigative steps

    The first step to isolate the issue is to create a new task list on the site and configure the workflow to use it. Any customizations that were made to the original task list should not be made to the new task list. If the new task list eliminates the issue, then the cause can be attributed to the original task list or a change that was made to it. To change the task list that the workflow uses: in Workflow Designer select Settings -> Startup Options, then configure the task list as required. If none of the scenarios above help, check the SharePoint logs for any messages with a category of 'Workflow Infrastructure'.

    Conclusion

    The information in this article has been gathered from observations and investigations by Nintex. The sources of these issues are in the underlying SharePoint workflow engine. This article will be updated if further causes are discovered. From <http://connect.nintex.com/forums/thread/6503.aspx>

    Read the article

  • Uninstalling Reporting Server 2008 on Windows Server 2008

    - by Piotr Rodak
    Ha. I had the quite disputable pleasure of installing and reinstalling and reinstalling and reinstalling - I think about 5 times before it worked - Reporting Server 2008 on a Windows Server with the same year number in its name. During my struggle I came across an error which seems to be not quite unfamiliar to some more unfortunate developers and admins who happen to uninstall SSRS 2008 from a server. I had SSRS 2008 installed as a named instance, SQL2008. I wanted to uninstall the server and install it to the default instance. And this is when it bit me - not the first time and not the last that day. The setup complained that it couldn't access a DLL:

        TITLE: Microsoft SQL Server 2008 Setup
        ------------------------------
        The following error has occurred:
        Access to the path 'C:\Windows\SysWOW64\perf-ReportServer$SQL2008-rsctr.dll' is denied.
        For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=10.0.1600.22&EvtType=0x60797DC7%25400x84E8D3C0
        ------------------------------
        BUTTONS: OK

    This issue seems to have a bit of literature dedicated to it, and even seemingly a KB article (http://support.microsoft.com/kb/956173) and a similar Connect item: http://connect.microsoft.com/SQLServer/feedback/details/363653/error-messages-when-upgrading-from-sql-2008-rc0-to-rtm

    The KB article describes the issue as follows: when you try to uninstall Microsoft SQL Server 2008 Reporting Services from the server, you may receive the following error message: "An error has occurred: Access to the path 'Drive_Letter:\WINDOWS\system32\perf-ReportServer-rsctr.dll' is denied." (Note: Drive_Letter refers to the disc drive into which the SQL Server installation media is inserted.)

    In my case, the note was not true; the error pointed to a DLL that was located in the Windows folder on C:\, not where the installation media were. Despite this difference I tried to identify any processes that might be keeping a lock on the DLL. I downloaded Sysinternals Process Explorer and ran it to find any processes I could stop. Unfortunately, there was no such process. I tried to rerun the installation, but it failed at the same step. Eventually I decided to remove the DLL before the setup was executed. I changed the name of the DLL so I could restore it in case of issues. Interestingly, Windows let me do it, which means that indeed, it was not locked by any process. I ran the setup and this time it uninstalled the instance without any problems.

    To summarize my experience I should say: be very careful, and don't leave any leftovers after uninstallation - remove or rename any folders that are left after setup has finished. For some reason, setup doesn't remove folders and certain files. Installation on Windows Server 2008 requires more attention than on Windows 2003 because of the changed security model; some actions can be executed only by an administrator in elevated execution mode. In general, you have to get used to UAC and a somewhat different experience than with Windows Server 2003.

    Read the article

  • Generating a twitter OAuth access key - the semi-manual way

    - by Piet
    [UPDATE] Apparently someone at Twitter was listening, or I’m going senile/blind. Let’s call it a combination of both. Instead of following all the steps below, you can just log in with the Twitter account you want to use on http://dev.twitter.com, register your application, and then click ‘Edit Details’ on the application overview page at http://dev.twitter.com/apps. Next click the ‘Application detail’ button on the right, followed by the ‘My Access Token’ button, to get your Access Token and Access Token Secret. This makes the old post below rather obsolete. Clearly a case of me thinking everything is a nail and ruby is a hammer (don’t they usually say this about java coders?)

    [ORIGINAL POST] OAuth is great! OAuth allows your application to use your users’ data without the need to ask for their password. So Twitter made the API much safer for their and your users. Hurray! Free pizza for everyone! Unless of course you’re using the Twitter API for your own needs, like running your own bot, and don’t need access to other users’ data. In such cases a simple username/password combination is more than enough. I can understand however that the Twitter guys don’t really care that much about these exceptions(?). Most such uses of the API are probably rather spammy in nature.

    !!! If you have a twitter app that uses the API to access external users’ data: look for another solution. This solution is ONLY meant for when you ONLY need access to your own account(s) through the API.

    Other Solutions

    Mr Dallas Devries posted a solution here which involves requesting and scraping a one-time PIN. But: I like to minimize the number of calls I make to twitter’s API or pages, to lessen my chances of meeting the fail whale. Also, as soon as the pin isn’t included in a div called ‘oauth_pin’ anymore, this will fail. However, mr Devries’ post was a starting point for my solution, so I’m much obliged to him for posting his findings.

    Authenticating with the Twitter API: old vs new

    Accessing the Twitter API the old way:

    require 'twitter'
    httpauth = Twitter::HTTPAuth.new('my_account', 'my_secret_password')
    client = Twitter::Base.new(httpauth)
    client.update('Hurray!')

    The OAuth way:

    require 'twitter'
    oauth = Twitter::OAuth.new('ve4whatafuzzksaMQKjoI', 'KliketyklikspQ6qYALcuNandsomemored8pQ6qYALIG7mbEQY')
    oauth.authorize_from_access('123-owhfmeyAgfozdyt5hDeprSevsWmPo5rVeroGfsthis', 'fGiinCdqtehMeehiddenymDeAsasaawgGeryye8amh')
    client = Twitter::Base.new(oauth)
    client.update('Hurray!')

    In the above case, ve4whatafuzzksaMQKjoI is the ‘consumer key’ (sometimes also referred to as ‘consumer token’) and KliketyklikspQ6qYALcuNandsomemored8pQ6qYALIG7mbEQY is the ‘consumer secret’. You’ll get these from Twitter when you register your app. 123-owhfmeyAgfozdyt5hDeprSevsWmPo5rVeroGfsthis is the ‘access token’ and fGiinCdqtehMeehiddenymDeAsasaawgGeryye8amh is the ‘access secret’. This combination gives the registered application access to your account. I’ll show you how to obtain these by following the steps below. (Basically you’ll need a bunch of keys and you’ll have to jump through a few hoops to obtain them for your server/bot.)

    How to get these keys

    1. Surf to the twitter apps registration page

    Go to http://dev.twitter.com/apps to register your app. Log in with your twitter account.

    2. Register your application

    Enter something for Application name, Description, website… as I said: they make you jump through hoops.
    If you plan on using the API to post tweets, your application name and website will be used in the ‘5 minutes ago via…’ line below your tweet. You could use this to point to a page with info about your bot, or maybe it’s useful for SEO purposes. For application type I chose ‘browser’ and entered http://www.hadermann.be/callback as a ‘Callback URL’. This url returns a 404 error, which is ideal because, after giving our account access to our ‘application’ (step 6), it will redirect to this url with an ‘oauth_token’ and ‘oauth_verifier’ in the url, and we need to get these from the url. It doesn’t really matter what you enter here though; you could even leave it blank, because you need to specify it explicitly when generating a request token. You probably want read & write access, so select that as the ‘Default Access type’.

    3. Get your consumer key and consumer secret

    On the next page, copy/paste your ‘consumer key’ and ‘consumer secret’. You’ll need these later on. You also need these as part of the authentication in your script later on:

    oauth = Twitter::OAuth.new([consumer key], [consumer secret])

    4. Obtain your request token

    Run the following in IRB to obtain your ‘request token’. Replace my fake consumer key and consumer secret with the ones you obtained in step 3. And use something other than http://www.hadermann.be/callback: although this will only give a 404, you shouldn’t trust me.

    irb(main):001:0> require 'oauth'
    irb(main):002:0> c = OAuth::Consumer.new('ve4whatafuzzksaMQKjoI', 'KliketyklikspQ6qYALcuNandsomemored8pQ6qYALIG7mbEQY', {:site => 'http://twitter.com'})
    irb(main):003:0> request_token = c.get_request_token(:oauth_callback => 'http://www.hadermann.be/callback')
    irb(main):004:0> request_token.token
    => "UrperqaukeWsWt3IAlfbxzyBUFpwWIcWkHP94QH2C1"

    This (UrperqaukeWsWt3IAlfbxzyBUFpwWIcWkHP94QH2C1) is the request token: copy/paste it, you will need it next.

    5. Authorize your application

    Surf to https://api.twitter.com/oauth/authorize?oauth_token=[the above token], for example: https://api.twitter.com/oauth/authorize?oauth_token=UrperqaukeWsWt3IAlfbxzyBUFpwWIcWkHP94QH2C1

    This will bring you to the ‘An application would like to connect to your account’ screen on Twitter, where you can grant access to the app you just registered. If you aren’t logged in already, you need to log in first. Click ‘Allow’. Unless you don’t trust yourself.

    6. Get your oauth_verifier from the redirected url

    Your browser will be redirected to your callback url, with an oauth_token and oauth_verifier parameter appended. You’ll need the oauth_verifier. In my case the browser redirected to:

    http://www.hadermann.be/callback?oauth_token=UrperqaukeWsWt3IAlfbxzyBUFpwWIcWkHP94QH2C1&oauth_verifier=waoOhKo8orpaqvQe6rVi5fti4ejr8hPeZrTewyeag

    which returned a 404, giving me the chance to copy/paste my oauth_verifier: waoOhKo8orpaqvQe6rVi5fti4ejr8hPeZrTewyeag

    7. Request an access token

    Back in IRB, use the oauth_verifier to request an access token, as follows:

    irb(main):005:0> at = request_token.get_access_token(:oauth_verifier => 'waoOhKo8orpaqvQe6rVi5fti4ejr8hPeZrTewyeag')
    irb(main):006:0> at.params[:oauth_token]
    => "123-owhfmeyAgfozdyt5hDeprSevsWmPo5rVeroGfsthis"
    irb(main):007:0> at.params[:oauth_token_secret]
    => "fGiinCdqtehMeehiddenymDeAsasaawgGeryye8amh"

    We’re there! 123-owhfmeyAgfozdyt5hDeprSevsWmPo5rVeroGfsthis is the access token. fGiinCdqtehMeehiddenymDeAsasaawgGeryye8amh is the access secret.

    Try it!
    Try the following to post an update:

    require 'twitter'
    oauth = Twitter::OAuth.new('ve4whatafuzzksaMQKjoI', 'KliketyklikspQ6qYALcuNandsomemored8pQ6qYALIG7mbEQY')
    oauth.authorize_from_access('123-owhfmeyAgfozdyt5hDeprSevsWmPo5rVeroGfsthis', 'fGiinCdqtehMeehiddenymDeAsasaawgGeryye8amh')
    client = Twitter::Base.new(oauth)
    client.update('Cowabunga!')

    Now you can go to your twitter page and delete the tweet if you want to.

    Read the article

  • Create a Bootable Ubuntu 9.10 USB Flash Drive

    - by Trevor Bekolay
    The Ubuntu Live CD isn’t just useful for trying out Ubuntu before you install it; you can also use it to maintain and repair your Windows PC. Even if you have no intention of installing Linux, every Windows user should have a bootable Ubuntu USB drive on hand in case something goes wrong in Windows. Creating a bootable USB flash drive is surprisingly easy with a small self-contained application called UNetbootin. It will even download Ubuntu for you!

    Note: Ubuntu will take up approximately 700 MB on your flash drive, so choose a flash drive with at least 1 GB of free space, formatted as FAT32. This process should not remove any existing files on the flash drive, but to be safe you should back up the files on your flash drive.

    Put Ubuntu on your flash drive

    UNetbootin doesn’t require installation; just download the application and run it. Select Ubuntu from the Distribution drop-down box, then 9.10_Live from the Version drop-down box. If you have a 64-bit machine, select 9.10_Live_x64 for the Version. At the bottom of the screen, select the drive letter that corresponds to the USB drive you want to put Ubuntu on. If you select USB Drive in the Type drop-down box, the only drive letters available will be USB flash drives.

    Click OK and UNetbootin will start doing its thing. First it will download the Ubuntu Live CD. Then it will copy the files from the Ubuntu Live CD to your flash drive. The amount of time it takes will vary depending on your Internet speed, and when it’s done, click Exit. You’re not planning on installing Ubuntu right now, so there’s no need to reboot.

    If you look at the USB drive now, you should see a bunch of new files and folders. If you had files on the drive before, they should still be present. You’re now ready to boot your computer into Ubuntu 9.10!

    How to boot into Ubuntu

    When the time comes that you have to boot into Ubuntu, or if you just want to test and make sure that your flash drive works properly, you will have to set your computer to boot off the flash drive. The steps to do this will vary depending on your BIOS, which varies depending on your motherboard. To get detailed instructions on changing how your computer boots, search for your motherboard’s manual (or your laptop’s manual for a laptop). For general instructions, which will suffice for 99% of you, read on.

    Find the important keyboard keys

    When your computer boots up, a bunch of words and numbers flash across the screen, usually to be ignored. This time, you need to scan the boot-up screen for a few key words with some associated keys: Boot menu and Setup. Typically, these will show up at the bottom of the screen. If your BIOS has a Boot Menu, then read on. Otherwise, skip to the Hard: Using Setup section.

    Easy: Using the Boot Menu

    If your BIOS offers a Boot Menu, then during the boot-up process, press the button associated with the Boot Menu. In our case, this is ESC. Our example Boot Menu doesn’t have the ability to boot from USB, but your Boot Menu should have some options, such as USB-CDROM, USB-HDD, USB-FLOPPY, and others. Try the options that start with USB until you find one that works. Don’t worry if it doesn’t work – you can just restart and try again. Using the Boot Menu does not change the normal boot order on your system, so the next time you start up your computer it will boot from the hard drive as normal.

    Hard: Using Setup

    If your BIOS doesn’t offer a Boot Menu, then you will have to change the boot order in Setup.
    Note: There are some options in BIOS Setup that can affect the stability of your machine. Take care to change only the boot order options.

    Press the button associated with Setup. In our case, this is F2. If your BIOS Setup has a Boot tab, switch to it and change the order so that one of the USB options comes first. There may be several USB options, such as USB-CDROM, USB-HDD, USB-FLOPPY, and others; try them out to see which one works for you. If your BIOS does not have a Boot tab, boot order is commonly found in Advanced CMOS Options. Note that this changes the boot order permanently until you change it back. If you plan on plugging in a bootable flash drive only when you want to boot from it, you could leave the boot order as it is, but you may find it easier to switch the order back to the previous order when you reboot from Ubuntu.

    Booting into Ubuntu

    If you set the right boot option, you should be greeted with the UNetbootin screen. Press Enter to start Ubuntu with the default options, or wait 10 seconds for this to happen automatically. Ubuntu will start loading. It should go straight to the desktop with no need for a username or password. And that’s it! From this live desktop session, you can try out Ubuntu, and even install software that is not included in the live CD. Installed software will only last for the duration of your session – the next time you start up the live CD it will be back to its original state.

    Download UNetbootin from sourceforge.net

    Read the article

  • Dynamic connection for LINQ to SQL DataContext

    - by Steve Clements
    If for some reason you need to use a specific connection string for a DataContext, you can of course pass the connection string when you initialise your DataContext object. A common scenario could be dev/test/stage/live connection strings, but in my case it’s either a live or an archive database.

    I however want the connection string to be handled by the DataContext. There are probably lots of different reasons someone would want to do this… but here are mine: I want the same connection string for all instances of DataContext, but I don’t know what it is yet! I prefer the clean code and ease of not using a constructor parameter. And the refactoring involved in using a constructor parameter could be a nightmare.

    So my approach is to create a new partial class for the DataContext and handle the empty constructor there. First, from within the LINQ to SQL designer I changed the connection property to None. This removes the empty constructor code from the auto-generated designer.cs file. Right click on the .dbml file, click View Code, and a file and class are created for you! You’ll see the new class in Solution Explorer and the file will open. We are going to be playing with constructors, so you need to add the inheritance from System.Data.Linq.DataContext:

    public partial class DataClasses1DataContext : System.Data.Linq.DataContext
    {
    }

    Add the empty constructor, and I have added a property that will get my connection string. You will have whatever logic you need to decide and fetch the connection string you require; in my case I will be hitting a database, but I have omitted that code.

    public partial class DataClasses1DataContext : System.Data.Linq.DataContext
    {
       // Connection string keys - stored in web.config
       static string LiveConnectionStringKey = "LiveConnectionString";
       static string ArchiveConnectionStringKey = "ArchiveConnectionString";

       protected static string ConnectionString
       {
          get
          {
             if (DoIWantToUseTheLiveConnection)
             {
                return global::System.Configuration.ConfigurationManager.ConnectionStrings[LiveConnectionStringKey].ConnectionString;
             }
             else
             {
                return global::System.Configuration.ConfigurationManager.ConnectionStrings[ArchiveConnectionStringKey].ConnectionString;
             }
          }
       }

       public DataClasses1DataContext() :
          base(ConnectionString, mappingSource)
       {
          OnCreated();
       }
    }

    Now when I new up my DataContext, I can just leave the constructor empty and my partial class will decide which connection I need to use. Nice, clean code that can be easily refactored and tested.
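    To illustrate the payoff, here is a minimal usage sketch. It assumes the partial class above, both connection string entries present in web.config, a System.Linq using directive, and a hypothetical Customer entity mapped in the .dbml; the entity name is mine, not from the original post.

    // Hypothetical usage - "Customer" is an illustrative entity name.
    using (var db = new DataClasses1DataContext())
    {
        // The empty constructor resolves live vs archive internally,
        // so calling code never needs to know which database it hit.
        var customers = db.GetTable<Customer>().ToList();
    }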

    Read the article

  • Parallelism in .NET – Part 5, Partitioning of Work

    - by Reed
    When parallelizing any routine, we start by decomposing the problem. Once the problem is understood, we need to break our work into separate tasks, so each task can be run on a different processing element. This process is called partitioning.

    Partitioning our tasks is a challenging feat. There are opposing forces at work here: too many partitions add overhead, too few partitions leave processors idle. Striking the perfect balance between the two extremes is the goal we should aim for. Luckily, the Task Parallel Library automatically handles much of this process. However, there are situations where the default partitioning may not be appropriate, and knowledge of our routines may allow us to guide the framework toward better decisions.

    First off, I’d like to say that this is a more advanced topic. It is perfectly acceptable to use the parallel constructs in the framework without considering the partitioning taking place. The default behavior in the Task Parallel Library is very well-behaved, even for unusual work loads, and should rarely be adjusted. I have found few situations where the default partitioning behavior in the TPL is not as good as or better than my own hand-written partitioning routines, and I recommend using the defaults unless there is a strong, measured, and profiled reason to avoid them. However, understanding partitioning, and how the TPL partitions your data, helps in understanding the proper usage of the TPL.

    I indirectly mentioned partitioning while discussing aggregation. Typically, our systems will have a limited number of Processing Elements (PE), which is the terminology used for hardware capable of processing a stream of instructions. For example, in a standard Intel i7 system, there are four processor cores, each of which has two potential hardware threads due to Hyperthreading. This gives us a total of 8 PEs; theoretically, we can have up to eight operations occurring concurrently within our system.

    In order to fully exploit this power, we need to partition our work into tasks. A task is a simple set of instructions that can be run on a PE. Ideally, we want to have at least one task per PE in the system, since fewer tasks means that some of our processing power will be sitting idle. A naive implementation would be to treat each element in our collection as one task: when we loop through our collection in parallel using this approach, we’d just process one item at a time, then reuse that thread to process the next, and so on. There’s a flaw in this approach, however. It will tend to be slower than necessary, often slower than processing the data serially.

    The problem is that there is overhead associated with each task. When we take a simple foreach loop body and implement it using the TPL, we add overhead. First, we change the body from a simple statement to a delegate, which must be invoked. In order to invoke the delegate on a separate thread, the delegate gets added to the ThreadPool’s current work queue, and the ThreadPool must pull it off the queue, assign it to a free thread, then execute it. If our collection had one million elements, the overhead of trying to spawn one million tasks would destroy our performance. The answer, here, is to partition our collection into groups, and have each group of elements treated as a single task.
    By adding a partitioning step, we can break our total work into tasks small enough to keep our processors busy, but large enough to avoid overburdening the ThreadPool. There are two clear, opposing goals here: always try to keep each processor working, but also try to keep the individual partitions as large as possible.

    When using Parallel.For, the partitioning is always handled automatically. At first, partitioning here seems simple. A naive implementation would merely split the total element count by the number of PEs in the system, and assign a chunk of data to each processor. Many hand-written partitioning schemes work in exactly this manner. This perfectly balanced, static partitioning scheme works very well if the amount of work is constant for each element. However, this is rarely the case. Often, the length of time required to process an element grows as we progress through the collection, especially if we’re doing numerical computations. In this case, the first PEs will finish early, and sit idle waiting on the last chunks to finish. Sometimes, work can decrease as we progress, since previous computations may be used to speed up later computations. In this situation, the first chunks will be working far longer than the last chunks. In order to balance the workload, many implementations create many small chunks and reuse threads. This adds overhead, but does provide better load balancing, which in turn improves performance.

    The Task Parallel Library handles this more elaborately. Chunks are determined at runtime, and start small. They grow slowly over time, getting larger and larger. This tends to lead to near-optimal load balancing, even in odd cases such as increasing or decreasing workloads.

    Parallel.ForEach is a bit more complicated, however. When working with a generic IEnumerable<T>, the number of items required for processing is not known in advance, and must be discovered at runtime. In addition, since we don’t have direct access to each element, the scheduler must enumerate the collection to process it. Since IEnumerable<T> is not thread safe, it must lock on elements as it enumerates, create temporary collections for each chunk to process, and schedule this out. By default, it uses a partitioning method similar to the one described above. We can see this directly in the Visual Partitioning sample shipped by the Task Parallel Library team, available as part of the Samples for Parallel Programming. When we run the sample with four cores and the default Load Balancing partitioning scheme, the sample draws a colored band for each processing core. When we start (at the top), we begin with very small bands of color. As the routine progresses through the Parallel.ForEach, the chunks get larger and larger (seen as larger and larger stripes).

    Most of the time, this is fantastic behavior, and will most likely outperform any custom-written partitioning. However, if your routine is not scaling well, it may be due to a failure of the default partitioning to handle your specific case. With prior knowledge about your work, it may be possible to partition data more meaningfully than the default Partitioner. There is the option to use an overload of Parallel.ForEach which takes a Partitioner<T> instance. The Partitioner<T> class is an abstract class which allows for both static and dynamic partitioning.
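    Before writing a custom Partitioner<T> yourself, note that the framework already ships a range partitioner that lets you impose fixed-size chunks when profiling shows the default growing-chunk strategy hurting a particular workload. The sketch below is illustrative only: the array size, the chunk size of 10000, and the Math.Sqrt stand-in workload are my own assumptions, not from the article.

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    class RangePartitioningSketch
    {
        static void Main()
        {
            var data = new double[1000000]; // illustrative workload

            // Partitioner.Create(from, to, rangeSize) yields fixed-size index
            // ranges, so each task processes a chunk of 10000 elements rather
            // than a single element, amortizing the per-task overhead.
            Parallel.ForEach(Partitioner.Create(0, data.Length, 10000), range =>
            {
                for (int i = range.Item1; i < range.Item2; i++)
                {
                    data[i] = Math.Sqrt(i); // stand-in for real per-element work
                }
            });
        }
    }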
    By overriding Partitioner<T>.SupportsDynamicPartitions, you can specify whether a dynamic approach is available. If not, your custom Partitioner<T> subclass overrides GetPartitions(int), which returns a list of IEnumerator<T> instances; these are then used by the Parallel class to split work up amongst processors. When dynamic partitioning is available, GetDynamicPartitions() is used instead; it returns a single IEnumerable<T>, and each enumerator obtained from that enumerable becomes one partition. If you do decide to implement your own Partitioner<T>, keep in mind the goals and tradeoffs of different partitioning strategies, and design appropriately.

    The Samples for Parallel Programming project includes a ChunkPartitioner class in the ParallelExtensionsExtras project. This provides example code for implementing your own custom allocation strategies, including a static allocator of a given chunk size. Although implementing your own Partitioner<T> is possible, as I mentioned above, it is rarely required or useful in practice. The default behavior of the TPL is very good, often better than any hand-written partitioning strategy.
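    As a hedged illustration of the dynamic approach described above (my own minimal example, loosely in the spirit of the samples’ ChunkPartitioner but not its actual code), the partitioner below hands out fixed-size chunks from a shared enumerator, so idle threads keep pulling work until the source is exhausted:

    using System.Collections.Generic;
    using System.Collections.Concurrent;

    class FixedChunkPartitioner<T> : Partitioner<T>
    {
        private readonly IEnumerable<T> _source;
        private readonly int _chunkSize;

        public FixedChunkPartitioner(IEnumerable<T> source, int chunkSize)
        {
            _source = source;
            _chunkSize = chunkSize;
        }

        public override bool SupportsDynamicPartitions
        {
            get { return true; } // required for use with Parallel.ForEach
        }

        public override IList<IEnumerator<T>> GetPartitions(int partitionCount)
        {
            // Static partitioning is expressed in terms of the dynamic one:
            // each partition is just another enumerator over the shared source.
            IEnumerable<T> dynamicPartitions = GetDynamicPartitions();
            var partitions = new List<IEnumerator<T>>(partitionCount);
            for (int i = 0; i < partitionCount; i++)
                partitions.Add(dynamicPartitions.GetEnumerator());
            return partitions;
        }

        public override IEnumerable<T> GetDynamicPartitions()
        {
            return new DynamicPartitions(_source.GetEnumerator(), _chunkSize);
        }

        private class DynamicPartitions : IEnumerable<T>
        {
            private readonly IEnumerator<T> _shared;
            private readonly int _chunkSize;

            public DynamicPartitions(IEnumerator<T> shared, int chunkSize)
            {
                _shared = shared;
                _chunkSize = chunkSize;
            }

            public IEnumerator<T> GetEnumerator()
            {
                while (true)
                {
                    // IEnumerator<T> is not thread safe, so each partition
                    // grabs its next chunk from the shared source under a lock.
                    var chunk = new List<T>(_chunkSize);
                    lock (_shared)
                    {
                        while (chunk.Count < _chunkSize && _shared.MoveNext())
                            chunk.Add(_shared.Current);
                    }
                    if (chunk.Count == 0)
                        yield break;
                    foreach (T item in chunk)
                        yield return item;
                }
            }

            System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
            {
                return GetEnumerator();
            }
        }
    }

    Passing new FixedChunkPartitioner<double>(data, 1000) to Parallel.ForEach would then process the data in fixed 1000-element chunks, which is useful only when measurement shows the default growing chunks are a poor fit, as the article cautions.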

    Read the article

  • Will final version of 12.04 use the power management changes found in kernel 3.3

    - by Luis Alvarado
    I have seen in some versions of Ubuntu that, instead of making the huge change of updating to the latest kernel, they take some of the good stuff out of it and, for the sake of stability, put it in a previous version. In this case, kernel 3.3 has seen some very good power management enhancements that are not all found in kernel 3.2. My question then is: will these updates in 3.3 somehow be pulled into the 3.2 kernel used for Ubuntu 12.04?

    Read the article
