Search Results

Search found 3137 results on 126 pages for 'digital signature'.


  • How to use objects as modules/functors in Scala?

    - by Jeff
    Hi. I want to use object instances as modules/functors, more or less as shown below: abstract class Lattice[E] extends Set[E] { val minimum: E val maximum: E def meet(x: E, y: E): E def join(x: E, y: E): E def neg(x: E): E } class Calculus[E](val lat: Lattice[E]) { abstract class Expr case class Var(name: String) extends Expr {...} case class Val(value: E) extends Expr {...} case class Neg(e1: Expr) extends Expr {...} case class Cnj(e1: Expr, e2: Expr) extends Expr {...} case class Dsj(e1: Expr, e2: Expr) extends Expr {...} } So that I can create a different calculus instance for each lattice (the operations I will perform need the information of which are the maximum and minimum values of the lattice). I want to be able to mix expressions of the same calculus but not be allowed to mix expressions of different ones. So far, so good. I can create my calculus instances, but problem is that I can not write functions in other classes that manipulate them. For example, I am trying to create a parser to read expressions from a file and return them; I also was trying to write an random expression generator to use in my tests with ScalaCheck. Turns out that every time a function generates an Expr object I can't use it outside the function. Even if I create the Calculus instance and pass it as an argument to the function that will in turn generate the Expr objects, the return of the function is not recognized as being of the same type of the objects created outside the function. Maybe my english is not clear enough, let me try a toy example of what I would like to do (not the real ScalaCheck generator, but close enough). def genRndExpr[E](c: Calculus[E], level: Int): Calculus[E]#Expr = { if (level > MAX_LEVEL) { val select = util.Random.nextInt(2) select match { case 0 => genRndVar(c) case 1 => genRndVal(c) } } else { val select = util.Random.nextInt(3) select match { case 0 => new c.Neg(genRndExpr(c, level+1)) case 1 => new c.Dsj(genRndExpr(c, level+1), genRndExpr(c, level+1)) case 2 => new c.Cnj(genRndExpr(c, level+1), genRndExpr(c, level+1)) } } } Now, if I try to compile the above code I get lots of error: type mismatch; found : plg.mvfml.Calculus[E]#Expr required: c.Expr case 0 = new c.Neg(genRndExpr(c, level+1)) And the same happens if I try to do something like: val boolCalc = new Calculus(Bool) val e1: boolCalc.Expr = genRndExpr(boolCalc) Please note that the generator itself is not of concern, but I will need to do similar things (i.e. create and manipulate calculus instance expressions) a lot on the rest of the system. Am I doing something wrong? Is it possible to do what I want to do? Help on this matter is highly needed and appreciated. Thanks a lot in advance. After receiving an answer from Apocalisp and trying it. Thanks a lot for the answer, but there are still some issues. The proposed solution was to change the signature of the function to: def genRndExpr[E, C <: Calculus[E]](c: C, level: Int): C#Expr I changed the signature for all the functions involved: getRndExpr, getRndVal and getRndVar. 
And everywhere I call these functions I now get the following error: error: inferred type arguments [Nothing,C] do not conform to method genRndVar's type parameter bounds [E,C <: Calculus[E]] case 0 => genRndVar(c) Since the compiler seemed unable to figure out the right types, I changed all the function calls to look like this: case 0 => new c.Neg(genRndExpr[E,C](c, level+1)) After this, the first 2 function calls (genRndVal and genRndVar) compiled without error, but on the following 3 calls (the recursive calls to genRndExpr), where the return value of the function is used to build a new Expr object, I got the following error: error: type mismatch; found : C#Expr required: c.Expr case 0 => new c.Neg(genRndExpr[E,C](c, level+1)) So, again, I'm stuck. Any help will be appreciated.

    Read the article

  • CodePlex Daily Summary for Monday, February 22, 2010

    CodePlex Daily Summary for Monday, February 22, 2010New ProjectsAVDB: System to keep track of orders and the inventory of televisions, DVDs, VCRs etcBooky: Booky is an online Bookmark Management Tool. Gear Up for Lord of the Rings Online (lotro): Windows utility for checking what your LOTRO character currently has equipped and figuring out gear you should get to improve your stats.GotSharp Extensions: GotSharp Extensions is a set of helpful classes and extension methods that can make your coding experience easier and cleaner. Halfwit: A minimalist WPF Twitter client.HOA Starter Kit: A community subdivision website starter kit. First draft.Lua For Irony: Project to define the Lua language using the Irony (http://irony.codeplex.com/) development kit. This work is based heavily on the work done for V...MimeCloud: Scalable .NET Digital Asset & Media Management: MimeCloud is a scalable digital asset library & media management toolset. Founded by Alex Norcliffe and Peter Miller Written by people who have b...Parallel Mandelbrot Set solver: Solving the Mandelbrot set using the Parallel class in .NET 4.0. Showing the resulting image in a WPF application. The solution file requires VS 2010.Pomogad - Pomodoro Windows Gadget: Você usa Pomodoro Technique? Não sabe o que é? Veja aqui http://www.pomodorotechnique.com Agora que você já sabe, que tal usar essa técnica? E p...PostCrap - flyweight .NET AOP post compiler: PostCrap is a flyweight attribute based aspect injection .NET post compiler It is written in C# and uses Mono.Cecil to modify assemblies and injec...Software + Service Reference Demo Kit: MS China Developer and Platform Evangelism team created an End-2-End demo for Software + Service. Yet Another SharePoint Tool: YEAST provides you with a simple to integrate approach to generating SharePoint solution packages as part of a Visual Studio project. Zen Coding Visual Studio Plugin: Zen Coding for Visual Studio is plugin for HTML and CSS hi-speed codingNew Releases.Net MSBuild Google Closure Compiler Task: .Net MSBuild Google Closure Compiler Task 1.1: - Corrected issue with regular expression source file and renamingdotNails: dotNails_0.5.9: NOTE - the latest source code has been moved to google code to take advantage of Mercurial source control - http://code.google.com/p/dotnails/sourc...EasyWFUnit: EasyWFUnit-2.2: Release 2.2 of EasyWFUnit, an extension library to support unit testing of Windows Workflow, includes a revised WinForm GUI Test Builder that utili...Fluent Ribbon Control Suite: Fluent Ribbon Control Suite BETA2 (for .NET 4.0RC): Includes Fluent.dll (with .pdb and .xml) and test application compiled with .NET 4.0 RC.FolderSize: FolderSize.Win32.1.0.3.0: FolderSize.Win32.1.0.3.0 A simple utility intended to be used to scan harddrives for the folders that take most place and display this to the user...Fusion Charts Free for SharePoint: 1.3: Fix release for issue #11833 : Feature Must Be Activated on Root of Web Application.GotSharp Extensions: 1.0: First release, containing only a few extension methods for the System.String and System.IO.Stream classes, and a Range utility class.Jeremy's Experimental Repository: FluentValidation with IoC Sample: Sample code for the blog post Using FluentValidation with an IoC containerMiniTwitter: 1.08: MiniTwitter 1.08 更新内容 修正 自動更新が CodePlex の変更で動いていなかった問題を修正 自動更新に失敗すると落ちるバグを修正 通知領域アイコン右クリックで表示されるメニューが消えないバグを修正 変更 ハッシュタグの抽出条件を変更 API のエンドポイ...MSTS Editors & Tools: Simis Editor v0.3: Simis Editor v0.3 Enabled Edit > Undo and Edit > Redo. 
Undoing/redoing back to last saved state is identified as saved (no prompt on exit, etc.)....Parallel Mandelbrot Set solver: Alpha 1: First releaseParallelTasks: ParallelTasks 2.0 beta1: ParallelTasks 2.0 is a total re-write of the original version. Featuring improved performance and stability and a more consistent API.Personal Expense Tracker: Personal Expense Tracker v0.1 beta: This is the first beta release. Please provide me with your feedback.PostCrap - flyweight .NET AOP post compiler: PostCrap 1.0 AOP source and binaries: PostCrap 1.0 source and binaries (the unit test project contains sample interceptor attributes for exception handling & logging)Protoforma | Tactica Adversa: Skilful 0.1.3.276: AlphaRawr: Rawr 2.3.10: - More improvements to the default filters - Further improvement on avoiding useless gem swaps from the Optimizer. - Normal/Heroic ICC items shou...Reusable Library: v1.0.2: A collection of reusable abstractions for enterprise application developer.Sem.Sync: 2010-02-21 - Synchronization Manager - Beta: This release is not tested very well, so you should use this version only to evaluate new features. - Changed way of handling source-ids in order ...Survey - web survey & form engine: Survey 1.1.0: Release Survey v. 1.1.0.0 Major changes: - layout & graphics completely overhauled - several technical changes & repairs (e.g. matrix question iss...Yet Another SharePoint Tool: Version 1: Version 1Zeta Resource Editor: Release 2010-02-21: New source code release.Most Popular ProjectsWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)Image Resizer Powertoy Clone for WindowsASP.NETDotNetNuke® Community EditionMicrosoft SQL Server Community & SamplesMost Active ProjectsDinnerNow.netRawrBlogEngine.NETNB_Store - Free DotNetNuke Ecommerce Catalog ModuleSharpyjQuery Library for SharePoint Web ServicesSharePoint ContribInfoServicepatterns & practices – Enterprise LibraryPHPExcel

    Read the article

  • Database version control resources

    - by Wes McClure
    In the process of creating my own DB VCS tool tsqlmigrations.codeplex.com I ran into several good resources to help guide me along the way in reviewing existing offerings and in concepts that would be needed in a good DB VCS.  This is my list of helpful links that others can use to understand some of the concepts and some of the tools in existence.  In the next few posts I will try to explain how I used these to create TSqlMigrations.   Blogs entries Three rules for database work - K. Scott Allen http://odetocode.com/blogs/scott/archive/2008/01/30/three-rules-for-database-work.aspx Versioning databases - the baseline http://odetocode.com/blogs/scott/archive/2008/01/31/versioning-databases-the-baseline.aspx Versioning databases - change scripts http://odetocode.com/blogs/scott/archive/2008/02/02/versioning-databases-change-scripts.aspx Versioning databases - views, stored procedures and the like http://odetocode.com/blogs/scott/archive/2008/02/02/versioning-databases-views-stored-procedures-and-the-like.aspx Versioning databases - branching and merging http://odetocode.com/blogs/scott/archive/2008/02/03/versioning-databases-branching-and-merging.aspx Evolutionary Database Design - Martin Fowler http://martinfowler.com/articles/evodb.html Are database migration frameworks worth the effort? - Good challenges http://www.ridgway.co.za/archive/2009/01/03/are-database-migration-frameworks-worth-the-effort.aspx Continuous Integration (in general) http://martinfowler.com/articles/continuousIntegration.html http://martinfowler.com/articles/originalContinuousIntegration.html Is Your Database Under Version Control? http://www.codinghorror.com/blog/archives/000743.html 11 Tools for Database Versioning http://secretgeek.net/dbcontrol.asp How to do database source control and builds http://mikehadlow.blogspot.com/2006/09/how-to-do-database-source-control-and.html .Net Database Migration Tool Roundup http://flux88.com/blog/net-database-migration-tool-roundup/ Books Book Description Refactoring Databases: Evolutionary Database Design Martin Fowler signature series on refactoring databases. Book site: http://databaserefactoring.com/ Recipes for Continuous Database Integration: Evolutionary Database Development (Digital Short Cut) A good question/answer layout of common problems and solutions with database version control. http://www.informit.com/store/product.aspx?isbn=032150206X

    Read the article

  • Stream Music and Video Over the Internet with Windows Media Player 12

    - by DigitalGeekery
    A new feature in Windows Media Player 12, which is included with Windows 7, is being able to stream media over the web to other Windows 7 computers.  Today we will take a look at how to set it up and what you need to begin. Note: You will need to perform this process on each computer that you want to use. What You’ll Need Two computers running Windows 7 Home Premium, Professional, or Ultimate. The host, or home computer that you will be streaming the media from, cannot be on a public network or part of domain. Windows Live ID UPnP or Port Forwarding enabled on your home router Media files added to your Windows Media Player library Windows Live ID Sign up online for a Windows Live ID if you do not already have one. See the link below for a link to Windows Live.   Configuring the Windows 7 Computers Open Windows Media Player and go to the library section. Click on Stream and then “Allow Internet access to home media.”   The Internet Home Media Access pop up window will prompt you to link your Windows Live ID to a user account. Click “Link an online ID.” If you haven’t already installed the Windows Live ID Sign-In Assistant, you will be taken to Microsoft’s website and prompted to download it. Once you have completed the Windows Live download assistant install, you will see Windows Live ID online provider appear in the “Link Online IDs” window. Click on “Link Online ID.” Next, you’ll be prompted for a Windows Live ID and password. Enter your Windows Live ID and password and click “Sign In.” A pop up window will notify you that you have successfully allowed Internet access to home media. Now, you will have to repeat the exact same configuration on the 2nd Windows 7 computer. Once you have completed the same configuration on your 2nd computer, you might also need to configure your home router for port forwarding. If your router supports UPnP, you may not need to manually forward any ports on your router. So, this would be a good time to test your connection. Go to a nearby hotspot, or perhaps a neighbor’s house, and test to see if you can stream your media. If not, you’ll need to manually forward the ports. You can always choose to forward the ports anyway, just in case. Note: We tested on a Linksys WRT54GL router, which supports UPnP, and found we still needed to manually forward the ports. Finding the ports to forward on the router Open Windows Media Player and make sure you are in Library view. Click on “Stream” on the top menu, and select “Allow Internet access to home media.”   On the “Internet Home Media Access” window, click on “Diagnose connections.” The “Internet Streaming Diagnostic Tool” will pop up. Click on “Port forwarding information” near the bottom.   On the “Port Forwarding Information” window you will find both the Internal and External Port numbers you will need to forward on your router. The Internal port number should always be 10245. The external number will be different depending on your computer. Microsoft also recommends forwarding port 443. Configuring the Router Next, you’ll need to configure Port Forwarding on your home router. We will show you the steps for a Linksys WRT54GL router, however, the steps for port forwarding will vary from router to router. On the Linksys configuration page, click on the Administration Tab along the top, click the “Applications & Gaming Tab, and then the “Port Range Forward” tab below it. Under “Application,” type in a name. It can be any name you choose. In both the “Start” and “End” boxes, type the port number. 
Enter the IP address of your home computer in the IP address column. Click the check box under “Enable.” Do this for both the internal and external port numbers and port 443. When finished, click the “Save Settings” button. Note: It’s highly recommended that you configure your home computer with a static IP address. When you’re ready to play your media over the Internet, open up Windows Media Player and look for your host computer and username listed under “Other Libraries.” Click on it to expand the list and see your media libraries. Choose a library and a file to play. Now you can enjoy your streaming media over the Internet. Conclusion We found media streaming over the Internet to work fairly well. However, we did see a loss of quality with streaming video. Also, Recorded TV .wtv and .dvr-ms files did not play at all. Check out our previous article to see how to share and stream media between Windows 7 computers on your home network.

    Read the article

  • How Mary Meeker’s Latest Findings May Make You Re-Imagine Commerce

    - by Brenna Johnson-Oracle
    Today, Mary Meeker released her highly anticipated annual “Internet Trends” presentation for 2014. All 164 slides are jam-packed with pretty much everything you need to know about the state of the Internet. And as luck would have it, Oracle is staying ahead of these trends (but we’ll talk about that later). There were a few surprises, some stats to solidify what you likely already know, and Meeker’s novel observations about where we are all going. What interested me the most is not only how people are engaging in their personal lives, but how they engage with brands. As you could probably predict, Internet usage growth is slowing while tablet user and mobile data traffic growth continue their meteoric rise around the globe, with tremendous growth in underpenetrated markets like China, India, Brazil and Indonesia. Now hold those “Internet is dead” comments. Keep in mind there’s still plenty of room to grow, and a multiscreen model is Meeker’s vision for our future. Despite 1.5x YOY growth for mobile traffic, mobile still only makes up about 23% of all traffic today. With tablet shipments easily outpacing figures for PCs even at their height (in 2007), mobile will only continue on its path, but won’t be everything to everyone. Mobile won’t replace every touchpoint; it has simply created our shorter attention spans and demand for simpler, more personal experiences. As Meeker points out, TVs, tablets, PCs, and smartphones are used for different activities at present, but lines will blur (for example, 84% of smartphone owners use their device while watching TV). Day-to-day activities are being re-imagined through simple, beautiful user experiences. It seems like every day I discover a new way a brand/site/app made the most mundane or mounting task enjoyable and frictionless – and I’m not alone. Meeker points out that the evolution of how we do everything from how we communicate, get information, use money, meet someone, get places, order a meal, and consume media is all done through new user interfaces that make day-to-day tasks simpler. This movement has caused just about everyone’s patience for a poor UX to take a nosedive. And it’s not just the digital user experience; technology is making a lot of people’s offline lives easier, and less expensive. Today 47% of online shopping utilizes free shipping – nearly half. And Meeker predicts same day local delivery will be the “next big thing” (and you can take a guess on who will own that). Content, Community and Commerce create the “Internet Trifecta.” Meeker pointed out that when content, communities and commerce occur in a single experience it’s embraced by consumers, which translates to big dollars for brands. The magic happens when consumers can get inspired, research, and buy in a single experience.
As the buying cycle has changed and touchpoints (Web, mobile, social, store) are no longer tied to “roles” or steps in the customer journey, brands must make all experiences (content and commerce) available in a single, adaptable experience. (We at Oracle Commerce have a lot to say on this topic – stay tuned!) And in what Meeker calls the “biggest re-imagination of all:” consumers enabled with smartphones and sensors are creating troves of findable and sharable data, which she says is in the early stages, by growing rapidly. She notes that transparency and patterns of consumers with this hardware (FYI - there are up to 10 sensors embedded in smartphones now) has created a Big Data treasure chest to be mined to improve business and the life of the consumer. The opportunities are endless. So what does it all mean for a company doing business online? Start thinking about how you can: Re-imagine your experience. Not your online experience and your mobile experience and your social experience – your overall experience. When consumers can research, buy, and advocate from anywhere (and their attention spans are at an all-time low) channels don’t exist. Enable simple and beautiful interactions informed by all of the online and offline data you leverage across your enterprise. Ethically leverage the endless supply of data (user generated content, clicks, purchases, in-store behavior, social activity) to make experiences more beautiful, more accurate, and more personalized (not to mention, more lucrative for you). Re-imagine content and commerce. Content and commerce must co-exist in a single destination where shoppers can get inspired, explore, research, share, and purchase in a collective experience. Think of how you can deliver an experience where all types of experiences (brand stories and commerce) adapt to every customer need. (Look for more on this topic coming soon). Re-imagine your reach. Look to Meeker’s findings to see how the global appetite for digital experiences is growing, but under-served in many places (i.e.: India, Mexico, Indonesia, Brazil, Philippines, etc.). Growing your online business to a new geography doesn’t have to mean starting from scratch or having an entirely new team manage the new endeavor. Expand using what you’ve already built in a multisite framework, with global language support. And of course, make sure it’s optimized for mobile! Re-imagine the possible. After every Meeker report, I’m always left with the thought “we are just at the beginning.” Everyday there is more data, more possibilities, more online consumers, and more opportunities to use new latest technology to get closer to your customers and be more successful. There’s a lot going on in our Product Development and Product Innovations groups to automate innovation for our customers, so that they can continue to stay ahead of these trends, without disrupting their business. Check out a recent interview with our Innovations Team on some of these new possibilities. Staying on track despite the seemingly endless possibilities out there is the hard part. Prioritizing where you will focus based on your unique brand promise, customer and goals is what you do best. To learn how Oracle Commerce can help your business achieve your goals check out oracle.com/commerce. Check out Meeker’s entire report here.

    Read the article

  • Using Unity – Part 2

    - by nmarun
    In the first part of this series, we created a simple project and learned how to implement IoC pattern using Unity. In this one, I’ll show how you can instantiate other types that implement our IProduct interface. One place where this one would want to use this feature is to create mock types for testing purposes. Alright, let’s dig in. I added another class – Product2.cs  to the ProductModel project. 1: public class Product2 : IProduct 2: { 3: public string Name { get; set;} 4: public Category Category { get; set; } 5: public DateTime MfgDate { get;set; } 6:  7: public Product2() 8: { 9: Name = "Canon Digital Rebel XTi"; 10: Category = new Category {Name = "Electronics", SubCategoryName = "Digital Cameras"}; 11: MfgDate = DateTime.Now; 12: } 13:  14: public string WriteProductDetails() 15: { 16: return string.Format("Name: {0}<br/>Category: {1}<br/>Mfg Date: {2}", 17: Name, Category, MfgDate.ToShortDateString()); 18: } 19: } Highlights of this class are that it implements IProduct interface and it has some different properties than the Product class. The Category class looks like below: 1: public class Category 2: { 3: public string Name { get; set; } 4: public string SubCategoryName { get; set; } 5:  6: public override string ToString() 7: { 8: return string.Format("{0} - {1}", Name, SubCategoryName); 9: } 10: } We’ll go to our web.config file to add the configuration information about this new class – Product2 that we created. Let’s first add a typeAlias element. 1: <typeAlias alias="Product2" type="ProductModel.Product2, ProductModel"/> That’s all that is needed for us to get an instance of Product2 in our application. I have a new button added to the .aspx page and the click event of this button is where all the magic happens: 1: private IUnityContainer unityContainer; 2: protected void Page_Load(object sender, EventArgs e) 3: { 4: unityContainer = Application["UnityContainer"] as IUnityContainer; 5: 6: if (unityContainer == null) 7: { 8: productDetailsLabel.Text = "ERROR: Unity Container not populated in Global.asax.<p />"; 9: } 10: else 11: { 12: if (!IsPostBack) 13: { 14: IProduct productInstance = unityContainer.Resolve<IProduct>(); 15: productDetailsLabel.Text = productInstance.WriteProductDetails(); 16: } 17: } 18: } 19:  20: protected void Product2Button_Click(object sender, EventArgs e) 21: { 22: unityContainer.RegisterType<IProduct, Product2>(); 23: IProduct product2Instance = unityContainer.Resolve<IProduct>(); 24: productDetailsLabel.Text = product2Instance.WriteProductDetails(); 25: } The unityContainer instance is set in the Page_Load event. Line 22 in the click event of the Product2Button registers a type mapping in the container. In English, this means that when unityContainer tries to resolve for IProduct, it gets an instance of Product2. Once this code runs, following output is rendered: There’s another way of doing this. You can resolve an instance of the requested type with a name from the container. 
We’ll have to update the container element of our web.config file to include the following: 1: <container name="unityContainer"> 2: <types> 3: <type type="IProduct" mapTo="Product"/> 4: <!-- Named mapping for IProduct to Product --> 5: <type type="IProduct" mapTo="Product" name="LegacyProduct" /> 6: <!-- Named mapping for IProduct to Product2 --> 7: <type type="IProduct" mapTo="Product2" name="NewProduct" /> 8: </types> 9: </container> I’ve added a DropDownList and a button to the design page: 1: <asp:DropDownList ID="ModelTypesList" runat="server"> 2: <asp:ListItem Text="Legacy Product" Value="LegacyProduct" /> 3: <asp:ListItem Text="New Product" Value="NewProduct" /> 4: </asp:DropDownList> 5: <br /> 6: <asp:Button ID="SelectedModelButton" Text="Get Selected Instance" runat="server" 7: onclick="SelectedModelButton_Click" /> 1: protected void SelectedModelButton_Click(object sender, EventArgs e) 2: { 3: // get the selected value: LegacyProduct or NewProduct 4: string modelType = ModelTypesList.SelectedValue; 5: // pass the modelType to the Resolve method 6: IProduct customModel = unityContainer.Resolve<IProduct>(modelType); 7: productDetailsLabel.Text = customModel.WriteProductDetails(); 8: } Pretty straightforward, right? The only thing to note here is that the values of the DropDownList items need to match the name attribute of the corresponding type element. Depending on what you select, you’ll get an instance of either the Product class or the Product2 class, and the corresponding WriteProductDetails() method is called. Now you see how either of these methods can be used to create mock objects for your test project. See the code here. I’ll continue to share more on Unity in the next blog.
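    As a side note, if you would rather not keep these named mappings in web.config, the same thing can be done in code. This is only a minimal sketch, assuming the Microsoft.Practices.Unity API used throughout this series and the IProduct, Product and Product2 types from this post (ContainerBootstrapper is just an illustrative name):

    using Microsoft.Practices.Unity;

    public static class ContainerBootstrapper
    {
        public static IUnityContainer Build()
        {
            IUnityContainer container = new UnityContainer();

            // Default (unnamed) mapping: Resolve<IProduct>() returns a Product.
            container.RegisterType<IProduct, Product>();

            // Named mappings, equivalent to the name attributes in the config above.
            container.RegisterType<IProduct, Product>("LegacyProduct");
            container.RegisterType<IProduct, Product2>("NewProduct");

            return container;
        }
    }

    Resolving then works exactly as in the button click handler above: container.Resolve<IProduct>("NewProduct") hands back a Product2 instance, while the unnamed Resolve<IProduct>() still returns the default Product.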

    Read the article

  • ASP.NET MVC Validation Complete

    - by Ricardo Peres
    OK, so let’s talk about validation. Most people are probably familiar with the out of the box validation attributes that MVC knows about, from the System.ComponentModel.DataAnnotations namespace, such as EnumDataTypeAttribute, RequiredAttribute, StringLengthAttribute, RangeAttribute, RegularExpressionAttribute and CompareAttribute from the System.Web.Mvc namespace. All of these validators inherit from ValidationAttribute and perform server as well as client-side validation. In order to use them, you must include the JavaScript files MicrosoftMvcValidation.js, jquery.validate.js or jquery.validate.unobtrusive.js, depending on whether you want to use Microsoft’s own library or jQuery. No significant difference exists, but jQuery is more extensible. You can also create your own attribute by inheriting from ValidationAttribute, but, if you want to have client-side behavior, you must also implement IClientValidatable (all of the out of the box validation attributes implement it) and supply your own JavaScript validation function that mimics its server-side counterpart. Of course, you must reference the JavaScript file where the declaration function is. Let’s see an example, validating even numbers. First, the validation attribute: 1: [Serializable] 2: [AttributeUsage(AttributeTargets.Property, AllowMultiple = false, Inherited = true)] 3: public class IsEvenAttribute : ValidationAttribute, IClientValidatable 4: { 5: protected override ValidationResult IsValid(Object value, ValidationContext validationContext) 6: { 7: Int32 v = Convert.ToInt32(value); 8:  9: if (v % 2 == 0) 10: { 11: return (ValidationResult.Success); 12: } 13: else 14: { 15: return (new ValidationResult("Value is not even")); 16: } 17: } 18:  19: #region IClientValidatable Members 20:  21: public IEnumerable<ModelClientValidationRule> GetClientValidationRules(ModelMetadata metadata, ControllerContext context) 22: { 23: yield return (new ModelClientValidationRule() { ValidationType = "iseven", ErrorMessage = "Value is not even" }); 24: } 25:  26: #endregion 27: } The iseven validation function is declared like this in JavaScript, using jQuery validation: 1: jQuery.validator.addMethod('iseven', function (value, element, params) 2: { 3: return (true); 4: return ((parseInt(value) % 2) == 0); 5: }); 6:  7: jQuery.validator.unobtrusive.adapters.add('iseven', [], function (options) 8: { 9: options.rules['iseven'] = options.params; 10: options.messages['iseven'] = options.message; 11: }); Do keep in mind that this is a simple example, for example, we are not using parameters, which may be required for some more advanced scenarios. As a side note, if you implement a custom validator that also requires a JavaScript function, you’ll probably want them together. One way to achieve this is by including the JavaScript file as an embedded resource on the same assembly where the custom attribute is declared. 
You do this by having its Build Action set as Embedded Resource inside Visual Studio: Then you have to declare an attribute at assembly level, perhaps in the AssemblyInfo.cs file: 1: [assembly: WebResource("SomeNamespace.IsEven.js", "text/javascript")] In your views, if you want to include a JavaScript file from an embedded resource you can use this code: 1: public static class UrlExtensions 2: { 3: private static readonly MethodInfo getResourceUrlMethod = typeof(AssemblyResourceLoader).GetMethod("GetWebResourceUrlInternal", BindingFlags.NonPublic | BindingFlags.Static); 4:  5: public static IHtmlString Resource<TType>(this UrlHelper url, String resourceName) 6: { 7: return (Resource(url, typeof(TType).Assembly.FullName, resourceName)); 8: } 9:  10: public static IHtmlString Resource(this UrlHelper url, String assemblyName, String resourceName) 11: { 12: String resourceUrl = getResourceUrlMethod.Invoke(null, new Object[] { Assembly.Load(assemblyName), resourceName, false, false, null }).ToString(); 13: return (new HtmlString(resourceUrl)); 14: } 15: } And on the view: 1: <script src="<%: this.Url.Resource("SomeAssembly", "SomeNamespace.IsEven.js") %>" type="text/javascript"></script> Then there’s the CustomValidationAttribute. It allows externalizing your validation logic to another class, so you have to tell which type and method to use. The method can be static as well as instance, if it is instance, the class cannot be abstract and must have a public parameterless constructor. It can be applied to a property as well as a class. It does not, however, support client-side validation. Let’s see an example declaration: 1: [CustomValidation(typeof(ProductValidator), "OnValidateName")] 2: public String Name 3: { 4: get; 5: set; 6: } The validation method needs this signature: 1: public static ValidationResult OnValidateName(String name) 2: { 3: if ((String.IsNullOrWhiteSpace(name) == false) && (name.Length <= 50)) 4: { 5: return (ValidationResult.Success); 6: } 7: else 8: { 9: return (new ValidationResult(String.Format("The name has an invalid value: {0}", name), new String[] { "Name" })); 10: } 11: } Note that it can be either static or instance and it must return a ValidationResult-derived class. ValidationResult.Success is null, so any non-null value is considered a validation error. The single method argument must match the property type to which the attribute is attached to or the class, in case it is applied to a class: 1: [CustomValidation(typeof(ProductValidator), "OnValidateProduct")] 2: public class Product 3: { 4: } The signature must thus be: 1: public static ValidationResult OnValidateProduct(Product product) 2: { 3: } Continuing with attribute-based validation, another possibility is RemoteAttribute. This allows specifying a controller and an action method just for performing the validation of a property or set of properties. This works in a client-side AJAX way and it can be very useful. Let’s see an example, starting with the attribute declaration and proceeding to the action method implementation: 1: [Remote("Validate", "Validation")] 2: public String Username 3: { 4: get; 5: set; 6: } The controller action method must contain an argument that can be bound to the property: 1: public ActionResult Validate(String username) 2: { 3: return (this.Json(true, JsonRequestBehavior.AllowGet)); 4: } If in your result JSON object you include a string instead of the true value, it will consider it as an error, and the validation will fail. 
This string will be displayed as the error message, if you have included it in your view. You can also use the remote validation approach for validating your entire entity, by including all of its properties as included fields in the attribute and having an action method that receives an entity instead of a single property: 1: [Remote("Validate", "Validation", AdditionalFields = "Price")] 2: public String Name 3: { 4: get; 5: set; 6: } 7:  8: public Decimal Price 9: { 10: get; 11: set; 12: } The action method will then be: 1: public ActionResult Validate(Product product) 2: { 3: return (this.Json("Product is not valid", JsonRequestBehavior.AllowGet)); 4: } Only the property to which the attribute is applied and the additional properties referenced by the AdditionalFields will be populated in the entity instance received by the validation method. The same rule previously stated applies, if you return anything other than true, it will be used as the validation error message for the entity. The remote validation is triggered automatically, but you can also call it explicitly. In the next example, I am causing the full entity validation, see the call to serialize(): 1: function validate() 2: { 3: var form = $('form'); 4: var data = form.serialize(); 5: var url = '<%: this.Url.Action("Validation", "Validate") %>'; 6:  7: var result = $.ajax 8: ( 9: { 10: type: 'POST', 11: url: url, 12: data: data, 13: async: false 14: } 15: ).responseText; 16:  17: if (result) 18: { 19: //error 20: } 21: } Finally, by implementing IValidatableObject, you can implement your validation logic on the object itself, that is, you make it self-validatable. This will only work server-side, that is, the ModelState.IsValid property will be set to false on the controller’s action method if the validation in unsuccessful. Let’s see how to implement it: 1: public class Product : IValidatableObject 2: { 3: public String Name 4: { 5: get; 6: set; 7: } 8:  9: public Decimal Price 10: { 11: get; 12: set; 13: } 14:  15: #region IValidatableObject Members 16: 17: public IEnumerable<ValidationResult> Validate(ValidationContext validationContext) 18: { 19: if ((String.IsNullOrWhiteSpace(this.Name) == true) || (this.Name.Length > 50)) 20: { 21: yield return (new ValidationResult(String.Format("The name has an invalid value: {0}", this.Name), new String[] { "Name" })); 22: } 23: 24: if ((this.Price <= 0) || (this.Price > 100)) 25: { 26: yield return (new ValidationResult(String.Format("The price has an invalid value: {0}", this.Price), new String[] { "Price" })); 27: } 28: } 29: 30: #endregion 31: } The errors returned will be matched against the model properties through the MemberNames property of the ValidationResult class and will be displayed in their proper labels, if present on the view. On the controller action method you can check for model validity by looking at ModelState.IsValid and you can get actual error messages and related properties by examining all of the entries in the ModelState dictionary: 1: Dictionary<String, String> errors = new Dictionary<String, String>(); 2:  3: foreach (KeyValuePair<String, ModelState> keyValue in this.ModelState) 4: { 5: String key = keyValue.Key; 6: ModelState modelState = keyValue.Value; 7:  8: foreach (ModelError error in modelState.Errors) 9: { 10: errors[key] = error.ErrorMessage; 11: } 12: } And these are the ways to perform date validation in ASP.NET MVC. Don’t forget to use them!
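    As a quick illustration of where a custom validator like IsEvenAttribute sits in practice, here is a small hypothetical model and controller; the OrderModel and OrderController names are made up for the example, only the attribute itself comes from the code above, and IsEvenAttribute is assumed to be in a referenced namespace:

    using System.Web.Mvc;

    // Hypothetical model: Quantity must be an even number.
    public class OrderModel
    {
        [IsEven]
        public int Quantity { get; set; }
    }

    public class OrderController : Controller
    {
        [HttpPost]
        public ActionResult Create(OrderModel model)
        {
            // Server-side, IsEvenAttribute runs with the other validators and any
            // failure lands in ModelState just like a built-in attribute's would.
            if (this.ModelState.IsValid == false)
            {
                return (this.View(model));
            }

            return (this.RedirectToAction("Index"));
        }
    }

    Client-side, the iseven rule only fires if the view references jquery.validate, the unobtrusive adapter, and the embedded IsEven.js resource shown earlier.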

    Read the article

  • C#: Adding Functionality to 3rd Party Libraries With Extension Methods

    - by James Michael Hare
    Ever have one of those third party libraries that you love but it's missing that one feature or one piece of syntactical candy that would make it so much more useful?  This, I truly think, is one of the best uses of extension methods.  I began discussing extension methods in my last post (which you find here) where I expounded upon what I thought were some rules of thumb for using extension methods correctly.  As long as you keep in line with those (or similar) rules, they can often be useful for adding that little extra functionality or syntactical simplification for a library that you have little or no control over. Oh sure, you could take an open source project, download the source and add the methods you want, but then every time the library is updated you have to re-add your changes, which can be cumbersome and error prone.  And yes, you could possibly extend a class in a third party library and override features, but that's only if the class is not sealed, static, or constructed via factories. This is the perfect place to use an extension method!  And the best part is, you and your development team don't need to change anything!  Simply add the using for the namespace the extensions are in! So let's consider this example.  I love log4net!  Of all the logging libraries I've played with, it, to me, is one of the most flexible and configurable logging libraries and it performs great.  But this isn't about log4net, well, not directly.  So why would I want to add functionality?  Well, it's missing one thing I really want in the ILog interface: ability to specify logging level at runtime. For example, let's say I declare my ILog instance like so:     using log4net;     public class LoggingTest     {         private static readonly ILog _log = LogManager.GetLogger(typeof(LoggingTest));         ...     }     If you don't know log4net, the details aren't important, just to show that the field _log is the logger I have gotten from log4net. So now that I have that, I can log to it like so:     _log.Debug("This is the lowest level of logging and just for debugging output.");     _log.Info("This is an informational message.  Usual normal operation events.");     _log.Warn("This is a warning, something suspect but not necessarily wrong.");     _log.Error("This is an error, some sort of processing problem has happened.");     _log.Fatal("Fatals usually indicate the program is dying hideously."); And there's many flavors of each of these to log using string formatting, to log exceptions, etc.  But one thing there isn't: the ability to easily choose the logging level at runtime.  Notice, the logging levels above are chosen at compile time.  Of course, you could do some fun stuff with lambdas and wrap it, but that would obscure the simplicity of the interface.  And yes there is a Logger property you can dive down into where you can specify a Level, but the Level properties don't really match the ILog interface exactly and then you have to manually build a LogEvent and... well, it gets messy.  I want something simple and sexy so I can say:     _log.Log(someLevel, "This will be logged at whatever level I choose at runtime!");     Now, some purists out there might say you should always know what level you want to log at, and for the most part I agree with them.  For the most party the ILog interface satisfies 99% of my needs.  
In fact, for most application logging yes you do always know the level you will be logging at, but when writing a utility class, you may not always know what level your user wants. I'll tell you, one of my favorite things is to write reusable components.  If I had my druthers I'd write framework libraries and shared components all day!  And being able to easily log at a runtime-chosen level is a big need for me.  After all, if I want my code to really be re-usable, I shouldn't force a user to deal with the logging level I choose. One of my favorite uses for this is in Interceptors -- I'll describe Interceptors in my next post and some of my favorites -- for now just know that an Interceptor wraps a class and allows you to add functionality to an existing method without changing it's signature.  At the risk of over-simplifying, it's a very generic implementation of the Decorator design pattern. So, say for example that you were writing an Interceptor that would time method calls and emit a log message if the method call execution time took beyond a certain threshold of time.  For instance, maybe if your database calls take more than 5,000 ms, you want to log a warning.  Or if a web method call takes over 1,000 ms, you want to log an informational message.  This would be an excellent use of logging at a generic level. So here was my personal wish-list of requirements for my task: Be able to determine if a runtime-specified logging level is enabled. Be able to log generically at a runtime-specified logging level. Have the same look-and-feel of the existing Debug, Info, Warn, Error, and Fatal calls.    Having the ability to also determine if logging for a level is on at runtime is also important so you don't spend time building a potentially expensive logging message if that level is off.  Consider an Interceptor that may log parameters on entrance to the method.  If you choose to log those parameter at DEBUG level and if DEBUG is not on, you don't want to spend the time serializing those parameters. Now, mine may not be the most elegant solution, but it performs really well since the enum I provide all uses contiguous values -- while it's never guaranteed, contiguous switch values usually get compiled into a jump table in IL which is VERY performant - O(1) - but even if it doesn't, it's still so fast you'd never need to worry about it. So first, I need a way to let users pass in logging levels.  Sure, log4net has a Level class, but it's a class with static members and plus it provides way too many options compared to ILog interface itself -- and wouldn't perform as well in my level-check -- so I define an enum like below.     namespace Shared.Logging.Extensions     {         // enum to specify available logging levels.         public enum LoggingLevel         {             Debug,             Informational,             Warning,             Error,             Fatal         }     } Now, once I have this, writing the extension methods I need is trivial.  Once again, I would typically /// comment fully, but I'm eliminating for blogging brevity:     namespace Shared.Logging.Extensions     {         // the extension methods to add functionality to the ILog interface         public static class LogExtensions         {             // Determines if logging is enabled at a given level.             
public static bool IsLogEnabled(this ILog logger, LoggingLevel level)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         return logger.IsDebugEnabled;                     case LoggingLevel.Informational:                         return logger.IsInfoEnabled;                     case LoggingLevel.Warning:                         return logger.IsWarnEnabled;                     case LoggingLevel.Error:                         return logger.IsErrorEnabled;                     case LoggingLevel.Fatal:                         return logger.IsFatalEnabled;                 }                                 return false;             }             // Logs a simple message - uses same signature except adds LoggingLevel             public static void Log(this ILog logger, LoggingLevel level, object message)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         logger.Debug(message);                         break;                     case LoggingLevel.Informational:                         logger.Info(message);                         break;                     case LoggingLevel.Warning:                         logger.Warn(message);                         break;                     case LoggingLevel.Error:                         logger.Error(message);                         break;                     case LoggingLevel.Fatal:                         logger.Fatal(message);                         break;                 }             }             // Logs a message and exception to the log at specified level.             public static void Log(this ILog logger, LoggingLevel level, object message, Exception exception)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         logger.Debug(message, exception);                         break;                     case LoggingLevel.Informational:                         logger.Info(message, exception);                         break;                     case LoggingLevel.Warning:                         logger.Warn(message, exception);                         break;                     case LoggingLevel.Error:                         logger.Error(message, exception);                         break;                     case LoggingLevel.Fatal:                         logger.Fatal(message, exception);                         break;                 }             }             // Logs a formatted message to the log at the specified level.              
public static void LogFormat(this ILog logger, LoggingLevel level, string format,                                          params object[] args)             {                 switch (level)                 {                     case LoggingLevel.Debug:                         logger.DebugFormat(format, args);                         break;                     case LoggingLevel.Informational:                         logger.InfoFormat(format, args);                         break;                     case LoggingLevel.Warning:                         logger.WarnFormat(format, args);                         break;                     case LoggingLevel.Error:                         logger.ErrorFormat(format, args);                         break;                     case LoggingLevel.Fatal:                         logger.FatalFormat(format, args);                         break;                 }             }         }     } So there it is!  I didn't have to modify the log4net source code, so if a new version comes out, i can just add the new assembly with no changes.  I didn't have to subclass and worry about developers not calling my sub-class instead of the original.  I simply provide the extension methods and it's as if the long lost extension methods were always a part of the ILog interface! Consider a very contrived example using the original interface:     // using the original ILog interface     public class DatabaseUtility     {         private static readonly ILog _log = LogManager.Create(typeof(DatabaseUtility));                 // some theoretical method to time         IDataReader Execute(string statement)         {             var timer = new System.Diagnostics.Stopwatch();                         // do DB magic                                    // this is hard-coded to warn, if want to change at runtime tough luck!             if (timer.ElapsedMilliseconds > 5000 && _log.IsWarnEnabled)             {                 _log.WarnFormat("Statement {0} took too long to execute.", statement);             }             ...         }     }     Now consider this alternate call where the logging level could be perhaps a property of the class          // using the original ILog interface     public class DatabaseUtility     {         private static readonly ILog _log = LogManager.Create(typeof(DatabaseUtility));                 // allow logging level to be specified by user of class instead         public LoggingLevel ThresholdLogLevel { get; set; }                 // some theoretical method to time         IDataReader Execute(string statement)         {             var timer = new System.Diagnostics.Stopwatch();                         // do DB magic                                    // this is hard-coded to warn, if want to change at runtime tough luck!             if (timer.ElapsedMilliseconds > 5000 && _log.IsLogEnabled(ThresholdLogLevel))             {                 _log.LogFormat(ThresholdLogLevel, "Statement {0} took too long to execute.",                     statement);             }             ...         }     } Next time, I'll show one of my favorite uses for these extension methods in an Interceptor.

    Read the article

  • What is Polymorphism?

    - by SAMIR BHOGAYTA
    * Polymorphism is one of the primary characteristics (concepts) of object-oriented programming.
    * Poly means many and morph means form. Thus, polymorphism refers to being able to use many forms of a type without regard to the details.
    * Polymorphism is the characteristic of being able to assign a different meaning in different contexts; specifically, it allows an entity such as a variable, a function, or an object to have more than one form.
    * Polymorphism is the ability to process objects differently depending on their data types.
    * Polymorphism is the ability to redefine methods for derived classes.
    Types of Polymorphism
    * Compile time Polymorphism
    * Run time Polymorphism
    Compile time Polymorphism
    * Compile time Polymorphism is also known as method overloading.
    * Method overloading means having two or more methods with the same name but with different signatures.
    Example of Compile time polymorphism: public class Calculations { public int add(int x, int y) { return x+y; } public int add(int x, int y, int z) { return x+y+z; } }
    Run time Polymorphism
    * Run time Polymorphism is also known as method overriding.
    * Method overriding means redefining a base class method in a derived class with the same name and signature but a different implementation.
    Example of Run time Polymorphism: class Circle { public int radius = 0; public virtual double getArea() { return 3.14 * radius * radius; } } class Sphere : Circle { public override double getArea() { return 4 * 3.14 * radius * radius; } }
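    A small illustrative sketch may help show the difference at the call site (assuming C# and the Calculations, Circle and Sphere classes as written above):

    class Program
    {
        static void Main()
        {
            Calculations calc = new Calculations();
            System.Console.WriteLine(calc.add(1, 2));     // compile time: the two-argument overload is chosen by the compiler
            System.Console.WriteLine(calc.add(1, 2, 3));  // compile time: the three-argument overload is chosen by the compiler

            Circle shape = new Sphere { radius = 2 };
            System.Console.WriteLine(shape.getArea());    // run time: virtual dispatch calls Sphere's override, not Circle's method
        }
    }

    The variable shape is declared as Circle but holds a Sphere, so the getArea() call is resolved at run time; the overload choice for add(), by contrast, is fixed at compile time.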

    Read the article

  • Node remains in commissioning status

    - by Vinitha
    I have been trying to set up Ubuntu Cloud 12.04. I'm kind of new to MAAS and Ubuntu. Here is what I followed:
    1. Installed the MAAS server using the steps provided in https://wiki.ubuntu.com/ServerTeam/MAAS
    2. For the node, I installed the Ubuntu 12.04 Server image on a USB stick, then restarted the node and opted to enlist the node via boot media, with PXE. Once the process was done, the node was powered off as expected. I manually powered on the node, as my node is not PXE enabled. Result: no node was visible in the MAAS UI.
    3. Since step 2 didn't work, I added the node via the maas-cli command. After executing this command the node showed up in my MAAS UI, but the status stays in "Commissioning" for a long time.
    Then I executed "maas-cli maas nodes check-commissioning" and I got "Unrecognised signature: POST check_commissioning". I'm not sure where the error is. Could someone please help me solve this issue? I checked the following log files but found no error related to commissioning (pserv.log / maas.log / celery.log / celery-region.log). I found this entry in my auth.log: "Nov 16 18:20:34 ubuntuCloud sshd[4222]: Did not receive identification string from xxx.xx.xx.x" - not sure if it indicates anything, as the IP that is mentioned is neither the node's nor the MAAS server's. I also verified the time on the server and node using the date cmd (at one instance the times are: Server: Fri Nov 16 18:15:51 IST 2012 and Node: Fri Nov 16 18:15:43 IST 2012). Not sure if 'date' is the right cmd to set the time. I have also checked maas_local_settings.py for the MAAS url. I'm not sure which logs need to be verified. Is there any log that can be checked on the node? Thanks Vinitha

    Read the article

  • New Trusted Status awarded to first Mobile Java Developer

    - by Jacob Lehrbaum
    Java Verified has just announced that GameLoft is the first developer to receive its new Trusted Status!  Java Verified is an industry-recognized Java testing and signing program backed and funded by companies such as AT&T, LG, Motorola, Nokia, Oracle, Orange, Samsung and Vodafone, and chartered with making it easier for mobile developers to certify and deploy applications for use across the billions of mobile handsets that run Java ME.  Because of its breadth and diversity, Java ME provides an unmatched opportunity to reach more than 3 billion consumers, but at the same time, developers are faced with the challenge of working with multiple distribution channels and a range of handsets. To this end, the Java Verified program provides a suite of tests that help to validate identity, functionality, integrity, and quality.  Since its rebirth in 2010 as an independent organization, the Java Verified program has been actively working to make it even easier to create and distribute Java ME apps.  Example initiatives include updates to the Unified Testing Criteria to make it easier to test "Simple Apps," community outreach to better understand and address developer pain-points, and a new "Trusted Status."  In the words of the Java Verified program, Trusted Status is: "a privileged status to be granted to developers who will have proven that the quality of their Java ME apps is of a consistently high standard. These are developers who will have earned the trust of Java Verified by demonstrating unfailingly that testing to the UTC standard is a crucial part of their product development activity." The first developer to be awarded this status is GameLoft.  By achieving Trusted Status, GameLoft can now test their applications to the Java Verified standard without needing to provide Java Verified with the evidence.  The apps then automatically get signed with the Java Verified signature, enabling GameLoft to benefit from reduced costs and time-to-market for their new Java ME applications from here on out.  Learn more about the exciting news or apply now for Trusted Status!

    Read the article

  • A Look Back at 2010 Predictions

    - by David Dorf
    Now is the time of year people make their predictions for next year, but before I start thinking about 2011 it's worth a look back to see how my predictions for 2010 fared. 1. Borders and Blockbuster bite the dust. I would have never predicted a strong brand such as Circuit City could die, but now I know it can happen to anyone. Borders has lost the battle with Barnes & Noble and Blockbuster has lost to Netflix. And just to be sure, Amazon put an extra nail in each coffin. Borders received additional investment from Bennett LeBow to keep it afloat, but the stock is down around $1.25 with no profits in sight. Blockbuster filed for bankruptcy back in September. 2. Every retailer finally has a page on Facebook... but very few figure out how to keep fans engaged. Retailer postings become noise, and fans start to unsubscribe. Twitter goes in the same direction. A few standout retailers will figure out how to use social media, and the rest will remain dumbfounded. Most retailers are on the Facebook bandwagon, and their fan bases seem to be increasing thanks to promotions like The Gap's logo redesign, Lowes' Black Friday sneak peek, and Walmart's Crowd Savers. There are several examples of f-commerce advancements, including some interesting integrations from Amazon. 3. Smartphones consolidate and grow. More and more people will step up to smartphones, most of which will choose iPhone, Blackberry, and Android phones. Other smartphones will vanish, and networks will start to strain. But retailers will finally embrace mobile as the next big channel. Retail marketing departments will build mobile apps without the help of their IT department, and eventually they will get into a bind. Android has been on a tear lately stealing market share from Blackberry. Palm and Microsoft are trending down, and Apple is holding steady. Smartphone sales are up 15% and expected to continue. Retailers understand the importance of mobile, and some innovative applications have been produced this year. 4. Google helps the little guys. Google will push its Favorite Places project to help give exposure to small retailers and restaurants. They will enable small retailers to act like big ones by providing storefronts, detailed product information, and coupons for consumers. Google will find a way to bring augmented reality to the masses. I can't say I've seen much new from Google regarding Favorite Places, but they've continued to push local product search. From the PC or smartphone, consumers can search for products and see which nearby stores have it in stock. Oracle Retail even productized an integration to Google to support this effort. I suppose if Google ever buys Groupon then it will bring them even closer to local shopping. Google talked about augmented humanity, but that has nothing to do with augmented reality. 5. Steve Jobs Is Bugs Bunny and Steve Ballmer is Elmer Fudd. (OK, I stole that headline from an InformationWeek article. I couldn't resist.) Both Apple and Microsoft will continue to open new stores, but only Apple will show real growth. POSReady 2009 (formerly WEPOS) will continue to share the POS market with Linux. The iPhone and iPod will continue to capture market share, but there won't be an Apple tablet. There won't be an Apple tablet? What was I thinking? While Apple has well over 300 stores, there are fewer than 10 Microsoft stores. Initial impressions show that even though Microsoft is locating its stores near Apple Stores, they are not converting customers, with shoppers citing a lack of assortment and high prices. 6. Consolidation of e-commerce software providers. Software vendors in the areas of search, reviews, online call-centers, payments, and e-commerce will consolidate, partly driven by the success of m-commerce and SaaS. Amazon will find someone else to buy, and eBay will continue to lose momentum. Consolidation of e-commerce providers continued with IBM acquiring Sterling Commerce and CoreMetrics, and Oracle recently announcing the acquisition of ATG. Amazon grabbed Zappos, Woot, and Diapers.com to continue its dominance of online selling. While eBay's Marketplace growth may have slowed, its PayPal division is doing quite well, fueled in part by demand for mobile payments. 7. Book publishers mirror music labels. Just as the iPod brought digital downloads to the masses, the Kindle and Nook will power the e-book revolution. Books will continue to use DRM for a few more years before following the path of music. Publishers will try to preserve the margins of hardbacks by associating e-book releases with paperbacks. Amazon has done a good job providing e-reader clients for smartphones, PCs, and tablets. Competition from Barnes & Noble has forced Amazon to support book loaning, and both companies are making it easier for people to publish ebooks (with or without DRM). Progress is slow but steady. 8. NFC makes inroads, RFID treads water. Near Field Communications start to appear in mobile phones, and retailers beta test its use for payments and loyalty programs. RFID tag costs come down a bit, but not enough to spur accelerated adoption. Nokia announced plans to offer NFC-enabled phones in 2011, and rumors are swirling about NFC in the upcoming iPhone. I think NFC is heading in the right direction, and I've heard more interest from retailers about specialized uses for RFID. 9. Digital Signage goes the way of augmented reality. People use their camera phones to leave geo-tagged notes all over cities, rating stores and restaurants, and "painting" graffiti. But people get tired of holding their phones in front of their faces, so AR glasses are offered in much the same way Bluetooth headsets emerged. Retailers experiment with in-store advertising using AR. Several retailers like Pizza Hut, Benetton, and Target have experimented with AR, but it's still somewhat of a gimmick used by marketing. I think this prediction is a year or two too early. 10. JDA flip-flops again. After announcing their embracing of the .Net architecture, then switching to J2EE after the Manugistics acquisition, JDA will finally decide to standardize on Apple's Objective C. Everything will be ported to the iPhone and be available on the AppStore. After all, there's not much left to try. This was, of course, a joke but the sentiment is still valid. JDA seems more supply-chain focused than retail focused, which is an outcrop of their i2 acquisition. Of the 10 predictions, I'm going to say I got 6 somewhat correct. (Don't you just love grading your own paper?) Soon I'll post my predictions for 2011 so be on the lookout. Until then here's one more prediction: Va Tech beats Stanford in the Orange Bowl -- count on it!

    Read the article

  • Testing Routes in ASP.NET MVC with MvcContrib

    - by Guilherme Cardoso
    I've decided to write about unit testing over the next few weeks. If we choose to develop with the Test-Driven Development pattern, it's important not to forget the routes. This article shows how to test routes. I'm importing my routes from the RegisterRoutes method in the Global.asax of the Project.Web project created by default (in SetUp). I'm using ShouldMapTo() from MvcContrib: http://mvccontrib.codeplex.com/ The controller is specified in the ShouldMapTo() signature, and we use lambda expressions for the action and parameters that are passed to that controller.

    [SetUp]
    public void Setup()
    {
        Project.Web.MvcApplication.RegisterRoutes(RouteTable.Routes);
    }

    [Test]
    public void Should_Route_HomeController()
    {
        "~/Home".ShouldMapTo<HomeController>(action => action.Index());
    }

    [Test]
    public void Should_Route_EventsController()
    {
        "~/Events".ShouldMapTo<EventsController>(action => action.Index());

        // In this example, 44 is the Id for my Event and "Concert-DevaMatri-22-January-" is the title of that Event
        "~/Events/View/44/Concert-DevaMatri-22-January-"
            .ShouldMapTo<EventsController>(action => action.Read(44, "Concert-DevaMatri-22-January-"));
    }

    [TearDown]
    public void Teardown()
    {
        RouteTable.Routes.Clear();
    }
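    For reference, here is roughly what the route registration behind those tests could look like in Global.asax. This is only a sketch, not the article's actual code: the route names and default values are assumptions, and the real RegisterRoutes method in Project.Web may be organized differently.

    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // Hypothetical route so that "~/Events/View/44/Concert-DevaMatri-22-January-"
        // maps to EventsController.Read(44, "Concert-DevaMatri-22-January-")
        routes.MapRoute(
            "EventDetails",
            "Events/View/{id}/{title}",
            new { controller = "Events", action = "Read" });

        // Hypothetical default route so that "~/Home" and "~/Events" map to the Index action
        routes.MapRoute(
            "Default",
            "{controller}/{action}/{id}",
            new { controller = "Home", action = "Index", id = UrlParameter.Optional });
    }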

    Read the article

  • Examining ASP.NET's Membership, Roles, and Profile - Part 18

    Membership, in a nutshell, is a framework built into the .NET Framework that supports creating, authenticating, deleting, and modifying user account information. Each user account has a set of core properties: username, password, email, a security question and answer, whether or not the account has been approved, whether or not the user is locked out of the system, and so on. These user-specific properties are certainly helpful, but they're hardly exhaustive - it's not uncommon for an application to need to track additional user-specific properties. For example, an online messageboard site might also want to associate a signature, homepage URL, and IM address with each user account. There are two ways to associate additional information with user accounts when using the Membership model. The first - which affords the greatest flexibility, but requires the most upfront effort - is to create a custom data store for this information. If you are using the SqlMembershipProvider, this would mean creating an additional database table that has as its primary key the UserId value from the aspnet_Users table and columns for each of the additional user properties. The second option is to use the Profile system, which allows additional user-specific properties to be defined in a configuration file. (See Part 6 for an in-depth look at the Profile system.) This article explores how to store additional user information in a separate database table. We'll see how to allow a signed-in user to update these additional user-specific properties and how to create a page to display information about a selected user. What's more, we'll look at using ASP.NET Routing to display user information using an SEO-friendly, human-readable URL like www.yoursite.com/Users/username. Read on to learn more! Read More >
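    As a rough illustration of the first approach, the custom table can be tied back to the Membership user through its ProviderUserKey. The snippet below is only a sketch under assumptions of my own: the UserProfiles table, its columns, and the "ApplicationServices" connection-string name are hypothetical and are not defined by the article.

    // Requires System.Web.Security, System.Data.SqlClient and System.Configuration.
    private void LoadExtraUserProperties()
    {
        // Look up the extra, application-specific properties for the currently signed-in user.
        MembershipUser user = Membership.GetUser();
        Guid userId = (Guid)user.ProviderUserKey;   // the SqlMembershipProvider uses a Guid UserId

        string connStr = ConfigurationManager.ConnectionStrings["ApplicationServices"].ConnectionString;
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT Signature, HomepageUrl, ImAddress FROM UserProfiles WHERE UserId = @UserId", conn))
        {
            cmd.Parameters.AddWithValue("@UserId", userId);
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                if (reader.Read())
                {
                    string signature = reader.GetString(0);
                    // ... read the remaining columns and bind them to the page as needed
                }
            }
        }
    }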

    Read the article

  • How To Remove People and Objects From Photographs In Photoshop

    - by Eric Z Goodnight
    You might think that it’s a complicated process to remove objects from photographs. But really Photoshop makes it quite simple, even when removing all traces of a person from digital photographs. Read on to see just how easy it is. Photoshop was originally created to be an image editing program, and it excels at it. With hardly any Photoshop experience, any beginner can begin removing objects or people from their photos. Have some friends that photobombed an otherwise great pic? Tell them to say their farewells, because here’s how to get rid of them with Photoshop! Tools for Removing Objects Removing an object is not really “magical” work. Your goal is basically to cover up the information you don’t want in an image with information you do want. In this sample image, we want to remove the cigar smoking man, and leave the geisha. Here’s a couple of the tools that can be useful to work with when attempting this kind of task. Clone Stamp and Pattern Stamp Tool: Samples parts of your image from your background, and allows you to paint into your image with your mouse or stylus. Eraser and Brush Tools: Paint flat colors and shapes, and erase cloned layers of image information. Basic, down and dirty photo editing tools. Pen, Quick Selection, Lasso, and Crop tools: Select, isolate, and remove parts of your image with these selection tools. All useful in their own way. Some, like the pen tool, are nightmarishly tough on beginners. Remove a Person with the Clone Stamp Tool (Video) The video above uses the Clone Stamp tool to sample and paint with the background texture. It’s a simple tool to use, although it can be confusing, possibly counter-intuitive. Here’s some pointers, in addition to the video above. Select shortcut key to choose the Clone tool stamp from the Tools Panel. Always create a copy of your background layer before doing heavy edits by right clicking on the background in your Layers Panel and selecting “Duplicate.” Hold with the Clone Tool selected, and click anywhere in your image to sample that area. When you’re sampling an area, your cursor is “Aligned” with your sample area. When you paint, your sample area moves. You can turn the “Aligned” setting off by clicking the in the Options Panel at the top of your screen if you want. Change your brush size and hardness as shown in the video by right-clicking in your image. Use your lasso to copy and paste pieces of your image in order to cover up any parts that seem appropriate. Photoshop Magic with the “Content-Aware Fill” One of the hallmark features of CS5 is the “Content-Aware Fill.” Content aware fill can be an excellent shortcut to removing objects and even people in Photoshop, but it is somewhat limited, and can get confused. Here’s a basic rundown on how it works. Select an object using your Lasso tool, shortcut key . The Lasso works fine as this selection can be rough. Navigate to Edit > Fill, and select “Content-Aware,” as illustrated above, from the pull-down menu. It’s surprisingly simple. After some processing, Photoshop has done the work of removing the object for you. It takes a few moments, and it is not perfect, so be prepared to touch it up with some Copy-Paste, or some Clone stamp action. Content Aware Fill Has Its Limits Keep in mind that the Content Aware Fill is meant to be used with other techniques in mind. It doesn’t always perform perfectly, but can give you a great starting point. Take this image for instance. 
    It is actually plausible to hide this figure and make this image look like he was never there at all. With a selection made with the Lasso tool, navigate to Edit > Fill and select “Content Aware” again. The result is surprisingly good, but as you can see, worthy of some touch up. With a result like this one, you’ll have to get your hands dirty with copy-paste to create believable lines in the background. With many photographs, Content Aware Fill will simply get confused and give you results you won’t be happy with. Additional Touch Up for Bad Background Textures with the Pattern Stamp Tool For the perfectionist, cleaning up the lumpy-looking textures that the Clone Stamp can leave is fairly simple using the Pattern Stamp Tool. Sample a piece of your image with your Marquee Tool, shortcut key . Navigate to Edit > Define Pattern to create a new Pattern from your selection. Click OK to continue. Click and hold down on the Clone Stamp tool in your Tools Panel until you can select the Pattern Stamp Tool. Pick your new pattern from the Options at the top of your screen, in the Options Panel. Then simply right click in your image in order to pick as soft a brush as possible to paint with. Paint into your image until your background is as smooth as you want it to be, making your painted-out object more and more invisible. If you get lines from your repeated texture, experiment turning the on and off and paint over them. In addition to this, simple use of the Crop Tool, shortcut , can recompose an image, making it look as if it never had another object in it at all. Combine these techniques to find a method that works best for your images. Have questions or comments concerning Graphics, Photos, Filetypes, or Photoshop? Send your questions to [email protected], and they may be featured in a future How-To Geek Graphics article. Image Credits: Geisha Kyoto Gion by Todd Laracuenta via Wikipedia, used under Creative Commons. Moai Rano raraku by Aurbina, in Public Domain. Chris Young visits Wrigley by TonyTheTiger, via Wikipedia, used under Creative Commons.

    Read the article

  • Why does integrity check fail for the 12.04.1 Alternate ISO?

    - by mghg
    I have followed various recommendations from the Ubuntu Documentation to create a bootable Ubuntu USB flash drive using the 12.04.1 Alternate install ISO-file for 64-bit PC. But the integrity test of the USB stick has failed and I do not see why. These are the steps I have made: Download of the 12.04.1 Alternate install ISO-file for 64-bit PC (ubuntu-12.04.1-alternate-amd64.iso) from http://releases.ubuntu.com/12.04.1/, as well as the MD5, SHA-1 and SHA-256 hash files and related PGP signatures Verification of the data integrity of the ISO-file using the MD5, SHA-1 and SHA-256 hash files, after having verified the hash files using the related PGP signature files (see e.g. https://help.ubuntu.com/community/HowToSHA256SUM and https://help.ubuntu.com/community/VerifyIsoHowto) Creation of a bootable USB stick using Ubuntu's Startup Disk Creator program (see http://www.ubuntu.com/download/help/create-a-usb-stick-on-ubuntu) Boot of my computer using the newly made 12.04.1 Alternate install on USB stick Selection of the option "Check disc for defects" (see https://help.ubuntu.com/community/Installation/CDIntegrityCheck) Steps 1, 2, 3 and 4 went without any problem or error messages. However, step 5 ended with an error message entitled "Integrity test failed" and with the following content: The ./install/netboot/ubuntu-installer/amd64/pxelinux.cfg/default file failed the MD5 checksum verification. Your CD-ROM or this file may have been corrupted. I have experienced the same (might only be similar since I have no exact notes) error message in previous attempts using the 12.04 (i.e. not the maintenance release) Alternate install ISO-file. I have in these cases tried to install anyway and have so far not experienced any problems to my knowledge. Is failed integrity check described above a serious error? What is the solution? Or can it be ignored without further problems?

    Read the article

  • Screen space to world space

    - by user13414
    I am writing a 2D game where my game world has x axis running left to right, y axis running top to bottom, and z axis out of the screen: Whilst my game world is top-down, the game is rendered on a slight tilt: I'm working on projecting from world space to screen space, and vice-versa. I have the former working as follows: var viewport = new Viewport(0, 0, this.ScreenWidth, this.ScreenHeight); var screenPoint = viewport.Project(worldPoint.NegateY(), this.ProjectionMatrix, this.ViewMatrix, this.WorldMatrix); The NegateY() extension method does exactly what it sounds like, since XNA's y axis runs bottom to top instead of top to bottom. The screenshot above shows this all working. Basically, I have a bunch of points in 3D space that I then render in screen space. I can modify camera properties in real time and see it animate to the new position. Obviously my actual game will use sprites rather than points and the camera position will be fixed, but I'm just trying to get all the math in place before getting to that. Now, I am trying to convert back the other way. That is, given an x and y point in screen space above, determine the corresponding point in world space. So if I point the cursor at, say, the bottom-left of the green trapezoid, I want to get a world space reading of (0, 480). The z coordinate is irrelevant. Or, rather, the z coordinate will always be zero when mapping back to world space. Essentially, I want to implement this method signature: public Vector2 ScreenPointToWorld(Vector2 point) I've tried several things to get this working but am just having no luck. My latest thinking is that I need to call Viewport.Unproject twice with differing near/far z values, calculate the resultant Ray, normalize it, then calculate the intersection of the Ray with a Plane that basically represents ground-level of my world. However, I got stuck on the last step and wasn't sure whether I was over-complicating things. Can anyone point me in the right direction on how to achieve this?
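    For what it's worth, the two-unproject approach described above can be sketched roughly as follows. This is only a sketch under the question's own setup, not a verified solution: it assumes the same ProjectionMatrix, ViewMatrix and WorldMatrix used in the Project call, and that the ground plane sits at z = 0 in the negated-y space XNA sees, so the final y-flip mirroring NegateY() may need adjusting.

    public Vector2 ScreenPointToWorld(Vector2 point)
    {
        var viewport = new Viewport(0, 0, this.ScreenWidth, this.ScreenHeight);

        // Unproject the screen point at the near and far clip planes to get two points on the pick ray.
        Vector3 near = viewport.Unproject(new Vector3(point, 0f),
            this.ProjectionMatrix, this.ViewMatrix, this.WorldMatrix);
        Vector3 far = viewport.Unproject(new Vector3(point, 1f),
            this.ProjectionMatrix, this.ViewMatrix, this.WorldMatrix);

        // Build a ray through the two points and intersect it with the ground plane (z = 0).
        Ray ray = new Ray(near, Vector3.Normalize(far - near));
        Plane ground = new Plane(Vector3.UnitZ, 0f);

        float? distance = ray.Intersects(ground);
        if (distance == null)
            return Vector2.Zero;   // the ray never hits the ground plane

        Vector3 hit = ray.Position + ray.Direction * distance.Value;

        // Flip y back, mirroring the NegateY() used when projecting world points to the screen.
        return new Vector2(hit.X, -hit.Y);
    }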

    Read the article

  • CodePlex Daily Summary for Monday, May 17, 2010

    CodePlex Daily Summary for Monday, May 17, 2010New Projects.NET Essentials Course: .NET Essentials course @ Telerik Academy Training project for the studentsAU/NZ Office 2010 Launch Demos: The AU/NZ Office 2010 Launch Demos are a collection of code samples that were used as part of the Office/SharePoint 2010 launch parties in Australi...CybennyCMS: Very simple CMS system for building sites with ASP.NET with templates for lay-out, content pages with only html content and a xml file for the site...essionPIM: essionPIMGIStance: A library for finding "nearest neighbor" among an in-memory set of positions, in C# and F#. A radius must be specified for making a meaningful s...IP Informer: IP Informer is IP Informer.Kurumsal Ofis Paketi: Kurumsal Ofis Paketi (KOP), Microsoft Ofis 2010 ürünleri için geliştirilmiş eklenti yazılımıdır. KOP, Word ve Excel’de bulunan işlevlerinin genişle...Mockup to XAML: Convert Balsamiq Mockups to XAML. This project supports BMML mockup control conversion using plugins. A standard set of controls are included wit...Open XML Validator: This WPF app give you a brief resume about errors in your Open XML documents.Paint.NET Bulk Image Processor: PDNBulkUpdater is a plug-in for Paint.NET that allows you to efficiently perform operations such as resizing and converting multiple images at the ...PiPiBugNet: PiPiBugNet是一套全新的开源Bug管理系统Roleplay character generator: The roleplay character generator allows the creation of characters for different roleplaying gamesSharePoint User Search WebParts: This project contains SharePoint webparts which provide advanced search configuration and experience for SharePoint 2007. It will be upgrade in few...Spodi: Spodi is created on 22-04-2010TfsPolicyPack: This project will provide a few checkin policies for VS 2010.vccodesandobx: vccodesandobxvccodesandobxvccodesandobxWhiteNile: test project using codeplexNew ReleasesAnimeStore.Net: 1.0.3.0: Build 1.0.3.0 Changes Move some functionality to features (MEF) Filter / Search functionality. Anime hard-copy records storage (e.g Disk Storage ...AU/NZ Office 2010 Launch Demos: Twitter map web part: This is the main twitter map web part download, see the Twitter Map web part page for all the information.Blueset Studio Opensource Projects: 推来: 稳定版本BUtil: BUtil 5.0 Alpha2: The initial implementation of multitasking (except ghost)CassiniDev - Cassini 3.5/4.0 Developers Edition: CassiniDev 3.5.1 and 4.0.1 beta: Beta 2 is released here: url http://cassinidev.codeplex.com/releases/view/45456 New in CassiniDev v3.5.1.0/v4.0.1.0 Added .Net 4 / VS10 build. ...CBM-Command: 2010-05-16: Release Notes - 2010-05-16New Features New navigation options: Page Up, Page Down, Top of Directory, Bottom of Directory. See documentation (http:...CCNet Conditional Plugin: CCNet Conditional for CCNet 1.5: A (quick) build of the plugin for CCNet 1.5 to fix the 17365 bug reported by Beakster. This also adds a new condition "timeCondition"CybennyCMS: Cybenny CMS beta 1: The first beta. Includes a small demo site.Data Extracting SDK: Data Extracting SDK v.1.1 RTM: RTM version of Data Extracting SDK.Duckworth Lewis Professional Edition Calculator: DLcalc 2.0: This software can perform all D/L calculations 100% accurately. 
From version 2.0 onwards, tables for par scores can also be produced.EPiServer CMS Page Type Builder: Page Type Builder 1.2: Release notes can be found in this blog post.Floe IRC Client: Floe IRC Client 2010-05 R5: - Many new context menu options for @s - Ability to select multiple users in the nick list for some operations (kick, ban) - Bunch of minor bug fix...Graffiti CMS Events Plugin: Version 1.0.1: Minor update to previous version to fix bug where deleted posts were still showing in the calendar.Microsoft Research Boogie: 2010-05-16: Binary release of Boogie and Dafny. (Note, Chalice is not pre-built as part of this binary release. To obtain it, you need to build it yourself f...MSBuild Launch Pad (mPad): 1.0 Beta 2: Basic support for sln, csproj, vbproj, vcxproj, shfbproj, ccproj, oxygene and proj files are added. Basic settings (Show Prompt, and Auto Hide) are...Multi-Language Words Memorizer: Memorizer 1.1: Issues fix, XML db update with new words.NShader - HLSL - GLSL - CG - Shader Syntax Highlighter AddIn for Visual Studio: NShader 1.1: New release of NShader! New : - a Visual Studio 2010 port can be installed through the new extension manager : you just have to download NShaderV...PHPExcel: PHPExcel 1.7.3 Production: Want to contribute?Please refer the Contribute page. DonationsDonate via PayPal. If you want to, we can also add your name / company on our Donati...Rollback - A social backup tool.: Rollback Setup 0.5.1.2 Build 48360: Bug fixes for backing up files which are hidden/system. Changes to make builds on 64 bit Windows 7 using VS 2010 Express edition.Rollback - A social backup tool.: Rollback Setup 0.5.1.3: Updated version number.Shake - C# Make: Shake v0.1.20: New: Simple console logger Changes: Command line params helper writes out syntax and samples (like msbuild) Fixes: Assembly info, file task and r...SharePoint User Search WebParts: v0.1 Friendly MOSS 2007 Search WebPart: Very first version of this webpart. A more stabilized version will follow in few days.Team Deploy: Team Deploy 2010 Beta 1: This is the initial release for Team Deploy 2010 for TFS Team Build 2010. All features from Team Build 2.x are functional in this version. Comp...Team Foundation Server Administration Tool: 2.0: TFS Administration Tool 2.0 TFS Administration Tool 2.0 is built on top of the Team Foundation Server 2008 object model and in order to connect to...The Ping Master: v0.9.0.0: Installer for The Ping Master binariesUseful Office Macros: All Macro Downloads: Please find above the downloads related to this project. Each Excel Workbook below works independently of the others, so you only need to download...VCC: Latest build, v2.1.30516.0: Automatic drop of latest buildVisual Studio DSite: Advanced Digital Board Game (Visual C++ 2008): An advanced digital board game made in visual c 2008.YUI Compressor Custom Tool for Visual Studio: YUI Compressor Custom Tool Full Version: Version 1.0 The following changes have been made: Merged classes to automatically sense if the target file is Javascript or CSS. 
Cleaned up setu...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesPHPExcelASP.NETMost Active Projectspatterns & practices – Enterprise LibraryPHPExcelBlogEngine.NETRawrMicrosoft Biology FoundationCustomer Portal Accelerator for Microsoft Dynamics CRMWindows Azure Command-line Tools for PHP DevelopersDotNetZip LibraryCaliburn: An Application Framework for WPF and SilverlightSQL Server PowerShell Extensions

    Read the article

  • Installing 12.04 within 11.04

    - by user288752
    I recently installed 11.04 from an installation disk (overwriting Windows in the process). I know 11.04 is no longer supported, but I had no problems subsequently upgrading it to 12.04 (via 11.10) a couple of months ago on another device. This time though, things are different. I can't upgrade through Update Manager because Ubuntu then tells me I have no internet connection, which is obviously incorrect. I have tried to circumvent the problem by downloading the 12.04 ISO from ubuntu.com directly, but now I'm troubled by something else. The download is successful, but after mounting the ISO I can't interact with it. When I try to access the Wubi executable it gives me the following message: Archive: /home/lars/.cache/.fr-7g75Fe/wubi.exe [/home/lars/.cache/.fr-7g75Fe/wubi.exe] End-of-central-directory signature not found. Either this file is not a zipfile, or it constitutes one disk of a multi-part archive. In the latter case the central directory and zipfile comment will be found on the last disk(s) of this archive. zipinfo: cannot find zipfile directory in one of /home/lars/.cache/.fr-7g75Fe/wubi.exe or /home/lars/.cache/.fr-7g75Fe/wubi.exe.zip, and cannot find /home/lars/.cache/.fr-7g75Fe/wubi.exe.ZIP, period. What am I doing wrong here?

    Read the article

  • How can I keep the cpu temp low?

    - by Newton
    I have an HP pavilion dv7, I'm using ubuntu 12.04 so the overheating problem with sandybridge cpu is a lot better. However my laptop is still becoming too hot to keep on my legs. The problem is that the fan wait too much before starting, so the medium temp is too hight. When I'm using windows 7 the laptop is room-temperature cold, I've absolutely no problem. On windows the fan is always spinning very low & very silently so the heat is continuously removed, without reaching an unconfortable temp. How can I force the computer to act like that also on ubuntu? PS The bios can't let me control this kind of thing, and this is my experience with lm-sensors and fancontrol al@notebook:~$ sudo sensors-detect [sudo] password for al: # sensors-detect revision 5984 (2011-07-10 21:22:53 +0200) # System: Hewlett-Packard HP Pavilion dv7 Notebook PC (laptop) # Board: Hewlett-Packard 1800 This program will help you determine which kernel modules you need to load to use lm_sensors most effectively. It is generally safe and recommended to accept the default answers to all questions, unless you know what you're doing. Some south bridges, CPUs or memory controllers contain embedded sensors. Do you want to scan for them? This is totally safe. (YES/no): y Module cpuid loaded successfully. Silicon Integrated Systems SIS5595... No VIA VT82C686 Integrated Sensors... No VIA VT8231 Integrated Sensors... No AMD K8 thermal sensors... No AMD Family 10h thermal sensors... No AMD Family 11h thermal sensors... No AMD Family 12h and 14h thermal sensors... No AMD Family 15h thermal sensors... No AMD Family 15h power sensors... No Intel digital thermal sensor... Success! (driver `coretemp') Intel AMB FB-DIMM thermal sensor... No VIA C7 thermal sensor... No VIA Nano thermal sensor... No Some Super I/O chips contain embedded sensors. We have to write to standard I/O ports to probe them. This is usually safe. Do you want to scan for Super I/O sensors? (YES/no): y Probing for Super-I/O at 0x2e/0x2f Trying family `National Semiconductor/ITE'... No Trying family `SMSC'... No Trying family `VIA/Winbond/Nuvoton/Fintek'... No Trying family `ITE'... No Probing for Super-I/O at 0x4e/0x4f Trying family `National Semiconductor/ITE'... Yes Found unknown chip with ID 0x8518 Some hardware monitoring chips are accessible through the ISA I/O ports. We have to write to arbitrary I/O ports to probe them. This is usually safe though. Yes, you do have ISA I/O ports even if you do not have any ISA slots! Do you want to scan the ISA I/O ports? (YES/no): y Probing for `National Semiconductor LM78' at 0x290... No Probing for `National Semiconductor LM79' at 0x290... No Probing for `Winbond W83781D' at 0x290... No Probing for `Winbond W83782D' at 0x290... No Lastly, we can probe the I2C/SMBus adapters for connected hardware monitoring devices. This is the most risky part, and while it works reasonably well on most systems, it has been reported to cause trouble on some systems. Do you want to probe the I2C/SMBus adapters now? (YES/no): y Using driver `i2c-i801' for device 0000:00:1f.3: Intel Cougar Point (PCH) Module i2c-i801 loaded successfully. Module i2c-dev loaded successfully. Next adapter: i915 gmbus disabled (i2c-0) Do you want to scan it? (YES/no/selectively): y Next adapter: i915 gmbus ssc (i2c-1) Do you want to scan it? (YES/no/selectively): y Next adapter: i915 GPIOB (i2c-2) Do you want to scan it? (YES/no/selectively): y Next adapter: i915 gmbus vga (i2c-3) Do you want to scan it? 
(YES/no/selectively): y Next adapter: i915 GPIOA (i2c-4) Do you want to scan it? (YES/no/selectively): y Next adapter: i915 gmbus panel (i2c-5) Do you want to scan it? (YES/no/selectively): y Client found at address 0x50 Probing for `Analog Devices ADM1033'... No Probing for `Analog Devices ADM1034'... No Probing for `SPD EEPROM'... No Probing for `EDID EEPROM'... Yes (confidence 8, not a hardware monitoring chip) Next adapter: i915 GPIOC (i2c-6) Do you want to scan it? (YES/no/selectively): y Client found at address 0x50 Probing for `Analog Devices ADM1033'... No Probing for `Analog Devices ADM1034'... No Probing for `SPD EEPROM'... No Probing for `EDID EEPROM'... Yes (confidence 8, not a hardware monitoring chip) Next adapter: i915 gmbus dpc (i2c-7) Do you want to scan it? (YES/no/selectively): y Next adapter: i915 GPIOD (i2c-8) Do you want to scan it? (YES/no/selectively): y Next adapter: i915 gmbus dpb (i2c-9) Do you want to scan it? (YES/no/selectively): y Next adapter: i915 GPIOE (i2c-10) Do you want to scan it? (YES/no/selectively): y Next adapter: i915 gmbus reserved (i2c-11) Do you want to scan it? (YES/no/selectively): y Next adapter: i915 gmbus dpd (i2c-12) Do you want to scan it? (YES/no/selectively): y Next adapter: i915 GPIOF (i2c-13) Do you want to scan it? (YES/no/selectively): y Next adapter: DPDDC-B (i2c-14) Do you want to scan it? (YES/no/selectively): y Now follows a summary of the probes I have just done. Just press ENTER to continue: Driver `coretemp': * Chip `Intel digital thermal sensor' (confidence: 9) To load everything that is needed, add this to /etc/modules: #----cut here---- # Chip drivers coretemp #----cut here---- If you have some drivers built into your kernel, the list above will contain too many modules. Skip the appropriate ones! Do you want to add these lines automatically to /etc/modules? (yes/NO)y Successful! Monitoring programs won't work until the needed modules are loaded. You may want to run 'service module-init-tools start' to load them. Unloading i2c-dev... OK Unloading i2c-i801... OK Unloading cpuid... OK al@notebook:~$ sudo /etc/init.d/module-init-tools restart Rather than invoking init scripts through /etc/init.d, use the service(8) utility, e.g. service module-init-tools restart Since the script you are attempting to invoke has been converted to an Upstart job, you may also use the stop(8) and then start(8) utilities, e.g. stop module-init-tools ; start module-init-tools. The restart(8) utility is also available. module-init-tools stop/waiting al@notebook:~$ sudo service module-init-tools restart stop: Unknown instance: module-init-tools stop/waiting al@notebook:~$ sudo service module-init-tools start module-init-tools stop/waiting al@notebook:~$ sudo pwmconfig # pwmconfig revision 5857 (2010-08-22) This program will search your sensors for pulse width modulation (pwm) controls, and test each one to see if it controls a fan on your motherboard. Note that many motherboards do not have pwm circuitry installed, even if your sensor chip supports pwm. We will attempt to briefly stop each fan using the pwm controls. The program will attempt to restore each fan to full speed after testing. However, it is ** very important ** that you physically verify that the fans have been to full speed after the program has completed. /usr/sbin/pwmconfig: There are no pwm-capable sensor modules installed Is my case too desperate?

    Read the article

  • In Case You Weren’t There: Blogwell NYC

    - by Mike Stiles
    Your roving reporter roved out to another one of Socialmedia.org’s fantastic Blogwell events, this time in NYC. As Central Park and incredible weather beckoned, some of the biggest brand names in the world gathered to talk about how they’re incorporating social into marketing and CRM, as well as extending social across their entire organizations internally. Below we present a collection of the live tweets from many of the key sessions. GE @generalelectric - Jon Lombardo, Leader of Social Media COE: How GE builds and extends emotional connections with consumers around health and reaps the benefits of increased brand equity in the process. GE has a social platform around Healthymagination to create better health for people. If you and a friend are trying to get healthy together, you’ll do better. Health is inherently social. Get health challenges via Facebook and share with friends to achieve goals together. They’re creating an emotional connection around the health context. You don’t influence people at large. Your sphere of real influence is around 5-10 people. They find relevant conversations about health on Twitter and engage sounding like a friend, not a brand. Why would people share on behalf of a brand? Because you tapped into an activity and emotion they’re already having. To create better habits in health, GE gave away inexpensive, relevant gifts related to their goals. Create the context, give the relevant gift, get social acknowledgment for giving it. What you get when you get acknowledgment for your engagement and gift is user-generated microcontent. GE got 12,000 unique users engaged and 1400 organic posts with the healthy gift campaign. The Dow Chemical Company @DowChemical - Abby Klanecky, Director of Digital & Social Media: Learn how Dow Chemical is finding, training, and empowering their scientists to be their storytellers in social media. There are 1m jobs coming open in science. Only 200k are qualified for them. Dow Chemical wanted to use social to attract and talk to scientists. Dow Chemical decided to use real scientists as their storytellers. Scientists are incredibly passionate, the key ingredient of a great storyteller. Step 1 was getting scientists to focus on a few platforms: blog, Twitter, LinkedIn. Dow Chemical's social flow is Core Digital Team - #CMs - ambassadors - advocates. The scientists were trained in social etiquette via practice scenarios. It’s not just about sales. It’s about growing influence and the business. Dow Chemical trained about 100 scientists, 55 are active and there’s a waiting list for the next sessions. In-person social training produced faster results and better participation. Sometimes you have to tell pieces of the story instead of selling your execs on the whole vision. Social Media Ethics Briefing: Staying Out of Trouble - Andy Sernovitz, CEO @SocialMediaOrg: How do we get people to share our message for us? We have to have their trust. The difference between being honest and being sleazy is disclosure. Disclosure does not hurt the effectiveness of your marketing.
    No one will get mad if you tell them up front you’re a paid spokesperson for a company. It’s a legal requirement by the FTC, it’s the law, to disclose if you’re being paid for an endorsement. Require disclosure and truthfulness in all your social media outreach. Don’t lie to people. Monitor the conversation and correct misstatements. Create social media policies and training programs. If you want to stay safe, never pay cash for social media. Money changes everything. As soon as you pay, it’s not social media, it’s advertising. Disclosure, to the feds, means clear, conspicuous, and understandable to the average reader. This phrase will keep you in the clear: “I work for ___ and this is my personal opinion.” Who are you? Were you paid? Are you giving an honest opinion based on a real experience? You as a brand are responsible for what an agency, employee, or contractor does on your behalf. SocialMedia.org makes available a Disclosure Best Practices Toolkit at Socialmedia.org/disclosure. The point is to not ethically mess up and taint social media as happened to e-mail. Not only is the FTC cracking down, so are Google and Facebook. Visa @VisaNews - Lucas Mast, Senior Business Leader, Global Corporate Social Media: Visa built a mobile studio for the Olympics for execs and athletes. They wanted to do postcard-style, real-time coverage of Visa’s Olympics sponsorships, and on a shoestring. Challenges included Olympic rules, difficulty getting interviews, time zone trouble, and resourcing. Another problem was they got bogged down with their own internal approval processes. Despite all the restrictions, they created and published a variety and a fair amount of content. They amassed 1000+ views of videos posted to the Visa Communication YouTube channel. Less corporate content yields more interest from media outlets and bloggers. They did real-world video demos of how their products work in the field vs. an exec doing a demo in a studio. Don’t make exec interview videos dull and corporate. Keep answers short, shoot it in an interesting place, do takes until they’re comfortable and natural. Not everything will work. Not everything will get a retweet. But like the lottery, you can’t win if you don’t play. Promoting content is as important as creating it. McGraw-Hill Companies @McGrawHillCos - Patrick Durando, Senior Director of Global New Media: McGraw-Hill has 26,000 employees. McGraw-Hill created a social intranet called Buzz. Intranets create operational efficiency, help product dev, facilitate crowdsourcing, and break down geo silos. Intranets help with talent development, acquisition, retention. They replaced the corporate directory with their own version of LinkedIn. The company intranet has really cut down on the use of email. Long email threads become organized, permanent social discussions. The intranet is particularly useful in HR for researching and getting answers surrounding benefits and policies. Using a profile on your company intranet can establish and promote your internal professional brand. If you’re going to make an intranet, it has to look great, work great, and employees are going to have to want to go there. You can’t order them to like it.

    Read the article

  • Naming methods that do the same thing but return different types

    - by Konstantin Ð.
    Let's assume that I'm extending a graphical file chooser class (JFileChooser). This class has methods which display the file chooser dialog and return a status signature in the form of an int: APPROVE_OPTION if the user selects a file and hits Open/Save, CANCEL_OPTION if the user hits Cancel, and ERROR_OPTION if something goes wrong. These methods are called showDialog(). I find this cumbersome, so I decide to make another method that returns a File object: in the case of APPROVE_OPTION, it returns the file selected by the user; otherwise, it returns null. This is where I run into a problem: would it be okay for me to keep the showDialog() name, even though methods with that name (and a different return type) already exist? To top it off, my method takes an additional parameter: a File which denotes in which directory the file chooser should start. My question to you: Is it okay to call a method the same name as a superclass method if they return different types? Or would that be confusing to API users? (If so, what other name could I use?) Alternatively, should I keep the name and change the return type so it matches that of the other methods?

    public int showDialog(Component parent, String approveButtonText) // Superclass method
    public File showDialog(Component parent, File location)           // My method

    Read the article

  • Dell Studio 1737 Overheating

    - by Sean
    I am using a Dell Studio 1737 laptop. I have been running Linux and have ran Windows recently for a very long time. I upgraded to the 10.10 distribution and since that distro, it seems that for some reason all Linuxes want to push my laptop to extremes. I have recently upgraded to Ubuntu 12.04 since I heart that it contains kernel fixes for overheating issues. 12.04 will actually eventually cool the system, but that is after the fans run to the point it sounds like a jet aircraft taking off and the laptop makes my hands sweat. In trying to combat the heat problems I have done the following: I installed the propriatery driver for my ATI Mobility HD 3600. I have tried both the one in the Additional Drivers and also tried ATI's latest greatest version. If I don't install this my laptop will overheat and shut off in minutes. Both seem to perform similarly, but the heat problem remains. I have tried limiting the CPU by installing the CPUFreq Indicator. This does help keep the machine from shutting off, but the heat is still uncomfortable to be around the machine. I usually run in power saver mode or run the cpu at 1.6 GHZ just to error on safety. I ran sensors-detect and here are the results: sean@sean-Studio-1737:~$ sudo sensors-detect # sensors-detect revision 5984 (2011-07-10 21:22:53 +0200) # System: Dell Inc. Studio 1737 (laptop) # Board: Dell Inc. 0F237N This program will help you determine which kernel modules you need to load to use lm_sensors most effectively. It is generally safe and recommended to accept the default answers to all questions, unless you know what you're doing. Some south bridges, CPUs or memory controllers contain embedded sensors. Do you want to scan for them? This is totally safe. (YES/no): y Module cpuid loaded successfully. Silicon Integrated Systems SIS5595... No VIA VT82C686 Integrated Sensors... No VIA VT8231 Integrated Sensors... No AMD K8 thermal sensors... No AMD Family 10h thermal sensors... No AMD Family 11h thermal sensors... No AMD Family 12h and 14h thermal sensors... No AMD Family 15h thermal sensors... No AMD Family 15h power sensors... No Intel digital thermal sensor... Success! (driver `coretemp') Intel AMB FB-DIMM thermal sensor... No VIA C7 thermal sensor... No VIA Nano thermal sensor... No Some Super I/O chips contain embedded sensors. We have to write to standard I/O ports to probe them. This is usually safe. Do you want to scan for Super I/O sensors? (YES/no): y Probing for Super-I/O at 0x2e/0x2f Trying family `National Semiconductor/ITE'... No Trying family `SMSC'... No Trying family `VIA/Winbond/Nuvoton/Fintek'... No Trying family `ITE'... No Probing for Super-I/O at 0x4e/0x4f Trying family `National Semiconductor/ITE'... Yes Found `ITE IT8512E/F/G Super IO' (but not activated) Some hardware monitoring chips are accessible through the ISA I/O ports. We have to write to arbitrary I/O ports to probe them. This is usually safe though. Yes, you do have ISA I/O ports even if you do not have any ISA slots! Do you want to scan the ISA I/O ports? (YES/no): y Probing for `National Semiconductor LM78' at 0x290... No Probing for `National Semiconductor LM79' at 0x290... No Probing for `Winbond W83781D' at 0x290... No Probing for `Winbond W83782D' at 0x290... No Lastly, we can probe the I2C/SMBus adapters for connected hardware monitoring devices. This is the most risky part, and while it works reasonably well on most systems, it has been reported to cause trouble on some systems. Do you want to probe the I2C/SMBus adapters now? 
(YES/no): y Using driver `i2c-i801' for device 0000:00:1f.3: Intel ICH9 Module i2c-i801 loaded successfully. Module i2c-dev loaded successfully. Now follows a summary of the probes I have just done. Just press ENTER to continue: Driver `coretemp': * Chip `Intel digital thermal sensor' (confidence: 9) To load everything that is needed, add this to /etc/modules: #----cut here---- # Chip drivers coretemp #----cut here---- If you have some drivers built into your kernel, the list above will contain too many modules. Skip the appropriate ones! Do you want to add these lines automatically to /etc/modules? (yes/NO)y Successful! Monitoring programs won't work until the needed modules are loaded. You may want to run 'service module-init-tools start' to load them. Unloading i2c-dev... OK Unloading i2c-i801... OK Unloading cpuid... OK sean@sean-Studio-1737:~$ sudo service module-init-tools start module-init-tools stop/waiting I also tried installing i8k but that didn't work since it didn't seem to be able to communicate with the hardware (probably for different kind of device). Also I ran acpi -V and here are the results: Battery 0: Full, 100% Battery 0: design capacity 613 mAh, last full capacity 260 mAh = 42% Adapter 0: on-line Thermal 0: ok, 49.0 degrees C Thermal 0: trip point 0 switches to mode critical at temperature 100.0 degrees C Thermal 1: ok, 48.0 degrees C Thermal 1: trip point 0 switches to mode critical at temperature 100.0 degrees C Thermal 2: ok, 51.0 degrees C Thermal 2: trip point 0 switches to mode critical at temperature 100.0 degrees C Cooling 0: LCD 0 of 15 Cooling 1: Processor 0 of 10 Cooling 2: Processor 0 of 10 I have hit a wall and don't know what to do now. Any advice is appreciated.

    Read the article

  • Trying to login to openssh, permission denied

    - by noah sisk
    I have been trying to login to ssh on a ubuntu 11.04 server as root with the AllowRootLogin thing set to yes but i have been getting a "Permision denied" Heres a copy of my attempt with ssh -v: Last login: Fri Jun 8 21:07:20 on ttys000 noah-sisks-macbook-pro:~ phreshness$ ssh -v [email protected] -p 22 OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011 debug1: Reading configuration data /etc/ssh_config debug1: Applying options for * debug1: Connecting to 192.168.1.133 [192.168.1.133] port 22. debug1: Connection established. debug1: identity file /Users/phreshness/.ssh/id_rsa type -1 debug1: identity file /Users/phreshness/.ssh/id_rsa-cert type -1 debug1: identity file /Users/phreshness/.ssh/id_dsa type -1 debug1: identity file /Users/phreshness/.ssh/id_dsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1 debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.6 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host '192.168.1.133' is known and matches the RSA host key. debug1: Found key in /Users/phreshness/.ssh/known_hosts:6 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Trying private key: /Users/phreshness/.ssh/id_rsa debug1: Trying private key: /Users/phreshness/.ssh/id_dsa debug1: Next authentication method: password [email protected]'s password: debug1: Authentications that can continue: publickey,password Permission denied, please try again. [email protected]'s password:

    Read the article

  • The Mystery of the Vanishing Disk Space

    - by Oddthinking
    My disk space is dwindling by about 2GB a day! I only have a few more days before I run out of space. $ df -h Filesystem Size Used Avail Use% Mounted on /dev/sda4 143G 126G 11G 93% / udev 491M 4.0K 491M 1% /dev tmpfs 200M 696K 199M 1% /run none 5.0M 0 5.0M 0% /run/lock none 499M 144K 499M 1% /run/shm /dev/sda2 1.9G 580M 1.2G 33% /tmp /dev/sda1 92M 29M 58M 33% /boot I have been searching for the biggest directories/log files, deleting and compressing. But I am still losing the war. Finally, I realised I have a big misunderstanding: julian@server1:~$ sudo du -h / | tail -n 1 16G / All of my files in / only add up to 16 GB. That leaves 110 GB unaccounted for! Clearly I have a misunderstanding: I thought the '/dev/sda4' line represented all the files visible from '/'. What should I be reading to understand where the other storage has gone? More details: I have an Ubuntu 11.10 server, that was set-up by data-center staff. It is running my own code (which is fairly prolific with log files, but otherwise doesn't store much stuff on the drive) duplicity for backups (which tends to store a lot of signature files) various other standard services, like Apache, nagios, etc. They are very lightly used. It has been up for about 4 months without a reboot. I lied about the du output (simplified it for effect). It also complained about not being able to access GVFS and the du processes's own resources. I believe they are irrelevant: . du: cannot access `/home/julian/.gvfs': Permission denied du: cannot access `/proc/10841/task/10841/fd/4': No such file or directory du: cannot access `/proc/10841/task/10841/fdinfo/4': No such file or directory du: cannot access `/proc/10841/fd/4': No such file or directory du: cannot access `/proc/10841/fdinfo/4': No such file or directory

    Read the article

< Previous Page | 59 60 61 62 63 64 65 66 67 68 69 70  | Next Page >