Search Results

Search found 556 results on 23 pages for 'newton falls'.

  • Force SSRS 2008 to use SSRS 2005 CSV rendering

    - by Kash
    We are upgrading our report server from SSRS 2005 to SSRS 2008 R2. I have an issue with CSV export rendering in SSRS 2008: the SUM of columns appears to the right of the detail values in 2008, instead of to the left as in 2005, as shown in the blocks below. 117 and 131 are the sums of Column2 and Column3 respectively.

    SSRS 2005 CSV output:

        Column2_1,Column3_1,Column2,Column3
        117,131,1,2
        117,131,1,2
        117,131,60,23
        117,131,30,15
        117,131,25,89

    SSRS 2008 CSV output:

        Column2,Column3,Column2_1,Column3_1
        1,2,117,131
        1,2,117,131
        60,23,117,131
        30,15,117,131
        25,89,117,131

    I understand that the CSV renderer went through major changes in SSRS 2008 R2, with support for charts and gauges and, more importantly, two modes: the default Excel mode and a Compliant mode. But neither mode fixes this issue. Compliant mode was supposed to be the closest to 2005, but apparently it is not close enough for my case.

    My question: Is there a way to force SSRS 2008 to fall back to a backward-compatibility mode for a report so that it exports to the 2005 CSV format?

    Solutions tried:

    a) Using 2005-based CRIs. Based on this article on ExecutionLog2, if SSRS 2008 R2 encounters a report that cannot be auto-upgraded (e.g. reports built with 2005-based CustomReportItem controls), those particular reports are processed with the old Yukon engine in a "transparent backwards-compatibility mode". It seems the server falls back to its previous version mode (2005) and attempts to render the report that way. So I tried using a 2005-based barcode CustomReportItem and deployed it to an SSRS 2008 R2 report server, but it shows the same result as before, though it suppressed the barcode. This would be because SSRS 2008 R2 finds a way to suppress part of the report output and displays the rest. It would be great to find a 2005-based CRI that makes SSRS 2008 R2 process the report with its old Yukon engine. Please note that quite possibly, even if it uses the old Yukon processing engine, it might still use the new CSV renderer and hence show the same output. If that is true, then this option is moot.

    b) Using the XML renderer. We could use a custom XML renderer and then use XSLT to convert the XML to the appropriate CSV, but this would mean converting all 200 of our reports, so it is not feasible.

    Please note that we do not have the option of having SSRS 2005 and SSRS 2008 R2 deployed side by side.
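
    For reference, the two CSV modes discussed above are selected through the renderer's DeviceInfo settings, either per-request via URL access (rc:ExcelMode=false on an rs:Format=CSV export) or server-wide by registering an extra CSV rendering extension with its own defaults in rsreportserver.config. The sketch below shows the server-wide variant; the element names follow the documented CSV DeviceInfo settings, but note that the poster reports neither mode restores the 2005 column order, so treat this as a way to switch modes, not a confirmed fix:

        <Extension Name="CSV (compliant)"
                   Type="Microsoft.ReportingServices.Rendering.DataRenderer.CsvReport,Microsoft.ReportingServices.DataRendering">
          <OverrideNames>
            <Name Language="en-US">CSV (compliant mode)</Name>
          </OverrideNames>
          <Configuration>
            <DeviceInfo>
              <ExcelMode>false</ExcelMode>
              <FieldDelimiter>,</FieldDelimiter>
              <UseFormattedValues>false</UseFormattedValues>
            </DeviceInfo>
          </Configuration>
        </Extension>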

  • How to avoid an open-redirect vulnerability and safely redirect on successful login (HINT: ASP.NET MVC)

    - by Brad B.
    Normally, when a site requires that you are logged in before you can access a certain page, you are taken to the login screen and, after successfully authenticating, you are redirected back to the originally requested page. This is great for usability - but without careful scrutiny, this feature can easily become an open-redirect vulnerability. Sadly, for an example of this vulnerability, look no further than the default LogOn action provided by ASP.NET MVC 2:

        [HttpPost]
        public ActionResult LogOn(LogOnModel model, string returnUrl)
        {
            if (ModelState.IsValid)
            {
                if (MembershipService.ValidateUser(model.UserName, model.Password))
                {
                    FormsService.SignIn(model.UserName, model.RememberMe);
                    if (!String.IsNullOrEmpty(returnUrl))
                    {
                        return Redirect(returnUrl); // open redirect vulnerability HERE
                    }
                    else
                    {
                        return RedirectToAction("Index", "Home");
                    }
                }
                else
                {
                    ModelState.AddModelError("", "User name or password incorrect...");
                }
            }
            return View(model);
        }

    If a user is successfully authenticated, they are redirected to "returnUrl" (if it was provided via the login form submission). Here is a simple example attack (one of many, actually) that exploits this vulnerability:

    1. The attacker, pretending to be the victim's bank, sends the victim an email containing a link like this: http://www.mybank.com/logon?returnUrl=http://www.badsite.com
    2. Having been taught to verify the ENTIRE domain name (e.g., google.com = GOOD, google.com.as31x.example.com = BAD), the victim knows the link is OK - there isn't any tricky sub-domain phishing going on.
    3. The victim clicks the link, sees their actual familiar banking website and is asked to log on.
    4. The victim logs on and is subsequently redirected to http://www.badsite.com, which is made to look exactly like the victim's bank's website, so the victim doesn't know he is now on a different site.
    5. http://www.badsite.com says something like "We need to update our records - please type in some extremely personal information below: [ssn], [address], [phone number], etc."
    6. The victim, still thinking he is on his banking website, falls for the ploy and provides the attacker with the information.

    Any ideas on how to maintain this redirect-on-successful-login functionality yet avoid the open-redirect vulnerability? I'm leaning toward the option of splitting the "returnUrl" parameter into controller/action parts and using "RedirectToRouteResult" instead of simply "Redirect". Does this approach open any new vulnerabilities?

    Side note: I know this open redirect may not seem like a big deal compared to the likes of XSS and CSRF, but we developers are the only thing protecting our customers from the bad guys - anything we can do to make the bad guys' job harder is a win in my book. Thanks, Brad
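
    A common fix (the one ASP.NET MVC 3 later shipped as Url.IsLocalUrl) is to redirect only when returnUrl is a relative path on the same site. A minimal sketch of such a check - the helper name is illustrative, and the rejected "//" and "/\" prefixes matter because browsers treat those as protocol-relative URLs pointing at another host:

        private static bool IsLocalUrl(string url)
        {
            if (String.IsNullOrEmpty(url))
                return false;

            // "/path" is fine; "//evil.com" and "/\evil.com" are not.
            return (url[0] == '/' && (url.Length == 1 ||
                       (url[1] != '/' && url[1] != '\\')))
                || (url.Length > 1 && url[0] == '~' && url[1] == '/');
        }

        // In LogOn, replace the unguarded redirect with:
        // if (IsLocalUrl(returnUrl)) return Redirect(returnUrl);
        // return RedirectToAction("Index", "Home");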

  • Resgen/al.exe-generated resources do not work within a .NET library

    - by Raj G
    Hi, I am currently working on a library in .NET and I planned to move the strings used within the library into culture-specific resource files. I made Resources.resx, Resources.en-US.resx and Resources.ja-JP.resx files. I also deleted the Resources.designer.cs file auto-generated by Visual Studio 2008. I am loading resources through my custom ResourceManager object [using the GetString method].

    The problem I am facing is that when I compile the library within Visual Studio and set the culture from the calling application, everything works fine. But if I manually go to the directory, change a string for a culture and regenerate the satellite assembly with resgen and al.exe, the displayed string falls back to the invariant culture. I have attached the ildasm view of both DLLs.

    en-US, generated from within Visual Studio:

        // Metadata version: v2.0.50727
        .assembly extern mscorlib
        {
          .publickeytoken = (B7 7A 5C 56 19 34 E0 89 )   // .z\V.4..
          .hash = (71 05 4D 54 C4 8D C2 90 7D 8B CF 57 2E B5 98 22   // q.MT....}..W..."
                   F5 5B 2E 06 )                                     // .[..
          .ver 2:0:0:0
        }
        .assembly EmailEngine.resources
        {
          .custom instance void [mscorlib]System.Reflection.AssemblyTitleAttribute::.ctor(string) = ( 01 00 0B 45 6D 61 69 6C 45 6E 67 69 6E 65 00 00 )   // ...EmailEngine..
          .custom instance void [mscorlib]System.Reflection.AssemblyDescriptionAttribute::.ctor(string) = ( 01 00 FF 00 00 )
          .custom instance void [mscorlib]System.Reflection.AssemblyCompanyAttribute::.ctor(string) = ( 01 00 FF 00 00 )
          .custom instance void [mscorlib]System.Reflection.AssemblyProductAttribute::.ctor(string) = ( 01 00 0B 45 6D 61 69 6C 45 6E 67 69 6E 65 00 00 )   // ...EmailEngine..
          .custom instance void [mscorlib]System.Reflection.AssemblyCopyrightAttribute::.ctor(string) = ( 01 00 12 43 6F 70 79 72 69 67 68 74 20 C2 A9 20   // ...Copyright ..
                                                                                                          20 32 30 30 38 00 00 )                            //  2008..
          .custom instance void [mscorlib]System.Reflection.AssemblyTrademarkAttribute::.ctor(string) = ( 01 00 FF 00 00 )
          .custom instance void [mscorlib]System.Reflection.AssemblyFileVersionAttribute::.ctor(string) = ( 01 00 07 31 2E 30 2E 30 2E 30 00 00 )   // ...1.0.0.0..
          .hash algorithm 0x00008004
          .ver 1:0:0:0
          .locale = (65 00 6E 00 2D 00 55 00 53 00 00 00 )   // e.n.-.U.S...
        }
        .mresource public 'EmailEngine.Properties.Resources.en-US.resources'
        {
          // Offset: 0x00000000 Length: 0x00000111
        }
        .module EmailEngine.resources.dll
        // MVID: {D030D620-4E59-46F4-94F4-5EA0F9554E67}
        .imagebase 0x00400000
        .file alignment 0x00000200
        .stackreserve 0x00100000
        .subsystem 0x0003   // WINDOWS_CUI
        .corflags 0x00000001   // ILONLY
        // Image base: 0x008B0000

    ja-JP, generated by me using resgen and al.exe:

        // Metadata version: v2.0.50727
        .assembly EmailEngine.resources
        {
          .hash algorithm 0x00008004
          .ver 0:0:0:0
          .locale = (6A 00 61 00 00 00 )   // j.a...
        }
        .mresource public 'EmailEngine.Properties.Resources.ja-JP.resources'
        {
          // Offset: 0x00000000 Length: 0x0000012F
        }
        .module EmailEngine.resources.dll
        // MVID: {0F470BCD-C36D-4B9F-A8ED-205A0E5A9F6F}
        .imagebase 0x00400000
        .file alignment 0x00000200
        .stackreserve 0x00100000
        .subsystem 0x0003   // WINDOWS_CUI
        .corflags 0x00000001   // ILONLY
        // Image base: 0x007F0000

    Can anyone help me as to why these two files are different and what is going on here? Why would the same Japanese resource file work when generated from within Visual Studio and not when generated using the tools? TIA, Raj
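
    Judging from the two dumps, the hand-built satellite differs in at least two ways: its assembly version is 0:0:0:0 (the main assembly and the VS-built satellite are 1:0:0:0) and its .locale is "ja" rather than "ja-JP", either of which can make ResourceManager reject it and fall back to the invariant culture. A sketch of resgen/al.exe invocations that pins both values (file names are taken from the dumps above; the paths and the /template reference are assumptions):

        resgen Resources.ja-JP.resx EmailEngine.Properties.Resources.ja-JP.resources

        al /target:lib ^
           /embed:EmailEngine.Properties.Resources.ja-JP.resources ^
           /culture:ja-JP ^
           /version:1.0.0.0 ^
           /template:EmailEngine.dll ^
           /out:ja-JP\EmailEngine.resources.dll

    /template copies the remaining assembly metadata (and strong name, if any) from the main assembly, which would also explain the attribute block present in the VS-built dump but missing from the manual one.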

  • Reading a POP3 server with only TcpClient and StreamWriter/StreamReader [SOLVED]

    - by WebDevHobo
    I'm trying to read mails from my live.com account via the POP3 protocol. I've found that the server is pop3.live.com and the port is 995. I'm not planning on using a pre-made library; I'm using NetworkStream and StreamReader/StreamWriter for the job. I need to figure this out, so any of the answers given here: http://stackoverflow.com/questions/44383/reading-email-using-pop3-in-c are not useful. It's part of a larger program, but I made a small test to see if it works. Either way, I'm not getting anything. Here's the code I'm using, which I think should be correct.

    EDIT: this code is old, please refer to the second block - problem solved.

        public Program()
        {
            string temp = "";
            using(TcpClient tc = new TcpClient(new IPEndPoint(IPAddress.Parse("127.0.0.1"), 8000)))
            {
                tc.Connect("pop3.live.com", 995);
                using(NetworkStream nws = tc.GetStream())
                {
                    using(StreamReader sr = new StreamReader(nws))
                    {
                        using(StreamWriter sw = new StreamWriter(nws))
                        {
                            sw.WriteLine("USER " + user);
                            sw.Flush();
                            sw.WriteLine("PASS " + pass);
                            sw.Flush();
                            sw.WriteLine("LIST");
                            sw.Flush();
                            while(temp != ".")
                            {
                                temp += sr.ReadLine();
                            }
                        }
                    }
                }
            }
            Console.WriteLine(temp);
        }

    The Visual Studio debugger constantly falls over on tc.Connect("pop3.live.com", 995); which throws an "A socket operation was attempted to an unreachable network 65.55.172.253:995" error. So, I'm sending from port 8000 on my machine to port 995, the hotmail POP3 port. And I'm getting nothing, and I'm out of ideas.

    Second block - the problem was apparently that I didn't write the QUIT command. The code:

        public Program()
        {
            string str = string.Empty;
            string strTemp = string.Empty;
            using(TcpClient tc = new TcpClient())
            {
                tc.Connect("pop3.live.com", 995);
                using(SslStream sl = new SslStream(tc.GetStream()))
                {
                    sl.AuthenticateAsClient("pop3.live.com");
                    using(StreamReader sr = new StreamReader(sl))
                    {
                        using(StreamWriter sw = new StreamWriter(sl))
                        {
                            sw.WriteLine("USER " + user);
                            sw.Flush();
                            sw.WriteLine("PASS " + pass);
                            sw.Flush();
                            sw.WriteLine("LIST");
                            sw.Flush();
                            sw.WriteLine("QUIT ");
                            sw.Flush();
                            while((strTemp = sr.ReadLine()) != null)
                            {
                                if(strTemp == "." || strTemp.IndexOf("-ERR") != -1)
                                {
                                    break;
                                }
                                str += strTemp;
                            }
                        }
                    }
                }
            }
            Console.WriteLine(str);
        }

  • Cannot pull correct data from a Javascript array into an HTML form

    - by Isaac
    I am trying to return the description value of the corresponding author name and book title (that are typed in the text boxes). The problem is that the first description displays in the textarea no matter what.

        <h1>Bookland</h1>
        <div id="bookinfo">
          Author name: <input type="text" id="authorname" name="authorname"></input><br />
          Book Title: <input type="text" id="booktitle" name="booktitle"></input><br />
          <input type="button" value="Find book" id="find"></input>
          <input type="button" value="Clear Info" id="clear"></input><br />
          <textarea rows="15" cols="30" id="destin"></textarea>
        </div>

    JavaScript:

        var bookarray = [{Author: "Thomas Mann",
                Title: "Death in Venice",
                Description: "One of the most famous literary works of the twentieth century, this novella embodies " +
                    "themes that preoccupied Thomas Mann in much of his work: " +
                    "the duality of art and life, the presence of death and disintegration in the midst of existence, " +
                    "the connection between love and suffering and the conflict between the artist and his inner self."
            },
            {Author: "James Joyce",
                Title: "A portrait of the artist as a young man",
                Description: "This work displays an unusually perceptive view of British society in the early 20th century. " +
                    "It is a social comedy set in Florence, Italy, and Surrey, England. " +
                    "Its heroine, Lucy Honeychurch, struggling against straitlaced Victorian attitudes of arrogance, narrow-mindedness and snobbery, falls in love - while on holiday in Italy - with the socially unsuitable George Emerson."
            },
            {Author: "E. M. Forster",
                Title: "A room with a view",
                Description: "This book is a fictional re-creation of the Irish writer's own life and early environment. " +
                    "The experiences of the novel's young hero unfold in astonishingly vivid scenes that seem freshly recalled from life, " +
                    "and provide a powerful portrait of the coming of age of a young man of unusual intelligence, sensitivity and character."
            },
            {Author: "Isabel Allende",
                Title: "The house of spirits",
                Description: "Allende describes the life of three generations of a prominent family in Chile and skillfully combines with this all the main historical events of the time, up until Pinochet's dictatorship."
            },
            {Author: "Isabel Allende",
                Title: "Of love and shadows",
                Description: "The whole world of Irene Beltran, a young reporter in Chile at the time of the dictatorship, is destroyed when " +
                    "she discovers a series of killings carried out by government soldiers. " +
                    "With the help of a photographer, Francisco Leal, and risking her life, she tries to come up with evidence against the dictatorship."
            }]

        function searchbook(){
            for(i = 0; i < bookarray.length; i++){
                if ((document.getElementById("authorname").value & document.getElementById("booktitle").value) ==
                        (bookarray[i].Author & bookarray[i].Title)){
                    document.getElementById("destin").value = bookarray[i].Description
                    return bookarray[i].Description
                }
                else {
                    return "Not Found!"
                }
            }
        }

        document.getElementById("find").addEventListener("click", searchbook, false)

  • XNA - Keyboard text input

    - by Sekhat
    Okay, so basically I want to be able to retrieve keyboard text, like entering text into a text field or something. I'm only writing my game for Windows.

    I've disregarded using Guide.BeginShowKeyboardInput because it breaks the feel of a self-contained game, and the fact that the Guide always shows XBOX buttons doesn't seem right to me either. Yes, it's the easiest way, but I don't like it.

    Next I tried using System.Windows.Forms.NativeWindow. I created a class that inherited from it, passed it the game's window handle, and implemented the WndProc function to catch WM_CHAR (or WM_KEYDOWN). Though WndProc got called for other messages, WM_CHAR and WM_KEYDOWN never did. So I had to abandon that idea; besides, I was also referencing the whole of Windows Forms, which meant unnecessary memory footprint bloat.

    So my last idea was to create a thread-level, low-level keyboard hook. This has been the most successful so far. I get the WM_KEYDOWN message (I haven't tried WM_CHAR yet), translate the virtual key code to a char with the Win32 function MapVirtualKey, and I get my text! (I'm just printing with Debug.Write at the moment.) A couple of problems though: it's as if I have caps lock on with an unresponsive shift key (of course there is only one virtual key code per key, so translating it has only one output), and it adds overhead as it attaches itself to the Windows hook list and isn't as fast as I'd like it to be - though the slowness could be down to Debug.Write.

    Has anyone else approached this and solved it without having to resort to an on-screen keyboard? Or does anyone have further ideas for me to try? Thanks in advance.

    Note: This is cross-posted from the XNA Creators forums, so if I get an answer there I'll post it here and vice versa.

    Question asked by Jimmy: Maybe I'm not understanding the question, but why can't you use the XNA Keyboard and KeyboardState classes?

    My comment: It's because though you can read key states, you can't get access to typed text as and how it is typed by the user. So let me clarify further: I want to implement reading text input from the user as if they were typing into a textbox in Windows. The Keyboard and KeyboardState classes get the states of all keys, but I'd have to map each key and combination to its character representation. This falls over when the user doesn't use the same keyboard language as I do, especially with symbols (my double quote is Shift + 2, while American keyboards have theirs somewhere near the Return key).
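
    One approach the XNA community has used on Windows (a hedged sketch, not from the post; it assumes a 32-bit process - on x64 you would declare SetWindowLongPtr instead) is to subclass the game window's WndProc via SetWindowLong and listen for WM_CHAR, which arrives with shift, caps lock and keyboard layout already applied:

        using System;
        using System.Runtime.InteropServices;

        static class TextInputHook
        {
            private delegate IntPtr WndProcDelegate(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);

            private const int GWL_WNDPROC = -4;
            private const uint WM_CHAR = 0x0102;

            [DllImport("user32.dll", SetLastError = true)]
            private static extern IntPtr SetWindowLong(IntPtr hWnd, int nIndex, IntPtr dwNewLong);

            [DllImport("user32.dll")]
            private static extern IntPtr CallWindowProc(IntPtr lpPrevWndFunc, IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);

            private static IntPtr _previousWndProc;
            private static WndProcDelegate _hook; // kept in a field so the GC doesn't collect the delegate

            public static event Action<char> CharEntered;

            // Call once, e.g. TextInputHook.Install(Game.Window.Handle)
            public static void Install(IntPtr windowHandle)
            {
                _hook = HookProc;
                _previousWndProc = SetWindowLong(windowHandle, GWL_WNDPROC,
                    Marshal.GetFunctionPointerForDelegate(_hook));
            }

            private static IntPtr HookProc(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam)
            {
                // WM_CHAR's wParam carries the translated UTF-16 code unit.
                if (msg == WM_CHAR && CharEntered != null)
                    CharEntered((char)wParam.ToInt32());

                // Forward everything so the game keeps running normally.
                return CallWindowProc(_previousWndProc, hWnd, msg, wParam, lParam);
            }
        }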

  • Nested Sortable JQuery list doesn't work in IE while it does in FF

    - by Ben Fransen
    Hello everybody. While I use this site quite often as a resource for jQuery-related problems, I can't seem to find an answer this time. So here is my first post.

    During the day at work I'm developing an information system for generating MS Word documents. One of the modules I'm developing lets the user define a default chapter selection for the table of contents and select texts for the given chapters. A user can submit new chapters and, if necessary, add them as child chapters to a parent, flag them as 'adopt in TOC' and link a default text (from another module) to the chapter.

    My chapter list is retrieved recursively from a MySQL table and could look something like this:

        <ul class="sortableChapterlist">
          <li>1</li>
          <li>2
            <ul class="sortableChapterlist">
              <li>2.1</li>
              <li>2.2</li>
              <li>2.3
                <ul class="sortableChapterlist">
                  <li>2.3.1</li>
                  <li>2.3.2</li>
                </ul>
              </li>
            </ul>
          </li>
        </ul>

    In Firefox the code works like a charm, but IE (7) doesn't seem to like it that much. I'm only able to correctly drag around the main chapters. When attempting to drag a subchapter, no matter its level, the corresponding main chapter lifts up with some children, never all.

    This is the jQuery code I'm using to accomplish the task:

        $(function(){
            $(".sortableChapterlist").sortable({
                opacity: 0.7,
                helper: 'clone',
                cursor: 'move'
            });

            $(".sortableChapterlist").selectable();
            $(".sortableChapterlist").disableSelection();
        });

    Does anybody have some ideas about this? I'm guessing IE kind of falls over the multiple references to the "sortableChapterlist" class in combination with jQuery trying to handle the draggable/sortable.

    Kind regards from Holland, Ben Fransen

  • Calculate year for end date: PostgreSQL

    - by Dave Jarvis
    Background

    Users can pick dates as shown in the following screen shot. Any starting month/day and ending month/day combination is valid, such as:

    - Mar 22 to Jun 22
    - Dec 1 to Feb 28

    The second combination is difficult (I call it the "tricky date scenario") because the ending month/day falls earlier in the calendar than the starting month/day, so the end date belongs to the following year. That is to say, for the year 1900 (also shown selected in the screen shot above), the full dates would be:

        Dec 22, 1900 to Feb 28, 1901
        Dec 22, 1901 to Feb 28, 1902
        ...
        Dec 22, 2007 to Feb 28, 2008
        Dec 22, 2008 to Feb 28, 2009

    Problem

    Writing a SQL statement that selects values from a table with dates that fall between the start month/day and end month/day, regardless of how the start and end days are selected. In other words, this is a year-wrapping problem.

    Inputs

    The query receives as parameters:

    - Year1, Year2: The full range of years, independent of the month/day combination.
    - Month1, Day1: The starting day within the year to gather data.
    - Month2, Day2: The ending day within the year (or the next year) to gather data.

    Previous Attempt

    Consider the following MySQL code (that worked):

        end_year = start_year + greatest( -1 * sign(
            datediff(
                date( concat_ws('-', year, end_month, end_day ) ),
                date( concat_ws('-', year, start_month, start_day ) )
            ) ), 0 )

    How it works, with respect to the tricky date scenario:

    1. Create two dates in the current year. The first date is Dec 22, 1900 and the second date is Feb 28, 1900.
    2. Count the difference, in days, between the two dates.
    3. If the result is negative, the year for the second date must be incremented by 1. In this case: add 1 to the current year and create a new end date, Feb 28, 1901. Check to see if the date range for the data falls between the start and calculated end date.
    4. If the result is positive, the dates were provided in chronological order and nothing special needs to be done.

    This worked in MySQL because the difference between the dates can be positive or negative. In PostgreSQL, the equivalent functionality always returns a positive number, regardless of the dates' relative chronological order.

    Question

    How should the following (broken) code be rewritten for PostgreSQL to take into consideration the relative chronological order of the starting and ending month/day pairs (with respect to an annual temporal displacement)?

        SELECT m.amount
        FROM measurement m
        WHERE
          (extract(MONTH FROM m.taken) >= month1 AND
           extract(DAY FROM m.taken) >= day1) AND
          (extract(MONTH FROM m.taken) <= month2 AND
           extract(DAY FROM m.taken) <= day2)

    Any thoughts, comments, or questions? (The dates are pre-parsed into MM/DD format in PHP. My preference is for a pure PostgreSQL solution, but I am open to suggestions on what might make the problem simpler using PHP.)

    Versions

    PostgreSQL 8.4.4 and PHP 5.2.10
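
    The wrap-around test itself is language-agnostic, and may be easier to see outside SQL first. A sketch in C# (an illustration of the logic, not the PostgreSQL answer): when the window doesn't wrap, the date must satisfy both bounds; when it does wrap, satisfying either bound is enough. The same AND/OR split can then be written into the WHERE clause.

        // Does (month, day) fall inside the selected annual window,
        // including windows that wrap past December (e.g. Dec 1 - Feb 28)?
        static bool InWindow(int month, int day, int m1, int d1, int m2, int d2)
        {
            int md    = month * 100 + day;  // encode Dec 22 as 1222
            int start = m1 * 100 + d1;
            int end   = m2 * 100 + d2;

            return start <= end
                ? md >= start && md <= end   // normal window, e.g. Mar 22 - Jun 22
                : md >= start || md <= end;  // wrapping window, e.g. Dec 1 - Feb 28
        }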

  • What am I missing about WCF?

    - by Bigtoe
    I've been developing in MS technologies for longer than I care to remember at this stage. When .NET arrived on the scene I thought they hit the nail on the head, and with each iteration and version I thought their technologies were getting stronger and stronger, and I looked forward to each release.

    However, having had to work with WCF for the last year, I must say I found the technology very difficult to work with and understand. Initially it's quite appealing, but when you start getting into the guts of it, configuration is a nightmare: having to override behaviours for message sizes and the number of objects contained in a message, the complexity of the security model, disposing of proxies when faulted, and finally moving back to defining interfaces in code rather than in XML. It just does not work out of the box, and I think it should. We found all of the above issues either while testing ourselves or when our products were out on site.

    I do understand the rationale behind it all, but surely they could have come up with a simpler implementation mechanism. I suppose what I'm asking is:

    - Am I looking at WCF the wrong way?
    - What strengths does it have over the alternatives?
    - Under what circumstances should I choose to use WCF?

    OK folks, sorry about the delay in responding; work does have a nasty habit of getting in the way sometimes :)

    Some clarifications. My main pain points with WCF fall into the following areas:

    - While it does work out of the box, you're left with some major surprises under the hood. As pointed out above, basic things are restricted until they are overridden: the size of string that can be passed can't be over 8K, the number of objects that can be passed in a single message is restricted, and proxies don't automatically recover from failures.
    - The amount of configuration, while a good thing, can be difficult to understand - especially what to use under which circumstances when deploying software on site with different security requirements etc. Speaking of configuration, we've had to hide a lot of ours in a back-end database because security and network people on site were trying to change things in configuration files without understanding them.
    - The configuration of the interfaces is kept in code rather than moving to explicitly defined interfaces in XML, which could be published and consumed by almost anything. I know we can export the XML from the assembly, but it's full of rubbish and certain code generators choke on it.

    I know the world moves on. I've moved on a number of times over the last (ahem) 22 years I've been developing, and I am actively using WCF, so don't get me wrong: I do understand what it's for and where it's heading. I just think there should be simpler configuration/deployment options available, easier set-up and better management for configuration (a SQL config provider maybe, rather than just the web.config/app.config files).

    OK, back to the daily grind. Thanks for all your replies so far. Kind regards, Noel
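
    For readers tripping over the same defaults: the 8K string and the object-count cap mentioned above correspond to real quotas - readerQuotas maxStringContentLength (default 8192) on the binding, and maxItemsInObjectGraph on the serializer - which have to be raised explicitly. A hedged sketch of the service-side config (the binding/behavior names and values are illustrative):

        <system.serviceModel>
          <bindings>
            <basicHttpBinding>
              <binding name="roomier" maxReceivedMessageSize="10485760">
                <readerQuotas maxStringContentLength="10485760"
                              maxArrayLength="10485760" />
              </binding>
            </basicHttpBinding>
          </bindings>
          <behaviors>
            <serviceBehaviors>
              <behavior name="bigGraphs">
                <dataContractSerializer maxItemsInObjectGraph="1000000" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
        </system.serviceModel>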

  • Non RBAC User Roles and Permissions System: a role with properties

    - by micha12
    We are currently designing a user roles and permissions system in our web application (ASP.NET), and it seems that we have several cases that do not fit within classical role-based access control (RBAC). I will post several questions, each devoted to a particular case. This is my second question (the first question is here: http://stackoverflow.com/questions/2839797/non-rbac-user-roles-and-permissions-system-checking-the-users-city).

    We have the following case: we need to implement a Manager role in our web application. However, a Manager can belong to one or several companies (within a big group of companies for which we are creating this web app). Say, there can be "Manager of companies A and B", "Manager of company C", etc.

    Depending on the companies that the Manager belongs to, he has access to certain operations: for example, he can communicate only with clients of those companies that he belongs to. That is, "Manager of companies A and B" can only have contact with clients of companies A and B, and not with those of company C. He can also view clients' details pages for companies A and B and not for C, etc.

    It seems that this case falls within RBAC. However, it does not really: we would need to create a ManagerRole class that has a Companies property - that is, this would not be just a role as a collection of permissions (like in classical RBAC), but a role with properties!

    This was just one example of a role having properties. There will be others: for example, an Administrator role that will also belong to a number of companies and will also have other custom properties. This means that we will have a hierarchy of role classes:

        class Role                      - base class
        class ManagerRole : Role
            List Companies
        class AdministratorRole : Role
            List Companies
            Other properties

    We investigated pure RBAC and its implementation in several systems, and found no systems featuring a hierarchy of roles, each having custom properties. In RBAC, roles are just collections of permissions.

    We could model our cases using permissions with properties, like ManagerPermission and AdministratorPermission, but this has a lot of drawbacks, the main one being that we would not be able to assign a role like "Manager of Companies A and B" to a user directly, but would have to create a role containing a ManagerPermission for companies A and B... Moreover, a "Manager" seems to be a "role" (a position in the company) rather than a "permission" from the linguistic point of view.

    Would be grateful for any ideas on this subject, as well as any experience in this field! Thank you.
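
    A minimal C# sketch of the hierarchy sketched above (the Permission and Company types are assumed for illustration):

        public abstract class Role
        {
            // Classical RBAC part: a role is still a bag of permissions.
            public List<Permission> Permissions = new List<Permission>();
        }

        public class ManagerRole : Role
        {
            // The non-RBAC part: the role is parameterized by companies.
            public List<Company> Companies = new List<Company>();

            public bool CanContactClientsOf(Company company)
            {
                return Companies.Contains(company);
            }
        }

        public class AdministratorRole : Role
        {
            public List<Company> Companies = new List<Company>();
            // ... other custom properties
        }

    With this shape, "Manager of companies A and B" is simply a ManagerRole instance whose Companies list holds A and B, and it can be assigned to a user directly.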

  • ASP.NET Client to Server communication

    - by Nelson
    Can you help me make sense of all the different ways to communicate from browser to server in ASP.NET? I have made this a community wiki, so feel free to edit my post to improve it.

    Specifically, I'm trying to understand in which scenario to use each one by listing how each works. I'm a little fuzzy on UpdatePanel vs CallBack (with ViewState): I know UpdatePanel always returns HTML while CallBack can return JSON. Any other major differences? ...and CallBack (without ViewState) vs WebMethod: CallBack goes through most of the Page lifecycle, WebMethod doesn't. Any other major differences?

    IHttpHandler
    - Custom handler for anything (page, image, etc.)
    - Only does what you tell it to do (light server processing)
    - Page is an implementation of IHttpHandler
    - If you don't need what Page provides, create a custom IHttpHandler
    - If you are using Page but overriding Render() and not generating HTML, you can probably do it with a custom IHttpHandler (e.g. writing binary data such as images)
    - By default can use the .axd or .ashx file extensions - both are functionally similar
    - .ashx doesn't have any built-in endpoints, so it's preferred by convention

    Regular PostBack (System.Web.UI.Page : IHttpHandler)
    - Inherits Page
    - Full PostBack, including ViewState and HTML control values (heavy traffic)
    - Full Page lifecycle (heavy server processing)
    - No JavaScript required
    - Webpage flickers/scrolls since everything is reloaded in the browser
    - Returns full page HTML (heavy traffic)

    UpdatePanel (Control)
    - Control inside Page
    - Full PostBack, including ViewState and HTML control values (heavy traffic)
    - Full Page lifecycle (heavy server processing)
    - Controls outside the UpdatePanel do Render(NullTextWriter)
    - Must use ScriptManager
    - If there is no client-side JavaScript, it can fall back to a regular PostBack with no JavaScript (?)
    - No flicker/scroll since it's an async call, unless it falls back to a regular postback
    - Can be used with master pages and user controls
    - Has built-in support for a progress bar
    - Returns HTML for controls inside the UpdatePanel (medium traffic)

    Client CallBack (Page, System.Web.UI.ICallbackEventHandler)
    - Inherits Page
    - Most of the Page lifecycle (heavy server processing)
    - Takes only data you specify (light traffic) and optionally ViewState (?) (medium traffic)
    - Client must support JavaScript and use ScriptManager
    - No flicker/scroll since it's an async call
    - Can be used with master pages and user controls
    - Returns only data you specify in the format you specify (e.g. JSON, XML...) (?) (light traffic)

    WebMethod
    - Class implements System.Web.Services.WebService
    - HttpContext available through this.Context
    - Takes only data you specify (light traffic)
    - Server only runs the called method (light server processing)
    - Client must support JavaScript
    - No flicker/scroll since it's an async call
    - Can be used with master pages and user controls
    - Returns only data you specify, typically JSON (light traffic)
    - Can create an instance of a server control to render HTML and send it back as a string, but events, paging in GridView, etc. won't work

    PageMethods
    - Essentially a WebMethod contained in the Page class, so most of WebMethod's bullets apply
    - Method must be public static, therefore no Page instance is accessible
    - HttpContext available through HttpContext.Current
    - Accessed directly by URL Page.aspx/MethodName (e.g. with XMLHttpRequest directly or with a library such as jQuery)
    - Setting the ScriptManager property EnablePageMethods="True" generates a JavaScript proxy for each WebMethod
    - Cannot be used directly with master pages and user controls

    Any others?
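
    A minimal sketch of the PageMethods pattern from the last section above (the method name and return value are illustrative):

        // In the page's code-behind: page methods must be public static,
        // so there is no Page instance - use HttpContext.Current if needed.
        [System.Web.Services.WebMethod]
        public static string GetServerTime()
        {
            return DateTime.Now.ToString("HH:mm:ss");
        }

    With <asp:ScriptManager runat="server" EnablePageMethods="true" /> on the page, the generated client proxy is called as PageMethods.GetServerTime(onSuccess), or the method can be posted to directly at Page.aspx/GetServerTime.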

  • Binding CoreData Managed Object to NSTextFieldCell subclass

    - by ndg
    I have an NSTableView which has its first column set to contain a custom NSTextFieldCell. My custom NSTextFieldCell needs to allow the user to edit a "desc" property within my managed object, but also to display an "info" string that it contains (which is not editable). To achieve this, I followed this tutorial. In a nutshell, the tutorial suggests editing your managed object's generated subclass to create and pass a dictionary of its contents to your NSTableColumn via bindings.

    This works well for read-only NSCell implementations, but I'm looking to subclass NSTextFieldCell to allow the user to edit the "desc" property of my managed object. To do this, I followed one of the article's comments, which suggests subclassing NSFormatter to explicitly state which managed object property you would like the NSTextFieldCell to edit. Here's the suggested implementation:

        @implementation TRTableDescFormatter

        - (BOOL)getObjectValue:(id *)anObject forString:(NSString *)string
              errorDescription:(NSString **)error
        {
            if (anObject != nil){
                *anObject = [NSDictionary dictionaryWithObject:string forKey:@"desc"];
                return YES;
            }
            return NO;
        }

        - (NSString *)stringForObjectValue:(id)anObject
        {
            if (![anObject isKindOfClass:[NSDictionary class]])
                return nil;
            return [anObject valueForKey:@"desc"];
        }

        - (NSAttributedString*)attributedStringForObjectValue:(id)anObject
                                        withDefaultAttributes:(NSDictionary *)attrs
        {
            if (![anObject isKindOfClass:[NSDictionary class]])
                return nil;
            NSAttributedString *anAttributedString = [[NSAttributedString alloc]
                initWithString:[anObject valueForKey:@"desc"]];
            return anAttributedString;
        }

        @end

    I assign the NSFormatter subclass to my cell in my NSTextFieldCell subclass, like so:

        - (void)awakeFromNib
        {
            TRTableDescFormatter *formatter = [[[TRTableDescFormatter alloc] init] autorelease];
            [self setFormatter:formatter];
        }

    This seems to work, but falls down when editing multiple rows. The behaviour I'm seeing is that editing a row will work as expected until you try to edit another row. Upon editing another row, all previously edited rows have their "desc" value set to the value of the currently selected row.

    I've been doing a lot of reading on this subject and would really like to get to the bottom of this. What's more frustrating is that my NSTextFieldCell is rendering exactly how I would like it to. This editing issue is my last obstacle! If anyone can help, that would be greatly appreciated.

  • Is this scenario in compliance with GPLv3?

    - by Sean Kinsey
    For argument's sake, say that we create a web application that depends on a GPLv3-licensed component, let's say Ext JS. Based on Section 0 of the license, the common notion is that the entire web application (the client-side JavaScript) falls under the definition of a covered work:

        A "covered work" means either the unmodified Program or a work based on the Program.

    and that it will therefore have to be distributed under the same license.

    Ok, so here comes the fun part. This is a short 'program' that is based on Ext JS:

        var myPanel = new Ext.Panel();

    The question that arises is: have I now violated the GPL by not including the source of Ext JS and its license?

    Ok, so let's take another example:

        <!doctype html>
        <html>
          <head>
            <title>my title</title>
            <script type="text/javascript"
                    src="http://extjs.cachefly.net/ext-3.2.1/ext-all.js">
            </script>
            <link rel="stylesheet" type="text/css"
                  href="http://extjs.cachefly.net/ext-3.2.1/resources/css/ext-all.css" />
            <script type="text/javascript">
              var myPanel = new Ext.Panel();
            </script>
          </head>
          <body>
          </body>
        </html>

    Have I now violated the terms of the GPL? The code conveyed by me to you is in a non-functional state - it has to be combined with the actual source of Ext JS, which you (your browser) will have to retrieve from a source made public by someone else to be usable.

    Now, if the answer to the above is no, how does me conveying this code in visible form differ from the 'invisible' form conveyed by my web server?

    As a side note, a very similar thing is done on Linux with many projects that depend on less permissive licenses - the user has to retrieve these on his own and make them available for the primary lib/executable. How is this not the same, if the user is informed beforehand that he (the browser) will have to retrieve the needed resources from a different source?

    Just to make it clear: I'm pro FLOSS, and I have also published a number of projects licensed under more permissive licenses. The reason I'm asking this is that I still haven't found anyone offering a definitive answer to it.

  • Create Duplicate Records on SELECT for Calendar Date Range

    - by peterallcdn
    Hey all, I've built a pretty shnazzy calendar system, but there is one tweak that I need to make before I'm completely happy with it.

    My calendar has three tables:

    - calevents - The calendared event.
    - caldates - The occurrences and date range of each occurrence for each event.
    - calcats - The categories that can be applied to an event.

    The short: For each calevent, there can be many caldates, one for each occurrence of the calevent. So a calevent that repeats weekly and spans 3 days might have caldates like this:

        date_id  date_eid  date_start  date_end
        2        37        2010-06-21  2010-06-23
        3        37        2010-06-28  2010-06-30
        7        37        2010-07-05  2010-07-07
        9        37        2010-07-12  2010-07-14

    What I want to do, when selecting all the caldates for a specified month such as 2010-06, is to return not just the two June records above, but instead a record for each date in the range of date_start and date_end for each caldate. So if I searched for 2010-06, I would get:

        date_id  date_eid  date_start  date_end    date_day
        2        37        2010-06-21  2010-06-23  2010-06-21
        2        37        2010-06-21  2010-06-23  2010-06-22
        2        37        2010-06-21  2010-06-23  2010-06-23
        3        37        2010-06-28  2010-06-30  2010-06-28
        3        37        2010-06-28  2010-06-30  2010-06-29
        3        37        2010-06-28  2010-06-30  2010-06-30

    The long: The reason I want to do this is so that when displaying a list of events (calevents) for a specified month, an occurrence (caldates) of that event will be displayed for EACH of the days it spans.

    I could do this with PHP by looping through each day of the current month and displaying a copy of each caldate if the month day falls between date_start and date_end. But doing it this way would prevent me from using record pagination if needed. For example, if for a specified month the following caldates were returned:

        date_id  date_eid  date_start  date_end
        2        37        2010-06-21  2010-06-27
        94       53        2010-06-09  2010-07-08

    record pagination would see these as only 2 records ("rows"), while looping through them with PHP would generate 29 "rows". So I figure that if I use MySQL to create each row instead of PHP, I can achieve the same thing AND still be able to use pagination if a month has a lot of events/dates.

    As far as performance goes, I'm not sure which option is more efficient. Both would send the same amount of info to the browser, so it's really only the work required to generate the info that matters.

    My current query, which fetches all the occurrences for a specified month and - to make things just a little more complicated - joins them with their event and category, looks like this:

        $sql_to_execute = "
            SELECT
                date_id, date_eid, date_start, date_end,
                event_id, event_title, event_category, event_private, event_location,
                SUBSTRING_INDEX(event_detailsstripped, ' ', 40) AS event_detailsstripped,
                event_time, event_starttime, event_endtime, event_active,
                cat_colour
            FROM
                ( caldates LEFT JOIN calevents ON caldates.date_eid = calevents.event_id )
                LEFT JOIN calcats ON calevents.event_category = calcats.cat_id
            WHERE
                date_start <= '".mysql_real_escape_string($dbi_list_end_date)."'
                AND date_end >= '".mysql_real_escape_string($dbi_list_start_date)."'
                ".$dbi_category."
            ORDER BY
                date_start ASC
        ";

    Any help or advice would be greatly appreciated! Thanks, Peter

  • HttpPost request unsuccessful

    - by The Thom
    I have written a web service and am now writing a tester to perform integration testing from the outside. I am writing my tester using Apache HttpClient 4.3. Based on the code here: http://hc.apache.org/httpcomponents-client-4.3.x/quickstart.html and here: http://hc.apache.org/httpcomponents-client-4.3.x/tutorial/html/fundamentals.html#d5e186 I have written the following code:

        Map<String, String> parms = new HashMap<>();
        parms.put(AirController.KEY_VALUE, json);
        postUrl(SERVLET, parms);

        ...

        protected String postUrl(String servletName, Map<String, String> parms) throws AirException {
            String url = rootUrl + servletName;
            HttpPost post = new HttpPost(url);
            List<NameValuePair> nvps = new ArrayList<NameValuePair>();
            for(Map.Entry<String, String> entry : parms.entrySet()){
                BasicNameValuePair parm = new BasicNameValuePair(entry.getKey(), entry.getValue());
                nvps.add(parm);
            }
            try {
                post.setEntity(new UrlEncodedFormEntity(nvps));
            } catch (UnsupportedEncodingException use) {
                String msg = "Invalid parameters:" + parms;
                throw new AirException(msg, use);
            }
            CloseableHttpResponse response;
            try {
                response = httpclient.execute(post);
            } catch (IOException ioe) {
                throw new AirException(ioe);
            }
            String result;
            if(HttpStatus.SC_OK == response.getStatusLine().getStatusCode()){
                result = processResponse(response);
            }
            else {
                String msg = MessageFormat.format("Invalid status code {0} received from query {1}.",
                    new Object[]{response.getStatusLine().getStatusCode(), url});
                throw new AirException(msg);
            }
            return result;
        }

    This code successfully reaches my servlet. In my servlet, I have (using Spring's AbstractController):

        protected ModelAndView post(HttpServletRequest request, HttpServletResponse response) {
            String json = String.valueOf(request.getParameter(KEY_VALUE));
            if(json.equals("null")){
                log.info("Received null value.");
                response.setStatus(406);
                return null;
            }

    And this code always falls into the null-parameter branch and returns a 406. I'm sure I'm missing something simple, but I can't see what it is.

  • Using IOperationBehavior to supply a WCF parameter

    - by Chris Kemp
    This is my first step into the world of Stack Overflow, so apologies if I cock anything up.

    I'm trying to create a WCF operation which has a parameter that is not exposed to the outside world, but is instead automatically passed into the function. So the world sees this:

        int Add(int a, int b)

    But it is implemented as:

        int Add(object context, int a, int b)

    Then the context gets supplied by the system at run time. The example I'm working with is completely artificial, but it mimics something that I'm looking into in a real-world scenario.

    I'm able to get close, but not quite the whole way there. First off, I created a simple method and wrote an application to confirm it works. It does: it returns a + b and writes the context as a string to my debug output. Yay.

        [OperationContract]
        int Add(object context, int a, int b);

    I then wrote the following code:

        public class SupplyContextAttribute : Attribute, IOperationBehavior
        {
            public void Validate(OperationDescription operationDescription)
            {
                if (!operationDescription.Messages.Any(m => m.Body.Parts.First().Name == "context"))
                    throw new FaultException("Parameter 'context' is missing.");
            }

            public void ApplyDispatchBehavior(OperationDescription operationDescription, DispatchOperation dispatchOperation)
            {
                dispatchOperation.Invoker = new SupplyContextInvoker(dispatchOperation.Invoker);
            }

            public void ApplyClientBehavior(OperationDescription operationDescription, ClientOperation clientOperation)
            {
            }

            public void AddBindingParameters(OperationDescription operationDescription, BindingParameterCollection bindingParameters)
            {
                // Remove the 'context' parameter from the inbound message
                operationDescription.Messages[0].Body.Parts.RemoveAt(0);
            }
        }

        public class SupplyContextInvoker : IOperationInvoker
        {
            readonly IOperationInvoker _invoker;

            public SupplyContextInvoker(IOperationInvoker invoker)
            {
                _invoker = invoker;
            }

            public object[] AllocateInputs()
            {
                return _invoker.AllocateInputs().Skip(1).ToArray();
            }

            private object[] IntroduceContext(object[] inputs)
            {
                return new[] { "MyContext" }.Concat(inputs).ToArray();
            }

            public object Invoke(object instance, object[] inputs, out object[] outputs)
            {
                return _invoker.Invoke(instance, IntroduceContext(inputs), out outputs);
            }

            public IAsyncResult InvokeBegin(object instance, object[] inputs, AsyncCallback callback, object state)
            {
                return _invoker.InvokeBegin(instance, IntroduceContext(inputs), callback, state);
            }

            public object InvokeEnd(object instance, out object[] outputs, IAsyncResult result)
            {
                return _invoker.InvokeEnd(instance, out outputs, result);
            }

            public bool IsSynchronous
            {
                get { return _invoker.IsSynchronous; }
            }
        }

    And my WCF operation now looks like this:

        [OperationContract, SupplyContext]
        int Amend(object context, int a, int b);

    My updated references no longer show the 'context' parameter, which is exactly what I want. The trouble is that whenever I run the code, it gets past AllocateInputs and then throws an "Index was outside the bounds of the array" error somewhere in the WCF guts.

    I've tried other things, and I find that I can successfully change the type of the parameter and rename it and have my code work. But the moment I remove the parameter, it falls over. Can anyone give me some idea of how to get this to work (or whether it can be done at all)?

  • Objective-C memory management issue

    - by Toby Wilson
    I've created a graphing application that calls a web service. The user can zoom and move around the graph, and the program occasionally decides to call the web service for more data accordingly. This is achieved by the following process:

    The graph has a render loop which constantly renders the graph, and some decision logic which adds web service call information to a stack. A separate thread takes the most recent web service call information from the stack and uses it to make the web service call. The other objects on the stack get binned. The idea of this is to reduce the number of web service calls to only those appropriate, and only one at a time.

    Right, with the long story out of the way (for which I apologise), here is my memory management problem:

    The graph has persistent (and suitably locked) NSDate* objects for the currently displayed start and end times of the graph. These are passed into the initialisers for my web service request objects. The web service call objects then retain the dates. After the web service calls have been made (or binned if they were out of date), they release the NSDate*. The graph itself releases and reallocates new NSDates* on the 'touches ended' event.

    If there is only one web service call object on the stack when removeAllObjects is called, EXC_BAD_ACCESS occurs in the web service call object's deallocation method when it attempts to release the date objects (even though they appear to exist and are in scope in the debugger). If, however, I comment out the release messages in the destructor, no memory leak occurs when one object on the stack is released, but memory leaks occur if there is more than one object on the stack.

    I have absolutely no idea what is going wrong. It doesn't make a difference what storage semantics I use for the web service call objects' dates, as they are assigned in the initialiser and then only read (so for correctness' sake they are set to readonly). It also doesn't seem to make a difference whether I retain or copy the dates in the initialiser (though anything else obviously falls out of scope or is unwantedly released elsewhere and causes a crash).

    I'm sorry this explanation is long-winded; I hope it's sufficiently clear, but I'm not gambling on that either, I'm afraid. Major big thanks to anyone who can help, or even suggest anything I may have missed.

  • Identifying if a user is in the local administrators group

    - by Adam Driscoll
    My problem: I'm using P/Invoked Windows API functions to verify whether a user is part of the local administrators group. I'm using GetCurrentProcess, OpenProcessToken, GetTokenInformation and LookupAccountSid to verify that the user is a local admin. GetTokenInformation returns a TOKEN_GROUPS struct with an array of SID_AND_ATTRIBUTES structs. I iterate over the collection and compare the account names returned by LookupAccountSid.

    My problem is that locally (or, more generally, on our in-house domain) this works as expected: BUILTIN\Administrators is found within the group membership of the current process token and my method returns true. On another developer's domain, the function returns false. LookupAccountSid works for the first 2 iterations of the TOKEN_GROUPS struct, returning None and Everyone, and then craps out complaining that "A parameter is incorrect."

    What would cause only two groups to work correctly? The TOKEN_GROUPS struct indicates that there are 14 groups. I'm assuming it's the SID that is invalid. Everything that I have P/Invoked I have taken from an example on the PInvoke website. The only difference is that with LookupAccountSid I have changed the Sid parameter from a byte[] to an IntPtr, because SID_AND_ATTRIBUTES is also defined with an IntPtr. Is this OK, given that LookupAccountSid is defined with a PSID?

    LookupAccountSid P/Invoke:

        [DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true)]
        static extern bool LookupAccountSid(
            string lpSystemName,
            IntPtr Sid,
            StringBuilder lpName,
            ref uint cchName,
            StringBuilder ReferencedDomainName,
            ref uint cchReferencedDomainName,
            out SID_NAME_USE peUse);

    Where the code falls over:

        for (int i = 0; i < usize; i++)
        {
            accountCount = 0;
            domainCount = 0;

            // Get sizes
            LookupAccountSid(null, tokenGroups.Groups[i].SID,
                null, ref accountCount,
                null, ref domainCount, out snu);

            accountName2.EnsureCapacity((int)accountCount);
            domainName.EnsureCapacity((int)domainCount);

            if (!LookupAccountSid(null, tokenGroups.Groups[i].SID,
                accountName2, ref accountCount,
                domainName, ref domainCount, out snu))
            {
                // Finds its way here after 2 iterations -
                // but only in a different developer's domain
                var error = Marshal.GetLastWin32Error();
                _log.InfoFormat("Failed to look up SID's account name. {0}",
                    new Win32Exception(error).Message);
                continue;
            }

    If more code is needed, let me know. Any help would be greatly appreciated.
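
    Worth noting as a cross-check (not from the original post): if the only goal is "is the current user in BUILTIN\Administrators", the framework can answer without walking SIDs by hand, and it compares against the well-known RID rather than localized or domain-specific group names. A minimal sketch:

        using System.Security.Principal;

        static bool IsLocalAdmin()
        {
            WindowsPrincipal principal = new WindowsPrincipal(WindowsIdentity.GetCurrent());
            return principal.IsInRole(WindowsBuiltInRole.Administrator);
        }

    One caveat: under UAC (Vista onward), a non-elevated process carries a filtered token, so this returns false until the process is elevated - the same token the manual GetTokenInformation walk would see.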

  • Mac OS X 10.8 VPN Server: Bypass VPN for LAN traffic (routing LAN traffic to secondary connection)

    - by Dan Robson
    I have somewhat of an odd setup for a VPN server with OS X Mountain Lion. It's essentially being used as a bridge to bypass my company's firewall to our extranet connection - certain things our team needs to do require unfettered access to the outside, and changing IT policies to allow traffic through the main firewall is just not an option.

    The extranet connection is provided through a Wireless-N router (let's call it Wi-Fi X). My Mac Mini server is configured with the connection to this router as the primary connection, and thus has unfettered access to the internet via the router. Connections to this device on the immediate subnet are possible through the LAN port, but outside the subnet things are less reliable.

    I was able to configure the VPN server to provide IP addresses to clients in the 192.168.11.150-192.168.11.200 range using both PPTP and L2TP, and I'm able to connect to the extranet through the VPN using the standard Mac OS X VPN client in System Preferences. However, unsurprisingly, a local address (let's call it internal.company.com) returns nothing.

    I tried to bypass the limitation of the VPN server by setting up routes in the VPN settings. Our company uses 13.x.x.x for all internal traffic, instead of 10.x.x.x, so the routing table looked something like this:

        IP Address     Subnet Mask     Configuration
        0.0.0.0        248.0.0.0       Private
        8.0.0.0        252.0.0.0       Private
        12.0.0.0       255.0.0.0       Private
        13.0.0.0       255.0.0.0       Public
        14.0.0.0       254.0.0.0       Private
        16.0.0.0       240.0.0.0       Private
        32.0.0.0       224.0.0.0       Private
        64.0.0.0       192.0.0.0       Private
        128.0.0.0      128.0.0.0       Private

    I was under the impression that if nothing was entered here, all traffic was routed through the VPN. With something entered, only traffic specifically marked to go through the VPN would do so, and all other traffic would be up to the client to access using its own default connection. This is why I had to specifically mark every subnet except 13.x.x.x as Private.

    My suspicion is that since I can't reach the VPN server from outside the local subnet, it's not making a connection to the main DNS server and thus can't be reached on the larger network. I'm thinking that hostnames like internal.company.com aren't kicked back to the client to resolve, because the server has no idea that the IP address falls in the public range, since I suspect (I should probably ping-test it, but I don't have access to it right now) that it can't reach the DNS server to find out anything about that hostname.

    It seems to me that all my options for resolving this boil down to the same type of solution: figure out how to reach the DNS with the secondary connection on the server. I'm thinking that if I'm able to do [something] to get my server to recognize that it should also check my local gateway (let's say Server IP == 13.100.100.50 and Gateway IP == 13.100.100.1), then from there the gateway can tell me to go find the DNS server at 13.1.1.1 and give me information about my internal network.

    I'm very confused about this path - really not sure if I'm even making sense. I thought about trying to do this client-side, but that doesn't make sense either, since that would add time to each and every client-side setup. Plus, it just seems more logical to solve it on the server - I could either get rid of my routing table altogether or keep it; I think the only difference would be that internal traffic would also go through the server - probably an unnecessary burden on it.

    Any help out there? Or am I in over my head?

    A forward proxy or transparent proxy is also an option for me, although I have no idea how to set either of those up. (I know, Google is my friend.)

  • Validation in Silverlight

    - by Timmy Kokke
    Getting started with the basics Validation in Silverlight can get very complex pretty easy. The DataGrid control is the only control that does data validation automatically, but often you want to validate your own entry form. Values a user may enter in this form can be restricted by the customer and have to fit an exact fit to a list of requirements or you just want to prevent problems when saving the data to the database. Showing a message to the user when a value is entered is pretty straight forward as I’ll show you in the following example.     This (default) Silverlight textbox is data-bound to a simple data class. It has to be bound in “Two-way” mode to be sure the source value is updated when the target value changes. The INotifyPropertyChanged interface must be implemented by the data class to get the notification system to work. When the property changes a simple check is performed and when it doesn’t match some criteria an ValidationException is thrown. The ValidatesOnExceptions binding attribute is set to True to tell the textbox it should handle the thrown ValidationException. Let’s have a look at some code now. The xaml should contain something like below. The most important part is inside the binding. In this case the Text property is bound to the “Name” property in TwoWay mode. It is also told to validate on exceptions. This property is false by default.   <StackPanel Orientation="Horizontal"> <TextBox Width="150" x:Name="Name" Text="{Binding Path=Name, Mode=TwoWay, ValidatesOnExceptions=True}"/> <TextBlock Text="Name"/> </StackPanel>   The data class in this first example is a very simplified person class with only one property: string Name. The INotifyPropertyChanged interface is implemented and the PropertyChanged event is fired when the Name property changes. When the property changes a check is performed to see if the new string is null or empty. If this is the case a ValidationException is thrown explaining that the entered value is invalid.   public class PersonData:INotifyPropertyChanged { private string _name; public string Name { get { return _name; } set { if (_name != value) { if(string.IsNullOrEmpty(value)) throw new ValidationException("Name is required"); _name = value; if (PropertyChanged != null) PropertyChanged(this, new PropertyChangedEventArgs("Name")); } } } public event PropertyChangedEventHandler PropertyChanged=delegate { }; } The last thing that has to be done is letting binding an instance of the PersonData class to the DataContext of the control. This is done in the code behind file. public partial class Demo1 : UserControl { public Demo1() { InitializeComponent(); this.DataContext = new PersonData() {Name = "Johnny Walker"}; } }   Error Summary In many cases you would have more than one entry control. A summary of errors would be nice in such case. With a few changes to the xaml an error summary, like below, can be added.           First, add a namespace to the xaml so the control can be used. Add the following line to the header of the .xaml file. xmlns:Controls="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Data.Input"   Next, add the control to the layout. To get the result as in the image showed earlier, add the control right above the StackPanel from the first example. It’s got a small margin to separate it from the textbox a little.   <Controls:ValidationSummary Margin="8"/>   The ValidationSummary control has to be notified that an ValidationException occurred. 
This can be done with a small change to the xaml too. Add NotifyOnValidationError to the binding expression. By default this value is set to false, so nothing would be notified. Set the property to true to get it to work.

<TextBox Width="150" x:Name="Name"
         Text="{Binding Name, Mode=TwoWay, ValidatesOnExceptions=True, NotifyOnValidationError=True}"/>

Data annotation

Validating data in the setter is one option, but not my personal favorite. It's the easiest way if you have a single required value you want to check, but often you want to validate more. Besides, I don't consider it best practice to write logic in setters. The way used by frameworks like WCF RIA Services is the use of attributes on the properties. Instead of throwing exceptions, you have to call the static method ValidateProperty on the Validator class. This call always stays the same for a particular property, even when you change the attributes on the property. To mark a property "Required" you can use the RequiredAttribute. This is what the Name property is going to look like:

[Required]
public string Name
{
    get { return _name; }
    set
    {
        if (_name != value)
        {
            Validator.ValidateProperty(value,
                new ValidationContext(this, null, null) { MemberName = "Name" });
            _name = value;
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs("Name"));
        }
    }
}

The ValidateProperty method takes the new value for the property and an instance of ValidationContext. The properties passed to the constructor of the ValidationContext class are very straightforward; this part is the same every time. The only thing that changes is the MemberName property of the ValidationContext, which has to hold the name of the property you want to validate. It's the same value you provide the PropertyChangedEventArgs with.

The System.ComponentModel.DataAnnotations namespace contains eight different validation attributes, including a base class to create your own. They are:

RequiredAttribute – Specifies that a value must be provided.
RangeAttribute – The provided value must fall in the specified range.
RegularExpressionAttribute – Validates whether the value matches the regular expression.
StringLengthAttribute – Checks if the number of characters in a string falls between a minimum and maximum amount.
CustomValidationAttribute – Use a custom method to validate the value.
DataTypeAttribute – Specify a data type using an enum or a custom data type.
EnumDataTypeAttribute – Makes sure the value is found in an enum.
ValidationAttribute – A base class for custom validation attributes.

All of these will ensure that a validation exception is thrown, except the DataTypeAttribute. That attribute is used to provide some additional information about the property, which you can use in your own code.

It's no problem to stack different validation attributes together. For example, when an Age is required and must fall in the range from 0 to 125:

[Required]
[Range(0,125,ErrorMessage = "Value is not a valid age")]
public int Age {

Or in one row like this, for a required Name with at least 3 characters and a maximum of 255:

[Required, StringLength(255,MinimumLength = 3)]
public string Name {

Delayed validation

Having properties marked as required can be very useful. The only downside to the technique described earlier is that you have to change the value in order to get it validated. What if you start out with an empty entry form? All fields are empty and thus won't be validated.
With this small trick you can validate at the moment the user clicks the submit button. By default, when a TwoWay-bound control loses focus, the value is updated, and when you have added validation like I've shown you earlier, the value is validated as well. To overcome this, you have to tell the binding to update explicitly by setting the UpdateSourceTrigger binding property to Explicit:

<TextBox Width="150" x:Name="NameField"
         Text="{Binding Name, Mode=TwoWay, ValidatesOnExceptions=True, NotifyOnValidationError=True, UpdateSourceTrigger=Explicit}"/>

This way the binding works in both directions, but the source is only updated, and thus validated, when you tell it to. In the code-behind you have to call the UpdateSource method on the binding expression, which you can get from the TextBox:

private void SubmitButtonClick(object sender, RoutedEventArgs e)
{
    NameField.GetBindingExpression(TextBox.TextProperty).UpdateSource();
}

Conclusion

Data validation is something you'll probably want on almost every entry form. I always thought it was hard to do, but it isn't. If you can throw an exception, you can do validation. If you want to know anything more in depth about something I talked about in this article, let me know. I might write an entire post about it.
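As a small addendum: the ValidationAttribute base class from the list above can be used to write your own rules. Here is a minimal, hypothetical sketch (the attribute name, the property it decorates, and the even-number rule are invented for illustration; depending on the framework version you may need to override a different IsValid overload):

public class EvenNumberAttribute : ValidationAttribute
{
    // Treat null as valid here; combine with [Required] to reject missing values.
    public override bool IsValid(object value)
    {
        if (value == null) return true;
        int number;
        if (!int.TryParse(value.ToString(), out number)) return false;
        return number % 2 == 0;
    }
}

It stacks just like the built-in attributes:

[Required]
[EvenNumber(ErrorMessage = "Please enter an even number")]
public int LuckyNumber { get; set; }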

    Read the article

  • Make your CHM Help Files show HTML5 and CSS3 content

    - by Rick Strahl
The HTML Help 1.0 specification, aka CHM files, is pretty old. In fact, it's practically ancient, as it was introduced in 1997 when Internet Explorer 4 was introduced. HTML Help 1.0 is basically a completely HTML-based help system that uses a Help Viewer which internally uses Internet Explorer to render the HTML Help content. Because of its use of the Internet Explorer shell for rendering there were many security issues in the past, which resulted in a locking down of the Web Browser control in Windows and also of the Help engine, which caused some unfortunate side effects. Even so, CHM continues to be a popular help format because it is very easy to produce content for it, using plain HTML, and because it works with many Windows application platforms out of the box.

While there have been various attempts to replace CHM help files, CHM still seems to be a popular choice for many applications to display their help systems. The biggest alternative these days is no system-based help at all, but links to online documentation. For Windows apps, though, it's still very common to see CHM help files, and there are still a ton of CHM help files out there and lots of tools (including our own West Wind Html Help Builder) that produce output for CHM files as well as Web output.

Image is Everything and you ain't got it!

One problem with the CHM engine is that it's stuck with an ancient Internet Explorer version for rendering. For example, if you have help content that uses HTML5 or CSS3 you might have an HTML Help topic like the following, shown here in a full Web Browser instance of Internet Explorer: The page clearly uses some CSS3 features like rounded corners and box shadows that are rendered using plain CSS3. Note that I used Internet Explorer on purpose here to demonstrate that IE9 on Windows 7 can properly render this content using some of the new features of CSS, but the same is true for all other recent versions of the major browsers (FireFox 3.1+, Safari 4.5+, WebKit 9+ etc.).

Unfortunately, if you take this nice and simple CSS3 content and run it through the HTML Help compiler to produce a CHM file, the resulting output on the same machine looks a bit less flashy: All the CSS3 styling is gone. The page display and functionality still work, but all the extra styling features are lost - even though I am running this on a Windows 7 machine that has IE9, which should be able to render these CSS features. Bummer.

Web Browser Control - perpetually stuck in IE 7 Mode

The problem is the Web Browser/Shell component in Windows. This component is and has been part of Windows for as long as Internet Explorer has been around, but the Web Browser control hasn't kept up with the latest versions of IE. In a nutshell, the control is stuck in IE7 rendering mode by default, for engine compatibility reasons. However, there is at least one way to fix this explicitly, using registry keys on a per-application basis. The key point is that you can override the IE rendering engine for a particular executable by setting one (or more) registry flags that tell the Windows Shell which version of the Internet Explorer rendering engine to load. An application that wishes to use a more recent version of Internet Explorer can then register itself during installation for the specific IE version desired, and from then on the application will use that version of the Web Browser component.
If the application is older than the specified version, it falls back to the default version (IE7 rendering).

Forcing CHM files to display with IE9 (or later) Rendering

Knowing that we can force the IE usage for a given process, it's also possible to affect the CHM rendering by setting the same keys on the executable that's hosting the CHM file. What that executable file is depends on the type of application, as there are a number of ways to launch the help engine:

hh.exe – The standalone Windows CHM Help Viewer that launches when you open a CHM from Windows Explorer. You can manually add hh.exe to the registry keys.
YourApplication.exe – If you're using .NET or any tool that internally uses the hhControl ActiveX control to launch help content, your application is the host. You should add your application's exe to the registry during application startup.
foxhhelp9.exe – If you're building a FoxPro application that uses the built-in help features, foxhhelp9.exe is used to actually host the help controls. Make sure to add this executable to the registry.

What to set

You can configure the Internet Explorer version used for an application in the registry by specifying the executable file name and a value that specifies the IE version desired. There are two different sets of keys for 32 bit and 64 bit applications.

32 bit only or 64 bit:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION
Value Key: hh.exe

32 bit on a 64 bit machine:
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION
Value Key: hh.exe

Note that it's best to always set both values, ideally when you install your application, so it works regardless of which platform you run on. The value specified is a DWORD value, and the interesting values are decimal 9000 for IE9 rendering mode depending on !DOCTYPE settings, or 9999 for IE9 standards mode always. You can use the same logic with 8000 and 8888 for IE8, and the final value of 7000 for IE7 (one has to wonder what they're going to do for version 10 to perpetuate that pattern). I think 9000 is the value you'd most likely want to use: 9000 means that IE9 will be used for rendering, but unless the right doctypes are used (XHTML and HTML5 specifically) IE will still fall back into quirks mode as needed. This should allow existing pages to continue to use the fallback engine, while new pages that have the proper HTML doctype set can take advantage of the newest features.

Here's an example of how I set the registry keys in my Tarma Installmate registry configuration: Note that I set all three values both under the Software and Wow6432Node keys so that this works regardless of where these EXEs are launched from. Even though all apps are 32 bit apps, the 64 bit key (the default one shown selected) is often used. So, once I've set the registry key for hh.exe, I can launch my CHM help file from Explorer and see the following CSS3 IE9 rendered display:

Summary

It sucks that we have to go through all these hoops to get what should be natural behavior for an application to support the latest features available on a system. But it shouldn't be a surprise - the Windows Help team (if there even is such a thing) has not been known for forward-looking technologies. It's a pretty big hassle that we have to resort to setting registry keys in order to get the Web Browser control and the internal CHM engine to render properly, but at least it's possible to make it work after all.
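If you prefer to set the keys from code rather than from an installer, a sketch like the following could run at application startup. This is a minimal sketch, assuming administrative rights for the HKEY_LOCAL_MACHINE hive; the helper name is made up, while the key paths and the 9000 value are the ones from the text above:

using Microsoft.Win32;

public static class BrowserEmulation
{
    public static void RegisterForIE9(string exeName)
    {
        const string key32 = @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION";
        const string key64 = @"HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION";

        // 9000 = IE9 rendering that still honors the page's !DOCTYPE;
        // use 9999 to force IE9 standards mode always.
        Registry.SetValue(key32, exeName, 9000, RegistryValueKind.DWord);
        Registry.SetValue(key64, exeName, 9000, RegistryValueKind.DWord);
    }
}

// Example: BrowserEmulation.RegisterForIE9("hh.exe");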
Using this technique it's possible to ship an application with a help file and allow your CHM help to display with richer CSS markup and correct rendering, using the stricter and more consistent XHTML or HTML5 doctypes. If you provide both Web help and in-application help (and why not, if you're building from a single source), you can now sidestep the issue of your customers asking: Why does my help file look so much shittier than the online help… No more!

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in HTML5, Help, Html Help Builder, Internet Explorer, Windows

    Read the article

  • A Guided Tour of Complexity

    - by JoshReuben
I just re-read Complexity – A Guided Tour by Melanie Mitchell, protégée of Douglas Hofstadter (author of "Gödel, Escher, Bach") http://www.amazon.com/Complexity-Guided-Tour-Melanie-Mitchell/dp/0199798109/ref=sr_1_1?ie=UTF8&qid=1339744329&sr=8-1 Here are some notes and links:

The field evolved from Cybernetics, General Systems Theory, and Synergetics. Some interesting transdisciplinary fields to investigate:

Chaos Theory – http://en.wikipedia.org/wiki/Chaos_theory – small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes for chaotic systems, rendering long-term prediction impossible.
System Dynamics / Cybernetics – http://en.wikipedia.org/wiki/System_Dynamics – the study of how feedback changes system behavior.
Network Theory – http://en.wikipedia.org/wiki/Network_theory – leverages Graph Theory to analyze symmetric / asymmetric relations between discrete objects.
Algebraic Topology – http://en.wikipedia.org/wiki/Algebraic_topology – leverages abstract algebra to analyze topological spaces.

There are limits to deterministic systems and to computation. Chaos Theory definitely applies to training an ANN (artificial neural network) – different weights will emerge depending upon the random selection of the training set. In recursive non-linear systems (http://en.wikipedia.org/wiki/Nonlinear_system) the output is not directly inferable from the input, e.g. the logistic map: x(t+1) = R · x(t) · (1 − x(t)). Different types of bifurcations, attractor states and oscillations may occur, e.g. the Lorenz Attractor (http://en.wikipedia.org/wiki/Lorenz_system). The Feigenbaum constants (http://en.wikipedia.org/wiki/Feigenbaum_constants) express ratios in a bifurcation diagram for a non-linear map – the convergent limit of R (the rate of period-doubling bifurcations) is 4.6692016.

Maxwell's Demon (http://en.wikipedia.org/wiki/Maxwell%27s_demon): the Second Law of Thermodynamics has only a statistical certainty – the universe (and thus information) tends towards entropy. While any computation can theoretically be done without expending energy, with finite memory the act of erasing memory is permanent and increases entropy. Life and thought are counter-examples to the universe's tendency towards entropy.

Leo Szilard and later Claude Shannon came up with the information theory of entropy (http://en.wikipedia.org/wiki/Entropy_(information_theory)), whereby Shannon entropy quantifies the expected value of a message's information in bits, in order to determine channel capacity and leverage Coding Theory (compression analysis). Ludwig Boltzmann came up with Statistical Mechanics (http://en.wikipedia.org/wiki/Statistical_mechanics), whereby our Newtonian perception of continuous reality is a probabilistic and statistical aggregate of many discrete quantum microstates. This is relevant for Quantum Information Theory (http://en.wikipedia.org/wiki/Quantum_information) and the physics of information (http://en.wikipedia.org/wiki/Physical_information).

Hilbert's Problems (http://en.wikipedia.org/wiki/Hilbert's_problems) pondered whether mathematics is complete, consistent, and decidable (the Decision Problem – http://en.wikipedia.org/wiki/Entscheidungsproblem – is there always an algorithm that can determine whether a statement is true?). Gödel's Incompleteness Theorems (http://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_theorems) proved that mathematics cannot be both complete and consistent (e.g. "This statement is not provable").
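Side note: the logistic map mentioned above is easy to experiment with in a few lines of C#. This is a minimal sketch (the parameter choice R = 4.0 and the starting values are just for illustration) showing how two nearly identical initial conditions diverge in the chaotic regime:

public static class LogisticMap
{
    // Iterates x(t+1) = R * x(t) * (1 - x(t)) for two nearby starting points.
    public static void Run()
    {
        double r = 4.0;                     // chaotic regime
        double a = 0.300000, b = 0.300001;  // tiny initial difference
        for (int t = 0; t < 30; t++)
        {
            a = r * a * (1 - a);
            b = r * b * (1 - b);
        }
        // After a few dozen iterations the two trajectories are uncorrelated.
        System.Console.WriteLine("a = {0:F6}, b = {1:F6}", a, b);
    }
}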
Turing, through the use of Turing Machines (http://en.wikipedia.org/wiki/Turing_machine – symbol processors that can prove mathematical statements) and Universal Turing Machines (http://en.wikipedia.org/wiki/Universal_Turing_machine – Turing Machines that can emulate any other Turing Machine by accepting programs as well as data as input symbols), showed that computation is limited, by demonstrating the Halting Problem (http://en.wikipedia.org/wiki/Halting_problem – it is not possible to know whether a program will complete; you cannot build an infinite loop detector).

You may be used to thinking of 1 / 2 / 3 dimensional systems, but fractal systems (http://en.wikipedia.org/wiki/Fractal) are defined by self-similarity and have non-integer Hausdorff dimensions!!! (http://en.wikipedia.org/wiki/List_of_fractals_by_Hausdorff_dimension) The fractal dimension quantifies the number of copies of a self-similar object at each level of detail, e.g. the Koch Snowflake (http://en.wikipedia.org/wiki/Koch_snowflake).

Definitions of complexity: size, Shannon entropy, Algorithmic Information Content (http://en.wikipedia.org/wiki/Algorithmic_information_theory – the size of the shortest program that can generate a description of an object), logical depth (amount of information processed), thermodynamic depth (resources required). Complexity is statistical and fractal.

John von Neumann's other machine was the Self-Reproducing Automaton (http://en.wikipedia.org/wiki/Self-replicating_machine). Cellular automata (http://en.wikipedia.org/wiki/Cellular_automaton) are an alternative form of Universal Turing Machine to traditional von Neumann machines, where grid cells are locally synchronized with their neighbors according to a rule. Conway's Game of Life (http://en.wikipedia.org/wiki/Conway's_Game_of_Life) demonstrates various emergent constructs such as "Glider Guns" and "Spaceships". Cellular automata are not practical because logical ops require a large number of cells – wasteful and inefficient. There are no compilers or general-purpose programming languages available for cellular automata (as far as I am aware). Random Boolean Networks (http://en.wikipedia.org/wiki/Boolean_network) are extensions of cellular automata where nodes are connected at random (not to spatial neighbors) and each node has its own rule – they demonstrate the emergence of complex and self-organized behavior.

Stephen Wolfram's (creator of Mathematica, so give him the benefit of the doubt) A New Kind of Science (http://en.wikipedia.org/wiki/A_New_Kind_of_Science) proposes that the universe may be a discrete finite state automaton (http://en.wikipedia.org/wiki/Finite-state_machine) whereby reality emerges from simple rules. I am 2/3 through this book. It is feasible that the universe is quantum discrete at the Planck scale and that it computes itself – Digital Physics (http://en.wikipedia.org/wiki/Digital_physics): a simulated reality? Anyway, all behavior is supposedly derived from simple algorithmic rules and falls into 4 patterns: uniform; nested / cyclical; random (Rule 30 – http://en.wikipedia.org/wiki/Rule_30); and mixed (Rule 110 – http://en.wikipedia.org/wiki/Rule_110 – localized structures; it is this that is interesting). Interaction between colliding propagating signal inputs is then information processing. Wolfram proposes the Principle of Computational Equivalence (http://mathworld.wolfram.com/PrincipleofComputationalEquivalence.html): all processes that are not obviously simple can be viewed as computations of equivalent sophistication.
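For the curious, elementary cellular automata like Rule 30 and Rule 110 take only a few lines of C# to simulate. In this minimal sketch (the ring topology and any grid size are arbitrary choices), each new cell is looked up from its 3-bit neighborhood in the rule number's binary representation:

public static class ElementaryCA
{
    // One generation of an elementary cellular automaton, e.g. rule = 30 or 110.
    // The 3-bit neighborhood (left, self, right) indexes a bit of the rule number.
    public static bool[] Step(bool[] cells, int rule)
    {
        var next = new bool[cells.Length];
        for (int i = 0; i < cells.Length; i++)
        {
            int left  = cells[(i - 1 + cells.Length) % cells.Length] ? 4 : 0;
            int self  = cells[i] ? 2 : 0;
            int right = cells[(i + 1) % cells.Length] ? 1 : 0;
            next[i] = ((rule >> (left | self | right)) & 1) == 1;
        }
        return next;
    }
}

Start from a single live cell in the middle, print '#' for true and ' ' for false after each Step call, and Rule 30's random-looking triangle (or Rule 110's localized structures) appears.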
Meaning in information may emerge from analogy and conceptual slippages – see the CopyCat program: http://cognitrn.psych.indiana.edu/rgoldsto/courses/concepts/copycat.pdf

Scale-free networks (http://en.wikipedia.org/wiki/Scale-free_network) have a distribution governed by a power law (http://en.wikipedia.org/wiki/Power_law – much more common than the normal distribution). They are characterized by hubs (resilience to random deletion of nodes), heterogeneity of degree values, self-similarity, and small-world structure. They grow via preferential attachment (http://en.wikipedia.org/wiki/Preferential_attachment) – tipping points triggered by positive feedback loops. Two theories of cascading system failures in complex systems are Self-Organized Criticality (http://en.wikipedia.org/wiki/Self-organized_criticality) and Highly Optimized Tolerance (http://en.wikipedia.org/wiki/Highly_optimized_tolerance).

Computational Mechanics (http://en.wikipedia.org/wiki/Computational_mechanics) – the use of computational methods to study phenomena governed by the principles of mechanics.

This book is a great intuition pump, but it does not cover the more mathematical subject of Computational Complexity Theory (http://en.wikipedia.org/wiki/Computational_complexity_theory). I am currently reading a book on that subject: http://www.amazon.com/Computational-Complexity-Christos-H-Papadimitriou/dp/0201530821/ref=pd_sim_b_1 Stay tuned for that review!

    Read the article

  • The Incremental Architect's Napkin - #1 - It's about the money, stupid

    - by Ralf Westphal
Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/24/the-incremental-architectacutes-napkin---1---itacutes-about-the.aspx

Software development is an economic endeavor. A customer is only willing to pay for value. Whatever makes software valuable needs to become a trait of that software. We as software developers thus need to understand, and then find a way to implement, requirements.

Whether, or to what extent, a customer really can know beforehand what's going to be valuable for him/her in the end is a topic of constant debate. Some aspects of the requirements might be less foggy than others. Sometimes the customer does not know what he/she wants. Sometimes he/she's certain to want something - but then is not happy when that's delivered. Nevertheless requirements exist. And developers will only be paid if they deliver value. So we had better focus on doing that.

Although it might sound trivial, I think it's important to state the corollary: we need to be able to trace anything we do as developers back to some requirement. You decide to use Go as the implementation language? Well, what's the customer's requirement this decision is linked to? You decide to use WPF as the GUI technology? What's the customer's requirement? You decide in favor of a layered architecture? What's the customer's requirement? You decide to put code in three classes instead of just one? What's the customer's requirement behind that? You decide to use MongoDB over MySql? What's the customer's requirement behind that? Etc.

I'm not saying any of these decisions are wrong. I'm just saying: whatever you decide, be clear about the requirement that's driving your decision. You have to be able to answer the question: why do you think X will deliver more value to the customer than the alternatives? Customers are not interested in romantic ideals of hard-working, well-meaning, quality-focused craftsmen. They don't care how and why you work - as long as what you deliver fulfills their needs. They want to trust you to recognize this as your top priority - and then deliver. That's all.

Fundamental aspects of requirements

If you're like me, you're probably not used to such scrutiny. You want to be trusted as a professional developer - and decide quite a few things following your gut feeling, or by relying on "established practices". That's ok in general and most of the time - but still… I think we should be more conscious about our decisions. That would make us more responsible, even more professional. But without further guidance it's hard to reason about many of the myriad decisions we have to make over the course of a software project.

What I found helpful in this situation is structuring requirements into fundamental aspects. Instead of one large heap of requirements there are then smaller blobs. With them it's easier to check whether a decision falls in their scope. Sure, every project has its very own requirements. But all of them belong to just three major categories, I think. Any requirement pertains either to functionality, non-functional aspects, or sustainability. For short, I call those aspects:

Functionality, because such requirements describe which transformations a software should offer. For example: a calculator should be able to add and multiply real numbers. An auction website should enable you to set up an auction anytime or to find auctions to bid for.

Quality, because such requirements describe how functionality is supposed to work, e.g. fast or secure.
For example: a calculator should be able to calculate the sinus of a value much faster than you could in your head. An auction website should accept bids from millions of users.

Security of Investment, because functionality and quality need not just be delivered in any way. It's important to the customer to get them quickly - and not only today, but over the course of several years. This aspect introduces time into the "requirements equation".

Security of Investment (SoI) sure is a non-functional requirement. But I think it's important not to subsume it under the Quality (Q) aspect, because SoI has quite special properties. For one, SoI for software means something completely different from what it means for hardware. If you buy hardware (a car, a hair blower) you find it a worthwhile investment if the hardware does not change its functionality or quality over time. A car still running smoothly, with hardly any rust spots, after 10 years of daily usage would be a very secure investment. So for hardware (or material products, if you like), "unchangeability" (in the face of usage) is desirable.

With software you want the contrary. Software that cannot be changed is a waste. SoI for software means "changeability". You want to be sure that the software you buy/order today can be changed, adapted, and improved over an unforeseeable number of years, so as to fit changes in its usage environment.

But that's not the only reason why the SoI aspect is special. On top of changeability[1] (or evolvability) comes immeasurability. Evolvability cannot readily be measured by counting something. Whether the changeability is as high as the customer wants it cannot be determined by looking at metrics like Lines of Code or Cyclomatic Complexity or Afferent Coupling. They may give a hint… but they are far, far from precise. That's because of the nature of changeability: it's different from performance or scalability. Also, a customer cannot tell upfront "how much" evolvability he/she wants.

Whether requirements regarding Functionality (F) and Q have been met, a customer can tell you very quickly and very precisely. A calculation is missing, the calculation takes too long, the calculation time degrades with increased load, the calculation is accessible to the wrong users, etc. That's all very, or at least comparatively, easy to determine. But changeability… that's a whole different thing. Nevertheless, over time the customer will develop a feeling for whether changeability is good enough or degrading. He/she just has to check the development of the frequency of "WTF"s from developers ;-)

F and Q are "timeless" requirement categories. Customers want us to deliver on them now. Just focusing on the now, though, is rarely beneficial in the long run. So SoI adds a counterweight to the requirements picture. Customers want SoI - whether they know it or not, whether they state it explicitly or not.

In closing

A customer's requirements are not monolithic. They are not all made the same. Rather, they fall into different categories. We as developers need to recognize these categories when confronted with some requirement - and take them into account. Only then can we make truly professional decisions, i.e. conscious and responsible ones.

[1] I call this fundamental trait of software "changeability" and not "flexibility" to distinguish to whom it's a concern. "Flexibility" to me means software, as is, can easily be adapted to a change in its environment, e.g.
by tweaking some config data or adding a library which gets picked up by a plug-in engine. "Flexibility" thus is a matter of some user. "Changeability", on the other hand, to me means software can easily be changed in its structure to adapt it to new requirements. That's a matter of the software developer.

    Read the article

  • Use Those Extra Mouse Buttons to Increase Efficiency

    - by Mark Virtue
Did you know that the most commonly used mouse actions are clicking a window's "Close" button (the X in the top-right corner), and clicking the "Back" button (in a browser and various other programs)? How much time do you spend every day locating the Close button or the Back button with your mouse so that you can click on them? And what about that mouse you're using – how many buttons does it have, besides the two main ones? Most mice these days have at least four (including the scroll-wheel, which a lot of people don't realize is also a button). Why not assign those extra buttons to your most common mouse actions, and save yourself a bundle of mousing-around time every day?

If your mouse was manufactured by one of the "premium" mouse manufacturers (Microsoft, Logitech, etc.), it almost certainly came with driver software to allow you to customize your mouse's controls and take advantage of your mouse's special features. Microsoft, for example, provides driver software called IntelliPoint (link below), while Logitech provides SetPoint. It's possible that your mouse has some extra buttons but doesn't come with its own driver software (the author is using a Microsoft Bluetooth Notebook Mouse 5000, which amazingly is not supported by the Microsoft IntelliPoint software!). If your mouse falls into this category, you can use a marvelous free product called X-Mouse Button Control, from Highresolution Enterprises (link below). It provides a truly amazing array of mouse configuration options, including assigning actions to buttons on a per-application basis.

Once X-Mouse Button Control is downloaded, its setup process is quite straightforward. Once installed, you can start the program via Start / Highresolution Enterprises / X-Mouse Button Control. You will find the program's icon in the system tray. Right-click on the icon and select Setup from the pop-up menu; the program's configuration window appears.

It's extremely unlikely that we will want to change the functionality of our mouse's two main buttons (left and right), so instead we'll look at the rest of the options on the right side of the window. The Middle Button refers to either the third, middle button (found on some old mice), or the pressing of the wheel itself as a button (if you didn't know you could press your wheel like a button, try it out now). Mouse Button 4 and Mouse Button 5 usually refer to the extra buttons found on the side of the mouse, often near your thumb.

So what can we use these extra mouse buttons for? Well, clearly Close and Back are two obvious candidates. Each of these can be found by selecting them from the drop-down menu next to each button field. Once the two options are chosen, the window will look something like this:

If you're not interested in choosing Back or Close, you may like to try some of the other options in the list, including: Cut, Copy and Paste; Undo; Show the Desktop; Next/Previous track (for media playback); open any program; simulate any keystroke or combination of keystrokes; …and many other options. Explore the drop-down list to see them all.

You may decide, for example, that closing the current document (as opposed to the current program) would be a good use for Mouse Button 5. In other words, we need to simulate the keypress of Ctrl-F4. Let's see how we achieve this. First we select Simulated Keystrokes from the drop-down list. The Simulated Keystrokes window opens. The instructions on the page are pretty comprehensive.
If you want to simulate the Ctrl-F4 keystroke, you need to type {CTRL}{F4} into the box, and then click OK.

Assigning Actions to Buttons on a Per-Application Basis

One of the most powerful features of X-Mouse Button Control is the ability to assign actions to buttons on a per-application basis. This means that if we have a particular program open, our mouse will behave differently – our buttons will do different things.

When we have Windows Media Player open, for example, we may wish to have buttons assigned to Play/Pause, Next track and Previous track, as well as changing the volume with the mouse! This is easy with X-Mouse Button Control. We start by opening Windows Media Player, which makes the next step easier. Then we return to X-Mouse Button Control and add a new "configuration" by clicking the Add button. A window opens containing a list of all running programs, including our recently opened Windows Media Player. We select Windows Media Player and click OK, and a new, blank "configuration" is created. We repeat the earlier steps to assign buttons to Play/Pause, Next track and Previous track, and assign scrolling the wheel to alter the volume. To save all our changes and close the window, we click Apply.

Now spend a few minutes thinking of all the applications you use the most, and the most common simple tasks you perform in each of those applications. Those tasks are then perfect candidates for per-application button assignments.

There are many more configuration options and capabilities of X-Mouse Button Control – too many to list here. We encourage you to spend a bit of time exploring the Setup window. Then, most important of all, don't forget to use your new mouse buttons! Get into the habit of using them, and after a while you'll start to wonder how you ever tolerated the laborious, tedious, time-consuming process of actually locating each window's Close button…

Download X-Mouse Button Control (Highresolution Enterprises)

    Read the article

  • A Complete Virtualization Landscape on Your Own Laptop – Here's How

    - by Manuel Hossfeld
If you want to take a closer look at the virtualization product Oracle VM in its current version 3.x, it is natural to set up your own environment for learning and testing purposes. But that is easier said than done: on closer inspection of the architecture you will quickly notice that several machines are needed just to cover all the components.

First, the OVM Server (or servers) must be installed. That is quite easy and quickly done, but since Oracle VM is a "type 1 hypervisor" - that is, it is installed directly on the machine ("bare metal") - your own work PC or laptop is rather unsuitable for it. (A dual-boot environment would be conceivable, but rather impractical.) Second, a machine is needed on which the OVM Manager is installed. In contrast to the OVM Server, it is not installed "bare metal" but on top of an existing Oracle Linux. But what do you do if you don't have a Linux server at hand and don't want to sacrifice extra hardware for it?

If you want to try out all the features of Oracle VM, you should also have shared storage available. It can be attached either via NFS or via a SAN (over iSCSI or FibreChannel). Strictly speaking you don't need "real" storage hardware for testing, but even "simulating" such components requires additional hardware with enough free disk space. (Alternatively, ready-made "software storage appliances" such as OpenFiler or FreeNAS can be used.)

Assuming that no "real" server and storage hardware is actually available, you would need three or four machines (PCs, laptops...) for the three points mentioned above - depending on whether you want to start one or two OVM Servers. Fortunately, it can be done with considerably less effort: as briefly described in the blog post on the occasion of the last OVM release 3.1.1, the current version is able to run completely inside VirtualBox as a guest. If this "double virtualization" makes you think of Russian matryoshka dolls, you are exactly right. Oracle VM VirtualBox forms the outer shell, so to speak - and since VirtualBox, in contrast to Oracle VM Server, is a "type 2 hypervisor", this approach also works on a "normal" work PC or laptop, without completely overwriting its actual operating system.

The best part is that you don't have to perform the installation of the individual VirtualBox VMs yourself. Both the OVM Manager and the OVM Server are already available for download from the Oracle Technology Network as prebuilt "VirtualBox appliances" and basically only need to be imported and configured. The following diagram illustrates the principle: the dark green areas each represent instances of the VirtualBox appliances for OVM Server and OVM Manager just mentioned. (Two OVM Servers are shown in the picture here; one would of course suffice as a minimum, but then many features such as OVM HA cannot be tried out.)
As a clever trick to save yet another VM for storage purposes, Wim Coekaerts (Senior Vice President of Linux and Virtualization Engineering at Oracle), the "builder" of the VirtualBox appliances, has prepared the OVM Manager appliance so that it can simultaneously serve as an NFS share (or even as an iSCSI target if needed). He also describes this briefly on his blog. The light green ovals represent the VMs that can then run inside one of the virtualized OVM Servers. Because this "double virtualization" loses the ability to do hardware virtualization, these guest VMs can consequently only be paravirtualized (PVM). The network interfaces drawn in blue are virtual interfaces that can be set up freely within VirtualBox. If you want to try out the various network roles within Oracle VM in detail, you can of course configure more than two of these interfaces.

The advantages of this solution for test and demo purposes are obvious: with just one PC or laptop on which VirtualBox is installed, all of the components mentioned above can be installed and used - provided there is enough RAM. 8 GB should be considered the minimum. If you also want to keep working on the "host environment" (i.e. the PC running VirtualBox) on the side, and/or run several guest VMs in this simulated OVM server environment, then 16 GB or more are recommended.

Since the steps needed to install and initially configure the environment are described in detail in a corresponding paper, I would like to use the rest of this article to mention some additional tips and details that can make your life a little easier:

To approach the configuration of the environment as relaxed as possible and with an additional "safety net", it is advisable to make extensive use of VirtualBox's built-in VM snapshot functionality. This not only lets you roll back if something should go wrong, but also lets you repeat steps you have already completed as often as you like (e.g. to try out another idea or variant of the environment).

Use meaningful names both for the snapshots just mentioned and for the VMs themselves. This ensures that you don't get confused and that even after a few weeks you still know which environment you actually have in front of you. This includes the exact version and build number of the respective OVM release. (See also the following screenshot.) Further information and details about the current state and purpose of each VM can be stored in the often overlooked description field.

It is advisable to create a note (or a text file) with the planned IP addresses and names for the VMs BEFORE the installation. (Don't forget: the server pool needs its own IP as well.) While doing this, you should also double-check and write down the actual networks of the VirtualBox interfaces to be used.

Attention: during the installation there are some passwords that can be set by the user - and some that are initially fixed. The latter include the password for the ovs-agent as well as for the root user on the OVM Servers, both of which are "ovsroot" by default.
(All further password information can be found in the "Read me first" document located on the desktop of the OVM Manager VM.) You may also need to be careful during the initial "interview phase" that the VirtualBox VMs go through after they are booted for the first time. At that point the American keyboard layout is definitely still active, so you'd better not use a "y" or "z" in a password you choose yourself. Since, as mentioned above, the OVM Manager also provides the shared storage, you should make sure that its VM is started before the OVM Server VMs. (Otherwise the cluster underlying the OVM Server pool will not "find" its so-called "server pool file system".)

    Read the article
