Search Results

Search found 39 results on 2 pages for 'romkyns'.

Page 1/2 | Next Page >

  • Why don’t UI frameworks use generics?

    - by romkyns
    One way of looking at type safety is that it adds automatic tests all over your code that stop some things breaking in some ways. One of the tools that helps with this in .NET is generics. However, both WinForms and WPF are generics-free. There is no ListBox<T> control, for example, which could only show items of the specified type. Such controls invariably operate on object instead. Why are generics not popular with UI framework developers?
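
    To make the idea concrete, here is a minimal sketch of what a hypothetical type-safe wrapper over the WinForms ListBox might look like; the TypedListBox name and its AddItem/TypedItems members are my own illustrative assumptions, not an API from any framework:

        using System.Collections.Generic;
        using System.Windows.Forms;

        // Hypothetical sketch: a type-safe wrapper over the non-generic ListBox.
        public class TypedListBox<T> : ListBox
        {
            // Adding items through this method guarantees at compile time
            // that only T instances end up in the list.
            public void AddItem(T item) => Items.Add(item);

            public IEnumerable<T> TypedItems
            {
                get
                {
                    foreach (object item in Items)
                        yield return (T) item; // safe if AddItem is the only writer
                }
            }
        }

    Nothing stops a framework from shipping controls shaped like this; the question is why none of the mainstream ones do.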

    Read the article

  • Separating a "wad of stuff" utility project into individual components with "optional" dependencies

    - by romkyns
    Over the years of using C#/.NET for a bunch of in-house projects, we've had one library grow organically into one huge wad of stuff. It's called "Util", and I'm sure many of you have seen one of these beasts in your careers. Many parts of this library are very much standalone, and could be split up into separate projects (which we'd like to open-source). But there is one major problem that needs to be solved before these can be released as separate libraries.

    Basically, there are lots and lots of cases of what I might call "optional dependencies" between these libraries. To explain this better, consider some of the modules that are good candidates to become stand-alone libraries:

      • CommandLineParser is for parsing command lines.
      • XmlClassify is for serializing classes to XML.
      • PostBuildCheck performs checks on the compiled assembly and reports a compilation error if they fail.
      • ConsoleColoredString is a library for colored string literals.
      • Lingo is for translating user interfaces.

    Each of those libraries can be used completely stand-alone, but if they are used together then there are useful extra features to be had. For example, both CommandLineParser and XmlClassify expose post-build checking functionality, which requires PostBuildCheck. Similarly, CommandLineParser allows option documentation to be provided using the colored string literals, requiring ConsoleColoredString, and it supports translatable documentation via Lingo.

    So the key distinction is that these are optional features. One can use a command line parser with plain, uncolored strings, without translating the documentation or performing any post-build checks. Or one could make the documentation translatable but still uncolored. Or both colored and translatable. And so on.

    Looking through this "Util" library, I see that almost all potentially separable libraries have such optional features that tie them to other libraries. If I were to actually require those libraries as dependencies then this wad of stuff isn't really untangled at all: you'd still effectively require all the libraries in order to use just one. Are there any established approaches to managing such optional dependencies in .NET?
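
    To illustrate the kind of thing I mean, here is one pattern I'm aware of, sketched in C#: a reflection-based "soft dependency", where the core library probes for the optional assembly at runtime and lights the feature up only when it is present. The type and assembly names below are made up for the example:

        using System;

        static class ColoredStringBridge
        {
            // Probe once for the optional assembly (names are hypothetical).
            private static readonly Type _coloredType =
                Type.GetType("Util.ConsoleColoredString, ConsoleColoredString",
                             throwOnError: false);

            public static bool IsAvailable => _coloredType != null;

            // Print documentation in colour when the optional library is
            // present, falling back to plain text when it is not.
            public static void WriteDocumentation(string text)
            {
                if (IsAvailable)
                {
                    // Reflection avoids any compile-time reference.
                    object colored = Activator.CreateInstance(_coloredType, text);
                    Console.WriteLine(colored);
                }
                else
                {
                    Console.WriteLine(text);
                }
            }
        }

    Another option along the same lines is to ship each integration as a tiny "bridge" package (say, a hypothetical CommandLineParser.Lingo) that references both libraries and contains only the glue code.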

    Read the article

  • Should my program "be lenient" in what it accepts and "discard faulty input silently"?

    - by romkyns
    I was under the impression that by now everyone agrees this maxim was a mistake. But I recently saw this answer, which has a "be lenient" comment upvoted 137 times (as of today).

    In my opinion, the leniency in what browsers accept was the direct cause of the utter mess that HTML and some other web standards were a few years ago, and have only recently begun to properly crystallize out of that mess. The way I see it, being lenient in what you accept will lead to this.

    The second part of the maxim is "discard faulty input silently, without returning an error message unless this is required by the specification", and this feels borderline offensive. Any programmer who has banged their head on the wall when something fails silently will know what I mean.

    So, am I completely wrong about this? Should my program be lenient in what it accepts and swallow errors silently? Or am I misinterpreting what this is supposed to mean? Taken to the extreme, if Excel followed this maxim and I gave it an exe file to open, it would just show a blank spreadsheet without even mentioning that anything went wrong. Is this really a good principle to follow?

    Read the article

  • Should a server "be lenient" in what it accepts and "discard faulty input silently"?

    - by romkyns
    I was under the impression that by now everyone agrees this maxim was a mistake. But I recently saw this answer, which has a "be lenient" comment upvoted 137 times (as of today).

    In my opinion, the leniency in what browsers accept was the direct cause of the utter mess that HTML and some other web standards were a few years ago, and have only recently begun to properly crystallize out of that mess. The way I see it, being lenient in what you accept will lead to this.

    The second part of the maxim is "discard faulty input silently, without returning an error message unless this is required by the specification", and this feels borderline offensive. Any programmer who has banged their head on the wall when something fails silently will know what I mean. So, am I completely wrong about this? Should my program be lenient in what it accepts and swallow errors silently? Or am I misinterpreting what this is supposed to mean?

    The original question said "program", and I take everyone's point about that. It can make sense for programs to be lenient. What I really meant, however, is APIs: interfaces exposed to other programs, rather than people.

    HTTP is an example. The protocol is an interface that only other programs use. People never directly provide the dates that go into headers like "If-Modified-Since". So, the question is: should the server implementing a standard be lenient and allow dates in several other formats, in addition to the one that's actually required by the standard? I believe the "be lenient" maxim is supposed to apply to this situation, rather than to human interfaces. If the server is lenient, it might seem like an overall improvement, but I think in practice it only leads to client implementations that end up relying on the leniency and thus failing to work with another server that's lenient in slightly different ways. So, should a server exposing some API be lenient, or is that a very bad idea?

    Now onto lenient handling of user input. Consider YouTrack (a bug-tracking tool). It uses a language for text entry that is reminiscent of Markdown. Except that it's "lenient". For example, writing

        - foo
        - bar
        - baz

    is not a documented way of creating a bulleted list, and yet it worked. Consequently, it ended up being used a lot throughout our internal bugtracker. The next version comes out, and this lenient feature starts working slightly differently, breaking a bunch of lists that (mis)used this (non)feature. The documented way to create bulleted lists still works, of course. So, should my software be lenient in what user inputs it accepts?
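
    To make the If-Modified-Since example concrete, here is a small C# sketch of my own (not from any real server implementation) contrasting a strict parser, which accepts only the RFC 1123 format the standard requires, with a lenient one:

        using System;
        using System.Globalization;

        static class IfModifiedSinceParser
        {
            // Strict: accept only RFC 1123, e.g. "Tue, 15 Nov 1994 08:12:31 GMT".
            public static DateTimeOffset? ParseStrict(string header) =>
                DateTimeOffset.TryParseExact(header, "r", CultureInfo.InvariantCulture,
                                             DateTimeStyles.None, out var result)
                    ? result : (DateTimeOffset?) null;

            // Lenient: accept anything the framework can make sense of. Clients
            // may come to depend on this, and then break against servers that
            // are lenient in slightly different ways.
            public static DateTimeOffset? ParseLenient(string header) =>
                DateTimeOffset.TryParse(header, CultureInfo.InvariantCulture,
                                        DateTimeStyles.AssumeUniversal, out var result)
                    ? result : (DateTimeOffset?) null;
        }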

    Read the article

  • In hindsight, is basing XAML on XML a mistake or a good approach?

    - by romkyns
    XAML is essentially a subset of XML. One of the main benefits of basing XAML on XML is said to be that it can be parsed with existing tools. And it can, to a large degree, although the (syntactically non-trivial) attribute values will stay in text form and require further parsing.

    There are two major alternatives to describing a GUI in an XML-derived language. One is to do what WinForms did, and describe it in real code. There are numerous problems with this, though it’s not completely advantage-free (a question to compare XAML to this approach). The other major alternative is to design a completely new syntax specifically tailored for the task at hand. This is generally known as a domain-specific language.

    So, in hindsight, and as a lesson for future generations: was it a good idea to base XAML on XML, or would it have been better as a custom-designed domain-specific language? If we were designing an even better UI framework, should we pick XML or a custom DSL?

    Since it’s much easier to think positively about the status quo, especially one that is quite liked by the community, I’ll give some example reasons why building on top of XML might be considered a mistake. Basing a language off XML has one thing going for it: it’s much easier to parse (the core parser is already available), requires much, much less design work, and alternative parsers are also much easier to write for 3rd party developers. But the resulting language can be unsatisfying in various ways:

      • It is rather verbose. If you change the type of something, you need to change it in the closing tag.
      • It has very poor support for comments; it’s impossible to comment out an attribute.
      • There are limitations placed on the content of attributes by XML.
      • The markup extensions have to be built "on top" of the XML syntax, not integrated deeply and nicely into it.
      • And, my personal favourite: if you set something via an attribute, you use completely different syntax than if you set the exact same thing as a content property (illustrated below).

    It’s also said that since everyone knows XML, XAML requires less learning. Strictly speaking this is true, but learning the syntax is a tiny fraction of the time spent learning a new UI framework; it’s the framework’s concepts that make the curve steep. Besides, the idiosyncrasies of an XML-based language might actually add to the "needs learning" basket.

    Are these disadvantages outweighed by the ease of parsing? Should the next cool framework continue the tradition, or invest the time to design an awesome DSL that can’t be parsed by existing tools and whose syntax needs to be learned by everyone?

    P.S. Not everyone confuses XAML and WPF, but some do. XAML is the XML-like thing. WPF is the framework with support for bindings, theming, hardware acceleration and a whole lot of other cool stuff.
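
    To illustrate the attribute-versus-content-property point from the list above (my example, not the author’s), the same Background value can be written in two entirely different XAML syntaxes:

        <!-- As an attribute: a plain string handled by a type converter. -->
        <Button Content="Click me" Background="Red" />

        <!-- As a property element: full object syntax for the same thing. -->
        <Button Content="Click me">
            <Button.Background>
                <SolidColorBrush Color="Red" />
            </Button.Background>
        </Button>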

    Read the article

  • Legitimate use of the Windows "Documents" folder in programs

    - by romkyns
    Anyone who likes their Documents folder to contain only things they place there knows that the standard Documents folder is completely unsuitable for this task. Every program seems to want to put its settings, data, or something equally irrelevant into the Documents folder, despite the fact that there are folders specifically for this job¹.

    So that this doesn't sound empty, take my personal "Documents" folder as an example. I don't ever use it, in that I never, under any circumstances, save anything into this folder myself. And yet, it contains 46 folders and 3 files at the top level, for a total of 800 files in 500 folders. That's 190 MB of "documents" I didn't create. Obviously any actual documents would immediately get lost in this mess.

    My question is: can anything be done to improve the situation sufficiently to make "Documents" useful again, say over the next 5 years? Can programmers be somehow educated en masse not to use it as a dumping ground? Could the OS start reporting some "fake" location hidden under AppData through the existing APIs, while only allowing Explorer and the various Open/Save dialogs to know where the "real" Documents folder resides? Or are any attempts completely futile or even unnecessary?

    ¹ For the record, here's a quick summary of the various standard directories that should be used instead of "Documents":

      • RoamingAppData for user-specific data and settings. This is the directory to use for user-specific non-temporary data. Anything placed here will be available on any machine that a given user logs on to in networks where this is configured. Do not place large files here though, because they slow down login/logout in such environments.
      • LocalAppData for user-and-machine-specific data and settings. This data differs for every user and every machine. This is also where very large user-specific data should be placed.
      • ProgramData for machine-specific data and settings. These are the same regardless of which user is logged on, and will not roam to other machines in a network.
      • GetTempPath for all files that may be wiped without loss of data when not in use. This is also the place for things like caches, because, like temporary data, a cache does not need to be backed up. Place your huge cache here and you'll save your user some backup trouble.

    "Documents" itself should only ever be used if the user specified it manually by entering a path or selecting it in a Save dialog. That is the only time it is ever appropriate to save stuff in "Documents".
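
    For reference, here is a short C# sketch of mine showing how a .NET program can resolve each of these locations through documented APIs instead of dumping into Documents:

        using System;
        using System.IO;

        static class AppPaths
        {
            // RoamingAppData: user-specific settings that follow the user.
            public static string Roaming =>
                Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);

            // LocalAppData: user-and-machine-specific data, incl. large files.
            public static string Local =>
                Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);

            // ProgramData: machine-specific data shared by all users.
            public static string Machine =>
                Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData);

            // Temp: caches and anything wipeable without loss of data.
            public static string Temp => Path.GetTempPath();

            // Documents: only as a user-chosen location in a Save dialog.
            public static string Documents =>
                Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments);
        }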

    Read the article

  • Can the "Documents" standard folder be rescued and how?

    - by romkyns
    Anyone who likes their Documents folder to contain only things they place there knows that the standard Documents folder is completely unsuitable for this task. Every program seems to want to put its settings, data, or something equally irrelevant into the Documents folder, despite the fact that there are folders specifically for this job.

    So that this doesn't sound empty, take my personal "Documents" folder as an example. I don't ever use it, in that I never, under any circumstances, save anything into this folder myself. And yet, it contains 46 folders and 3 files at the top level, for a total of 800 files in 500 folders. That's 190 MB of "documents" I didn't create. Obviously any actual documents would immediately get lost in this mess.

    My question is: can anything be done to improve the situation sufficiently to make "Documents" useful again, say over the next 5 years? Can programmers be somehow educated en masse not to use it as a dumping ground? Could the OS start reporting some "fake" location hidden under AppData through the existing APIs, while only allowing Explorer and the various Open/Save dialogs to know where the "real" Documents folder resides? Or are any attempts completely futile or even unnecessary?

    Read the article

  • Why are cryptic short identifiers still so common in low-level programming?

    - by romkyns
    There used to be very good reasons for keeping instruction / register names short. Those reasons no longer apply, but short cryptic names are still very common in low-level programming. Why is this? Is it just because old habits are hard to break, or are there better reasons? For example:

      • Atmel ATMEGA32U2 (2010?): TIFR1 (instead of TimerCounter1InterruptFlag), ICR1H (instead of InputCapture1High), DDRB (instead of DataDirectionPortB), etc.
      • .NET CLR instruction set (2002): bge.s (instead of branch-if-greater.signed), etc.

    Aren't the longer, non-cryptic names easier to work with?

    Read the article

  • Quickly switch Win7 volume normalization on/off?

    - by romkyns
    Is there some way to quickly toggle the state of volume normalization in Windows 7? When it's off, watching movies late at night is tricky; when it's on, it messes with music in a bad way. It's a great feature, but argh, it requires me to make my way through so many dialogs... Any solution that requires no more than a couple of clicks or keystrokes is welcome: shortcuts, AutoHotkey, tray icon apps.

    Read the article

  • Password-protected sharing allows access to users who have no account?

    - by romkyns
    Running Win7 on two computers in my LAN. Computer A has password-protected sharing enabled, and shares a folder. It has a single user account, "Bob", and the Guest account is turned off. The network is workgroup-based. According to the descriptions of "password-protected sharing" I could find, the only people who can access the shared folder via the LAN are those who know the username and password for the "Bob" account.

    However, a second computer on the LAN, Computer B, is able to view this shared folder by simply browsing to Computer A. They don't need to enter any passwords or anything. The only user account registered on that PC is called "Jim", and has a different password from "Bob". How on earth is Computer B able to view this shared folder? Is the popular description of the "password-protected sharing" feature inaccurate, or did I misunderstand it big time?

    P.S. There is a possibility that the password for "Bob" has been entered on that PC once, and possibly the "remember password" box was checked. I've looked in the Credential Manager on both computers and there is nothing saved anywhere.

    Read the article

  • How to change I/O priority of a process or thread in Win7?

    - by romkyns
    Process Explorer is able to show the effective I/O priority of a given thread, but not change it. Seeing as I/O priority support is a comparatively new feature, most programs don't set their own I/O priorities. It appears that by default the I/O priority is derived from the thread priority (rather than the process priority), which Process Explorer can't modify either. Are there any other tools out there that can help me change the I/O priority of a given thread / all threads of a given process?
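
    For what it's worth, a program can lower its own I/O priority. Here is a minimal C# sketch (an illustration of the mechanism, not one of the tools I'm asking about) using the documented THREAD_MODE_BACKGROUND_BEGIN flag, which lowers the calling thread's scheduling, I/O and memory priority:

        using System;
        using System.Runtime.InteropServices;

        static class BackgroundIo
        {
            // Documented flags for SetThreadPriority (winbase.h).
            const int THREAD_MODE_BACKGROUND_BEGIN = 0x00010000;
            const int THREAD_MODE_BACKGROUND_END   = 0x00020000;

            [DllImport("kernel32.dll", SetLastError = true)]
            static extern bool SetThreadPriority(IntPtr hThread, int nPriority);

            [DllImport("kernel32.dll")]
            static extern IntPtr GetCurrentThread();

            // Enter background mode: the system lowers this thread's I/O priority.
            public static void Begin() =>
                SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN);

            public static void End() =>
                SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_END);
        }

    The catch is that this only works for the calling thread, which is exactly why an external tool is needed to change the I/O priority of some other process.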

    Read the article

  • Is exFAT safe to unplug without unmounting first?

    - by romkyns
    I'm hitting the 4 GB limit of FAT32 on USB drives more and more often. However, being able to unplug the device without unmounting it first is a must-have for me. I've noticed exFAT recently; however, I couldn't find any info on whether drives formatted with exFAT can be unplugged safely without unmounting. Can they?

    Read the article

  • My computer is listed twice in the Network view; network shares are not accessible

    - by romkyns
    I have a couple of network shares set up on my Win7 machine. They've been in constant use, from that same machine on which they're set up. One morning they just randomly stopped working. When I went looking for what was wrong, I noticed that I also had my PC listed twice in the Network view in Explorer. ("Sirius" is the name of the PC on which these screenshots were taken.)

    I may have installed some Windows updates around the time this happened. I have since tried rebooting and installing all the latest updates, to no avail. I've also removed the share in question and re-added it, making sure I give all rights to everyone. I'm an administrator on this machine, but I can't access the administrative shares (\\SIRIUS\c$) either, with the same message. I can access \\localhost\AcronisImages and \\localhost\c$, and I can ping sirius. Any ideas?

    Read the article

  • How do you sync photos with your family?

    - by romkyns
    We've always been trying to share photos within our family, despite all living in different countries. This has been very challenging. We have about 50 GB of photos that we share with each other. Everyone organizes them differently, so rsync/Syncplicity don't work. Everyone likes their photos on a fast hard drive rather than on a slow website with reduced quality, so online sharing websites are a no-go. So far we have essentially been syncing them manually, via a shared folder that new photos are placed into and collected from. This is laborious and error-prone, and it usually leaves one of us missing some photos without even knowing it. To those who are in a similar situation: how do you solve this?

    Read the article

  • What do you do when you need whole-words search in Firefox?

    - by romkyns
    Example: I am looking to see if "arg" is a special keyword in Lua. I go to the Lua reference manual at http://www.lua.org/manual/5.1/manual.html and search for "arg". I find hundreds of occurrences of the word "argument(s)" or "vararg". Any ideas? I know Firefox won't implement whole-words search as a core feature (something about cluttering up the UI... argh...), and I couldn't find a good addon that implemented this well.

    Read the article

  • Is there a CPU that can be described as "Celeron D 4xx model"?

    - by romkyns
    The "D" letter after Celeron appears to only be used for processors numbered with 3xx. Celerons of the 4xx series do not seem to have the "D". And yet I am looking at a motherboard described as supporting these processors: Intel Celeron D 3xx and 4xx models Intel Pentium 4 5xx and 6xx models Intel Pentium D 8xx and 9xx models Intel Core 2 Duo models with LGA775 Is this compatible with a Celeron 450, sSpec SLAFZ, despite not having a "D" in its name?

    Read the article

  • Command-line tool to deduplicate a single humongous file?

    - by romkyns
    I make regular snapshots of my VM using a nightly script. These backups are compressed using WinRAR, and do shrink considerably, but I suspect compression is not as effective as it would be if the file had been deduplicated first (just a hunch, which I'm hoping to test). So instead of compressing the VHD itself, I would like to deduplicate the single file first, and then compress the output of the deduplicator. Is anyone aware of such a CLI tool?
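
    In case it clarifies what I'm after, here is a rough C# sketch of the idea, assuming fixed-size 64 KB blocks (real tools use smarter, content-defined chunking): store each distinct block once, plus an index mapping block positions to stored blocks.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Security.Cryptography;

        class Dedup
        {
            static void Main(string[] args)
            {
                const int blockSize = 64 * 1024;
                var seen = new Dictionary<string, long>(); // block hash -> stored block number
                var refs = new List<long>();               // one entry per input block
                using var input = File.OpenRead(args[0]);
                using var blocks = File.Create(args[0] + ".blocks");
                using var sha = SHA256.Create();
                var buffer = new byte[blockSize];
                int read;
                long next = 0;
                while ((read = input.Read(buffer, 0, blockSize)) > 0)
                {
                    string key = Convert.ToBase64String(sha.ComputeHash(buffer, 0, read));
                    if (!seen.TryGetValue(key, out long num))
                    {
                        seen[key] = num = next++;
                        blocks.Write(buffer, 0, read); // each unique block stored once
                    }
                    refs.Add(num);
                }
                // The index plus the block size is enough to reconstruct the
                // original file; compress the .blocks and .index files together.
                File.WriteAllLines(args[0] + ".index",
                                   refs.ConvertAll(r => r.ToString()));
            }
        }

    The output of something like this, followed by compression, is what I'm hoping an existing CLI tool already produces better.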

    Read the article

  • In vim, prevent caret moving back when I leave edit mode?

    - by romkyns
    In vim, if I enter and leave edit mode without doing anything, the caret ends up one character to the left. And if I enter and leave append mode, the caret moves forwards and then backwards. Any way to configure vim to leave the caret alone in these cases? Ideally I just want to always enter append mode, but without moving the caret when I enter or exit the mode. (Currently I usually use insert mode because it doesn't mess up my caret position upon entry. That is, except when I need to append to the end of the line, in which case I swear at vim for behaving in such an archaic fashion, press Esc and enter append mode.)

    Read the article

  • How to stream multiple files on demand in VLC?

    - by romkyns
    Is there any way at all that I can set up VLC on a server PC in such a way that I can access a list of all my videos from another PC, and pick one to be streamed on demand? I've been pointed at this streaming guide (pdf), but it's pretty useless. For a start, most of the menus in those screenshots don't match the current version of VLC, and then it sort of assumes you already know what you're doing. So far I've managed to figure out how to stream a single file, which must be chosen on the server PC before watching: pretty useless if you ask me! The impenetrable "UI" doesn't help either... (P.S. The reason I'm going for streaming rather than the very-simple-to-set-up network drive is described in this question.)

    Read the article

  • How to export an image from Photoshop to PDF while preserving the exact size?

    - by romkyns
    There used to be a PDF export option in Photoshop, but it's gone in CS4. What replaced it is Bridge; however, no matter what I do, Bridge ends up resizing my image, so the physical dimensions (cm/inches) in the final PDF are not what they are in Photoshop. Any tips on exporting an image without messing up its size? (Clarification: I want the final PDF to contain a page of the size I specify, with a white background, and my image positioned somewhere on this page such that the image width/height in cm is exactly the same in the PDF as it was in Photoshop.)

    Read the article

  • Performance of DrawingVisual vs Canvas.OnRender for lots of constantly changing shapes

    - by romkyns
    I'm working on a game-like app which has up to a thousand shapes (ellipses and lines) that constantly change at 60fps. Having read an excellent article on rendering many moving shapes, I implemented this using a custom Canvas descendant that overrides OnRender to do the drawing via a DrawingContext. The performance is quite reasonable, although the CPU usage stays high. However, the article suggests that the most efficient approach for constantly moving shapes is to use lots of DrawingVisual instances instead of OnRender. Unfortunately though it doesn't explain why that should be faster for this scenario. Changing the implementation in this way is not a small effort, so I'd like to understand the reasons and whether they are applicable to me before deciding to make the switch. Why could the DrawingVisual approach result in lower CPU usage than the OnRender approach in this scenario?
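
    For context, this is roughly the shape the DrawingVisual approach would take, as I understand the article; the ShapeHost class and its members are my own illustrative names. The claimed advantage is that WPF retains each visual's drawing instructions, so changing one shape re-renders only that visual, whereas OnRender re-executes the drawing code for everything:

        using System.Windows;
        using System.Windows.Media;

        // Illustrative sketch: hosts one DrawingVisual per shape.
        public class ShapeHost : FrameworkElement
        {
            private readonly VisualCollection _children;

            public ShapeHost() { _children = new VisualCollection(this); }

            // WPF discovers the hosted visuals through these two overrides.
            protected override int VisualChildrenCount => _children.Count;
            protected override Visual GetVisualChild(int index) => _children[index];

            public DrawingVisual AddEllipse(Point center, double radius)
            {
                var visual = new DrawingVisual();
                using (DrawingContext dc = visual.RenderOpen())
                    dc.DrawEllipse(Brushes.Red, null, center, radius, radius);
                _children.Add(visual);
                return visual;
            }

            // Per frame, redraw only the visuals that actually changed; the
            // retained drawing instructions of all the others are reused.
            public void MoveEllipse(DrawingVisual visual, Point center, double radius)
            {
                using (DrawingContext dc = visual.RenderOpen())
                    dc.DrawEllipse(Brushes.Red, null, center, radius, radius);
            }
        }

    For pure movement it should be cheaper still to leave the drawing alone and update the visual's Transform property, avoiding re-tessellation entirely.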

    Read the article

  • How to draw a full ellipse in a StreamGeometry in WPF?

    - by romkyns
    The only method in a StreamGeometryContext that seems related to ellipses is the ArcTo method. Unfortunately it is heavily geared to joining lines rather than drawing ellipses. In particular, the position of the arc is determined by a starting and ending point. For a full ellipse the two coincide, obviously, and the exact orientation becomes undefined. So far the best way of drawing an ellipse centered on 100,100 of size 10,10 that I found is like this:

        using (var ctx = geometry.Open())
        {
            ctx.BeginFigure(new Point(100 + 5, 100), isFilled: true, isClosed: true);
            ctx.ArcTo(
                // need a small angle, but large enough that the ellipse is positioned accurately
                new Point(100 + 5 * Math.Cos(0.01), 100 + 5 * Math.Sin(0.01)),
                // docs say it should be 10,10, but in practice it appears that this should be half the desired width/height...
                new Size(10 / 2, 10 / 2),
                0, true, SweepDirection.Counterclockwise, true, true);
        }

    Which is pretty ugly, and also leaves a small "flat" area (although not visible at normal zoom levels). How else might I draw a full ellipse using StreamGeometryContext?
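
    For comparison, one commonly suggested alternative (my sketch, not from the question) splits the ellipse into two 180-degree arcs, which removes both the ambiguity and the flat spot:

        var geometry = new StreamGeometry();
        using (var ctx = geometry.Open())
        {
            var center = new Point(100, 100);
            double rx = 5, ry = 5; // radii for a 10x10 ellipse

            // Start at the left edge and draw two semicircles.
            ctx.BeginFigure(new Point(center.X - rx, center.Y), isFilled: true, isClosed: true);
            ctx.ArcTo(new Point(center.X + rx, center.Y), new Size(rx, ry),
                      0, false, SweepDirection.Clockwise, true, true);
            ctx.ArcTo(new Point(center.X - rx, center.Y), new Size(rx, ry),
                      0, false, SweepDirection.Clockwise, true, true);
        }
        geometry.Freeze(); // freezing helps rendering performance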

    Read the article

  • Tooltips with infinite timeout?

    - by romkyns
    I'm thinking of setting the timeout on all my tooltips in a WinForms application to infinity (or an extremely large value). The motivation is that it's annoying for users when a tooltip disappears while they're still reading it, without providing any extra value whatsoever as far as I can tell. Normally I wouldn't ask something like this on StackOverflow, but the overwhelming majority of all software sets timeouts on tooltips, so it makes me wonder whether perhaps there is some important consideration I'm missing. Or is this just an old convention that nobody gives further thought to? If you would hate an infinite timeout as opposed to a short one, please explain why. (If you just think tooltips are a bad idea altogether, then that's a separate consideration; this question is specifically about the infinite timeout.)
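
    For concreteness, this is roughly what I mean, as a minimal WinForms sketch; the one caveat I know of is that AutoPopDelay appears to be capped at short.MaxValue milliseconds (about 32 seconds), so "infinity" in practice means the cap:

        using System.Windows.Forms;

        static class LongTooltips
        {
            // Attach a tooltip that stays up for as long as Windows allows.
            public static void Attach(Control control, string text)
            {
                var tip = new ToolTip
                {
                    InitialDelay = 500,            // delay before it first appears
                    ReshowDelay = 100,             // delay when moving between controls
                    AutoPopDelay = short.MaxValue, // longest display time (~32 s)
                };
                tip.SetToolTip(control, text);
            }
        }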

    Read the article
