Search Results

Search found 1760 results on 71 pages for 'modern'.


  • MySQL table organization and optimization (Rails)

    - by aguynamedloren
    I've been learning Ruby on Rails over the past few months with no prior programming experience. Lately, I've been thinking about database optimization and table organization. I know there are great books on the subject, but I typically learn by example / as I go.

    Here's a hypothetical situation: Let's say I am building a social network for a niche community with 250,000 members (users). The users have the ability to attend events. Let's say there are 50,000 past/present/future events. Much like Facebook events, a user can attend any number of events and an event can have any number of attendees. In the database, there would be a table for users and a table for events. Somehow I would have to create an association between the users and events.

    I could create an "events" column in the users table such that each user row would contain a hash of event IDs, or I could create an "attendees" column in the events table such that each event row would contain a hash of user IDs. Neither of these solutions seems ideal, however. On a user's profile page, I want to display the list of events they are associated with, which would require scanning the 50,000 event rows for the user ID of said user if I include an "attendees" column in the events table. Likewise, on an event page, I want to display a list of attendees for the event, which would require scanning the 250,000 user rows for the event ID of said event if I include an "events" column in the users table. Option 3 would be to create a third table that contains the attendee information for each and every event - but I don't see how this would solve any problems.

    Are these non-issues? Rails makes accessing all of this information easy, but I guess I'm worried about scale. It is entirely possible that I am underestimating the speed and processing power of modern databases / servers / etc. How long would it take to scan 250,000 user rows for specific event IDs - 10ms? 100ms? 1,000ms? I guess that's not that bad. Am I just overthinking this?
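    The "third table" mentioned above is the standard join-table idea. As a purely conceptual illustration (a hypothetical C++ sketch, not Rails code), storing (user_id, event_id) pairs with an index in each direction shows why neither lookup needs a full scan; in Rails terms this corresponds to a join model with has_many :through plus database indexes on both ID columns. All names below are invented.

        // Hypothetical sketch of a join table kept as two indexes: one keyed by
        // user, one keyed by event. Either direction is a direct lookup, so no
        // scan of 250,000 users or 50,000 events is ever needed.
        #include <cstdio>
        #include <unordered_map>
        #include <vector>

        struct Attendances {
            std::unordered_map<int, std::vector<int>> events_by_user;
            std::unordered_map<int, std::vector<int>> users_by_event;

            void add(int user_id, int event_id) {
                events_by_user[user_id].push_back(event_id);
                users_by_event[event_id].push_back(user_id);
            }

            const std::vector<int>& events_for(int user_id) {
                return events_by_user[user_id];   // direct lookup, no scan of events
            }
            const std::vector<int>& attendees_of(int event_id) {
                return users_by_event[event_id];  // direct lookup, no scan of users
            }
        };

        int main() {
            Attendances a;
            a.add(42, 7);
            a.add(42, 9);
            std::printf("user 42 attends %zu events\n", a.events_for(42).size());
        }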

    Read the article

  • Help choosing authentication method

    - by Dima
    I need to choose an authentication method for an application installed and integrated in a customer's environment. There are two types of environments - Windows and Linux/Unix. The application is user based, no web stuff, pure Java. The requirement is to authenticate the users who will use my application against a customer-provided user base. Meaning, the customer installs my app, but uses his own users to grant or deny access to my app. Typical, right? I have three options to consider and I need to pick the one which would be a) the most flexible, covering the most common modern environments, and b) take the least effort while staying robust and standard.

    Option (1) - Authenticate locally, managing user credentials in some local storage, e.g. a file. The customer would then add his users to my application and it will then check the passwords. Simple, clumsy, but it would work. Customers would have to punch in every user they want to grant access to my app using some UI we will have to provide. Lots of work for me, a headache for the customer.

    Option (2) - Use LDAP authentication. Customers would tell my app where to look for users and I will walk their directory, resolving names into user names and trying to bind with the found password. This is a better approach IMO, but more fragile because I will have to walk an unknown directory structure and who knows if this will be permitted everywhere. It would be harder to test since there are many LDAP implementations out there; the last thing I want is to drown in this voodoo.

    Option (3) - Use plain Kerberos authentication. Customers would tell my app what realm (domain) and which KDC (key distribution center) to use. In an ideal world these two parameters would be all I need to set, while customers could use their own administration tools to configure the domain and KDC. My application would simply delegate user credentials to this third party (using JAAS or Spring Security) and consider it a success when the third party is happy with them.

    I personally prefer #3, but am not sure what surprises I might face. Would this cover Windows and *nix systems entirely? Is there another option to consider?

    Read the article

  • How are you using IronPython?

    - by Will Dean
    I'm keen to drink some modern dynamic language koolaid, so I've believed all the stuff on Michael Foord's blog and podcasts, I've bought his book (and read some of it), and I added an embedded IPy runtime to a large existing app a year or so ago (though that was for someone else and I didn't really use it myself). Now I need to do some fairly simple code generation stuff, where I'm going to call a few methods on a few .NET objects (custom, C#-authored objects), create a few strings, write some files, etc. The experience of trying this leaves me feeling like the little boy who thinks he's the only one who can see that The Emperor has no clothes on. If you're using IronPython, I'd really appreciate knowing how you deal with the following aspects of it:

    Code editing - do you use the .NET framework without Intellisense?

    Refactoring - I know a load of 'refactoring' is about working around language-related busywork, so if Python is sufficiently lightweight then we won't need that, but things like renames seem to me to be essential to iteratively developing quality code regardless of language.

    Crippling startup time - One of the things which is supposed to be good about interpreted languages is the lack of compile time leading to fast interactive development. Unfortunately I can compile a C# application and launch it quicker than IPy can start up.

    Interactive hacking - the IPy console/REPL is supposed to be good for this, but I haven't found a good way to take the code you've interactively arrived at and persist it into a file - cut and paste from the console is fairly miserable. And the console seems to hold references to .NET assemblies you've imported, so you have to quit it and restart it if you're working on the C# stuff as well. Hacking on C# in something like LINQPad seems a much faster and easier way to try things out (and has proper Intellisense). Do you use the console?

    Debugging - what's the story here? I know someone on the IPy team is working on a command-line hobby project, but let's just say I'm not immediately attracted to a command-line debugger. I don't really need a debugger for little Python scripts, but I would if I were to use IPy for scripting unit tests, for example.

    Unit testing - I can see that dynamic languages could be great for this, but is there any IDE test-runner integration (like for ReSharper, etc.)? The Foord book has a chapter about this, which I'll admit I have not yet read properly, but it does seem to involve driving a console-mode test runner from the command prompt, which feels like an enormous step back from using an integrated test runner like TestDriven.net or ReSharper.

    I really want to believe in this stuff, so I am still working on the assumption that I've missed something. I would really like to know how other people are dealing with IPy, particularly if they're doing it in a way which doesn't feel like we've just lost 15 years' worth of tool development.

    Read the article

  • how to implement a "soft barrier" in multithreaded c++

    - by Jason
    I have some multithreaded C++ code with the following structure:

        do_thread_specific_work();
        update_shared_variables();
        //checkpoint A
        do_thread_specific_work_not_modifying_shared_variables();
        //checkpoint B
        do_thread_specific_work_requiring_all_threads_have_updated_shared_variables();

    What follows checkpoint B is work that could have started if all threads have reached only checkpoint A, hence my notion of a "soft barrier". Typically multithreading libraries only provide "hard barriers" in which all threads must reach some point before any can continue. Obviously a hard barrier could be used at checkpoint B. Using a soft barrier can lead to better execution time, especially since the work between checkpoints A and B may not be load-balanced between the threads (i.e. one slow thread that has reached checkpoint A but not B could be causing all the others to wait at the barrier just before checkpoint B).

    I've tried using atomics to synchronize things and I know with 100% certainty that it is NOT guaranteed to work. For example, using OpenMP syntax, before the parallel section start with:

        shared_thread_counter = num_threads;  //known at compile time
        #pragma omp flush

    Then at checkpoint A:

        #pragma omp atomic
        shared_thread_counter--;

    Then at checkpoint B (using polling):

        #pragma omp flush
        while (shared_thread_counter > 0) {
            usleep(1);  //can be removed, but better to limit memory bandwidth
            #pragma omp flush
        }

    I've designed some experiments in which I use an atomic to indicate that some operation before it is finished. The experiment would work with 2 threads most of the time but consistently fail when I have lots of threads (like 20 or 30). I suspect this is because of the caching structure of modern CPUs. Even if one thread updates some other value before doing the atomic decrement, it is not guaranteed to be read by another thread in that order. Consider the case when the other value is a cache miss and the atomic decrement is a cache hit.

    So back to my question: how to CORRECTLY implement this "soft barrier"? Is there any built-in feature that guarantees such functionality? I'd prefer OpenMP but I'm familiar with most of the other common multithreading libraries. As a workaround right now, I'm using a hard barrier at checkpoint B and I've restructured my code to make the work between checkpoints A and B automatically load-balancing between the threads (which has been rather difficult at times). Thanks for any advice/insight :)
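    For reference, a minimal sketch of such a counter written with C++11 atomics rather than OpenMP pragmas: the release decrement paired with an acquire load provides the ordering guarantee the flush-based version above lacks. Class and function names here are invented for illustration.

        // Hypothetical sketch (not the poster's code): a "soft barrier" built on
        // C++11 atomics. Threads call arrive() at checkpoint A and
        // wait_all_arrived() at checkpoint B. The release decrement paired with
        // the acquire load guarantees that shared-variable writes made before
        // arrive() are visible to any thread that returns from the wait.
        #include <atomic>
        #include <cstdio>
        #include <thread>
        #include <vector>

        class SoftBarrier {
        public:
            explicit SoftBarrier(int n) : remaining_(n) {}
            void arrive() { remaining_.fetch_sub(1, std::memory_order_release); }
            void wait_all_arrived() const {
                while (remaining_.load(std::memory_order_acquire) > 0)
                    std::this_thread::yield();
            }
        private:
            std::atomic<int> remaining_;
        };

        int main() {
            const int kThreads = 8;
            std::vector<int> shared(kThreads, 0);
            SoftBarrier barrier(kThreads);
            std::vector<std::thread> pool;
            for (int i = 0; i < kThreads; ++i) {
                pool.emplace_back([&, i] {
                    shared[i] = i * i;          // update_shared_variables()
                    barrier.arrive();           // checkpoint A
                    // ... thread-specific work not touching shared state ...
                    barrier.wait_all_arrived(); // checkpoint B
                    std::printf("thread %d sees all shared updates\n", i);
                });
            }
            for (auto& t : pool) t.join();
        }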

    Read the article

  • Moving a unit precisely along a path in x,y coordinates

    - by Adam Eberbach
    I am playing around with a strategy game where squads move around a map. Each turn a certain amount of movement is allocated to a squad, and if the squad has a destination the points are applied each turn until the destination is reached. Actual distance is used, so if a squad moves one position in the x or y direction it uses one point, but moving diagonally takes ~1.4 points. The squad maintains its actual position as floats, which are then rounded to allow drawing the position on the map. The path is described by touching the squad and dragging to the end position, then lifting the pen or finger. (I'm doing this on an iPhone now but Android/Qt/Windows Mobile would work the same.) As the pointer moves, x,y points are recorded so that the squad gains a list of intermediate destinations on the way to the final destination. I'm finding that the destinations are not evenly spaced but can be further apart depending on the speed of the pointer movement. Following the path is important because obstacles or terrain matter in this game. I'm not trying to remake Flight Control but that's a similar mechanic.

    Here's what I've been doing, but it just seems too complicated (pseudocode):

        getDestination() {
            self.nextDestination = remove_from_array(destinations)
            self.gradient = delta y to destination / delta x to destination
            self.angle = atan(self.gradient)
            self.cosAngle = cos(self.angle)
            self.sinAngle = sin(self.angle)
        }

        move() {
            get movement allocation for this turn
            if self.nextDestination not valid
                getNextDestination()
            while (nextDestination valid) && (movement allocation remains) {
                find xStep and yStep using movement allocation and sinAngle/cosAngle calculated for current self.nextDestination
                if current position + xStep crosses the destination
                    find x movement remaining after self.nextDestination reached
                    calculate remaining direct path movement allocation (xStep remaining / cosAngle)
                    make self.position equal to self.nextDestination
                else
                    apply xStep and yStep to current position
            }
            round squad's float coordinates to integer screen coordinates
            draw squad image on map
        }

    That's simplified of course; stuff like sign needs to be tweaked to ensure movement is in the right direction. If trig is the best way to do it then lookup tables can be used, or maybe it doesn't matter on modern devices like it used to. Suggestions for a better way to do it?

    An update: the iPhone has zero issues with trig, tracking tens of positions and tracks implemented as described above, and it draws in floats anyway. The Bresenham method is more efficient; trig is more precise. If I were to use integer Bresenham I would want to multiply by ten or so to maintain a little more positional accuracy to benefit collision/terrain detection.
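    A minimal C++ sketch of the same per-turn stepping, using a normalized direction vector so no atan/sin/cos (and no angle bookkeeping) is needed. It assumes float map coordinates and a simple queue of intermediate destinations; all names are made up for illustration.

        // Hypothetical sketch: step along a waypoint path by a fixed movement
        // allowance per turn. The direction is the normalized delta to the next
        // waypoint, so diagonal movement automatically costs ~1.4 per tile.
        #include <cmath>
        #include <cstdio>
        #include <deque>

        struct Point { float x, y; };

        struct Squad {
            Point pos;
            std::deque<Point> path;  // intermediate destinations, front() is next

            void move(float allowance) {
                while (allowance > 0.0f && !path.empty()) {
                    Point dest = path.front();
                    float dx = dest.x - pos.x, dy = dest.y - pos.y;
                    float dist = std::sqrt(dx * dx + dy * dy);
                    if (dist <= allowance) {          // reach this waypoint
                        pos = dest;
                        allowance -= dist;
                        path.pop_front();
                    } else {                          // step part-way along the segment
                        pos.x += allowance * dx / dist;
                        pos.y += allowance * dy / dist;
                        allowance = 0.0f;
                    }
                }
            }
        };

        int main() {
            Squad s;
            s.pos = {0.0f, 0.0f};
            s.path = {{3.0f, 4.0f}, {3.0f, 10.0f}};
            for (int turn = 0; turn < 4; ++turn) {
                s.move(2.5f);  // movement points allocated this turn
                std::printf("turn %d: (%.2f, %.2f)\n", turn, s.pos.x, s.pos.y);
            }
        }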

    Read the article

  • c++ class member functions selected by traits

    - by Jive Dadson
    I am reluctant to say I can't figure this out, but I can't figure this out. I've googled and searched Stack Overflow, and come up empty. The abstract, and possibly overly vague form of the question is, how can I use the traits-pattern to instantiate non-virtual member functions? The question came up while modernizing a set of multivariate function optimizers that I wrote more than 10 years ago. The optimizers all operate by selecting a straight-line path through the parameter space away from the current best point (the "update"), then finding a better point on that line (the "line search"), then testing for the "done" condition, and if not done, iterating. There are different methods for doing the update, the line-search, and conceivably for the done test, and other things. Mix and match. Different update formulae require different state-variable data. For example, the LMQN update requires a vector, and the BFGS update requires a matrix. If evaluating gradients is cheap, the line-search should do so. If not, it should use function evaluations only. Some methods require more accurate line-searches than others. Those are just some examples. The original version instantiates several of the combinations by means of virtual functions. Some traits are selected by setting mode bits. Yuck. It would be trivial to define the traits with #define's and the member functions with #ifdef's and macros. But that's so twenty years ago. It bugs me that I cannot figure out a whiz-bang modern way. If there were only one trait that varied, I could use the curiously recurring template pattern. But I see no way to extend that to arbitrary combinations of traits. I tried doing it using boost::enable_if, etc.. The specialized state info was easy. I managed to get the functions done, but only by resorting to non-friend external functions that have the this-pointer as a parameter. I never even figured out how to make the functions friends, much less member functions. Perhaps tag-dispatch is the key. I haven't gotten very deeply into that. Surely it's possible, right? If so, what is best practice?
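    One pattern that fits "arbitrary combinations of traits" is policy-based design: each trait becomes a template parameter that contributes its own state and non-virtual member functions, and the optimizer inherits from every policy it is instantiated with. A hedged sketch, with all class names invented for illustration:

        // Hypothetical sketch: policies as template parameters. Each policy class
        // carries its own state and non-virtual member functions; the optimizer
        // inherits from all of them, so any combination of traits can be mixed in
        // with no virtual dispatch and no mode bits.
        #include <cstdio>
        #include <vector>

        struct LmqnUpdate {                 // needs only a vector of state
            std::vector<double> direction;
            void update() { std::puts("LMQN update"); }
        };

        struct BfgsUpdate {                 // needs a matrix of state
            std::vector<std::vector<double>> hessian_approx;
            void update() { std::puts("BFGS update"); }
        };

        struct GradientLineSearch {
            void line_search() { std::puts("line search using gradients"); }
        };

        struct ValueOnlyLineSearch {
            void line_search() { std::puts("line search using function values only"); }
        };

        template <class UpdatePolicy, class LineSearchPolicy>
        class Optimizer : public UpdatePolicy, public LineSearchPolicy {
        public:
            void iterate() {
                this->update();       // supplied by UpdatePolicy
                this->line_search();  // supplied by LineSearchPolicy
                // ... done test, etc.
            }
        };

        int main() {
            Optimizer<LmqnUpdate, GradientLineSearch> a;
            Optimizer<BfgsUpdate, ValueOnlyLineSearch> b;
            a.iterate();
            b.iterate();
        }

    Each combination is a distinct concrete type, so there is no virtual dispatch and no runtime mode bits; CRTP can still be layered on top when a policy needs to call back into the full optimizer.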

    Read the article

  • How to "enable" HTML5 elements in IE that were inserted by AJAX call?

    - by Gidon
    IE does not work well with unknown elements (i.e. HTML5 elements); one cannot style them or access most of their props. There are numerous workarounds for this, for example: http://remysharp.com/2009/01/07/html5-enabling-script/ The problem is that this works great for static HTML that was available on page load, but when one creates HTML5 elements afterward (for example an AJAX call containing them, or simply creating them with JS), IE will mark these newly added elements as HTMLUnknownElement as opposed to HTMLGenericElement (in the IE debugger). Does anybody know a workaround for that, so that newly added elements will be recognized/enabled by IE? Here is a test page:

        <html><head><title>TIME TEST</title>
        <!--[if IE]>
        <script src="http://html5shiv.googlecode.com/svn/trunk/html5.js"></script>
        <![endif]-->
        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js" type="text/javascript"></script>
        </head>
        <body>
        <time>some time</time>
        <hr>
        <script type="text/javascript">
        $("time").text("WORKS GREAT");
        $("body").append("<time>NEW ELEMENT</time>"); //simulates AJAX callback insertion
        $("time").text("UPDATE");
        </script>
        </body>
        </html>

    In IE you will see: UPDATE and NEW ELEMENT. In any other modern browser you will see: UPDATE and UPDATE.

    Solution: Using the answer provided, I came up with the following piece of JavaScript to HTML5-enable a whole bunch of elements returned by my AJAX call:

        (function ($) {
            jQuery.fn.html5Enable = function () {
                if ($.browser.msie) {
                    $("abbr, article, aside, audio, canvas, details, figcaption, figure, footer, header, hgroup, mark, menu, meter, nav, output, progress, section, summary, time, video", this).replaceWith(function () {
                        if (this.tagName == undefined) return "";
                        var el = $(document.createElement(this.tagName));
                        for (var i = 0; i < this.attributes.length; i++)
                            el.attr(this.attributes[i].nodeName, this.attributes[i].nodeValue);
                        el.html(this.innerHTML);
                        return el;
                    });
                }
                return this;
            };
        })(jQuery);

    Now this can be called whenever you want to append something:

        var el = $(AJAX_RESULT_OR_HTML_STRING);
        el.html5Enable();
        $("SOMECONTAINER").append(el);

    See http://code.google.com/p/html5shiv/issues/detail?id=4 for an explanation about what this plugin doesn't do.

    Read the article

  • c++ class member functions instantiated by traits

    - by Jive Dadson
    I am reluctant to say I can't figure this out, but I can't figure this out. I've googled and searched stackoverflow, and come up empty. The abstract, and possibly overly vague form of the question is, how can I use the traits-pattern to instantiate non-virtual member functions? The question came up while modernizing a set of multivariate function optimizers that I wrote more than 10 years ago. The optimizers all operate by selecting a straight-line path through the parameter space away from the current best point (the "update"), then finding a better point on that line (the "line search"), then testing for the "done" condition, and if not done, iterating. There are different methods for doing the update, the line-search, and conceivably for the done test, and other things. Mix and match. Different update formulae require different state-variable data. For example, the LMQN update requires a vector, and the BFGS update requires a matrix. If evaluating gradients is cheap, the line-search should do so. If not, it should use function evaluations only. Some methods require more accurate line-searches than others. Those are just some examples. The original version instantiates several of the combinations by means of virtual functions. Some traits are selected by setting mode bits that are tested at runtime. Yuck. It would be trivial to define the traits with #define's and the member functions with #ifdef's and macros. But that's so twenty years ago. It bugs me that I cannot figure out a whiz-bang modern way. If there were only one trait that varied, I could use the curiously recurring template pattern. But I see no way to extend that to arbitrary combinations of traits. I tried doing it using boost::enable_if, etc.. The specialized state info was easy. I managed to get the functions done, but only by resorting to non-friend external functions that have the this-pointer as a parameter. I never even figured out how to make the functions friends, much less member functions. The compiler (vc++ 2008) always complained that things didn't match. I would yell, "SFINAE, you moron!" but the moron is probably me. Perhaps tag-dispatch is the key. I haven't gotten very deeply into that. Surely it's possible, right? If so, what is best practice?

    Read the article

  • Commitment to Zend Framework - any arguments against?

    - by Pekka
    I am refurbishing a big CMS that I have been working on for quite a number of years now. The product itself is great, but some components, the Database and translation classes for example, need urgent replacing - partly self-made as far back as 2002, grown into a bit of a chaos over time, and they might have trouble surviving a security audit. So, I've been looking closely at a number of frameworks (or, more exactly, component libraries, as I do not intend to change the basic structure of the CMS) and ended up liking Zend Framework the best. They offer a solid MVC model but don't force you into it, and they offer a lot of professional components that have obviously received a lot of attention. (Did you know there are multiple plurals in Russian, and you can't translate them using a simple ($number == 0) or ($number > 1) switch? I didn't, but Zend_Translate can handle it. Just to illustrate the level of thoroughness the library seems to have been built with.) I am now literally at the point of no return, starting to replace key components of the system with the Zend-made ones. I'm not really having second thoughts - and I am surely not looking to incite a flame war - but before going onward, I would like to step back for a moment and ask whether there is anything speaking against tying a big system closely to Zend Framework.

    What I like about Zend:
    - As far as I can see, very high quality code
    - Extremely well documented, at least regarding introductions to how things work (haven't had to use detailed API documentation yet)
    - Backed by a company that has an interest in seeing the framework prosper
    - Well received in the community, has a considerable user base
    - Employs coding standards I like
    - Comes with a full set of unit tests
    - Feels to me like the right choice to make - or at least, one of the right choices - in terms of modern, professional PHP development

    I have been thinking about encapsulating and abstracting ZF's functionality into my own classes to be able to switch frameworks more easily, but have come to the conclusion that this would not be a good idea because:
    - it would be an unnecessary level of abstraction
    - it could cost performance
    - the big advantage of using a framework - the existence of a developer base that is familiar with its components - would partly be cancelled out

    Therefore, the commitment to ZF would be a deep one. Thus my question: Is there anything substantial speaking against committing to the Zend Framework? Do you have insider knowledge of plans of Zend Inc. to go evil in 2011 and make it a closed-source library? Is Zend Inc. run by vampires? Are there conceptual flaws in the code base you start to notice when you've transitioned all your projects to it? Is the appearance of quality code an illusion? Does the code look good, but run terribly slow on anything below my quad-core workstation?

    Read the article

  • Why bubbling is not working

    - by slier
    I just want to understand how capturing and bubbling work. Unfortunately this code only works in IE, and is not working in Firefox. When I click on div3, it just stops there; it is not bubbling up toward the body element. Can somebody enlighten me?

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta content="text/html; charset=utf-8" http-equiv="Content-Type" />
        <title>Untitled 1</title>
        <script type="text/javascript">
        var addEvent = function(elem, event, func, capture){
            if(typeof(window.event) != 'undefined'){
                elem.attachEvent('on' + event, func);
            }
            else{
                elem.addEventListener(event, func, capture);
            }
        }

        var bodyFunc = function(){ alert('In element body') }
        var div1Func = function(){ alert('In element div1') }
        var div2Func = function(){ alert('In element div2') }
        var div3Func = function(){ alert('In element div3') }

        var init = function(){
            addEvent(document.getElementsByTagName('body')[0], 'click', bodyFunc, true);
            addEvent(document.getElementById('div1'), 'click', div1Func, true);
            addEvent(document.getElementById('div2'), 'click', div2Func, true);
            addEvent(document.getElementById('div3'), 'click', div3Func, true);
        }

        addEvent(window, 'load', init, false)
        </script>
        </head>
        <body>
        <h1>Using the Modern Event Model</h1>
        <div id="div1" style="border:1px solid #000000;padding:10pt;background:cornsilk">
            <p>This is div 1</p>
            <div id="div2" style="border:1px solid #000000;padding:10pt;background:gray">
                <p>This is div 2</p>
                <div id="div3" style="border:1px solid #000000;padding:10pt; background:lightBlue">
                    <p>This is div 3</p>
                </div>
            </div>
        </div>
        </body>
        </html>

    Read the article

  • Can sorting Japanese kanji words be done programatically?

    - by Mason
    I've recently discovered, to my astonishment (having never really thought about it before), that machine-sorting Japanese proper nouns is apparently not possible. I work on an application that must allow the user to select a hospital from a 3-menu interface. The first menu is Prefecture, the second is City Name, and the third is Hospital. Each menu should be sorted, as you might expect, so the user can find what they want in the menu. Let me outline what I have found, as preamble to my question: The expected sort order for Japanese words is based on their pronunciation. Kanji do not have an inherent order (there are tens of thousands of Kanji in use), but the Japanese phonetic syllabaries do have an order: あいうえお、かきくけこ、さしすせそ... and on for the fifty traditional distinct sounds (a few of which are obsolete in modern Japanese). This sort order is called 五十音順 (gojuu on jun, or '50-sound order'). Therefore, Kanji words should be sorted in the same order as they would be if they were written in hiragana. (You can represent any kanji word in phonetic hiragana in Japanese.) The kicker: there is no canonical way to determine the pronunciation of a given word written in kanji. You never know. Some kanji have ten or more different pronunciations, depending on the word. Many common words are in the dictionary, and I could probably hack together a way to look them up from one of the free dictionary databases, but proper nouns (e.g. hospital names) are not in the dictionary. So, in my application, I have a list of every prefecture, city, and hospital in Japan. In order to sort these lists, which is a requirement, I need a matching list of each of these names in phonetic form (kana). I can't come up with anything other than paying somebody fluent in Japanese (I'm only so-so) to manually transcribe them. Before I do so though: Is it possible that I am totally high on fire, and there actually is some way to do this sorting without creating my own mappings of kanji words to phonetic readings, that I have somehow overlooked? Is there a publicly available mapping of prefecture/city names, from the government or something? That would reduce the manual mapping I'd need to do to only hospital names. Does anybody have any other advice on how to approach this problem? Any programming language is fine--I'm working with Ruby on Rails but I would be delighted if I could just write a program that would take the kanji input (say 40,000 proper nouns) and then output the phonetic representations as data that I could import into my Rails app. ??????????

    Read the article

  • CSS selectors: should I minimise my use of the class attribute in the HTML or optimise the speed

    - by Laurent Bourgault-Roy
    As I was working on a small website, I decided to use the PageSpeed extension to check if there was some improvement I could make to have the site load faster. However, I was quite surprised when it told me that my use of CSS selectors was "inefficient". I was always told that you should keep the usage of the class attribute in the HTML to a minimum, but if I understand correctly what PageSpeed tells me, it's much more efficient for the browser to match directly against a class name. It makes sense to me, but it also means that I need to put more CSS classes in my HTML. It also makes my .css file a little harder to read. I usually tend to mark up my CSS like this:

        #mainContent p.productDescription em.priceTag { ... }

    Which makes it easy to read: I know this will affect the main content, and that it affects something in a paragraph tag (so I won't start to put all sorts of layout code in it) that describes a product, and it's something that needs emphasis. However it seems I should rewrite it as

        .priceTag { ... }

    which removes all context information from the style. And if I want to use differently formatted price tags (for example, one in a list on the sidebar and one in a paragraph), I need to use something like

        .paragraphPriceTag { ... }
        .listPriceTag { ... }

    which really annoys me, since I seem to duplicate the semantics of the HTML in my classes. And it means I can't put common style in an unqualified .priceTag { ... } and thus I need to replicate the style in both CSS rules, making it harder to make changes. (Although for that I could use multiple class selectors, but IE6 doesn't support them.) I believe making code harder to read for the sake of speed has never really been considered a very good practice, except where it is critical, of course. This is why people use PHP/Ruby/C# etc. instead of C/assembly to code their sites: it's easier to write and debug. So I was wondering if I should stick with few CSS classes and complex selectors, or if I should go the optimisation route and remove my fancy CSS selectors for the sake of speed? Does PageSpeed make over-the-top recommendations? On most modern computers, will it even make a difference?

    Read the article

  • Where do you put your unit test?

    - by soulmerge
    I have found several conventions for organizing unit tests in a project and I'm not sure which approach would be suitable for our next PHP project. I am trying to find the best convention to encourage easy development and accessibility of the tests when reviewing the source code. I would be very interested in your experience/opinion regarding each:

    One folder for productive code, another for unit tests: This separates unit tests from the logic files of the project. This separation of concerns is as much a nuisance as it is an advantage: someone looking into the source code of the project will - so I suppose - either browse the implementation or the unit tests (or more commonly: the implementation only). The advantage of unit tests being another viewpoint on your classes is lost - those two viewpoints are just too far apart IMO.

    Annotated test methods: Any modern unit testing framework I know allows developers to create dedicated test methods, annotating them (@test) and embedding them in the project code. The big drawback I see here is that the project files get cluttered. Even if these methods are separated using a comment header (like UNIT TESTS below this line) it just bloats the class unnecessarily.

    Test files within the same folders as the implementation files: Our file naming convention dictates that PHP files containing classes (one class per file) should end with .class.php. I could imagine that putting unit tests regarding a class file into another one ending in .test.php would render the tests much more present to other developers without tainting the class. Although it bloats the project folders instead of the implementation files, this is my favorite so far, but I have my doubts: I would think others have come up with this already and discarded this option for some reason (i.e. I have not seen a Java project with the files Foo.java and FooTest.java within the same folder). Maybe it's because Java developers make heavier use of IDEs that allow them easier access to the tests, whereas in PHP no big editors have emerged (like Eclipse for Java) - many devs I know use vim/emacs or similar editors with little support for PHP development per se.

    What is your experience with any of these unit test placements? Do you have another convention I haven't listed here? Or am I just overrating unit test accessibility to reviewers?

    Read the article

  • ANSI C as core of a C# project? Is this possible?

    - by Nektarios
    I'm writing a NON-GUI app which I want to be cross-platform between OS X and Windows. I'm looking at the following architecture, but I don't know if it will work on the Windows side:

        (Platform-specific entry point) - ANSI C main loop = ANSI C model code doing data processing / logic = (Platform-specific helpers)

    So the core stuff I'm planning to write in regular ANSI C, because A) it should be platform independent, B) I'm extremely comfortable with C, and C) it can do the job and do it well. (Platform-specific entry point) can be written in whatever is necessary to get the job done; this is a small amount of code, it doesn't matter to me. (Platform-specific helpers) is the sticky thing. This is stuff like parsing XML, accessing databases, graphics toolkit stuff, whatever. Things that aren't easy in C. Things that modern languages/frameworks will give you for free. On OS X this code will be written in Objective-C interfacing with Cocoa. On Windows I'm thinking my best bet is to use C#. So on Windows my architecture (simplified) looks like:

        (C# or C?) - ANSI C - C#

    Is this possible? Some thoughts/suggestions so far:

    1) Compile my C core as a .dll -- this is fine, but it seems there's no way to call my C# helpers unless I can somehow get function pointers and pass them to my core, but that seems unlikely.

    2) Compile a C .exe and a C# .exe and have them talk via shared memory or some kind of IPC. I'm not entirely opposed to this but it obviously introduces a lot of complexity, so it doesn't seem ideal.

    3) Instead of C#, use C++; it gets me some nice data management stuff and nice helper code. And I can mix it pretty easily. And the work I do could probably easily port to Linux. But I really don't like C++, and I don't want this to turn into a third-party-library-fest. Not that it's a huge deal, but it's 2010... anything for basic data management should be built in. And targeting Linux is really not a priority.

    Note that no "total" alternatives are OK, as suggested in other similar questions on SO I've seen; Java, RealBasic, Mono... this is an extremely performance-intensive application doing soft real-time for game/simulation purposes. I need C & friends here to do it right (maybe you don't, but I do).
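    On point 1, passing function pointers across that boundary is in fact a common pattern: .NET can hand native code function pointers backed by C# delegates (P/Invoke and Marshal.GetFunctionPointerForDelegate exist for exactly this). A hedged sketch of what the C-side registration interface for the platform-specific helpers could look like - written here as C-style C++ with extern "C" linkage, and with all names invented:

        // Hypothetical sketch of the core's helper interface: the platform-neutral
        // core never calls platform code directly; instead the host (Cocoa/Obj-C on
        // OS X, C# via P/Invoke on Windows) registers callbacks at startup.
        #include <cstdio>

        extern "C" {

        typedef void (*log_fn)(const char* message);
        typedef int  (*load_xml_fn)(const char* path, char* buffer, int buffer_size);

        struct HostCallbacks {
            log_fn      log;
            load_xml_fn load_xml;
        };

        static HostCallbacks g_host = {0, 0};

        // Called once by the platform-specific entry point.
        void core_set_host_callbacks(const HostCallbacks* callbacks) {
            g_host = *callbacks;
        }

        // Example core routine that leans on the host for platform work.
        void core_run_step(void) {
            if (g_host.log) g_host.log("core: running one simulation step");
            // ... platform-independent model/logic code here ...
        }

        }  // extern "C"

        // A stand-in host for testing the core natively.
        static void stdio_log(const char* message) { std::puts(message); }

        int main() {
            HostCallbacks cb = {stdio_log, 0};
            core_set_host_callbacks(&cb);
            core_run_step();
        }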

    Read the article

  • C++ class member functions instantiated by traits

    - by Jive Dadson
    I am reluctant to say I can't figure this out, but I can't figure this out. I've googled and searched Stack Overflow, and come up empty. The abstract, and possibly overly vague form of the question is, how can I use the traits-pattern to instantiate non-virtual member functions? The question came up while modernizing a set of multivariate function optimizers that I wrote more than 10 years ago. The optimizers all operate by selecting a straight-line path through the parameter space away from the current best point (the "update"), then finding a better point on that line (the "line search"), then testing for the "done" condition, and if not done, iterating. There are different methods for doing the update, the line-search, and conceivably for the done test, and other things. Mix and match. Different update formulae require different state-variable data. For example, the LMQN update requires a vector, and the BFGS update requires a matrix. If evaluating gradients is cheap, the line-search should do so. If not, it should use function evaluations only. Some methods require more accurate line-searches than others. Those are just some examples. The original version instantiates several of the combinations by means of virtual functions. Some traits are selected by setting mode bits that are tested at runtime. Yuck. It would be trivial to define the traits with #define's and the member functions with #ifdef's and macros. But that's so twenty years ago. It bugs me that I cannot figure out a whiz-bang modern way. If there were only one trait that varied, I could use the curiously recurring template pattern. But I see no way to extend that to arbitrary combinations of traits. I tried doing it using boost::enable_if, etc.. The specialized state information was easy. I managed to get the functions done, but only by resorting to non-friend external functions that have the this-pointer as a parameter. I never even figured out how to make the functions friends, much less member functions. The compiler (VC++ 2008) always complained that things didn't match. I would yell, "SFINAE, you moron!" but the moron is probably me. Perhaps tag-dispatch is the key. I haven't gotten very deeply into that. Surely it's possible, right? If so, what is best practice?
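    Since tag dispatch comes up at the end: a minimal sketch of how it can select non-virtual member-function overloads at compile time from a typedef supplied by the traits class. The names are invented for illustration and this is only one possible shape of the technique.

        // Hypothetical sketch of tag dispatch inside one class: the public member
        // function forwards to a private overload selected at compile time by the
        // traits' tag type, so no virtual functions and no runtime mode bits.
        #include <cstdio>

        struct cheap_gradient_tag {};
        struct expensive_gradient_tag {};

        template <class Traits>
        class Optimizer {
        public:
            void line_search() {
                // Chooses an overload from the tag typedef supplied by Traits.
                line_search_impl(typename Traits::gradient_category());
            }
        private:
            void line_search_impl(cheap_gradient_tag) {
                std::puts("line search using gradients");
            }
            void line_search_impl(expensive_gradient_tag) {
                std::puts("line search using function values only");
            }
        };

        struct MyTraits {
            typedef cheap_gradient_tag gradient_category;
        };

        int main() {
            Optimizer<MyTraits> opt;
            opt.line_search();
        }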

    Read the article

  • make img height 100% of td

    - by kristina childs
    I'm creating an HTML email, and since background images can't be used on anything but <body>, I thought I could get around this by making a border image 100% height within a cell. Perhaps it was wishful thinking? I've searched, and the solutions that worked in the past no longer work in modern browsers. Is there any special trick to making this happen without setting a hard height for the cell? Here are the things I've tried so far:

        <td width="25" style="margin:0; padding:0;">
            <img src="http://www.mysite.com/images/side-left.jpg" width="25" height="100%" alt="border" style="margin:0; padding:0; display: block;" />
        </td>

    stretches the image to 100% height of the entire table (even though the table is nested in a

        <td width="25" height="100%" style="margin:0; padding:0;">
            <div style="height:100%; diplay: block;">
                <img src="http://www.mysite.com/images/side-left.jpg" width="25" height="100%" alt="border" style="margin:0; padding:0; display: block;" />
            </div>
        </td>

    ditto

        <td width="25" height="1" style="margin:0; padding:0;">
            <div style="height:100%; diplay: block;">
                <img src="http://www.mysite.com/images/side-left.jpg" width="25" height="100%" alt="border" style="margin:0; padding:0; display: block;" />
            </div>
        </td>

    setting a smaller td size does not force it to stretch as expected. bummer.

    Read the article

  • organizing unit test

    - by soulmerge
    I have found several conventions for organizing unit tests in a project and I'm not sure which approach would be suitable for our next PHP project. I am trying to find the best convention to encourage easy development and accessibility of the tests when reviewing the source code. I would be very interested in your experience/opinion regarding each:

    One folder for productive code, another for unit tests: This separates unit tests from the logic files of the project. This separation of concerns is as much a nuisance as it is an advantage: someone looking into the source code of the project will - so I suppose - either browse the implementation or the unit tests (or more commonly: the implementation only). The advantage of unit tests being another viewpoint on your classes is lost - those two viewpoints are just too far apart IMO.

    Annotated test methods: Any modern unit testing framework I know allows developers to create dedicated test methods, annotating them (@test) and embedding them in the project code. The big drawback I see here is that the project files get cluttered. Even if these methods are separated using a comment header (like UNIT TESTS below this line) it just bloats the class unnecessarily.

    Test files within the same folders as the implementation files: Our file naming convention dictates that PHP files containing classes (one class per file) should end with .class.php. I could imagine that putting unit tests regarding a class file into another one ending in .test.php would render the tests much more present to other developers without tainting the class. Although it bloats the project folders instead of the implementation files, this is my favorite so far, but I have my doubts: I would think others have come up with this already and discarded this option for some reason (i.e. I have not seen a Java project with the files Foo.java and FooTest.java within the same folder). Maybe it's because Java developers make heavier use of IDEs that allow them easier access to the tests, whereas in PHP no big editors have emerged (like Eclipse for Java) - many devs I know use vim/emacs or similar editors with little support for PHP development per se.

    What is your experience with any of these unit test placements? Do you have another convention I haven't listed here? Or am I just overrating unit test accessibility to reviewing developers?

    Read the article

  • Windows Machine Debugging

    - by PrettyFlower
    I've been learning how to program for Windows for some time now and am getting pretty comfy with COM. I had thought to go over to Linux and do some C++ programming there and I wished to run Rosetta Commons so I installed Fedora. I had tried installing Ubuntu a few months ago and things got messy. I had a glitch, maybe caused by one of the live cd creators, my video card or something I don't know. Who Crashed suggested it was my video card and I had regular messages about ntfs.sys and page file issues. At any rate I just installed Fedora and the same thing is happening again. I would like to think with the twenty five years of doing this that I might finally make some headway into debugging my system. I think I may have overlooked a lot of what could be done in favor of simply uninstalling, reinstalling and formatting and starting from scratch. I have opened up the folder windows debugging tools, quite accidentally and just before I was going to clean sweep again, and I found KD and WinDbg. I had never seen these before and I felt that maybe I should look into this. I am quite familiar with the modern machine that is known as the computer, I know what a Kernel is and am now pretty familiar with at the very least Windows Operating System Services. I wish to begin tracking my own machines errors. I understand that most kernel debugging is done on a second machine but I don't have one. And also I understand the goal of the debugger seems to be less about run of the mill errors and more about development time strategies but I'm sure there is more to this. This is my first go at this and I thought maybe I could get some suggestions on where to go from here. I would really like to learn ways to fix my machine and also maybe pick up some tricks on the dev side as well. I hope this isn't too broad a question or too generalized. I'm really just looking for the keywords and an overview of the more routine strategies used. thx

    Read the article

  • creating a hierarchy of terminals or workspaces

    - by intuited
    <rant This question occurred to me ('occurred' meaning 'whispered seductively in my ear for the 100th time') while using GNU-screen, so I'll make that my example. However this is a much more general question about user interfaces and what I perceive as a flaw (a missing feature) in every implementation I've yet seen. I'm wondering if there is some way to create a hierarchy/tree of terminals in a screen session. E.g. I'd like to have something like

        1 bash
        1.1 bash
        1.2 bash
        2 bash
        3 bash
        3.1 bash
        3.1.1 bash
        3.1.2 bash

    It would be good if the terminals could be labelled instead of having to be navigated to via some arrangement that I suspect doesn't exist. So then you could jump to one using e.g. ^A:goto happydays or ^A:goto dykstra.angry.

    So to generalize that: Firefox, Chrome, Internet Explorer, gnome-terminal, roxterm, konsole, yakuake, OpenOffice, Microsoft Office, Mr. Snuffaluppagus's Funtime Carousel™, and Your Mom's Jam Browser™ all offer the ability to create a flat set of tabs containing documents of an identical nature: web pages, terminals, documents, fun rideable animals, and jams. GNU-screen implements the same functionality without using tabs. Linux and OS X window managers provide the ability to organize windows into an array of workspaces, which amounts to, again, the same deal. Over the past few years, this has become a more or less ubiquitous concept which has been righteously welcomed into the far reaches of the computer interface funfest. Heavy users of these systems quickly encounter a problem with it: the set of entities is flat. In the case of workspaces, an option may be available to create a 2D array. However none of these applications furnish their users with the ability to create hierarchies, similar to filesystem directory structures, containing instances of their particular contained type. I for one am consistently bothered by this, and am wondering if the community can offer some wisdom as to why this has not happened in any of the foremost collections of computational functionality our culture has yet produced. Or if perhaps it has and I'm just an ignorant savage.

    I'd like to be able to not only group things into a tree structure, but also to create references (aka symbolic links, aka pointers) from one part of the structure to another, as well as apply properties (e.g. default directory, colorscheme, ...) recursively downward from a given node. I see no reason why we shouldn't be able to save these structures as known sessions, and apply tags to particular instances. So then you can sort through them by tag, find them by name, or just use the arrow keys (with an appropriate modifier) to move left or right and in or out of a given level. Another key combo would serve to create a branch in the place of the current terminal/webpage/lifelike statue/spreadsheet/spreadsheet sheet/presentation/jam and move that entity into the new branch, then create a fresh one as a sibling to it: a second leaf node within the same branch node. They would get along well. I find it a bit astonishing that this hasn't happened yet, and the only reason I can venture as a guess is that the creators of these fine systems do not consider such functionality to be useful to a significant portion of their userbase. I posit that the probability that such an assumption would be correct is pretty low.

    On the other hand, given the relative ease with which such structures can be implemented using modern libraries/languages, it doesn't seem likely that difficulty of implementation would be a major roadblock. If it could be done in 1972 or whenever within the constraints of a filesystem driver, it should be relatively painless to implement in 2010 in a full-blown application. Given that all of these systems are capable of maintaining a set of equivalent entities, it seems unlikely that a major infrastructure overhaul would be necessary in order to enable a navigable hierarchy of them. </rant

    Mostly I'm just looking to start up a discussion and/or brainstorming on this topic. Any ideas, examples, criticism, or analysis are quite welcome.

    * Mr. Snuffaluppagus's Funtime Carousel is a registered trademark of Children's Television Workshop Inc.
    * Your Mom's Jam Browser is a registered trademark of Your Mom Inc.

    Read the article

  • Graphics card initialisation problems when booting - requires a "double" boot

    - by DMA57361
    Problem Outline
    When booting from cold (my machine is disconnected from mains power when off, but leaving it connected doesn't help), the graphics card (single PCI-e card, GeForce 460) will not initialise on the first boot, leaving me with the motherboard's on-board graphics (which kick in automatically if no PCI-e card is found). However, if I restart the computer - normally I do this by powering it off just after the numlock lights up on the keyboard (i.e. just after POST/BIOS and before Windows takes over), waiting for the system to whirr down, and powering up again - the graphics card will work correctly. Once double-booted in this manner the system seems to work correctly, with no noticeable problems. This is reproducible every time I try to boot - it has been working like this for about a month now.

    Background Information
    Sept 2010 - I suffered a hardware malfunction (crashes in Windows and graphics corruption on BIOS screens). By way of spare hardware I determined that replacing the PSU removed the issue, so I replaced the PSU with a brand new one of slightly higher power (460W replaced with 500W).
    Oct 2010 - The problem resurfaced. I purchased a new graphics card (GeForce 460), which removed the problem. The new graphics card immediately started having the boot initialisation problems mentioned. I presumed there was a motherboard fault all along, but because the system worked once booted, and I was temporarily out of spare money, I left the system alone and continued to use it.
    Early/Mid Dec 2010 - In the space of 5 days I received 3 instances of hard drive corruption (seemingly fixed by chkdsk and sfc in each case...). Since I was already under the impression the motherboard was faulty, I purchased a new one ASAP; this also required new RAM (as I dropped from 4 slots to 2 and didn't want to drop memory quantity).
    Past 3-4 weeks - With a brand new PSU, graphics card, motherboard and RAM I'm suffering the problem outlined above.

    So, what could be causing this and how can I resolve it?

    Additional Notes
    Once double-booted the system seems to work entirely correctly. The graphics card problem has occurred on two entirely different motherboards. I do not have the opportunity to test the graphics card in a different computer (I've only the old motherboard, which is dubious, or a really old desktop that still has an AGP port). Under load (i.e. modern games for long enough for temperatures to plateau) the system remains stable and performs as expected. The software that came with the new motherboard and SpeedFan both report that all voltages and temperatures are within nominal bounds, when idle and when under load. I've looked over the BIOS settings for my motherboard multiple times and can find nothing that helps. This system is configured to run with everything at standard levels - no overclocking. I've tried booting the system with only the mobo and graphics card connected (thinking maybe my new PSU was too weak for the new gfx card, even though it meets the quoted PSU requirements for the card) but the same problem persists (and really, if the PSU were weak I'd have problems with the system under load). When the gfx card does not initialise, the fan on its cooling unit is running, possibly slower than otherwise - but this measurement is by eye and so unreliable.

    Read the article

  • Lag spikes at full CPU usage, laggy mouse, maybe video card

    - by Roberts
    My PC specs:
    Motherboard Name - Gigabyte GA-945PL-S3
    CPU Type - DualCore Intel Core 2 Duo E4300, 1800 MHz (9 x 200)
    OS - Microsoft Windows 7 Ultimate
    OS Kernel Type - 32-bit
    OS Version - 6.1.7601

    I bought a new video card one month ago, a GeForce 210. I didn't have any problems. I wanted to overclock it; in other words, "play with it". So I installed Gigabyte EasyBoost from the CD and overclocked the GPU 590 + 110 MHz, and the memory to its max of 960 MHz from 800 MHz. Benchmarks showed a slightly bigger score. Then I overclocked the shader clock from 1405 to [..] (don't really remember). So I was playing Modern Warfare 2 when all of a sudden the computer froze when I wanted to select a team; I was AFK before that. I had to reset the CMOS. After that I had problems with Skype: unread messages and no sound. Then I figured out that whenever I open EasyBoost, Skype starts to glitch again. Now I use EVGA Precision X.

    Now, after a month, I cleaned the computer and closed the case; it was open all the time. I started to overclock the GPU clock only (just a bit) because there were no problems that would stop me. So sometimes on heavy CPU load the graphics start to lag. Dragging a window is painful to watch too. Sometimes the screen freezes for 5 to 10 seconds (I can see that hard disk activity is maximal). You may say that it's the CPU's fault, isn't it? But sometimes the lag spikes start randomly when CPU load is at maximum. All 3 benchmark programs (PerformanceTest, NovaBench and MSI Kombustor) show that the performance of my video card has dropped about 25%. BUT! The CPU score is lower too. I ignored these problems, but when I refreshed the Windows Experience Index I was shocked.

    Month before (in Latvian, but not so hard to understand): [screenshot]
    Now 01.04.2012 (upgraded RAM): [screenshot]
    This happened when I tried to capture Minecraft with Fraps on the GPU underclocked to 580 MHz (default: 590 MHz): [screenshot]

    All drivers are up to date. Average CPU temperature is from 55°C to 75°C (at 70°C these lag spikes sometimes start). The video card's temperatures are from 45°C to 60°C (very hard to reach 60°C). So my hope is that the video card is fine, because this card is very new and I want to upgrade the CPU anyway. Apologies for my mistakes in vocabulary (I am trying to type this as fast as I can).

    Update 02.04.2012 - 7:21
    Forgot one thing: my hard disk is extremely slow and I will upgrade it this week or next week, so I will be installing the same OS again. I am a multi-tasker but I can't do much because of the 1.8 GHz CPU and slow hard drive (Model ID - WDC WD800JD-60JRC0). The Windows Experience Index is back to normal. Actually "Spelu grafika" (gaming graphics) is higher than a month ago. During this test the mouse was very laggy, but a month ago there weren't any problems. WHY!?

    Read the article

  • Is the Cloud ready for an Enterprise Java web application? Seeking JEE hosting advice.

    - by Jakub Holý
    Greetings to all the smart people around here! I'd like to ask whether it is feasible or a good idea at all to deploy a Java enterprise web application to a Cloud such as Amazon EC2. More exactly, I'm looking for infrastructure options for an application that shall handle a few hundred users with long but neither CPU- nor memory-intensive sessions. I'm considering dedicated servers, virtual private servers (VPSs) and EC2. I've noticed that there is a project called JBoss Cloud, so people are working on enabling such a deployment; on the other hand it doesn't seem to be mature yet and I'm not sure that the cloud is ready for this kind of application, which differs from the typical cloud-based applications like Twitter. Would you recommend deploying it to the cloud? What are the pros and cons?

    The application is a Java EE 5 web application whose main function is to enable users to compose their own customized Product by combining the available Parts. It uses stateless and stateful session beans and JPA for persistence of entities to an RDBMS, and fetches information about Parts from the company's inventory system via a web service. Aside from external users it's also used by a few internal ones, who are authenticated against the company's LDAP. The application should handle around 300-400 concurrent users building their product and should be reasonably scalable and available, though these qualities are only of medium importance at this stage.

    I've proposed an architecture consisting of a firewall (FW) and a load balancer supporting sticky sessions and https (in the Cloud this would be replaced with EC2's Elastic Load Balancing service and a FW on the app servers; in a physical architecture the load balancer would be HW), then two physical clustered application servers combined with web servers (so that if one fails, a user doesn't lose his/her long-built product) and finally a database server. The DB server would need a slave backup instance that can replace the master instance if it fails. This should provide reasonable availability and fault tolerance and provide good scalability as long as a single RDBMS can keep up with the load, which should be OK for quite a while because most of the operations are done in memory using a stateful bean and only occasionally stored in or retrieved from the DB, and the amount of data is low too. A problematic part could be the dependency on the remote inventory system web service, but with good caching of its outputs in the application it should be OK too.

    Unfortunately I have only a vague idea of the system resources (memory size, number and speed of CPUs/cores) that such an "average Java EE application" for a few hundred users needs. My rough and mostly unfounded estimate based on actual Amazon offerings is that 1.7GB and a single, 2-core "modern CPU" with speed around 2.5GHz (the High-CPU Medium Instance) should be sufficient for either of the two application servers (since we can handle higher load by provisioning more of them). Alternatively I would consider using the Large instance (64b, 7.5GB RAM, 2 cores at 1GHz).

    So my question is whether such a deployment to the cloud is technically and financially feasible, or whether dedicated/VPS servers would be a better option, and whether there are some real-world experiences with something similar. Thank you very much!
/Jakub Holy PS: I've found the JBoss EAP in a Cloud Case Study that shows that it is possible to deploy a real-world Java EE application to the EC2 cloud but unfortunately there're no details regarding topology, instance types, or anything :-(

    Read the article

  • Clarification On Write-Caching Policy, Its Underlying Options And How It Applies To Hard Drives And Solid-State Drives

    - by Boris_yo
    Over the last week, after doing more research on the subject, I have been trying to understand write-caching policy, something I have neglected all these years by always leaving it on its default setting. Write-caching policy improves write performance and consists of write-back caching and write-cache buffer flushing. This is how I understand it all, but correct me if I erred somewhere:

    Write-through caching is not part of the write-caching policy per se; it is when data is written to both the cache and the storage device, so if Windows needs that data again later it is retrieved from the cache rather than from the storage device. This means only read performance is improved, since there is no need to wait for the storage device to read the required data again. Because the data is still written to the storage device, write performance isn't improved, but there is no risk of data loss or corruption in case of a power failure or system crash; only the data in the cache is lost. This option seems to be enabled by default and is recommended for removable devices, with no need for the user to use "Safely Remove Hardware".

    Write-back caching is similar to the above, but without immediately writing the data to the storage device; data is periodically released from the cache and written to the storage device when it is idle. In my opinion this option improves both read and write performance, but it carries a risk if a power failure or system crash occurs: not only can data that was eventually to be written to the storage device be lost, but the result can be file inconsistencies or a corrupted file system. Write-back caching cannot be enabled together with write-through caching, and it is not recommended if no backup power supply is available.

    Write-cache buffer flushing, I reckon, is similar to write-back caching but enables immediate release and writing of data from the cache to the storage device right before a power outage occurs, though I don't know whether it also applies to an occasional system crash. This option seems to be complementary to write-back caching, reducing or potentially eliminating the risk of data loss or file system corruption.

    I have questions about the relevance of the last two options to today's modern SSDs, in order to get the best performance with less wear on the SSD: I know that traditional hard drives come with an onboard cache (I wonder what type of cache that is), but do SSDs also come with a cache? Assuming they do, is this cache faster than their NAND flash and than system RAM, and is it worth taking the risk of utilizing it by enabling write-back caching? I read somewhere that a storage device's cache is generally faster than RAM, but I want to be sure. Additionally, I read that write caching should be enabled, since data that is to be written to NAND flash later is kept for a while in the cache, and provided there is data that gets modified a lot before finally being written, holding this data and releasing it periodically reduces the number of writes to the SSD, thereby reducing its wear.

    Now, regarding write-cache buffer flushing, I have heard that SSD controllers are so fast by themselves that enabling this option is not required, because they manage flushing on their own. However, once again, I don't know whether SSDs have their own onboard cache and whether it is faster than their NAND flash and system RAM, because if it is, keeping this option enabled would make sense.
    Recently I posted a question about an issue with my Intel 330 SSD 120GB, which was the main reason for doing this deeper research: I suspect the write-caching policy is the culprit behind the SSD's freezing issue, assuming that the data being released from the cache is what causes the freezes. Currently I have write caching enabled and write-cache buffer flushing disabled, because I believe the SSD controller's management of write-cache flushing and Windows' write-cache buffer flushing conflict with each other. Since I want to troubleshoot in small steps to finally determine the source of the issue, I have decided to start with the write-caching policy, then move on to drivers, switching to AHCI later on, and finally disabling DIPM (device-initiated power management) through a registry modification, thanks to @TomWijsman.
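
    To check that I picture the difference between write-through and write-back correctly, here is a minimal, purely conceptual sketch of the two policies. It has nothing to do with how Windows or the drive firmware actually implements them; the class and method names are made up, and it only illustrates where the risk of data loss comes from.

        import java.util.LinkedHashMap;
        import java.util.Map;

        // Toy model only: "Disk" stands in for the storage device.
        class WriteCachingSketch {

            interface Disk {
                void write(long block, byte[] data); // slow but durable
            }

            static class WriteThroughCache {
                private final Map<Long, byte[]> cache = new LinkedHashMap<Long, byte[]>();
                private final Disk disk;

                WriteThroughCache(Disk disk) { this.disk = disk; }

                // Every write also goes to the device immediately: later reads are
                // served from the cache, writes are not sped up, and a power failure
                // loses nothing that was reported as written.
                void write(long block, byte[] data) {
                    cache.put(block, data);
                    disk.write(block, data);
                }
            }

            static class WriteBackCache {
                private final Map<Long, byte[]> dirty = new LinkedHashMap<Long, byte[]>();
                private final Disk disk;

                WriteBackCache(Disk disk) { this.disk = disk; }

                // Writes only touch the cache, so they return quickly...
                void write(long block, byte[] data) {
                    dirty.put(block, data);
                }

                // ...and reach the device later, when the cache is flushed. Anything
                // still sitting in 'dirty' is lost if power fails before flush() runs.
                void flush() {
                    for (Map.Entry<Long, byte[]> e : dirty.entrySet()) {
                        disk.write(e.getKey(), e.getValue());
                    }
                    dirty.clear();
                }
            }
        }

    If that picture is right, the whole question is how aggressively the dirty data gets flushed and who is responsible for it, the OS or the SSD controller, which is what I am trying to establish above.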

    Read the article

  • The Best DVD Ripper Software in 2014: A Review

    - by user328170
    The top 3 DVD ripping tools in 2014. Nowadays everyone may have several smart mobile devices, such as an iPhone, iPad Air, iPad mini, Samsung Galaxy or Sony Xperia. If you want to take your movies with you on your mobile devices, or sometimes just want to back up those classic physical discs to your notebook or workstation at high quality, you need fast and stable software to rip them and convert them to the format you like. Fortunately, there are plenty of great software products designed to make the process easy and turn a DVD into files that are playable on any mobile device you choose. We have done a full review of dozens of products. Here are three of the best, based on our review. We tested each product on ripping speed, ease of use, reliability and ripping capability.

    The top one is still WinX DVD Ripper Platinum. We tested its 6.1 version two years ago for its ability to quickly and easily rip DVDs and Blu-ray discs to high-quality MKV files with a single click, and it made a strong impression in that test. This time we tested its latest 7.3.5 version. Besides ease of use and speed, we tested its ability to decrypt all kinds of discs with different protection methods, for example Disney X-project DRM, Sony ArccOS, RCE and region codes. The results show that WinX DVD Ripper Platinum still maintains its advantages in all these areas. WinX DVD Ripper Platinum is a more narrowly focused product, with the basic job of ripping and converting DVDs. The UI has a modern, technical look. The main functions are shown prominently while the special ones are tucked away for advanced users, making it clearer and more convenient to choose options. Two companies provide the software, Weisoft Limited and Digiarty: Weisoft Limited focuses on the USA, UK and Australia markets, Digiarty on the others.
    Ripping speed 5/5, ease of use 5/5, reliability 5/5, ripping capability 5/5.

    The second one is DVDFab. DVDFab is also very robust at ripping discs and can decrypt most of the discs on the market. Its weak points are still ease of use and speed. We'd note that the app is frequently updated to cut through the copy protection on even the latest DVDs and Blu-ray discs. The app is shareware, meaning most features are free, including decrypting and ripping to your hard drive. Many of you note that you use another app for compression and authoring, but many of you also say that, hey, storage is cheap, and the rips from DVDFab are easy, one-click, and they work.
    Ripping speed 3/5, ease of use 4/5, reliability 5/5, ripping capability 5/5.

    The third one is Handbrake. Handbrake is our favorite video encoder for a reason: it's simple, easy to use, easy to install, and offers lots of options for getting a high-quality file as a result. If you're scared by those options, you don't even have to use them; the app will compensate for you and pick settings it thinks you'll like based on your destination device. So many of you like Handbrake that many of you use it in conjunction with another app (like VLC, which makes ripping easy): you let another app do the rip and crack the DRM on your discs, and then process the file through Handbrake for encoding. The app is fast, can make the most of multi-core processors to speed up the process, and is completely open source.
    Ripping speed 3/5, ease of use 4/5, reliability 4/5, ripping capability 4/5.

    Read the article

  • Ajax Control Toolkit Now Supports jQuery

    - by Stephen.Walther
    I’m excited to announce the September 2013 release of the Ajax Control Toolkit, which now supports building new Ajax Control Toolkit controls with jQuery. You can download the latest release of the Ajax Control Toolkit from http://AjaxControlToolkit.CodePlex.com or you can install the Ajax Control Toolkit directly within Visual Studio by executing the following NuGet command:

    The New jQuery Extender Base Class

    This release of the Ajax Control Toolkit introduces a new jQueryExtender base class. This new base class enables you to create Ajax Control Toolkit controls with jQuery instead of the Microsoft Ajax Library. Currently, only one control in the Ajax Control Toolkit has been rewritten to use the new jQueryExtender base class (only one control has been jQueryized). The ToggleButton control is the first of the Ajax Control Toolkit controls to undergo this dramatic transformation. All of the other controls in the Ajax Control Toolkit are written using the Microsoft Ajax Library. We hope to gradually rewrite these controls as jQuery controls over time. You can view the new jQuery ToggleButton live at the Ajax Control Toolkit sample site: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/ToggleButton/ToggleButton.aspx

    Why are we rewriting Ajax Control Toolkit controls with jQuery?

    There are very few developers actively working with the Microsoft Ajax Library while there are thousands of developers actively working with jQuery. Because we want talented developers in the community to continue to contribute to the Ajax Control Toolkit, and because almost all JavaScript developers are familiar with jQuery, it makes sense to support jQuery with the Ajax Control Toolkit. Also, we believe that the Ajax Control Toolkit is a great framework for Web Forms developers who want to build new ASP.NET controls that use JavaScript. The Ajax Control Toolkit has great features such as automatic bundling, minification, caching, and compression. We want to make it easy for ASP.NET developers to build new controls that take advantage of these features.

    Instantiating Controls with data-* Attributes

    We took advantage of the new jQueryExtender base class to change the way that Ajax Control Toolkit controls are instantiated. In the past, adding an Ajax Control Toolkit control to a page resulted in inline JavaScript being injected into the page. For example, adding the ToggleButton control to a page injected the following HTML and script:

        <input id="ctl00_SampleContent_CheckBox1" name="ctl00$SampleContent$CheckBox1"
            type="checkbox" checked="checked" />
        <script type="text/javascript">
        //<![CDATA[
        Sys.Application.add_init(function() {
            $create(Sys.Extended.UI.ToggleButtonBehavior, {
                "CheckedImageAlternateText": "Check",
                "CheckedImageUrl": "ToggleButton_Checked.gif",
                "ImageHeight": 19,
                "ImageWidth": 19,
                "UncheckedImageAlternateText": "UnCheck",
                "UncheckedImageUrl": "ToggleButton_Unchecked.gif",
                "id": "ctl00_SampleContent_ToggleButtonExtender1"
            }, null, null, $get("ctl00_SampleContent_CheckBox1"));
        });
        //]]>
        </script>

    Notice the call to the JavaScript $create() method at the bottom of the page. When using the Microsoft Ajax Library, this call to the $create() method is necessary to create the Ajax Control Toolkit control. This inline script looks pretty ugly to a modern JavaScript developer. Inline script! Horrible!
    The jQuery version of the ToggleButton injects the following HTML and script into the page:

        <input id="ctl00_SampleContent_CheckBox1" name="ctl00$SampleContent$CheckBox1"
            type="checkbox" checked="checked"
            data-act-togglebuttonextender="imageWidth:19, imageHeight:19,
                uncheckedImageUrl:'ToggleButton_Unchecked.gif',
                checkedImageUrl:'ToggleButton_Checked.gif',
                uncheckedImageAlternateText:'I don&#39;t understand why you don&#39;t like ASP.NET',
                checkedImageAlternateText:'It&#39;s really nice to hear from you that you like ASP.NET'" />

    Notice that there is no script! There is no call to the $create() method. In fact, there is no inline JavaScript at all. The jQuery version of the ToggleButton uses an HTML5 data-* attribute instead of an inline script. The ToggleButton control is instantiated with a data-act-togglebuttonextender attribute. Using data-* attributes results in much cleaner markup (you don’t need to feel embarrassed when selecting View Source in your browser).

    Ajax Control Toolkit versus jQuery

    So in a jQuery world why is the Ajax Control Toolkit needed at all? Why not just use jQuery plugins instead of the Ajax Control Toolkit? For example, there are lots of jQuery ToggleButton plugins floating around the Internet. Why not just use one of these jQuery plugins instead of using the Ajax Control Toolkit ToggleButton control? There are three main reasons why the Ajax Control Toolkit continues to be valuable in a jQuery world:

    1. Ajax Control Toolkit controls run on both the server and client. jQuery plugins are client only. A jQuery plugin does not include any server-side code. If you need to perform any work on the server (think of the AjaxFileUpload control) then you can’t use a pure jQuery solution.

    2. Ajax Control Toolkit controls provide a better Visual Studio experience. You don’t get any design time experience when you use jQuery plugins within Visual Studio. Ajax Control Toolkit controls, on the other hand, are designed to work with Visual Studio. For example, you can use the Visual Studio Properties window to set Ajax Control Toolkit control properties.

    3. Ajax Control Toolkit controls shield you from working with JavaScript. I like writing code in JavaScript. However, not all developers like JavaScript and some developers want to completely avoid writing any JavaScript code at all. The Ajax Control Toolkit enables you to take advantage of JavaScript (and the latest features of HTML5) in your ASP.NET Web Forms websites without writing a single line of JavaScript.

    Better ToolkitScriptManager Documentation

    With this release, we have added more detailed documentation for using the ToolkitScriptManager. In particular, we added documentation that describes how to take advantage of the new bundling, minification, compression, and caching features of the Ajax Control Toolkit. The ToolkitScriptManager documentation is part of the Ajax Control Toolkit sample site and it can be read here: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/ToolkitScriptManager/ToolkitScriptManager.aspx

    Other Fixes

    This release of the Ajax Control Toolkit includes several important bug fixes. For example, the Ajax Control Toolkit Twitter control was completely rewritten with this release. Twitter is in the process of retiring the first version of their API. You can read about their plans here: https://dev.twitter.com/blog/planning-for-api-v1-retirement We completely rewrote the Ajax Control Toolkit Twitter control to use the new Twitter API.
    To take advantage of the new Twitter API, you must get a key and access token from Twitter and add the key and token to your web.config file. Detailed instructions for using the new version of the Ajax Control Toolkit Twitter control can be found here: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/Twitter/Twitter.aspx

    Summary

    We’ve made some really great changes to the Ajax Control Toolkit over the last two releases to modernize the toolkit. In the previous release, we updated the Ajax Control Toolkit to use a better bundling, minification, compression, and caching system. With this release, we updated the Ajax Control Toolkit to support jQuery. We also continue to update the Ajax Control Toolkit with important bug fixes. I hope you like these changes and I look forward to hearing your feedback.

    Read the article

< Previous Page | 60 61 62 63 64 65 66 67 68 69 70 71  | Next Page >