Search Results

Search found 1366 results on 55 pages for 'complexity'.


  • Software and/(x)or Hardware Projects for Pre-School Kids

    - by haylem
    I offered to participate at my kid's pre-school for various activities (yes, I'm crazy like that), and one of them is to help them discover extra-curricular (big word for a pre-school, but for lack of a better one... :)) hobbies, which may or may not relate to a professional activity. At first I thought that it wouldn't be really easy to have pre-schoolers relate to programming or the internal workings of a computer system in general (and I'm more used to teaching middle-school to university-level students), but then I thought there must be a way. So I'm trying to figure out ways to introduce very young kids (3yo) to computer systems in a fun and preferably educational way. Of course, I don't expect them to start smashing the stack for fun and profit right away (or at least not voluntarily, though I could use the occasion for some toddler tests...), but I'm confident there must be ways to get them interested in both: using the systems, becoming curious about understanding what they do, interacting with the systems to modify them. I guess this setting is not really relevant after all, it's pretty much the same as if you were aiming to achieve the same for your own kids at home. Ideas Considering we're talking 3yo pre-schoolers here, and that at this age some kids are already quite confident using a mouse (some even a keyboard, if not for typing, at least to press some buttons they've come to associate with actions) while others have not yet had any interaction with computers of any kind, it needs to be: rather basic, demonstrated and played with in less than 5 or 10 minutes, doable in groups or alone, scalable and extendable in complexity to accommodate their varying abilities. The obvious options are: basic smallish games to play with, interactive systems like LOGO, Kojo, Squeak and clones (possibly even simpler than that), or things like Lego systems. I guess it can be a thing to reflect on both at the software and the hardware levels: it could be done with a desktop or laptop machine, a tablet, a smartphone (or a crap-phone, for that matter, as long as you can modify it), or even get down to building something from scratch (Raspberry Pi and Arduino being popular options at the moment). It can probably be in the form of games, funny visualizations (which are pretty much games) w/ Prototype, virtual worlds to explore. I also thought on the moment (and I hope this won't offend anyone) that some approaches to teaching pets could work (reward systems, haptic feedback and such things could quickly point a kid in the right direction to understanding how things work, in a similar fashion - I'm not suggesting to shock the kids!). Hmm, Is There an Actual Question in There? What type of systems do you think might be a good fit, both in terms of hardware and software? Have you seen such systems, or do you have anything in mind to work on? Are you aware of some research in this domain, with tangible results? Any input is welcome. It's not that I don't see options: there are tons, but I have a harder time pinpointing a more concrete and definite type of project/activity, so I figure some of you have valuable ideas or existing ones. Note: I am not advocating that every kid should learn to program, be interested in computer systems, or that all of them in a class would even care enough to follow such an introduction with more than a blank stare. I don't buy into the "everybody would benefit from learning to program" thing. Wouldn't hurt, but not necessary in any way. 
    But if I can walk out of there with a few of them having smiled using the thing (or heck, cried because others took it away from them), that'd be good enough. Related questions I've seen that seem to complement what I'm looking for, but not exactly for the same age groups or with the same goals: Teaching Programming to Kids; Recommendations for teaching kids math concepts & skills for programming?


  • Cloud Fact for Business Managers #3: Where Your Data Is, and Who Has Access to It, Might Surprise You

    - by yaldahhakim
    Written by: David Krauss While data security and operational risk conversations usually happen around the desk of a CCO/CSO (chief compliance and/or security officer), or perhaps the CFO, since business managers are now selecting cloud providers, they need to be able to at least ask some high-level questions on the topic of risk and compliance. While the report found that 76% of adopters were motivated to adopt cloud apps because of quick access to software, most of these managers found that after they made a purchase decision their access to exciting new capabilities in the cloud could be hindered due to performance and scalability constraints put forth by their cloud provider. If you are going to let your business consume their mission-critical business applications as a service, then it's important to understand who is providing those cloud services and what kind of performance you are going to get. Different types of departments, companies and industries will all have unique requirements, so it's key to also take this into consideration. Nothing puts a CEO in a bad mood like a public data breach or finding out the company lost money when customers couldn't buy a product or service because your cloud service provider had a problem. With 42% of business managers having seen a data security breach in their department associated directly with the use of cloud applications, this is happening more than you think. We've talked about the importance of being able to avoid information silos through a unified cloud approach and platform. This is also important when keeping your data safe and secure, and a key conversation to have with your cloud provider. Your customers want to know that their information is protected when they do business with you, just like you want your own company information protected. This is really hard to do when each line of business is running different cloud application services managed by different cloud providers, all with different processes and controls. It only adds to the complexity, and the more complex, the riskier, and the greater the chance that something will go wrong. What about compliance? Depending on the cloud provider, it can be difficult at best to understand who has access to your data, and where your data is actually stored. Add to this multiple cloud providers spanning multiple departments and it becomes very problematic when trying to comply with certain industry and country data security regulations. With 73% of business managers complaining that having cloud data handled externally by one or more cloud vendors makes it hard for their department to be compliant, this is a big time suck for executives and it puts the organization at risk. Is There a Complete, Integrated, Modern Cloud Out There for Business Executives? If you are a business manager looking to drive faster innovation for your business and want a cloud application that your CIO would approve of, I would encourage you to take a look at Oracle Cloud. It's everything you want from a SaaS-based application, but without compromising on functionality and other modern capabilities like embedded business intelligence, social relationship management (for your entire business), and advanced mobile. And because Oracle Cloud is built and managed by Oracle, you can be confident that your cloud application services are enterprise-grade. Over 25 million users and 10 thousand companies around the globe rely on Oracle Cloud application services every day – maybe your business should too.
    For more information, visit cloud.oracle.com. Additional Resources: • Try it: cloud.oracle.com • Learn more: http://www.oracle.com/us/corporate/features/complete-cloud/index.html • Research Report: Cloud for Business Managers: The Good, the Bad, and the Ugly


  • Don’t string together XML

    - by KyleBurns
    XML has been a pervasive tool in software development for over a decade.  It provides a way to communicate data in a manner that is simple to understand and free of platform dependencies.  Also pervasive in software development is what I consider to be the anti-pattern of using string manipulation to create XML.  This usually starts with a “quick and dirty” approach because you need an XML document and looks like (for all of the examples here, we’ll assume we’re writing the body of a method intended to take a Contact object and return an XML string): return string.Format("<Contact><BusinessName>{0}</BusinessName></Contact>", contact.BusinessName);   In the code example, I created (or at least believe I created) an XML document representing a simple contact object in one line of code with very little overhead.  Work’s done, right?  No it’s not.  You see, what I didn’t realize was that this code would be used in the real world instead of my fantasy world where I own all the data and can prevent any of it containing problematic values.  If I use this code to create a contact record for the business “Sanford & Son”, any XML parser will be incapable of processing the data because the ampersand is special in XML and should have been encoded as &amp;. Following the pattern that I have seen many times over, my next step as a developer is going to be to do what any developer in his right mind would do – instruct the user that ampersands are “bad” and they cannot be used without breaking computers.  This may work in many cases and is often accompanied by logic at the UI layer of applications to block these “bad” characters, but sooner or later someone is going to figure out that other applications allow for them and will want the same.  This often leads to the creation of “cleaner” functions that perform a replace on the strings for every special character that the person writing the function can think of.  The cleaner function will usually grow over time as support requests reveal characters that were missed in the initial cut.  Sooner or later you end up writing your own somewhat functional XML engine. I have never been told by anyone paying me to write code that they would like to buy a somewhat functional XML engine.  My employer/customer’s needs have always been for something that may use XML, but ultimately is functionality that drives business value. I’m not going to build an XML engine. So how can I generate XML that is always well-formed without writing my own engine?  Easy – use one of the ones provided to you for free!  If you’re in a shop that still supports VB6 applications, you can use the DomDocument or MXXMLWriter object (of the two I prefer MXXMLWriter, but I’m not going to fully describe either here).  
    For .Net Framework applications prior to the 3.5 framework, the code is a little more verbose than I would like, but easy once you understand what pieces are required:
        using (StringWriter sw = new StringWriter())
        {
            using (XmlTextWriter writer = new XmlTextWriter(sw))
            {
                writer.WriteStartDocument();
                writer.WriteStartElement("Contact");
                writer.WriteElementString("BusinessName", contact.BusinessName);
                writer.WriteEndElement(); // end Contact element
                writer.WriteEndDocument();
                writer.Flush();
                return sw.ToString();
            }
        }
    Looking at that code, it's easy to understand why people are drawn to the initial one-liner. Lucky for us, the 3.5 .Net Framework added the System.Xml.Linq.XElement object. This object takes away a lot of the complexity present in the XmlTextWriter approach and allows us to generate the document as follows:
        return new XElement("Contact", new XElement("BusinessName", contact.BusinessName)).ToString();
    While it is very common for people to use string manipulation to create XML, I've discussed here reasons not to use this method and introduced powerful APIs that are built into the .Net Framework as an alternative. I've given a very simplistic example here to highlight the most basic XML generation task. For more information on the XmlTextWriter and XElement APIs, check out the MSDN library.
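
    The escaping behaviour that motivates all of this is easy to see for yourself; here is a minimal sketch in Python's standard library, used purely for illustration alongside the .NET examples above:
        import xml.etree.ElementTree as ET

        # Naive string formatting leaves the ampersand bare, so the output is not well-formed XML.
        naive = "<Contact><BusinessName>{0}</BusinessName></Contact>".format("Sanford & Son")
        print(naive)

        # A real XML API escapes the special character for you.
        contact = ET.Element("Contact")
        ET.SubElement(contact, "BusinessName").text = "Sanford & Son"
        print(ET.tostring(contact, encoding="unicode"))
        # <Contact><BusinessName>Sanford &amp; Son</BusinessName></Contact>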


  • Concurrent Affairs

    - by Tony Davis
    I once wrote an editorial, multi-core mania, on the conundrum of ever-increasing numbers of processor cores, but without the concurrent programming techniques to get anywhere near exploiting their performance potential. I came to the (controversial) conclusion that, while the problem loomed for all procedural languages, it was not a big issue for the vast majority of programmers. Two years later, I still think most programmers don't concern themselves overly with this issue, but I do think that's a bigger problem than I originally implied. Firstly, is the performance boost from writing code that can fully exploit all available cores worth the cost of the additional programming complexity? Right now, with quad-core processors that, at best, can make our programs four times faster, the answer is still no for many applications. But what happens in a few years, as the number of cores grows to 100 or even 1000? At this point, it becomes very hard to ignore the potential gains from exploiting concurrency. Possibly, I was optimistic to assume that, by the time we have 100-core processors, and most applications really needed to exploit them, some technology would be around to allow us to do so with relative ease. The ideal solution would be one that allows programmers to forget about the problem, in much the same way that garbage collection removed the need to worry too much about memory allocation. From all I can find on the topic, though, there is only a remote likelihood that we'll ever have a compiler that takes a program written in a single-threaded style and "auto-magically" converts it into an efficient, correct, multi-threaded program. At the same time, it seems clear that what is currently the most common solution, multi-threaded programming with shared memory, is unsustainable. As soon as a piece of state can be changed by a different thread of execution, the potential number of execution paths through your program grows exponentially with the number of threads. If you have two threads, each executing n instructions, then there are (2n)!/(n!n!) possible "interleavings" of those instructions, a number that grows exponentially in n. Of course, many of those interleavings will have identical behavior, but several won't. Not only does this make understanding how a program works an order of magnitude harder, but it will also result in irreproducible, non-deterministic bugs. And of course, the problem will be many times worse when you have a hundred or a thousand threads. So what is the answer? All of the possible alternatives require a change in the way we write programs and, currently, seem to be plagued by performance issues. Software transactional memory (STM) applies the ideas of database transactions, and optimistic concurrency control, to memory. However, working out how to break down your program into sufficiently small transactions, so as to avoid contention issues, isn't easy. Another approach is concurrency with actors, where instead of having threads share memory, each thread runs in complete isolation, and communicates with others by passing messages. It simplifies concurrent programs but still has performance issues if the threads need to operate on the same large piece of data. There are doubtless other possible solutions that I haven't mentioned, and I would love to know to what extent you, as a developer, are considering the problem of multi-core concurrency, what solution you currently favor, and why. Cheers, Tony.
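
    To make the actor idea a little more concrete, here is a minimal sketch of a share-nothing worker that owns its state and is driven purely by messages, written with Python's standard library (the names and numbers are invented for illustration):
        import threading, queue

        class CounterActor:
            # Owns its state; other threads interact with it only by sending messages.
            def __init__(self):
                self.inbox = queue.Queue()
                self.count = 0
                threading.Thread(target=self._run, daemon=True).start()

            def _run(self):
                while True:
                    message, reply_to = self.inbox.get()
                    if message == "increment":
                        self.count += 1          # only ever touched by the actor's own thread
                    elif message == "get":
                        reply_to.put(self.count)
                    elif message == "stop":
                        break

            def send(self, message, reply_to=None):
                self.inbox.put((message, reply_to))

        actor = CounterActor()
        for _ in range(1000):
            actor.send("increment")
        reply = queue.Queue()
        actor.send("get", reply)
        print(reply.get())   # 1000, with no locks around the shared count
        actor.send("stop")
    Because the count is confined to one thread, the interleaving explosion described above never involves that piece of state; the price, as noted, is copying or messaging overhead when the data is large.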


  • Appropriate design / technologies to handle dynamic string formatting?

    - by Mark W
    Recently I was tasked with implementing a way of adding support for versioning of hardware packet specifications to one of our libraries. First a bit of information about the project. We have a hardware library which has classes for each of the various commands we support sending to our hardware. These hardware modules are essentially just lights with a few buttons, and a 2- or 4-digit display. The packets typically follow the format {SOH}AADD{ETX}, where AA is our sentinel action code, and DD is the device ID. These packet specs are different from one command to the next obviously, and the different firmware versions we have support different specifications. For example, on version 1 an action code of 14 may have a spec of {SOH}AADDTEXT{ETX}, which would be AA = 14 literal, DD = device ID, TEXT = literal text to display on the device. Then we come out with a revision which adds an extended byte (or bytes) onto the end of the packet like this: {SOH}AADDTEXTE{ETX}. Assume the TEXT field is fixed width for this example. We have now added a new field onto the end which could be used to, say, specify the color or flash rate of the text/buttons. Currently this Java library only supports one version of the commands, the latest. In our hardware library we would have a class for this command, say a DisplayTextArgs.java. That class would have fields for the device ID, the text, and the extended byte. The command class would expose a method which generates the string ("{SOH}AADDTEXTE{ETX}") using the value from the class. In practice we would create the Args class as needed, populate the fields, call the method to get our packet string, then ship that down across the CAN. Some of our other commands' specifications can vary for the same command, on the same version, depending on some runtime state. For example, another command for version 1 may be {SOH}AA{ETX}, where this action code clears all of the modules behind a specific controller device of their text. We may overload this packet to have option fields with multiple meanings like {SOH}AAOC{ETX} where OC is literal text, which tells the controller to only clear text on a specific module type, and to leave the others alone, or the spec could also have an option format of {SOH}AADD{ETX} to clear the text off a specific device. Currently, in the method which generates the packet string, we would evaluate fields on the args class to determine which spec we will be using when formatting the packet. For this example, it would be along the lines of: if m_DeviceID != null then use {SOH}AADD{ETX} else if m_ClearOCs == true then use {SOH}AAOC{ETX} else use {SOH}AA{ETX} I had considered using XML, or a database, to store String.format format strings, which were linked to firmware version numbers in some table. We would load them up at startup, and pass in the version number of the hardware's firmware we are currently using (I can query the devices for their firmware version, but the version is not included in all packets as part of the spec). This breaks down pretty quickly because of the dynamic nature of how we select which version of the command to use. I then considered using a rule engine to possibly build out expressions which could be interpreted at runtime, to evaluate the args class's state, and from that select the appropriate format string to use, but my brief look at rule engines for Java scared me away with their complexity. While it seems like it might be a viable solution, it seems overly complex. So this is why I am here. 
    I wouldn't say design is my strongest skill, and I'm having trouble figuring out the best way to approach this problem. I probably won't be able to radically change the args classes, but if the trade-off was good enough, I may be able to convince my boss that the change is appropriate. What I would like from the community is some feedback on best practices / design methodologies / APIs or other resources which I could use to accomplish: Logic to determine which set of commands to use for a given firmware version. Of those commands, which version of each command to use (based on the args class's state). Keep the rules logic decoupled from the application so as to avoid needing releases for every firmware version. Be simple enough so I don't need weeks of study and trial and error to implement effectively.
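
    One hedged sketch of a middle ground — a data-driven table of candidate specs per firmware version and command, each guarded by a small predicate over the args object and tried in order — shown here in Python with entirely made-up names:
        from dataclasses import dataclass
        from typing import Optional

        SOH, ETX = "\x01", "\x03"

        @dataclass
        class ClearTextArgs:                    # hypothetical stand-in for the Java args class
            action_code: str = "15"
            device_id: Optional[str] = None
            clear_ocs: bool = False

        # One rule list per (firmware, command); each rule is (predicate, template), tried in order,
        # so the runtime-state logic lives in data rather than in a hand-written if/else chain.
        PACKET_RULES = {
            ("v1", "clear_text"): [
                (lambda a: a.device_id is not None, "{SOH}{aa}{dd}{ETX}"),
                (lambda a: a.clear_ocs,             "{SOH}{aa}OC{ETX}"),
                (lambda a: True,                    "{SOH}{aa}{ETX}"),
            ],
        }

        def build_packet(firmware, command, args):
            for predicate, template in PACKET_RULES[(firmware, command)]:
                if predicate(args):
                    return template.format(SOH=SOH, ETX=ETX, aa=args.action_code,
                                           dd=args.device_id or "")
            raise ValueError("no matching packet spec")

        print(repr(build_packet("v1", "clear_text", ClearTextArgs(device_id="07"))))  # SOH + '1507' + ETX
    The templates (and even the rule order) could then live in XML or a database keyed by firmware version, while the predicates stay small enough to review without pulling in a full rule engine.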


  • Named query not known error trying to call a stored proc using Fluent NHibernate

    - by Hamman359
    I'm working on setting up NHibernate for a project and I have a few queries that, due to their complexity, we will be leaving as stored procedures. I'd like to be able to use NHibernate to call the sprocs, but have run into an error I can't figure out. Since I'm using Fluent NHibernate I'm using mixed mode mapping as recommended here. However, when I run the app I get a "Named query not known: AccountsGetSingle" exception and I can't figure out why. I think I might have a problem with my HBM mapping since I'm not very familiar with using them but I'm not sure. My NHibernate configuration code is:
        private ISessionFactory CreateSessionFactory()
        {
            return Fluently.Configure()
                .Database(MsSqlConfiguration.MsSql2005
                    .ConnectionString((conn => conn.FromConnectionStringWithKey("CIDB")))
                    .ShowSql())
                .Mappings(m =>
                {
                    m.HbmMappings.AddFromAssemblyOf<Account>();
                    m.FluentMappings.AddFromAssemblyOf<Account>();
                })
                .BuildSessionFactory();
        }
    My hbm.xml file is:
        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
          <sql-query name="AccountsGetSingle">
            <return alias="Account" class="Core, Account"></return>
            exec AccountsGetSingle
          </sql-query>
        </hibernate-mapping>
    And the code where I am calling the sproc looks like this:
        public Account Get()
        {
            return _conversation.Session
                .GetNamedQuery("AccountsGetSingle")
                .UniqueResult<Account>();
        }
    Any thoughts or ideas would be appreciated. Thanks.


  • Ext.data.Store, Javascript Arrays and Ext.grid.ColumnModel

    - by Michael Wales
    I am using Ext.data.Store to call a PHP script which returns a JSON response with some metadata about fields that will be used in a query (unique name, table, field, and user-friendly title). I then loop through each of the Ext.data.Record objects, placing the data I need into an array (this_column), push that array onto the end of another array (columns), and eventually pass this to an Ext.grid.ColumnModel object. The problem I am having is - no matter which query I am testing against (I have a number of them, varying in size and complexity), the columns array always works as expected up to columns[15]. At columns[16], all indexes from that point and previous are filled with the value of columns[15]. This behavior continues until the loop reaches the end of the Ext.data.Store object, at which point the entire array consists of the same value. Here's some code:
        columns = [];
        this_column = [];

        var MetaData = Ext.data.Record.create([
            {name: 'id'},
            {name: 'table'},
            {name: 'field'},
            {name: 'title'}
        ]);

        // Query the server for metadata for the query we're about to run
        metaDataStore = new Ext.data.Store({
            autoLoad: true,
            reader: new Ext.data.JsonReader({
                totalProperty: 'results',
                root: 'fields',
                id: 'id'
            }, MetaData),
            proxy: new Ext.data.HttpProxy({
                url: 'index.php/' + type + '/' + slug
            }),
            listeners: {
                'load': function () {
                    metaDataStore.each(function(r) {
                        this_column['id'] = r.data['id'];
                        this_column['header'] = r.data['title'];
                        this_column['sortable'] = true;
                        this_column['dataIndex'] = r.data['table'] + '.' + r.data['field'];

                        // This displays valid information, through the entire process
                        console.info(this_column['id'] + ' : ' + this_column['header'] + ' : ' + this_column['sortable'] + ' : ' + this_column['dataIndex']);

                        columns.push(this_column);
                    });

                    // This goes nuts at columns[15]
                    console.info(columns);

                    gridColModel = new Ext.grid.ColumnModel({
                        columns: columns
                    });
                }
            }
        });


  • R and version control for the solo data analyst

    - by Jeromy Anglim
    Many data analysts that I respect use version control. For example: http://github.com/hadley/ See comments on http://permut.wordpress.com/2010/04/21/revision-control-statistics-bleg/ However, I'm evaluating whether adopting a version control system such as git would be worthwhile. A brief overview: I'm a social scientist who uses R to analyse data for research publications. I don't currently produce R packages. My R code for a project typically includes a few thousand lines of code for data input, cleaning, manipulation, analyses, and output generation. Publications are typically written using LaTeX. With regard to version control there are many benefits which I have read about, yet they seem to be less relevant to the solo data analyst. Backup: I have a backup system already in place. Forking and rewinding: I've never felt the need to do this, but I can see how it could be useful (e.g., you are preparing multiple journal articles based on the same dataset; you are preparing a report that is updated monthly, etc.). Collaboration: Most of the time I am analysing data myself; thus, I wouldn't get the collaboration benefits of version control. There are also several potential costs involved with adopting version control: Time to evaluate and learn a version control system. A possible increase in complexity over my current file management system. However, I still have the feeling that I'm missing something. General guides on version control seem to be addressed more towards computer scientists than data analysts. Thus, specifically in relation to data analysts in circumstances similar to those listed above: Is version control worth the effort? What are the main pros and cons of adopting version control? What is a good strategy for getting started with version control for data analysis with R (e.g., examples, workflow ideas, software, links to guides)?


  • T4 vs CodeDom vs Oslo

    - by Ryan Riley
    In an application scaffolding project on which I'm working, I'm trying to decide whether to use Oslo, T4 or CodeDom for generating code. Our goals are to keep dependencies to a minimum and drive code generation for a domain driven design from user stories. The first step will be to create the tests from the user stories, but we want the domain experts to be able to write their stories in a variety of different media (e.g. custom app, Word, etc.) and still generate the tests from the stories. What I know so far: CodeDom requires .NET but can only output .NET class files (e.g. .cs, .vb). Level of difficulty is fairly high. T4 requires CodeDom and VS Standard+. Level of difficulty is fairly reasonable, especially with the T4 Toolbox. Oslo is very new. I have no idea of the dependencies, but I imagine you must be on at least .NET 3.5. I'm also not certain as to the code generation abilities or the complexity for adding new grammars. However, domain experts could probably write user stories in Intellipad quite easily. Also not sure about ease of converting stories in Word to an MGrammar. What are your thoughts, experiences, etc. with any of the above tools. We want to stick with Microsoft or open source tools.


  • Easy to use AutoHotkey/AutoIT alternatives for Linux

    - by Xeddy
    Hi all, I'm looking for recommendations for an easy-to-use GUI automation/macro platform for Linux. If you're familiar with AutoHotkey or AutoIT on Windows, then you know exactly the kind of features I need, with the level of complexity. If you aren't familiar, then here's a small code snippet of how easy it is to use AHK:
        InputBox, varInput, Please enter some random text...
        Run, notepad.exe
        WinWaitActive, Untitled - Notepad
        SendInput, %varInput%
        SendInput, !f{Up}{Enter}{Enter}
        WinWaitActive, Save
        SendInput, SomeRandomFile{Enter}
        MsgBox, Your text`, %varInput% has been saved using notepad!
        #n::Run, notepad.exe
    Now the above example, although a bit pointless, is a demo of the sort of functionality and simplicity I'm looking for. Here's an explanation for those who don't speak AHK:
        ----Start of Explanation of Code----
        Asks user to input some text and stores it in varInput
        Runs notepad.exe
        Waits till window exists and is active
        Sends the contents of varInput as a series of keystrokes
        Sends keystrokes to go to File - Exit
        Waits till the "Save" window is active
        Sends some more keystrokes
        Shows a Message Box with some text and the contents of a variable
        Registers a hotkey, Win+N, which when pressed executes notepad.exe
        ----End of Explanation----
    So as you can understand, the features are quite obvious: ability to easily simulate keyboard and mouse functions, read input, process and display output, execute programs, manipulate windows, register hotkeys, etc., all being done without requiring any #includes, unnecessary brackets, class declarations etc. In short: simple. Now I've played around a bit with Perl and Python, but it's definitely no AHK. They're great for more advanced stuff, but surely, there has to be some tool out there for easy GUI automation? PS: I've already tried running AHK with Wine but sending keystrokes and hotkeys don't work.


  • Finding most efficient transmission size in varying network latency scenarios

    - by rwmnau
    I'm building a .NET remoting client/server that will be transmitting thousands of files, of varying sizes (everything from a few bytes to hundreds of MB), and I'm curious about a general method for finding the appropriate transmission size. As I see it, there's the following tradeoff: Serialize the entire file into a transmission object and transmit at once, regardless of size. This would be the fastest, but a failure during transmission requires that the whole file be re-transmitted. If the file size is larger than something small (like 4KB), break it into 4KB chunks and transmit those, re-assembling on the server. In addition to the complexity of this, it's slower because of continued round-trips and acknowledgements, though a failure of any one piece doesn't waste much time. The ideal transmission method (when taking into account negotiation latency vs. failure rate) is somewhere in between, and I'm wondering about how to find out the best size for that particular client. Do I have some dynamic tuning step in my transmission that looks at the current bytes/second average, and then raises the transmission size until the speed starts to drop (failures overwhelm negotiation cost)? Or is there some other method for determining ideal transmission size? The application will be multi-threaded, so the number of threads also factors into the calculation. I'm not looking for a formula (though I'll take one if you've got it), but just what to consider as I create this process.
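
    To make the "dynamic tuning step" concrete, here is a rough sketch of one way it could look — grow the chunk while measured throughput keeps improving, back off sharply on a failed chunk — with all the numbers and names invented for illustration:
        class ChunkSizeTuner:
            # Probe-and-back-off tuner: one instance per connection/thread.
            def __init__(self, start=64 * 1024, minimum=4 * 1024, maximum=16 * 1024 * 1024):
                self.size = start
                self.minimum, self.maximum = minimum, maximum
                self.best_rate = 0.0

            def record_success(self, bytes_sent, seconds):
                rate = bytes_sent / max(seconds, 1e-9)
                if rate >= self.best_rate:
                    self.best_rate = rate
                    self.size = min(self.size * 2, self.maximum)   # still improving: probe larger
                else:
                    self.size = max(self.size // 2, self.minimum)  # gains flattened: step back

            def record_failure(self):
                self.size = max(self.size // 4, self.minimum)      # re-sending big chunks is costly
                self.best_rate = 0.0                               # start measuring again from here
    Each sender thread could keep its own tuner, call record_success or record_failure around every chunk, and read .size to decide how much to send next; the same shape works whatever the actual transport is.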


  • Why is software quality so problematic?

    - by Yuval A
    Even when viewing the subject in the most objective way possible, it is clear that software, as a product, generally suffers from low quality. Take for example a house built from scratch. Usually, the house will function as it is supposed to. It will stand for many years to come, the roof will support heavy weather conditions, the doors and the windows will do their job, the foundations will not collapse even when the house is fully populated. Sure, minor problems do occur, like a leaking faucet or a bad paint job, but these are not critical. Software, on the other hand, is much more susceptible to bad quality: unexpected crashes, erroneous behavior, miscellaneous bugs, etc. Sure, there are many software projects and products which show high quality and are very reliable. But lots of software products do not fall in this category. Consider paradigms like TDD, whose popularity has been on the rise in the past few years. Why is this? Why do people have to fear that their software will not work or crash? (Do you walk into a house fearing its foundations will collapse?) Why is software - subjectively - so full of bugs? Possible reasons: Modern software engineering has existed for only a few decades, a small time period compared to other forms of engineering/production. Software is very complicated, with layers upon layers of complexity; integrating them all is not trivial. Software development is relatively easy to start with, anyone can write a simple program on his PC, which leads to amateur software leaking into the market. Tight budgets and timeframes do not allow complete and high quality development and extensive testing. How do you explain this issue, and do you see software quality advancing in the near future?


  • What are the reasons to use dos batch programs in Windows?

    - by DVK
    Question What would be a good (ideally, technical) reason to ever program some non-trivial task in DOS batch language on a modern Windows system as opposed to downloading either PowerShell, or ActiveState Perl? To be more specific, I make the following two assumptions for the duration of this question: anyone technical enough to be able to write a medium-complexity batch script is technical enough to install either of the scripting interpreters. Neither of those two presents enough of a learning curve for basic batch replacement tasks that said curve would outweigh the pain of doing any remotely-non-trivial task in batch. Notes "You need a batch program for autoexec.bat" is not a valid reason. Your autoexec.bat may consist of simply calling a non-batch script. If you disagree with either of my 2 assumptions above, that's fine, and I may be wrong. But my question is specifically "assuming those 2 assumptions are correct, what would be the reason to still stick with batch?" If it makes it easier to suspend disbelief (in case you disagree with me), add in a 3rd assumption that the question is limited to people who already possess at least some modicum of PowerShell or Perl experience. To re-iterate - this is not meant to be a subjective question about how easy it is to learn PSh or ASPerl compared to doing advanced batch coding. That is a separate question that is too subjective to be bothered with in this post. Background: I used to do some fairly complicated batch programming back in the elder days, and remember batch as one of the worst possible programming languages I had encountered. The idea for this question came after seeing a bunch of batch questions on SO, and trying to grok the answer of one of them out of sheer curiosity and giving up in pain after a minute, exclaiming mentally "why would anyone go through this pain instead of doing that in 1 line of Perl?" :) My own plausible answer I assume there may be some unlikely DOS-compatible system which has a DOS interpreter but no compatible PowerShell or Perl... I'm not aware of one, but it's not completely impossible.


  • Versioning-friendly, extendible binary file format

    - by Bas Bossink
    In the project I'm currently working on there is a need to save a sizeable data structure to disk. Being an optimist I thought there must be a standard solution for such a problem; however, up to now I haven't found a solution that satisfies the following requirements: .net 2.0 support, preferably with a FOSS implementation; version friendly (this should be interpreted as: reading an old version of the format should be relatively simple if the changes in the underlying data structure are simple, say adding/dropping fields); ability to do some form of random access where part of the data can be extended after initial creation (think of this as extending intermediate results); space and time efficient (XML has been excluded as an option given this requirement). Options considered so far: Protocol Buffers: was turned down by verdict of the documentation about Large Data Sets, since this comment suggests adding another layer on top; this would call for additional complexity which I wish to have handled by the file format itself. HDF5, EXI: do not seem to have .net implementations. SQLite: the data structure at hand would result in a pretty complex table structure that seems too heavyweight for the intended use. BSON: does not appear to support requirement 3. Fast Infoset: only seems to have buyware .net implementations. Any recommendations or pointers are greatly appreciated. Furthermore, if you believe any of the information above is not true, please provide pointers/examples to prove me wrong.
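
    For requirement 2 (version friendliness), the usual trick is a tag-length-value layout: a reader that doesn't recognise a tag can still skip it by its length, so old readers tolerate fields added later. A minimal sketch of the idea in Python, purely for illustration (the question itself is about .NET 2.0 options):
        import io, struct

        def write_field(out, tag, payload):
            # 2-byte tag, 4-byte little-endian length, then the raw payload bytes
            out.write(struct.pack("<HI", tag, len(payload)) + payload)

        def read_fields(stream, known_tags):
            fields = {}
            while True:
                header = stream.read(6)
                if len(header) < 6:
                    break
                tag, length = struct.unpack("<HI", header)
                payload = stream.read(length)
                if tag in known_tags:
                    fields[tag] = payload
                # unknown tag: its payload is already consumed, so newer files still parse
            return fields

        buf = io.BytesIO()
        write_field(buf, 1, b"hello")        # field known to version-1 readers
        write_field(buf, 7, b"added in v2")  # a version-1 reader will simply skip this
        buf.seek(0)
        print(read_fields(buf, known_tags={1}))   # {1: b'hello'}
    Requirement 3 (extending data after initial creation) could then be handled by appending more tagged records, rather than being something the field layout itself has to solve.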


  • Enterprise SSO & Identity management / recommendations

    - by Maxim Veksler
    Hello Friends, We've discussed SSO before. I would like to re-enhance the conversation with defined requirements, taking into consideration recent new developments. In the past week I've been doing market research looking for answers to the following key issues. The project should be: Requirements: SSO solution for web applications. Integrates into existing developed products. Has policy-based password security (length, complexity, duration and co.). Security policy can be managed using a web interface. Customizable user interface (the password prompt and co. screens). Highly available (99.9%). Scalable. Runs on Red Hat Linux. Nice to have: Contains user Groups & Roles. Written in Java. Free Software (open source) solution. None of the solutions that have come up so far is a "killer choice", which leads me to think I will be tooling several projects together (OWASP, AcegiSecurity + X??), hence this discussion. We are an ISV delivering a front-end & backend application suite. The frontend is broken into several modules which should act as autonomous units; from the client's point of view he uses the "application" - which leads to this discussion regarding SSO. I would appreciate people sharing their experience & ideas regarding the appropriate solutions. Some solutions are interesting: CAS, Sun OpenSSO Enterprise, JBoss Identity IDM, JOSSO, Tivoli Access Manager for Enterprise Single Sign-On. Or, more generally speaking, this list. Thank you, Maxim.


  • Building a Drupal Newsletter Module for handling Newsletter Articles

    - by Michael T. Smith
    We're building a module for generating HTML for email newsletters. We've looked into using a few other modules (SimpleNews, MailChimp, among others), but due to various requirements, it'll be easier and better for us to build a custom solution. Being a new Drupal developer, I'm a bit worried about handling this in a "non-Drupal" way. That being said, my plan is to setup a vocabulary with Newsletters as a term and the actual Newsletters as sub-terms, like so: Newsletters (term) - Newsletter A (sub-term) - Newsletter B (sub-term) This has the added benefit of being able to organize where articles were published (besides just on the site.) The question, though, is how to handle the different Newsletter issues. I could go another level deeper in the vocabulary, like so: Newsletters (term) - Newsletter A (sub-term) - Issue - 2010-03-01 - Issue - 2010-03-02 - Newsletter B (sub-term) - Issue - 2010-03-01 - Issue - 2010-03-08 But I'm wondering if this is adding a bit too much complexity. Once I have this taxonomy setup, when the user went to add new newsletters it would also create a node (content type: newsletter), and when he/she went to add new issues, it would also create a node (content type: issue.) Those would then be the landing pages for that content. So, the question is is there a better way for handling this structure? Is this a Drupal-like solution?


  • While porting a windows application to web, is it better to stick to conventional web technologies o

    - by Kabeer
    Hello. The web-based application I am working on currently is a port from a Windows application. This application is very data intensive. There are scores of modules, and each of these modules has a number of forms (data entry screens) and reports, and the forms have many, many fields, and likewise the reports. I have been trying to identify the most suitable architecture for the presentation tier. There are many functions that are not very easily portable, for example printing (this too is very complex). For most of the others, I am planning to use the "Ext JS" library, which looks capable of handling about 70% of the complexity out of the box, while for the remaining I would be custom coding or extending Ext JS. Having said that (sorry for being so descriptive), I wonder, if this is an Intranet application, why not port the entire application to SilverLight? While I am good at .Net, I'm somewhat alien to SilverLight. Considering I know my target audience and that the software will be used under a per-seat license, would it be better to ride on SilverLight or is it better to stick to conventional web (XHTML, JS, CSS, etc)? Further, I have to support multiple devices in the future, and considering that SilverLight plug-ins for many devices are not yet out, would it be a risk?


  • How to get decent MySQL driver perfomance in Ruby

    - by Zombies
    I notice that I am getting very poor performance for either or both inserts and queries. The queries themselves are basic and can execute with no delay directly from mysql. The ruby script that I wrote uses only 1 thread, so only 1 connection is being used, and it is never closed unless the script is terminated. Pretty basic, I am just trying to insert a lot of rows. There is a look-up or two to get a surrogate key, or to check for duplicates, but the complexity is just O(n). Also, it isn't like there are millions of records, so again the queries themselves take no time to run. I am using: Ruby 1.9.1, Gem/driver: ruby-mysql 2.9.2, MySQL 5.1.37-1ubuntu5.1 ^ all 32-bit versions on a 32-bit Ubuntu distro. I am getting about 1-2 inserts per second, pretty slow. I know a lot of people will suggest changing drivers, but that means I have some refactoring and retesting to do. So I would really appreciate any help, but please, if you do recommend that, at least say why you do (eg: if you have used ruby-mysql x.x.x before and found another mysql driver to be better). What I would like to know: How can I improve performance with ruby-mysql 2.9.2? If and only if I cannot do this with ruby-mysql 2.9.2, what should I do?


  • JMS equivalent in .Net

    - by rauts
    Hi All, I am trying to make a common abstract interface over the messaging infrastructure in our company. The design goal is twofold: 1) hide the complexity of programming from the developers (I know it's not very complex, but still, simplify it further) and 2) make the developers independent of the vendor-specific messaging infrastructure (i.e. it can be MQSeries or EMS or MSMQ). The very common option is using a WCF layer over the messaging infrastructure: use the MQSeries custom channel for WCF or use the EMS custom channel for WCF. But both are ruled out due to the lack of proper versions of MQSeries and EMS. Can someone please suggest possible solutions to this problem? One which I can think of is to have a custom wrapper like JMS. Has anyone ever tried something similar before? Any help would be fantastic. By the way, I am trying to create this wrapper in C# 3.5. Regards


  • What is the most important thing you weren't taught in school?

    - by Alexandre Brisebois
    What is the most important thing you weren't taught in school? What topics are missing from the CS/IS education? Posted so far How to sell an idea Principles: Often, good enough is better than perfect. Making mistakes is actually a Good Thing™ -- as long as they're new mistakes. If a user can break your code they will. In the Real World™ they're all open-book exams Self confidence is way more important in getting ahead than intelligence. Always prefer simplicity over complexity. The best code is the code that you don't write. You never know when you'll meet someone again ... or where. It's always worthwhile to treat people with respect and kindness. Be aware of what you don't know and don't be afraid to ask questions when you need to Missing knowledge: How to communicate effectively. Lack of source control Lack of Softskills experience How to productize code How to write secure code How to formulate problems How to self-measurement. To evaluate ones true competences and market worth. How to debug code How important is backup How to read code on a large scale (being able to adapt and build upon existing projects) Good Regular expressions comprehension How to teach others effectively TDD/Unit testing Critical thinking How to integrate different skills and languages in a single project


  • Business Logic Layer Pattern on Rails? MVCL

    - by Fabiano PS
    That is a broad question, and I would appreciate no short/dumb answers like: "Oh, that is the model's job, this question is retarded (period)". PROBLEM Where I work, people created a system over 2 years for managing the manufacturing process on demand, kept as simplified yet broad as possible, involving selling, buying and assembly. The system is coded in Ruby on Rails. The app has been changed lots of times and the result is a mess of callbacks (some are called several times), 200+ models, and fat controllers: all bad. The QUESTION is: is there a gem or pattern designed to handle large-app logic in Rails? The logic would be able to fully talk to models (whose only concern would be data format handling and validation). What I EXPECT is to reduce complexity from the various controllers, and move hard-to-track callbacks into files with the responsibility to handle a business operation's logic. In some cases there is the need to wait for a response; in others, only validation of the input is enough and a background process would take place. i.e.: -- Sell some products (need to wait for the operation to finish) 1. Set a View able to get the products input 2. Controller gets the product list input by the employee and calls the logic: Logic::ExecuteWithResponse('sell', 'products', :prods => @product_list_with_qtt, :when => @date, :employee => current_user() ) This Logic would handle the buying order, assembly order, machine schedule, warehouse reservation, and others. Bear in mind that a callback on SalesOrder is not enough, since it depends on where it is called from (no field for that), and on the class of the user, among other things not visible to the model; in some cases it would also take too long for the model to process.


  • The subsets-sum problem and the solvability of NP-complete problems

    - by G.E.M.
    I was reading about the subset-sums problem when I came up with what appears to be a general-purpose algorithm for solving it:
        (defun subset-contains-sum (set sum)
          (let ((subsets) (new-subset) (new-sum))
            (dolist (element set)
              (dolist (subset-sum subsets)
                (setf new-subset (cons element (car subset-sum)))
                (setf new-sum (+ element (cdr subset-sum)))
                (if (= new-sum sum)
                    (return-from subset-contains-sum new-subset))
                (setf subsets (cons (cons new-subset new-sum) subsets)))
              (setf subsets (cons (cons element element) subsets)))))
    "set" is a list not containing duplicates and "sum" is the sum to search subsets for. "subsets" is a list of cons cells where the "car" is a subset list and the "cdr" is the sum of that subset. New subsets are created from old ones in O(1) time by just cons'ing the element to the front. I am not sure what the runtime complexity of it is, but it appears that with each element "sum" grows by, the size of "subsets" doubles, plus one, so it appears to me to be at least quadratic. I am posting this because my impression before was that NP-complete problems tend to be intractable and that the best one can usually hope for is a heuristic, but this appears to be a general-purpose solution that will, assuming you have the CPU cycles, always give you the correct answer. How many other NP-complete problems can be solved like this one?
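
    For comparison, the textbook pseudo-polynomial approach to subset-sum tracks the set of reachable sums rather than explicit subsets, so the work is bounded by n times the target instead of by the number of subsets; a small sketch (in Python rather than Lisp, purely for illustration):
        def subset_contains_sum(nums, target):
            # True if some subset of nums (positive integers assumed) adds up to target.
            reachable = {0}
            for x in nums:
                # Keep only sums that could still reach the target, bounding the set's size.
                reachable |= {s + x for s in reachable if s + x <= target}
                if target in reachable:
                    return True
            return False

        print(subset_contains_sum([3, 34, 4, 12, 5, 2], 9))    # True  (4 + 5)
        print(subset_contains_sum([3, 34, 4, 12, 5, 2], 30))   # False
    The catch, and where NP-completeness still bites, is that the target can be exponentially large relative to the length of the input, so "polynomial in n and the sum" is not the same as polynomial in the input size.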


  • Would OpenID or OAuth work for authorization/authentication on a distributed web service?

    - by David Eyk
    We're in the early stages of designing a RESTful/resource-oriented web service API for a computational linguistics application. Because many of the resources we plan to serve are rights-encumbered, a key design decision has been to specify the platform so that each resource provider can expose their own web service that complies with the API spec. This way, the rights owner maintains control over their content (and thus the ability to throttle or deny access at will) and a direct relationship with the consumer, while still being able to participate in the collaborative network. At the same time, to simplify the job of writing a client for this service, we want to allow a client access to the distributed service through one end-point, with the server handling content negotiation and retrieval from the appropriate providers. Right now, we're at an impasse on authentication/authorization schemes. One of our number has argued for the (technical) simplicity of a central authentication registry, but others are concerned about the organizational complexity of such a scheme. It seems to me, based on an albeit limited understanding of the technologies, that a combination of OpenID and OAuth would do the trick, with a client authenticating with the end-point via OpenID, and the server taking action on the user's behalf with the various content providers using OAuth. I've only ever seen implementations (e.g. stackoverflow, twitter, etc.) where a human was present to intervene, and I still need to do more research on these technologies. Would a scheme like this work for an automated web service, or would it make the client too difficult to implement and operate?


  • Did you love the game Mouse Trap as a kid, or something similar? (Programmer Psychology) [closed]

    - by Robert Oschler
    When I was a kid I absolutely fell in love with games that had, as a core feature, the need to understand interconnecting structures. My favorite of all time was Mouse Trap. For the younger crowd out there, this was a very cool board game where you built the mouse trap out of the included plastic pieces as you played, with the end goal of triggering the mouse trap. The fully assembled mouse trap was a Rube Goldberg-style invention where one operation triggered the next and the next and so on, until the last step dropped a cage on a little plastic mouse. Sometimes when I'm programming and I'm reviewing a particularly complex interaction between components and objects, while tracking the flow path mentally, I say to myself "It's a Mouse Trap!" and I wonder if my early addiction to that game and others like it was a portent of my becoming a programmer. Another realization I have sometimes when looking at my code is how daunted I feel at the sheer complexity involved, followed by a darker comedic amazement at my expectation that it will all come together and work. How about you? Did you find yourself drawn to games that at their heart featured interacting control paths when growing up? Robert.


  • Will JSON replace XML as a data format?

    - by 13ren
    When I first saw XML, I thought it was basically a representation of trees. Then I thought: the important thing isn't that it's a particularly good representation of trees, but that it is one that everyone agrees on. Just like ASCII. And once established, it's hard to displace due to network effects. The new alternative would have to be much better (maybe 10 times better) to displace it. Of course, ASCII has been (mostly) replaced by Unicode, for internationalization. According to google trends, XML has a x43 lead, but is declining - while JSON grows. Will JSON replace XML as a data format? (edited) for which tasks? for which programmers/industries? NOTES: S-expressions (from lisp) are another representation of trees, but which has not gained mainstream adoption. There are many, many other proposals, such as YAML and Protocol Buffers (for binary formats). I can see JSON dominating the space of communicating with client-side AJAX (AJAJ?), and this possibly could back-spread into other systems transitively. XML, being based on SGML, is better than JSON as a document format. I'm interested in XML as a data format. XML has an established ecosystem that JSON lacks, especially ways of defining formats (XML Schema) and transforming them (XSLT). XML also has many other standards, esp for web services - but their weight and complexity can arguably count against XML, and make people want a fresh start (similar to "web services" beginning as a fresh start over CORBA).

