Search Results

Search found 6834 results on 274 pages for 'dojo require'.


  • Functional/nonfunctional requirements VS design ideas

    - by Nicholas Chow
    Problem domain: Functional requirements define what a system does. Non-functional requirements define quality attributes of what the system does as a whole (performance, security, reliability, volume, usability, etc.). Constraints limit the design space; they restrict designers to certain types of solutions.
    Solution domain: Design ideas define how the system does it. For example, a stakeholder need might be "we want to increase our sales, therefore we must improve the usability of our webshop so more customers will purchase"; a requirement can be written for this (problem domain). Design takes this further into the solution domain by saying "therefore we want to offer credit card payments in addition to the current prepayment option".
    My problem is that the transition from requirement to design seems really vague, so when writing requirements I am often unsure whether I have incorporated design ideas into my requirements, which would make the requirement wrong. Another problem is that I often write functional requirements as what a system does, and then also specify in what timeframe it must be done. Is this correct? Is it then still a functional requirement, or a non-functional one? Is it better to separate it into two distinct requirements?
    Here are a few requirements I wrote:
    FR1 Registration of Organizer: FR1 describes the registration of an Organizer on CrowdFundum.
    FR1.1 The system shall display a registration form on the website.
    FR1.2 The system shall require a Name, Username, Document number passport/ID card, Address, Zip code, City, Email address, Telephone number, Bank account, Captcha code on the registration form when a user registers.
    FR1.4 The system shall display an error message containing "Registration could not be completed" to the subscriber within 1 second after the system check of the registration form was unsuccessful.
    FR1.5 The system shall send a verification email containing a verification link to the subscriber within 30 seconds after the system check of the registration form was successful.
    FR1.6 The system shall add the newly registered Organizer to the user base within 5 seconds after the verification link was accessed.
    FR2 Organizer submits a Project: FR2 describes the submission of a Project by an Organizer on CrowdFundum.
    - FR2 The system shall display a submit Project form to the Organizer accounts on the website.
    - FR2.3 The system shall check for completeness the Name of the Project, 1-3 Photos, Keywords of the Project, Punch line, Minimum and maximum amount of people, Funding threshold, One or more reward tiers, Schedule of when what will be organized, Budget plan, 300-800 Words of additional information about the Project, Contact details within 1 second after an Organizer submits the submit Project form.
    - FR2.8 The system shall add to the homepage in the new Projects category the Project link within 30 seconds after the system made a Project webpage.
    - FR2.9 The system shall include in the Project link for the homepage: Name of the Project, 1 Photo, Punch line within 30 seconds after the system made a Project webpage.
    Questions:
    FR1.1: Have I incorporated a design idea here? Would "the system shall have a registration form" be a better functional requirement?
    FR1.2, FR2.3: Are these not singular? Would the conditions be better written as their own separate requirements, one for each?
    FR1.4: Is this a design idea? Is this a correct functional requirement, or have I incorporated something non-functional (performance) into it? Would it be better if I wrote it like this: FR1 The system shall display an error message when the check is unsuccessful. NFR: The system will respond to unsuccessful registration form checks within 1 second. The same question applies to FR2.8 and FR2.9.
    FR2.3: The system shall check for "completeness"; is completeness used ambiguously here? Should I rephrase it?
    FR1.2: I added that the system shall require a "Captcha code"; is this a functional requirement, or does it belong to the "security aspect" of a non-functional requirement?
    I am eagerly waiting for your response. Thanks!

    Read the article

  • SQLAuthority News – Speaking at Southeast Asia SharePoint Conference 2013 – Singapore

    - by pinaldave
    Two years ago I spoke at the Southeast Asia SharePoint Conference 2011 in Singapore and I had a fantastic time presenting to the Singapore audience. The session was very well received and lots of interest was generated. The event is back again this year and on a much bigger scale. I will be presenting on a SQL Server and SharePoint subject at the conference.
    Session Details:
    Title: Performance in 60 Seconds – Database Tricks Every SharePoint Developer & Admin MUST Know
    Abstract: SharePoint Developers and System Administrators often come across situations where they face a slow server response, even though their hardware specifications are above par. This session is for all the SharePoint Developers who want their server to perform at blazing fast speed but want to invest very little time to make it happen. We will go over various database tricks which require absolutely no time to master and require practically no SQL coding at all. After attending this session, Developers will only need 60 seconds to improve performance of their database server in their SharePoint implementation.
    Date and Time: January 18, 2013 - 3:15 PM-4:15 PM
    Location: Max Atria is located at Singapore Expo, 1 Expo Drive, Singapore Tel 65 6403 2160
    This session will cover lots of interesting tips and tricks about how SQL Server and SharePoint co-exist together. I promise that every attendee will walk out with a trick they can take straight from the session and apply directly to their production server to improve its performance. The event is going to be a fantastic event again – if you are in Singapore you must not miss it. If you are planning a vacation, this is the right time to take days off and travel to Singapore. The event features over 30 sessions to choose from, focused on three areas of business gain: Exploring Information, Improving Productivity and Making it Work. This event has an excellent line up of international speakers (speakers traveling from the USA, Australia, New Zealand, Sri Lanka and India). Register early to reserve a spot at your choice of more than 30 classes taught by Microsoft Certified Masters, MVPs, and other top SharePoint experts!
    Here I have attempted to answer a few of the questions which every SharePoint professional has: Which sessions suit my skill level? Click here. What sessions are right for me? Click here. Which sessions are of interest to me? Click here. Which sessions are on when? Click here.
    If you register by next Friday, 14 December, you can save $126 on the regular price of the conference.
    Prizes, Giveaways and … I love conference goodies – I collect them as souvenirs. This event is known for its generous prizes. The first 100 people to register on the day will get a SPECIAL gift at the event. Additionally there are exhibitor booth giveaways too. Here is the page listing all the prizes and giveaways. Do leave a comment or send me an email if you are going to the event; we can sit together and have a coffee.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Mind Reading with the Raspberry Pi

    - by speakjava
    Mind Reading With The Raspberry Pi At JavaOne in San Francisco I did a session entitled "Do You Like Coffee with Your Dessert? Java and the Raspberry Pi".  As part of this I showed some demonstrations of things I'd done using Java on the Raspberry Pi.  This is the first part of a series of blog entries that will cover all the different aspects of these demonstrations. A while ago I had bought a MindWave headset from Neurosky.  I was particularly interested to see how this worked as I had had the opportunity to visit Neurosky several years ago when they were still developing this technology.  At that time the 'headset' consisted of a headband (very much in the Bjorn Borg style) with a sensor attached and some wiring that clearly wasn't quite production ready.  The commercial version is very simple and easy to use: there are two sensors, one which rests on the skin of your forehead, the other is a small clip that attaches to your earlobe. Typical EEG sensors used in hospitals require lots of sensors and they all need copious amounts of conductive gel to ensure the electrical signals are picked up.  Part of Neurosky's innovation is the development of this simple dry-sensor technology.  Having put on the sensor and turned it on (it powers off a single AAA size battery) it collects data and transmits it to a USB dongle plugged into a PC, or in my case a Raspberry Pi. From a hacking perspective the USB dongle is ideal because it does not require any special drivers for any complex, low level USB communication.  Instead it appears as a simple serial device, which on the Raspberry Pi is accessed as /dev/ttyUSB0.  Neurosky have published details of the command protocol.  In addition, the MindSet protocol document, including sample code for parsing the data from the headset, can be found here. To get everything working on the Raspberry Pi using Java the first thing was to get serial communications going.  Back in the dim distant past there was the Java Comm API.  Sadly this has grown a bit dusty over the years, but there is a more modern open source project that provides compatible and enhanced functionality, RXTXComm.  This can be installed easily on the Pi using sudo apt-get install librxtx-java.  Next I wrote a library that would send commands to the MindWave headset via the serial port dongle and read back data being sent from the headset.  The design is pretty simple, I used an event based system so that code using the library could register listeners for different types of events from the headset.  You can download a complete NetBeans project for this here.  This includes javadoc API documentation that should make it obvious how to use it (incidentally, this will work on platforms other than Linux.  I've tested it on Windows without any issues, just by changing the device name to something like COM4). To test this I wrote a simple application that would connect to the headset and then print the attention and meditation values as they were received from the headset.  Again, you can download the NetBeans project for that here. Oracle recently released a developer preview of JavaFX on ARM which will run on the Raspberry Pi.  I thought it would be cool to write a graphical front end for the MindWave data that could take advantage of the built in charts of JavaFX.  Yet another NetBeans project is available here.  Screen shots of the app, which uses a very nice dial from the JFxtras project, are shown below. 
I probably should add labels for the EEG data so the user knows which is the low alpha, mid gamma waves and so on.  Given that I'm not a neurologist I suspect that it won't increase my understanding of what the (rather random looking) traces mean. In the next blog I'll explain how I connected a LEGO motor to the GPIO pins on the Raspberry Pi and then used my mind to control the motor!
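    In the meantime, for anyone who wants to experiment with the serial link before downloading the NetBeans projects above, here is a minimal sketch of the RXTX side. It is not taken from my library; the device name, the 57600 baud rate and the byte-dump handler are assumptions, and real packet parsing should follow the Neurosky protocol document mentioned earlier.

        import gnu.io.CommPortIdentifier;
        import gnu.io.SerialPort;
        import gnu.io.SerialPortEvent;
        import gnu.io.SerialPortEventListener;
        import java.io.InputStream;

        public class MindWaveSerialSketch {
            public static void main(String[] args) throws Exception {
                // Open the USB dongle, which RXTX sees as an ordinary serial port.
                CommPortIdentifier id = CommPortIdentifier.getPortIdentifier("/dev/ttyUSB0");
                SerialPort port = (SerialPort) id.open("MindWaveSketch", 2000);
                port.setSerialPortParams(57600, SerialPort.DATABITS_8,
                        SerialPort.STOPBITS_1, SerialPort.PARITY_NONE);
                final InputStream in = port.getInputStream();

                // Dump incoming bytes; a real client would feed them to a packet parser
                // and raise attention/meditation events for registered listeners.
                port.addEventListener(new SerialPortEventListener() {
                    public void serialEvent(SerialPortEvent event) {
                        if (event.getEventType() != SerialPortEvent.DATA_AVAILABLE) return;
                        try {
                            while (in.available() > 0) {
                                System.out.printf("0x%02X ", in.read());
                            }
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                });
                port.notifyOnDataAvailable(true);

                Thread.sleep(10000);   // listen for ten seconds, then tidy up
                port.close();
            }
        }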

    Read the article

  • Oracle MAA Part 1: When One Size Does Not Fit All

    - by JoeMeeks
    The good news is that Oracle Maximum Availability Architecture (MAA) best practices combined with Oracle Database 12c (see video) introduce first-in-the-industry database capabilities that truly make unplanned outages and planned maintenance transparent to users. The trouble with such good news is that Oracle’s enthusiasm in evangelizing its latest innovations may leave some to wonder if we’ve lost sight of the fact that not all database applications are created equal. Afterall, many databases don’t have the business requirements for high availability and data protection that require all of Oracle’s ‘stuff’. For many real world applications, a controlled amount of downtime and/or data loss is OK if it saves money and effort. Well, not to worry. Oracle knows that enterprises need solutions that address the full continuum of requirements for data protection and availability. Oracle MAA accomplishes this by defining four HA service level tiers: BRONZE, SILVER, GOLD and PLATINUM. The figure below shows the progression in service levels provided by each tier. Each tier uses a different MAA reference architecture to deploy the optimal set of Oracle HA capabilities that reliably achieve a given service level (SLA) at the lowest cost.  Each tier includes all of the capabilities of the previous tier and builds upon the architecture to handle an expanded fault domain. Bronze is appropriate for databases where simple restart or restore from backup is ‘HA enough’. Bronze is based upon a single instance Oracle Database with MAA best practices that use the many capabilities for data protection and HA included with every Oracle Enterprise Edition license. Oracle-optimized backups using Oracle Recovery Manager (RMAN) provide data protection and are used to restore availability should an outage prevent the database from being able to restart. Silver provides an additional level of HA for databases that require minimal or zero downtime in the event of database instance or server failure as well as many types of planned maintenance. Silver adds clustering technology - either Oracle RAC or RAC One Node. RMAN provides database-optimized backups to protect data and restore availability should an outage prevent the cluster from being able to restart. Gold raises the game substantially for business critical applications that can’t accept vulnerability to single points-of-failure. Gold adds database-aware replication technologies, Active Data Guard and Oracle GoldenGate, which synchronize one or more replicas of the production database to provide real time data protection and availability. Database-aware replication greatly increases HA and data protection beyond what is possible with storage replication technologies. It also reduces cost while improving return on investment by actively utilizing all replicas at all times. Platinum introduces all of the sexy new Oracle Database 12c capabilities that Oracle staff will gush over with great enthusiasm. These capabilities include Application Continuity for reliable replay of in-flight transactions that masks outages from users; Active Data Guard Far Sync for zero data loss protection at any distance; new Oracle GoldenGate enhancements for zero downtime upgrades and migrations; and Global Data Services for automated service management and workload balancing in replicated database environments. Each of these technologies requires additional effort to implement. 
But they deliver substantial value for your most critical applications where downtime and data loss are not an option. The MAA reference architectures are inherently designed to address conflicting realities. On one hand, not every application has the same objectives for availability and data protection – the Not One Size Fits All title of this blog post. On the other hand, standard infrastructure is an operational requirement and a business necessity in order to reduce complexity and cost. MAA reference architectures address both realities by providing a standard infrastructure optimized for Oracle Database that enables you to dial-in the level of HA appropriate for different service level requirements. This makes it simple to move a database from one HA tier to the next should business requirements change, or from one hardware platform to another – whether it’s your favorite non-Oracle vendor or an Oracle Engineered System. Please stay tuned for additional blog posts in this series that dive into the details of each MAA reference architecture. Meanwhile, more information on Oracle HA solutions and the Maximum Availability Architecture can be found at: Oracle Maximum Availability Architecture - Webcast Maximize Availability with Oracle Database 12c - Technical White Paper

    Read the article

  • Going by the eBook

    - by Tony Davis
    The book and magazine publishing world is rapidly going digital, and the industry is faced with making drastic changes to its ways of doing business. The sudden take-up of digital readers by the book-buying public has surprised even the most technologically savvy in the industry. Printed books just aren't selling like they did. In contrast, eBooks are doing well. The ePub file format is the standard around which all publishers are converging. ePub is a standard for formatting book content, so that it can be reflowed for various devices, with their widely differing screen-sizes, and can be read offline. If you unzip an ePub file, you'll find familiar formats such as XML, XHTML and CSS. This is both a blessing and a curse. Whilst it is good to be able to use familiar technologies that have been developed to a level of considerable sophistication, it doesn't get us all the way to producing a viable publication. XHTML is a page-description language, not a book-description language, as we soon found out during our initial experiments, when trying to specify headers, footers, indexes and chaptering. As a result, it is difficult to predict how any particular eBook application will decide to render a book. There isn't even a consensus as to how the cover image is specified. All of this is awkward for the publisher. Each book must be created and revised in a form from which can be generated a whole range of 'printed media', from print books, to Mobi for Kindles, ePub for most Tablets and SmartPhones, HTML for excerpted chapters on websites, and a plethora of other formats for other eBook readers, each with its own idiosyncrasies. In theory, if we can get our content into a clean, semantic XML form, such as DocBook, we can, from there, after every revision, perform a series of relatively simple XSLT transformations to output anything from an HTML article, to an ePub file for reading on an iPad, to an ICML file (an XML-based file format supported by the InDesign tool), ready for print publication. As always, however, the task looks bigger the closer you get to the detail. On the way to the utopian world of an XML-based book format that encompasses all the diverse requirements of the different publication media, ePub looks like a reasonable format to adopt. Its forthcoming support for HTML 5 and CSS 3, with ePub 3.0, means that features such as widow-and-orphan controls, multi-column flow and multi-media graphics can be incorporated into eBooks. This starts to make it possible to build an "app-like" experience into the eBook and to free publishers to think of putting content before container; to think of what content is required, be it graphical, textual or audio, from the point of view of the user, rather than what's possible in a given, traditional book "Container". In the meantime, there is a gap between what publishers require and what current technology can provide and, of course, building this app-like experience is far from plain sailing. Real portability between devices is still a big challenge, and achieving the sort of wizardry seen in the likes of Theodore Gray's "Elements" eBook will require some serious device-specific programming skills. Cheers, Tony.

    Read the article

  • Efficiently separating Read/Compute/Write steps for concurrent processing of entities in Entity/Component systems

    - by TravisG
    Setup
    I have an entity-component architecture where Entities can have a set of attributes (which are pure data with no behavior) and there exist systems that run the entity logic which act on that data. Essentially, in somewhat pseudo-code:
        Entity { id; map<id_type, Attribute> attributes; }
        System { update(); vector<Entity> entities; }
    A system that just moves along all entities at a constant rate might be:
        MovementSystem extends System {
            update() {
                for each entity in entities
                    position = entity.attributes["position"];
                    position += vec3(1,1,1);
            }
        }
    Essentially, I'm trying to parallelise update() as efficiently as possible. This can be done by running entire systems in parallel, or by giving each update() of one system a couple of components so different threads can execute the update of the same system, but for a different subset of entities registered with that system.
    Problem
    In reality, these systems sometimes require that entities interact (read/write data from/to each other), sometimes within the same system (e.g. an AI system that reads state from other entities surrounding the currently processed entity), but sometimes between different systems that depend on each other (i.e. a movement system that requires data from a system that processes user input). Now, when trying to parallelize the update phases of entity/component systems, the phase in which data (components/attributes) from Entities is read and used to compute something, and the phase where the modified data is written back to entities, need to be separated in order to avoid data races. Otherwise the only way (not taking into account just "critical section"ing everything) to avoid them is to serialize the parts of the update process that depend on other parts. This seems ugly. To me it would seem more elegant to be able to (ideally) have all processing running in parallel, where a system may read data from all entities as it wishes, but doesn't write modifications back to that data until some later point. The fact that this is even possible is based on the assumption that modification write-backs are usually very small in complexity and don't require much performance, whereas computations are very expensive (relatively). So the overhead added by a delayed-write phase might be evened out by more efficient updating of entities (by having threads work a greater percentage of the time instead of waiting).
    A concrete example of this might be a system that updates physics. The system needs to both read and write a lot of data to and from entities. Optimally, there would be a system in place where all available threads update a subset of all entities registered with the physics system. In the case of the physics system this isn't trivially possible because of race conditions. So without a workaround, we would have to find other systems to run in parallel (which don't modify the same data as the physics system), otherwise the remaining threads are waiting and wasting time. However, that has disadvantages:
    - Practically, the L3 cache is pretty much always better utilized when updating a large system with multiple threads, as opposed to multiple systems at once, which all act on different sets of data.
    - Finding and assembling other systems to run in parallel can be extremely time consuming to design well enough to optimize performance.
    - Sometimes, it might not even be possible at all, because a system just depends on data that is touched by all other systems.
    Solution?
    In my thinking, a possible solution would be a system where reading/updating and writing of data are separated, so that in one expensive phase, systems only read data and compute what they need to compute, and then in a separate, performance-wise cheap, write phase, attributes of entities that needed to be modified are finally written back to the entities. A minimal sketch of this idea follows below.
    The Question
    How might such a system be implemented to achieve optimal performance, as well as making programmer life easier? What are the implementation details of such a system and what might have to be changed in the existing EC-architecture to accommodate this solution?
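    Here is the minimal sketch referred to above, in C++ to match the pseudo-code style. It is only an illustration of the deferred-write idea, not a full ECS, and names such as Entity, vec3 and WriteBuffer are placeholders: systems read attributes freely during the parallel phase and queue their modifications, and the queued writes are applied in one short serial phase.

        #include <algorithm>
        #include <functional>
        #include <mutex>
        #include <string>
        #include <thread>
        #include <unordered_map>
        #include <vector>

        struct vec3 { float x, y, z; };

        struct Entity {
            int id = 0;
            std::unordered_map<std::string, vec3> attributes;   // simplified attribute store
        };

        // Collects writes during the parallel read/compute phase; applied serially later.
        class WriteBuffer {
        public:
            void queue(std::function<void()> write) {
                std::lock_guard<std::mutex> lock(mutex_);
                writes_.push_back(std::move(write));
            }
            void apply() {                                       // cheap, single-threaded write phase
                for (auto& w : writes_) w();
                writes_.clear();
            }
        private:
            std::mutex mutex_;
            std::vector<std::function<void()>> writes_;
        };

        // A movement-style system: each worker reads and computes for its own slice of
        // entities, and defers every modification into the shared WriteBuffer.
        void movement_update(std::vector<Entity>& entities, WriteBuffer& buffer) {
            std::vector<std::thread> workers;
            const size_t chunk = entities.size() / 4 + 1;        // roughly 4 worker threads
            for (size_t start = 0; start < entities.size(); start += chunk) {
                const size_t end = std::min(start + chunk, entities.size());
                workers.emplace_back([&entities, &buffer, start, end] {
                    for (size_t i = start; i < end; ++i) {
                        vec3 p = entities[i].attributes["position"];     // read
                        vec3 next{p.x + 1, p.y + 1, p.z + 1};            // compute
                        buffer.queue([&entities, i, next] {              // defer the write
                            entities[i].attributes["position"] = next;
                        });
                    }
                });
            }
            for (auto& w : workers) w.join();   // end of the parallel read/compute phase
            buffer.apply();                     // write-back happens with no data races
        }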

    Read the article

  • A Reusable Builder Class for .NET testing

    - by Liam McLennan
    When writing tests, other than end-to-end integration tests, we often need to construct test data objects. Of course this can be done using the class's constructor and manually configuring the object, but getting many objects into a valid state soon becomes a large percentage of the testing effort. After many years of painstakingly creating builders for each of my domain objects I have finally become lazy enough to bother to write a generic, reusable builder class for .NET. To use it you instantiate an instance of the builder and configure it with a builder method for each class you wish it to be able to build. The builder method should require no parameters and should return a new instance of the type in a default, valid state. In other words the builder method should be a Func<TypeToBeBuilt>. The best way to make this clear is with an example. In my application I have the following domain classes that I want to be able to use in my tests:
        public class Person
        {
            public string Name { get; set; }
            public int Age { get; set; }
            public bool IsAndroid { get; set; }
        }

        public class Building
        {
            public string Street { get; set; }
            public Person Manager { get; set; }
        }
    The builder for this domain is created like so:
        build = new Builder();
        build.Configure(new Dictionary<Type, Func<object>>
        {
            {typeof(Building), () => new Building {Street = "Queen St", Manager = build.A<Person>()}},
            {typeof(Person), () => new Person {Name = "Eugene", Age = 21}}
        });
    Note how Building depends on Person, even though the Person builder method is not defined yet. Now in a test I can retrieve a valid object from the builder:
        var person = build.A<Person>();
    If I need a class in a customised state I can supply an Action<TypeToBeBuilt> to mutate the object post construction:
        var person = build.A<Person>(p => p.Age = 99);
    The power and efficiency of this approach becomes apparent when your tests require larger and more complex objects than Person and Building. When I get some time I intend to implement the same functionality in Javascript and Ruby. Here is the full source of the Builder class:
        public class Builder
        {
            private Dictionary<Type, Func<object>> defaults;

            public void Configure(Dictionary<Type, Func<object>> defaults)
            {
                this.defaults = defaults;
            }

            public T A<T>()
            {
                if (!defaults.ContainsKey(typeof(T)))
                    throw new ArgumentException("No object of type " + typeof(T).Name + " has been configured with the builder.");
                T o = (T)defaults[typeof(T)]();
                return o;
            }

            public T A<T>(Action<T> customisation)
            {
                T o = A<T>();
                customisation(o);
                return o;
            }
        }

    Read the article

  • Controlling soft errors and false alarms in SSIS

    - by Jim Giercyk
    If you are like me, you dread the 3AM wake-up call.  I would say that the majority of the pages I get are false alarms.  The alerts that require action often require me to open an SSIS package, see where the trouble is and try to identify the offending data.  That can be very time-consuming and can take quite a chunk out of my beauty sleep.  For those reasons, I have developed a simple error handling scenario for SSIS which allows me to rest a little easier.  Let me first say, this is a high level discussion; getting into the nuts and bolts of creating each shape is outside the scope of this document, but if you have an average understanding of SSIS, you should have no problem following along. In the Data Flow above you can see that there is a caution triangle.  For the purpose of this discussion I am creating a truncation error to demonstrate the process, so this is to be expected.  The first thing we need to do is to redirect the error output.  Double-clicking on the Query shape presents us with the properties window for the input.  Simply set the columns that you want to redirect to Redirect Row in the dropdown box and hit Apply. Without going into a dissertation on error handling, I will just note that you can decide which errors you want to redirect on Error and on Truncation.  Therefore, to override this process for a column or condition, simply do not redirect that column or condition. The next thing we want to do is to add some information about the error; specifically, the name of the package which encountered the error and which step in the package wrote the record to the error table.  REMEMBER: If you redirect the error output, your package will not fail, so you will not know where the error record was created without some additional information.    I added 3 columns to my error record; Severity, Package Name and Step Name.  Severity is just a free-form column that you can use to note whether an error is fatal, whether the package is part of a test job and should be ignored, etc.  Package Name and Step Name are system variables. In my package I have created a truncation situation, where the firstname column is 50 characters in the input, but only 4 characters in the output.  Some records will pass without truncation, others will be sent to the error output.  However, the package will not fail. We can see that of the 14 input rows, 8 were redirected to the error table. This information can be used by another step or another scheduled process or triggered to determine whether an error should be sent.  It can also be used as a historical record of the errors that are encountered over time.  There are other system variables that might make more sense in your infrastructure, so try different things.  Date and time seem like something you would want in your output for example.  In summary, we have redirected the error output from an input, added derived columns with information about the errors, and inserted the information and the offending data into an error table.  The error table information can be used by another step or process to determine, based on the error information, what level alert must be sent.  This will eliminate false alarms, and give you a head start when a genuine error occurs.

    Read the article

  • Four Easy Ways to Save a Rocky CRM Relationship

    - by Divya Malik
     Today, I am pleased to introduce our guest blogger Luke Christianson. Luke is  an Application Sales rep based out of Minneapolis, MN.  You can find him on LinkedIn and follow him on Twitter. In any relationship, sooner or later, the excitement fades away.  The honeymoon period gives way to the old routines you had, before you committed to each other and you eventually begin doing things apart from one another.  I’m not talking about a marriage…  Well, I guess I am.Commitment to a CRM tool and building a deep and lasting relationship is not much different than the basics of a traditional love story.  After your controlled CRM pilot program, and maybe the National Sales Meeting where you couldn’t escape those three wonderful letters, CRM, you will soon find that if you haven’t designed an environment where it’s going to enable your reps to make more money, the relationship is doomed.   . If you’re currently in a dysfunctional CRM relationship, here are 4 simple tips to re-engaging users and getting that spark back. Shadow a Sales Rep:   Chances are you can find out exactly what is preventing your sales reps from using the application by simply watching how they go about their day.  Sales reps are driven by money, not by additional administrative duties.  Your system needs to be setup so that they can get the information they need quickly, facilitate making key updates and run their business out of one easy-to-use application.  Increase your sales team’s productivity by 5% automatically:    Cancel the weekly forecast calls with your reps and require them update their opportunities in CRM.  Something else that I’ve seen work extremely well, is when you do Monthly or Quarterly reviews, do not let your sales reps bring anything into the room with them; no spreadsheets, notebooks, or computers.  Everything they need to tell you should be able to be put into CRM and fully accessible by the Sales Manager at any time.  Tool time:      Make sure the tools that you have selected meet both your short-term goals and your long term goals.   You need tools that can adapt like your business does.  You probably can’t wait two months for an update to a picklist value or for the addition of a simple workflow rule.  Do you feel the tools that are in place can create the experience you want for your users? and finally, if all else fails... Keep It Simple, Stupid:     Do you really need to require 15 fields to create an Opportunity?  Do you need to clutter the interface with different reports that don’t add daily value?  Most CRM systems on the market today are flexible enough today that your admin could clean up most of the unnecessary interface ‘noise’ in a few hours.  If they're not, see #3. Every strong relationship can be tedious at times, you’ll fight and eventually make amends, you may even threaten to upgrade to a newer model…  But be patient and think about what you want to achieve and you’ll find a partner for life.

    Read the article

  • Know your Data Lineage

    - by Simon Elliston Ball
    An academic paper without the footnotes isn’t an academic paper. Journalists wouldn’t base a news article on facts that they can’t verify. So why would anyone publish reports without being able to say where the data has come from and be confident of its quality, in other words, without knowing its lineage. (sometimes referred to as ‘provenance’ or ‘pedigree’) The number and variety of data sources, both traditional and new, increases inexorably. Data comes clean or dirty, processed or raw, unimpeachable or entirely fabricated. On its journey to our report, from its source, the data can travel through a network of interconnected pipes, passing through numerous distinct systems, each managed by different people. At each point along the pipeline, it can be changed, filtered, aggregated and combined. When the data finally emerges, how can we be sure that it is right? How can we be certain that no part of the data collection was based on incorrect assumptions, that key data points haven’t been left out, or that the sources are good? Even when we’re using data science to give us an approximate or probable answer, we cannot have any confidence in the results without confidence in the data from which it came. You need to know what has been done to your data, where it came from, and who is responsible for each stage of the analysis. This information represents your data lineage; it is your stack-trace. If you’re an analyst, suspicious of a number, it tells you why the number is there and how it got there. If you’re a developer, working on a pipeline, it provides the context you need to track down the bug. If you’re a manager, or an auditor, it lets you know the right things are being done. Lineage tracking is part of good data governance. Most audit and lineage systems require you to buy into their whole structure. If you are using Hadoop for your data storage and processing, then tools like Falcon allow you to track lineage, as long as you are using Falcon to write and run the pipeline. It can mean learning a new way of running your jobs (or using some sort of proxy), and even a distinct way of writing your queries. Other Hadoop tools provide a lot of operational and audit information, spread throughout the many logs produced by Hive, Sqoop, MapReduce and all the various moving parts that make up the eco-system. To get a full picture of what’s going on in your Hadoop system you need to capture both Falcon lineage and the data-exhaust of other tools that Falcon can’t orchestrate. However, the problem is bigger even that that. Often, Hadoop is just one piece in a larger processing workflow. The next step of the challenge is how you bind together the lineage metadata describing what happened before and after Hadoop, where ‘after’ could be  a data analysis environment like R, an application, or even directly into an end-user tool such as Tableau or Excel. One possibility is to push as much as you can of your key analytics into Hadoop, but would you give up the power, and familiarity of your existing tools in return for a reliable way of tracking lineage? Lineage and auditing should work consistently, automatically and quietly, allowing users to access their data with any tool they require to use. 
The real solution, therefore, is to create a consistent method by which to bring lineage data from these various disparate sources into the data analysis platform that you use, rather than being forced to use the tool that manages the pipeline for the lineage and a different tool for the data analysis. The key is to keep your logs, keep your audit data, from every source, bring them together and use the data analysis tools to trace the paths from raw data to the answer that data analysis provides.

    Read the article

  • How can I best implement 'cache until further notice' with memcache in multiple tiers?

    - by ajreal
    The term "client" used here does not refer to the client's browser, but to a client server.
    Before-cache workflow:
    1. client makes an HTTP request --> 2. server processes it --> 3. server stores the parsed results into memcache for next use (cached indefinitely) --> 4. server returns the results to the client --> 5. client gets the result and stores it in the client's local memcache with a TTL
    After-cache workflow:
    1. another client makes an HTTP request --> 2. memcache hit, return the memcache results to the client --> 3. client gets the result and stores it in the client's local memcache with a TTL
    (TTL = time to live)
    It is possible for me to know when the data was updated, and to expire the relevant memcache(s) accordingly. However, the pitfalls of the client-side cache TTL are:
    - Any data update before the TTL expires is not picked up by the client memcache. Conversely, where there is no update, the client memcache still expires after the TTL.
    - The first request (or concurrent requests) after the cache TTL gets throttled, as it needs to repeat the "before-cache workflow".
    - In the event where the client requires several HTTP requests on a single web page, this can be very bad for performance.
    The ideal solution would be for the client to cache indefinitely until further notice. Here are three proposals for that "further notice":
    Proposal 1: Make use of an HTTP header (current implementation)
    1. client sends an HTTP request with a last-modified-time header 2. server checks: if last data modified time = last cache time, return status 304 3. client decides further processing based on the header
    GOOD: saves some parsing for the client; less data transfer. BAD: firing an HTTP request is still slow; the server end still needs to process lots of requests.
    Proposal 2: Consistently issue an HTTP request to check all data groups' last modified times
    1. client fires an HTTP request 2. server returns the last modified time for all data groups 3. client compares its local last cache time with the result 4. if a data group's last cache time < the server's last modified time, request again for that data group only
    GOOD: only fetch what is not up-to-date; fewer requests for the server. BAD: every web page requires an HTTP request.
    Proposal 3: Tell the client when new data is available (push)
    1. when the server end notices there is a change to a data group 2. notify clients of the change 3. help clients fetch the data again 4. then reset the client's local memcache after the data is parsed
    GOOD: lets the cache act/behave like a true cache. BAD: encourages race conditions.
    My preference is proposal 3, and something like Gearman could be ideal: where there is a change, the Gearman server sends the task to multiple clients (workers). Am I crazy? (I know my first question is a bit crazy.)
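    To make the "further notice" idea more concrete, here is a small Python sketch. The post above does not name a language or client, so the python-memcached client and the key names are assumptions. It shows the cheap version-check variant of Proposal 2: each data group keeps a version counter in the shared memcache, the client keeps its parsed copy locally with no TTL, and only rebuilds when the group's version has moved on. A push worker (Proposal 3, e.g. via Gearman) could bump the same version key instead of the server doing it on write.

        # Sketch only: shared memcache holds a per-group version counter; the client
        # keeps parsed results locally and re-fetches only when the version changes.
        import memcache

        shared = memcache.Client(['127.0.0.1:11211'])   # shared/server-side memcache
        local_cache = {}                                # client's local cache: {group: (version, data)}

        def bump_version(group):
            """Called by whoever updates the data group (or by a push worker)."""
            if shared.incr('version:' + group) is None:  # incr returns None if the key is missing
                shared.set('version:' + group, 1)

        def get_group(group, rebuild):
            """Return cached data for `group`, rebuilding via `rebuild()` only when stale."""
            version = shared.get('version:' + group) or 0
            cached = local_cache.get(group)
            if cached and cached[0] == version:
                return cached[1]                         # still valid: no TTL, no re-fetch
            data = rebuild()                             # the expensive "before-cache workflow"
            local_cache[group] = (version, data)
            return data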

    Read the article

  • How to handle multi-processing of libraries which already spawn sub-processes?

    - by exhuma
    I am having some trouble coming up with a good solution to limit sub-processes in a script which uses a multi-processed library, where the script itself is also multi-processed. Both the library and the script are modifiable by us. I believe the question is more about design than actual code, but for what it's worth, it's written in Python.
    The goal of the library is to hide implementation details of various internet routers. For that reason, the library has a "Proxy" factory method which takes the IP of a router as a parameter. The factory then probes the device using a set of possible proxies. Usually, there is one proxy which immediately knows that it is able to send commands to this device. All others usually take some time to return (given a timeout). One thought was to simply query the device for an identifier, and then select the proper proxy using that, but in order to do so, you would already need to know how to query the device. Abstracting this knowledge is one of the main purposes of the library, so that becomes a little bit of a "circular-requirement"/deadlock: to connect to a device, you need to know what proxy to use, and to know what proxy to create, you need to connect to a device. So probing the device is, as we can see, the best solution so far, apart from keeping a lookup table somewhere.
    The library currently kills all remaining processes once a valid proxy has been found. And yes, there is always only one good proxy per device. Currently there are about 12 proxies. So if one creates a proxy instance using the factory, 12 sub-processes are spawned. So far, this has been really useful and worked very well. But recently someone else wanted to use this library to "broadcast" a command to all devices. So he took the library and wrote his own multi-processed script. This obviously spawned 12 * n processes, where n is the number of IPs to which he broadcasted. This has given us two problems:
    - The host on which the command was executed slowed down to a near halt.
    - Aborting the script with CTRL+C ground the system to a total halt. Not even the hardware console responded anymore! This may be due to some Python strangeness which still needs to be investigated. Maybe related to http://bugs.python.org/issue8296
    The big underlying question is how to design a library which does multi-processing, so that other applications which use this library and want to be multi-processed themselves do not run into system limitations. My first thought was to require a pool to be passed to the library, and execute all tasks in that pool. In that way, the person using the library has control over the usage of system resources (see the sketch below). But my gut tells me that there must be a better solution.
    Disclaimer: My experience with multiprocessing is fairly limited. I have implemented a few straightforward cases which did not require access control to resources. So I do not yet have any practical experience with semaphores or mutexes.
    p.s.: In the future, we may have enough information to do this without the probing. But the database which would contain the proper information is not yet operational. Also, the design of multiprocessing a multiprocessed library intrigues me :)
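    For reference, the "pass a pool into the library" thought mentioned above might look something like the sketch below. The proxy classes and their responds() probe are hypothetical stand-ins, not the real library API; the point is only that the factory uses the caller's pool when one is given, so a broadcasting script can share one bounded pool across all IPs instead of spawning 12 processes per device.

        # Sketch of a factory that accepts an optional pool, so the caller controls
        # how many worker processes exist in total. proxy_cls.responds(ip) is a
        # hypothetical probe, not the real library's API.
        import multiprocessing

        def _probe(args):
            proxy_cls, ip = args
            return proxy_cls if proxy_cls.responds(ip) else None

        def make_proxy(ip, proxy_classes, pool=None):
            """Probe all candidate proxies for `ip`; use the caller's pool if provided."""
            owns_pool = pool is None
            if owns_pool:
                pool = multiprocessing.Pool(processes=len(proxy_classes))
            try:
                for found in pool.imap_unordered(_probe, [(cls, ip) for cls in proxy_classes]):
                    if found is not None:
                        return found(ip)      # first responsive proxy wins
                return None
            finally:
                if owns_pool:
                    pool.terminate()          # only tear down a pool we created ourselves
                # when the caller supplied the pool, unfinished probes are left to it

        # A broadcasting script could then share one bounded pool across all devices:
        #     pool = multiprocessing.Pool(processes=16)
        #     proxies = [make_proxy(ip, CANDIDATE_PROXIES, pool) for ip in all_ips]
        #     pool.close(); pool.join()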

    Read the article

  • JavaFX Datagrid

    - by Chepech
    Hi All. I'm on the verge of starting a new RIA development. We've been using Flex/Flash for the last 2 years but we were considering a more OS approach, so we thought of giving JavaFX a try since it seems the only solid option available. However, after a couple of days of research we found out that there is no such thing as a datagrid for it, at least not in the core API. For those unfamiliar with Flex, a Datagrid is a component that allows you to display a collection of data in a column-row layout (much like an HTML table on steroids). The beauty of it is that you only need to worry about the data itself, as the component does pretty much the rest (sorting, column dragging, etc.). I'm afraid to ask... but is there something slightly similar for JavaFX? We require nothing as fancy as Flex Datagrids/AdvancedDatagrids; we only require an easy, straightforward way to display grids of data that have a little interaction like clicking and sorting, and that can display images, buttons, etc., without having to download a ton of different jars. If there isn't something out there... this would be a shot in the back of the head to the idea of giving JavaFX the chance to compete with Flash on our project (which is sad). I really can't believe the Sun people didn't include something like this in the core API...

    Read the article

  • WebSocket handshake with Ruby and EM::WebSocket::Server

    - by Chad Johnson
    I am trying to create a simple WebSocket connection in JavaScript against my Rails app. I get the following: WebSocket connection to 'ws://localhost:4000/' failed: Error during WebSocket handshake: 'Sec-WebSocket-Accept' header is missing What am I doing wrong? Here is my code: JavaScript: var socket = new WebSocket('ws://localhost:4000'); socket.onopen = function() { var handshake = "GET / HTTP/1.1\n" + "Host: localhost\n" + "Upgrade: websocket\n" + "Connection: Upgrade\n" + "Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw==\n" + "Sec-WebSocket-Protocol: quote\n" + "Sec-WebSocket-Version: 13\n" + "Origin: http://localhost\n"; socket.send(handshake); }; socket.onmessage = function(data) { console.log(data); }; Ruby: require 'rubygems' require 'em-websocket-server' module QuoteService class WebSocket < EventMachine::WebSocket::Server def on_connect handshake_response = "HTTP/1.1 101 Switching Protocols\n" handshake_response << "Upgrade: websocket\n" handshake_response << "Connection: Upgrade\n" handshake_response << "Sec-WebSocket-Accept: HSmrc0sMlYUkAGmm5OPpG2HaGWk=\n" handshake_response << "Sec-WebSocket-Protocol: quote\n" send_message(handshake_response) end def on_receive(data) puts 'RECEIVED: ' + data end end end EventMachine.run do print 'Starting WebSocket server...' EventMachine.start_server '0.0.0.0', 4000, QuoteService::WebSocket puts 'running' end The handshake headers are per Wikipedia.

    Read the article

  • No OpenID endpoint found

    - by azamsharp
    I am trying to use the DotNetOpenId library to add OpenID support on a test website. For some reason it keeps giving me the following error when running on FireFix. Keep in mind that I am using localhost as I am testing it on my local machine. using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using DotNetOpenAuth.OpenId.Extensions.ProviderAuthenticationPolicy; using DotNetOpenAuth.OpenId.Extensions.SimpleRegistration; using DotNetOpenAuth.OpenId.RelyingParty; namespace TableSorterDemo { public partial class Login : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { var openid = new OpenIdRelyingParty(); if (openid.GetResponse() != null) { switch (openid.GetResponse().Status) { case AuthenticationStatus.Authenticated: var fetch = openid.GetResponse().GetExtension(typeof(ClaimsResponse)) as ClaimsResponse; var nick = fetch.Nickname; var email = fetch.Email; break; } } } protected void OpenIdLogin1_LoggedIn(object sender, OpenIdEventArgs e) { var openid = new OpenIdRelyingParty(); if(openid.GetResponse() != null) { switch(openid.GetResponse().Status) { case AuthenticationStatus.Authenticated: var fetch = openid.GetResponse().GetExtension(typeof (ClaimsResponse)) as ClaimsResponse; var nick = fetch.Nickname; var email = fetch.Email; break; } } } protected void OpenIdLogin1_LoggingIn(object sender, OpenIdEventArgs e) { var openid = new OpenIdRelyingParty(); var req = openid.CreateRequest(OpenIdLogin1.Text); var fetch = new ClaimsRequest(); fetch.Email = DemandLevel.Require; fetch.Nickname = DemandLevel.Require; req.AddExtension(fetch); req.RedirectToProvider(); return; } } } Also, if I run the same page in Chrome then I get the following: Login failed: This message has already been processed. This could indicate a replay attack in progress.

    Read the article

  • how to use nokogiri methods .xpath & .at_xpath

    - by Radek
    I'm learning how to use nokogiri and few questions came to me based on the code below require 'rubygems' require 'mechanize' post_agent = WWW::Mechanize.new post_page = post_agent.get('http://www.vbulletin.org/forum/showthread.php?t=230708') puts "\nabsolute path with tbody gives nil" puts post_page.parser.xpath('/html/body/div/div/div/div/div/table/tbody/tr/td/div[2]').xpath('text()').to_s.strip.inspect puts "\n.at_xpath gives an empty string" puts post_page.parser.at_xpath("//div[@id='posts']/div/table/tr/td/div[2]").at_xpath('text()').to_s.strip.inspect puts "\ntwo lines solution with .at_xpath gives an empty string" rows = post_page.parser.xpath("//div[@id='posts']/div/table/tr/td/div[2]") puts rows[0].at_xpath('text()').to_s.strip.inspect puts puts "two lines working code" rows = post_page.parser.xpath("//div[@id='posts']/div/table/tr/td/div[2]") puts rows[0].xpath('text()').to_s.strip puts "\none line working code" puts post_page.parser.xpath("//div[@id='posts']/div/table/tr/td/div[2]")[0].xpath('text()').to_s.strip puts "\nanother one line code" puts post_page.parser.at_xpath("//div[@id='posts']/div/table/tr/td/div[2]").xpath('text()').to_s.strip puts "\none line code with full path" puts post_page.parser.xpath("/html/body/div/div/div/div/div/table/tr/td/div[2]")[0].xpath('text()').to_s.strip is it better to use // or / in xpath? @AnthonyWJones says that 'the use of an unprefixed //' is not so good idea I had to remove tbody from any working xpath otherwise I got 'nil' result. How is possible to remove an element from the xpath to get things work? do I have to use .xpath twice to extract data if not using full xpath? why I cannot make .at_xpath working to extract data? it works nicely here what is the difference?

    Read the article

  • Installing The ruby-gmail rubygem on Mac OS Snow Leopard

    - by johnnygoodman
    I'm working off these instructions: http://github.com/dcparker/ruby-gmail From the home directory I do a standard install and good stuff happens: Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ sudo gem install ruby-gmail Successfully installed ruby-gmail-0.2.1 1 gem installed Installing ri documentation for ruby-gmail-0.2.1... Installing RDoc documentation for ruby-gmail-0.2.1... I head over to my ~/www dir where I run scripts that include other rubygems successfully and create a gmail directory. I create a script that includes rubygems and gmail, but does nothing else: Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ pwd /Users/johnnygoodman/www/gmail Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ ls test-send.rb Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ cat test-send.rb require 'rubygems' require 'gmail' I run this script and the errors begin: Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ ruby test-send.rb /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- mime/message (LoadError) from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require' from /Library/Ruby/Gems/1.8/gems/ruby-gmail-0.2.1/lib/gmail/message.rb:1 from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require' from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require' from /Library/Ruby/Gems/1.8/gems/ruby-gmail-0.2.1/lib/gmail.rb:168 from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:36:in `gem_original_require' from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:36:in `require' from test-send.rb:2 Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ Here's my gem env: Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ gem environment RubyGems Environment: - RUBYGEMS VERSION: 1.3.7 - RUBY VERSION: 1.8.7 (2009-06-08 patchlevel 173) [universal-darwin10.0] - INSTALLATION DIRECTORY: /Library/Ruby/Gems/1.8 - RUBY EXECUTABLE: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby - EXECUTABLE DIRECTORY: /usr/bin - RUBYGEMS PLATFORMS: - ruby - universal-darwin-10 - GEM PATHS: - /Library/Ruby/Gems/1.8 - /Users/johnnygoodman/.gem/ruby/1.8 - /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8 - GEM CONFIGURATION: - :update_sources => true - :verbose => true - :benchmark => false - :backtrace => false - :bulk_threshold => 1000 - :sources => ["http://rubygems.org/", "http://gems.github.com"] - REMOTE SOURCES: - http://rubygems.org/ - http://gems.github.com The path that the errors give when I run the script is not the same as the GEM PATHS given in the env output. However, I don't know how to make them match or if that's the significant thing here.

    Read the article

  • HTTP 500 ERROR on CAS Server while setting SSLVerifyClent as "required"

    - by Huiyu.Bird
    I have 3 servers, a Apache Server, a JBOSS Server and a CAS Server for SSO. The Apache Server resolve all request with a domain such as www.request.com, and the path of CAS Server is www.request.com/cas, and JBOSS Server is www.request.com/jboss (This app got a CAS client). My problem is if I set SSLVerifyClient require for the NameVirtualHost of www.request.com in my Apache Server, I got a HTTP 500 error during the redirecting to the JBOSS Server(http://www.request.com/jboss), after logined in the CAS login page successfully. But everything goes successfully if there is no SSLVerifyClient require . Error logs of my Apache Server : [Mon Apr 19 17:07:25 2010] [error] Re-negotiation handshake failed: Not accepted by client!? Error logs of my JBOSS Server : 2010-04-19 17:29:57,263 ERROR [org.apache.catalina.core.ContainerBase.[jboss.web].[localhost].[/jboss].[jsp]] (ajp-0.0.0.0-8009-1) Servlet.service() for servlet jsp threw exception org.jasig.cas.client.validation.TicketValidationException: The CAS server returned no response. at org.jasig.cas.client.validation.AbstractUrlBasedTicketValidator.validate(AbstractUrlBasedTicketValidator.java:162) at org.jasig.cas.client.validation.AbstractTicketValidationFilter.doFilter(AbstractTicketValidationFilter.java:129) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.jasig.cas.client.authentication.AuthenticationFilter.doFilter(AuthenticationFilter.java:103) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.jasig.cas.client.session.SingleSignOutFilter.doFilter(SingleSignOutFilter.java:78) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:96) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:75) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.in Any tips will be highly appreciated. Thanks in advance.

    Read the article

  • Ruby JSON.pretty_generate ... is pretty unpretty

    - by Amy
    I can't seem to get JSON.pretty_generate() to actually generate pretty output in Rails. I'm using Rails 2.3.5 and it seems to automatically load the JSON gem. Awesome. While using script/console this does indeed produce JSON: some_data = {'foo' => 1, 'bar' => 20, 'cow' => [1, 2, 3, 4], 'moo' => {'dog' => 'woof', 'cat' => 'meow'}} some_data.to_json => "{\"cow\":[1,2,3,4],\"moo\":{\"cat\":\"meow\",\"dog\":\"woof\"},\"foo\":1,\"bar\":20}" But this doesn't produce pretty output: JSON.pretty_generate(some_data) => "{\"cow\":[1,2,3,4],\"moo\":{\"cat\":\"meow\",\"dog\":\"woof\"},\"foo\":1,\"bar\":20}" The only way I've found to generate it is to use irb and to load the Pure version: require 'rubygems' require 'json/pure' some_data = {'foo' => 1, 'bar' => 20, 'cow' => [1, 2, 3, 4], 'moo' => {'dog' => 'woof', 'cat' => 'meow'}} JSON.pretty_generate(some_data) => "{\n \"cow\": [\n 1,\n 2,\n 3,\n 4\n ],\n \"moo\": {\n \"cat\": \"meow\",\n \"dog\": \"woof\"\n },\n \"foo\": 1,\n \"bar\": 20\n}" BUT, what I really want is Rails to produce this. Does anyone have any tips why I can't get the generator in Rails to work correctly? Thanks! Updated 5:20 PM PST: Corrected the output.

    Read the article

  • Setting the Classpath and Accessing code from book: Programming Clojure

    - by user130153
    (I posted this same question on the Clojure list but haven't got an answer yet. Is anyone here ready to help?) I am going through Programming Clojure and I recently downloaded the code from the books official website. For other utils I can do, for example, (require 'clojure.contrib.str-utils) and it works. But how do I load code from the book? (require 'examples.introduction) throws the following exception: java.io.FileNotFoundException: Could not locate examples/ introduction__init.class or examples/introduction.clj on classpath: (NO_SOURCE_FILE:0) [Thrown class clojure.lang.Compiler$CompilerException] Here is the full backtrace: Backtrace: 0: clojure.lang.Compiler.eval(Compiler.java:4543) 1: clojure.core$eval__3990.invoke(core.clj:1728) 2: swank.commands.basic$eval_region__686.invoke(basic.clj:36) 3: swank.commands.basic$listener_eval__695.invoke(basic.clj:50) 4: clojure.lang.Var.invoke(Var.java:346) 5: user$eval__1200.invoke(NO_SOURCE_FILE) 6: clojure.lang.Compiler.eval(Compiler.java:4532) 7: clojure.core$eval__3990.invoke(core.clj:1728) 8: swank.core$eval_in_emacs_package__307.invoke(core.clj:55) 9: swank.core$eval_for_emacs__384.invoke(core.clj:123) 10: clojure.lang.Var.invoke(Var.java:354) 11: clojure.lang.AFn.applyToHelper(AFn.java:179) 12: clojure.lang.Var.applyTo(Var.java:463) 13: clojure.core$apply__3243.doInvoke(core.clj:390) 14: clojure.lang.RestFn.invoke(RestFn.java:428) 15: swank.core$eval_from_control__310.invoke(core.clj:62) 16: swank.core$eval_loop__313.invoke(core.clj:67) 17: swank.core$spawn_repl_thread__445$fn__476$fn__478.invoke(core.clj: 173) 18: clojure.lang.AFn.applyToHelper(AFn.java:171) 19: clojure.lang.AFn.applyTo(AFn.java:164) 20: clojure.core$apply__3243.doInvoke(core.clj:390) 21: clojure.lang.RestFn.invoke(RestFn.java:428) 22: swank.core$spawn_repl_thread__445$fn__476.doInvoke(core.clj:170) 23: clojure.lang.RestFn.invoke(RestFn.java:402) 24: clojure.lang.AFn.run(AFn.java:37) 25: java.lang.Thread.run(Unknown Source) I am trying both Clojure Box and Enclojure in NetBeans on Windows XP. Is it a classpath issue? Where should I place the folder that contains code from the book? Please help me out with my variable enviroment settings as well.

    Read the article

  • Problem with cucumber

    - by sev
    I want to make a Rails app that requires a minimum of gems. I froze the gems into the app and tried to run Cucumber's tests, but I get an error. Below is the sequence of my actions. What am I doing wrong?

        rails cucumber && cd cucumber
        rake rails:freeze:gems

    Add at the end of config/environments/test.rb:

        config.gem 'gherkin'
        config.gem 'cucumber-rails'
        config.gem 'database_cleaner'
        config.gem 'webrat'

    Then:

        rake gems:unpack:dependencies RAILS_ENV=test
        rake gems:build RAILS_ENV=test
        rake gems RAILS_ENV=test
        [F] gherkin
        [F] trollop = 1.16.2
        [F] cucumber-rails
        [F] cucumber = 0.8.0
        [F] gherkin = 1.0.30
        [F] trollop = 1.16.2
        [F] term-ansicolor = 1.0.4
        [F] builder = 2.1.2
        [F] diff-lcs = 1.1.2
        [F] json_pure = 1.4.3
        [F] database_cleaner
        [F] webrat
        [F] nokogiri = 1.2.0
        [F] rack = 1.0
        [F] rack-test = 0.5.3
        [F] rack = 1.0
        script/generate cucumber
        rake db:migrate
        gem uninstall builder cucumber cucumber-rails diff-lcs gherkin json_pure nokogiri rack-test term-ansicolor trollop webrat
        rake cucumber

    The last command fails with:

        /usr/bin/ruby1.8 -I "cucumber/vendor/gems/cucumber-0.8.0/lib:lib" "cucumber/vendor/gems/cucumber-0.8.0/bin/cucumber" --profile default
        /usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- gherkin (LoadError)
            from /usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `require'
            from cucumber/vendor/gems/cucumber-0.8.0/bin/../lib/cucumber/cli/main.rb:5
            from cucumber/vendor/gems/cucumber-0.8.0/bin/cucumber:5:in `require'
            from cucumber/vendor/gems/cucumber-0.8.0/bin/cucumber:5
        rake aborted!
        Command failed with status (1): [/usr/bin/ruby1.8 -I "cucumbe...]
        (See full trace by running task with --trace)
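    One guess worth writing down (an assumption, not a verified fix): script/generate cucumber sets up its own "cucumber" environment, so gems declared and unpacked only for RAILS_ENV=test may never reach the load path that rake cucumber uses. A sketch of the alternative wiring:

        # config/environments/cucumber.rb (the environment that script/generate cucumber creates);
        # exact :lib/:version options may differ -- this simply mirrors the lines added to test.rb
        config.gem 'gherkin'
        config.gem 'cucumber-rails'
        config.gem 'database_cleaner'
        config.gem 'webrat'

    followed by unpacking against that environment:

        rake gems:unpack:dependencies RAILS_ENV=cucumber
        rake gems:build RAILS_ENV=cucumber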

    Read the article

  • Writing tests for Rails plugins

    - by Adam
    I'm working on a plugin for Rails that would add limited in-memory caching to ActiveRecord's finders. The functionality itself is mature enough, but I can't for the life of me get unit tests to work with the plugin. I now have, under vendor/plugins/my_plugin/test/my_plugin_test.rb, a standard subclass of ActiveSupport::TestCase with a couple of basic tests. I try running 'rake test' from the plugin directory, and I have confirmed that this task loads the Ruby file with the test case, but it doesn't actually run any of the tests. I followed the Rails plugin guide (http://guides.rubyonrails.org/plugins.html) where applicable, but it seems to be horribly outdated (it suggests things that Rails now does automatically, etc.). The only output I get is this:

        Kakadu:ingenious_record adam$ rake test
        (in /Users/adam/Sites/1_PRK/vendor/plugins/ingenious_record)
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby -Ilib:lib:test "/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake/rake_test_loader.rb" "test/ingenious_record_test.rb"

    The simplest test case looks like this:

        require 'test_helper'
        require 'active_record'

        class IngeniousRecordTest < ActiveSupport::TestCase
          test "example" do
            assert false
          end
        end

    This should definitely produce at least some output, and the only test in that file should produce a failed assertion. Any ideas what I could do to get Rails to run my tests?
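    A sketch of a test/test_helper.rb that is often enough to make a plugin's tests run standalone (the paths and the sqlite3 connection are illustrative assumptions, not taken from the question):

        # vendor/plugins/ingenious_record/test/test_helper.rb -- minimal, standalone helper
        require 'rubygems'
        require 'test/unit'                 # Test::Unit's at_exit hook is what actually runs the cases
        require 'active_support'
        require 'active_support/test_case'  # provides the test "..." do ... end macro
        require 'active_record'

        # In-memory database so finder-related code has something to talk to.
        ActiveRecord::Base.establish_connection(:adapter => 'sqlite3', :database => ':memory:')

        # Load the plugin itself.
        require File.expand_path(File.dirname(__FILE__) + '/../lib/ingenious_record')

    If the test file loads but no tests run, one possibility is a test_helper that never pulls in test/unit, so no autorunner ever gets installed.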

    Read the article

  • JavaScript tree / treegrid libraries

    - by desau
    I'm looking for a good JavaScript tree / treegrid package. Now, before you answer: it needs to perform well with lots of nodes, perhaps 1,000 sibling nodes, and it needs to draw to a usable state within 2 or 3 seconds with 1,000 nodes. It doesn't necessarily need to draw all 1,000 nodes at once, if it supports some sort of "smart rendering" or fake scrolling (the sketch below illustrates the idea). Beyond that, column resizing, drag and drop, and inline editing would all be nice, although I could probably add that functionality myself. I've already tried Dojo's Tree and Yahoo's YUI TreeView. Both are too slow.
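    A minimal sketch of the "fake scrolling" idea mentioned above, not taken from any particular library: keep all 1,000 items in memory, but only create DOM rows for the slice currently visible inside a fixed-height, scrollable container. Names and numbers are illustrative.

        function VirtualList(container, items, rowHeight) {
            // Assumes container has a fixed CSS height so it can scroll.
            var spacer = document.createElement('div');
            spacer.style.height = (items.length * rowHeight) + 'px'; // gives the scrollbar its full range
            container.style.position = 'relative';
            container.style.overflowY = 'auto';
            container.appendChild(spacer);

            function render() {
                var first = Math.floor(container.scrollTop / rowHeight);
                var visible = Math.ceil(container.clientHeight / rowHeight) + 1;
                spacer.innerHTML = ''; // drop the previously rendered slice
                for (var i = first; i < Math.min(first + visible, items.length); i++) {
                    var row = document.createElement('div');
                    row.style.position = 'absolute';
                    row.style.top = (i * rowHeight) + 'px';
                    row.textContent = items[i];
                    spacer.appendChild(row);
                }
            }

            container.onscroll = render;
            render();
        }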

    Read the article

  • Send and Receive JSON using RestClient and Sinatra

    - by lakshmanan
    Hi, I am trying to send JSON data to a Sinatra app using the RestClient Ruby API.

    At the client (client.rb, using the RestClient API):

        response = RestClient.post 'http://localhost:4567/solve', jdata, :content_type => :json, :accept => :json

    At the server (Sinatra):

        require "rubygems"
        require "sinatra"

        post '/solve/:data' do
          jdata = params[:data]
          for_json = JSON.parse(jdata)
        end

    I get the following error:

        /Library/Ruby/Gems/1.8/gems/rest-client-1.5.1/lib/restclient/abstract_response.rb:53:in `return!': Resource Not Found (RestClient::ResourceNotFound)
            from /Library/Ruby/Gems/1.8/gems/rest-client-1.5.1/lib/restclient/request.rb:193:in `process_result'
            from /Library/Ruby/Gems/1.8/gems/rest-client-1.5.1/lib/restclient/request.rb:142:in `transmit'
            from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/net/http.rb:543:in `start'
            from /Library/Ruby/Gems/1.8/gems/rest-client-1.5.1/lib/restclient/request.rb:139:in `transmit'
            from /Library/Ruby/Gems/1.8/gems/rest-client-1.5.1/lib/restclient/request.rb:56:in `execute'
            from /Library/Ruby/Gems/1.8/gems/rest-client-1.5.1/lib/restclient/request.rb:31:in `execute'
            from /Library/Ruby/Gems/1.8/gems/rest-client-1.5.1/lib/restclient.rb:72:in `post'
            from client.rb:52

    All I want is to send JSON data and receive JSON data back using RestClient and Sinatra, but whatever I try, I get the above error. I've been stuck on this for three hours. Please help.
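    A sketch of one way to make the two sides agree (the assumption being that the JSON belongs in the POST body rather than in the URL; the 404 is consistent with the client posting to /solve while the only defined route is /solve/:data):

        require 'rubygems'
        require 'sinatra'
        require 'json'

        # Route matches the URL the client actually posts to.
        post '/solve' do
          jdata = JSON.parse(request.body.read)   # the JSON travels in the request body
          content_type :json
          { 'received' => jdata }.to_json         # reply with JSON so :accept => :json is honoured
        end

    The client call can then stay exactly as written, as long as jdata is a JSON string (e.g. some_hash.to_json).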

    Read the article

  • Async Load JavaScript Files with Callback

    - by Gcoop
    Hi All, I am trying to write an ultra-simple solution to load a bunch of JS files asynchronously. I have the script below so far. However, the callback is sometimes called when the scripts aren't actually loaded, which causes a "variable not found" error. If I refresh the page, sometimes it just works, because I guess the files are coming straight from the cache and thus are there quicker than the callback is called. It's very strange.

        var Loader = function () { }

        Loader.prototype = {
            require: function (scripts, callback) {
                this.loadCount = 0;
                this.totalRequired = scripts.length;
                this.callback = callback;

                for (var i = 0; i < scripts.length; i++) {
                    this.writeScript(scripts[i]);
                }
            },

            loaded: function (evt) {
                this.loadCount++;
                if (this.loadCount == this.totalRequired && typeof this.callback == 'function') this.callback.call();
            },

            writeScript: function (src) {
                var self = this;
                var s = document.createElement('script');
                s.type = "text/javascript";
                s.async = true;
                s.src = src;
                s.addEventListener('load', function (e) { self.loaded(e); }, false);
                var head = document.getElementsByTagName('head')[0];
                head.appendChild(s);
            }
        }

    Is there any way to test that a JS file is completely loaded, without putting something in the actual JS file itself? I would like to use the same pattern to load libraries out of my control (GMaps etc.).

    Invoking code, just before the closing tag:

        var l = new Loader();
        l.require([
            "ext2.js",
            "ext1.js"],
            function() {
                var config = new MSW.Config();
                Refraction.Application().run(MSW.ViewMapper, config);
                console.log('All Scripts Loaded');
            });

    Thanks for any help.
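    A sketch of a more defensive writeScript (an assumption about the cause, not a confirmed diagnosis): older IE versions report script completion through onreadystatechange rather than a 'load' event, so depending on the browser the counter can end up out of step with what has actually executed. Covering both paths, and guarding against counting a script twice, looks roughly like this:

        Loader.prototype.writeScript = function (src) {
            var self = this;
            var done = false;
            var s = document.createElement('script');
            s.type = 'text/javascript';
            s.async = true;
            s.src = src;
            // Fires via onload on modern browsers, via onreadystatechange on old IE.
            s.onload = s.onreadystatechange = function () {
                if (!done && (!this.readyState || this.readyState === 'loaded' || this.readyState === 'complete')) {
                    done = true;                          // count each script exactly once
                    s.onload = s.onreadystatechange = null;
                    self.loaded();
                }
            };
            document.getElementsByTagName('head')[0].appendChild(s);
        };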

    Read the article
