Search Results

Search found 936 results on 38 pages for 'demos'.


  • Top things web developers should know about the Visual Studio 2013 release

    - by Jon Galloway
    Summary for lazy readers:

    - Visual Studio 2013 is now available for download on the Visual Studio site and on MSDN subscriber downloads
    - Visual Studio 2013 installs side by side with Visual Studio 2012 and supports round-tripping between Visual Studio versions, so you can try it out without committing to a switch
    - Visual Studio 2013 ships with the new version of ASP.NET, which includes ASP.NET MVC 5, ASP.NET Web API 2, Razor 3, Entity Framework 6 and SignalR 2.0
    - The new ASP.NET release focuses on One ASP.NET, so core features and web tools work the same across the platform (e.g. adding ASP.NET MVC controllers to a Web Forms application)
    - New core features include new templates based on Bootstrap, a new scaffolding system, and a new identity system
    - Visual Studio 2013 is an incredible editor for web files, including HTML, CSS, JavaScript, Markdown, LESS, CoffeeScript, Handlebars, Angular, Ember, Knockout, etc.

    Top links:

    - Visual Studio 2013 content on the ASP.NET site is in the standard new releases area: http://www.asp.net/vnext
    - ASP.NET and Web Tools for Visual Studio 2013 Release Notes
    - Short intro videos on the new Visual Studio web editor features from Scott Hanselman and Mads Kristensen
    - Announcing release of ASP.NET and Web Tools for Visual Studio 2013 post on the official .NET Web Development and Tools Blog
    - Scott Guthrie's post: Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework

    Okay, for those of you who are still with me, let's dig in a bit.

    Quick web dev notes on downloading and installing Visual Studio 2013

    I found Visual Studio 2013 to be a pretty fast install. According to Brian Harry's release post, installing over pre-release versions of Visual Studio is supported. I've installed the release version over pre-release versions, and it worked fine. If you're only going to be doing web development, you can speed up the install if you just select Web Developer Tools. Of course, as a good Microsoft employee, I'll mention that you might also want to install some of those other features, like the Store apps for Windows 8 and the Windows Phone 8.0 SDK, but they do download and install a lot of other stuff (e.g. the Windows Phone SDK sets up Hyper-V and downloads several GBs of VMs). So if you're planning just to do web development for now, you can pick just the Web Developer Tools and install the other stuff later. If you've got a fast internet connection, I recommend using the web installer instead of downloading the ISO. The ISO includes all the features, whereas the web installer just downloads what you're installing.

    Visual Studio 2013 development settings and color theme

    When you start up Visual Studio, it'll prompt you to pick some defaults. These are totally up to you - whatever suits your development style - and you can change them later. As I said, these are completely up to you. I recommend either the Web Development or Web Development (Code Only) settings. The only real difference is that Code Only hides the toolbars, and you can switch between them using Tools / Import and Export Settings / Reset. Usually I've just gone with Web Development (code only) in the past because I just want to focus on the code, although the Standard toolbar does make it easier to switch default web browsers. More on that later.

    Color theme

    Sigh.
    Okay, everyone's got their favorite colors. I alternate between Light and Dark depending on my mood, and I personally like how the low contrast on the window chrome in those themes puts the emphasis on my code rather than the tabs and toolbars. I know some people got pretty worked up over that, though, and wanted the Blue theme back. I personally don't like it - it reminds me of ancient versions of Visual Studio that I don't want to think about anymore. So here's the thing: if you install Visual Studio Ultimate, it defaults to Blue. The other versions default to Light. If you use Blue, I won't criticize you - out loud, that is. You can change themes really easily - either Tools / Options / Environment / General, or the smart way: Ctrl+Q for Quick Launch, then type Theme and hit enter.

    Signing in

    During the first run, you'll be prompted to sign in. You don't have to - you can click the "Not now, maybe later" link at the bottom of that dialog. I recommend signing in, though. It's not hooked in with licensing or tracking the kind of code you write to sell you components. It is doing good things, like syncing your Visual Studio settings between computers. More about that here. So, you don't have to, but I sure do.

    Overview of shiny new things in ASP.NET land

    There are a lot of good new things in ASP.NET. I'll list some of my favorites here, but you can read more on the ASP.NET site.

    One ASP.NET

    You've heard us talk about this for a while. The idea is that options are good, but choice can be a burden. When you start a new ASP.NET project, why should you have to make a tough decision - with long-term consequences - about how your application will work? If you want to use ASP.NET Web Forms, but have the option of adding in ASP.NET MVC later, why should that be hard? It's all ASP.NET, right? Ideally, you'd just decide that you want to use ASP.NET to build sites and services, and you could use the appropriate tools as you needed them. So, here it is. When you create a new ASP.NET application, you just create an ASP.NET application. Next, you can pick from some templates to get you started... but these are different. They're not "painful decision" templates, they're just some starting pieces. And, most importantly, you can mix and match. I can pick a "mostly" Web Forms template, but include MVC and Web API folders and core references. If you've tried to mix and match in the past, you're probably aware that it was possible, but not pleasant. ASP.NET MVC project files contained special project type GUIDs, so you'd only get controller scaffolding support in a Web Forms project if you manually edited the csproj file. Features in one stack didn't work in others. Project templates were painful choices. That's no longer the case. Hooray! I just did a demo in a presentation last week where I created a new Web Forms + MVC + Web API site, built a model, scaffolded MVC and Web API controllers with EF Code First, added data in the MVC view, viewed it in Web API, then added a GridView to the Web Forms Default.aspx page and bound it to the model. In about 5 minutes. Sure, it's a simple example, but it's great to be able to share code and features across the whole ASP.NET family.
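    To make the mix-and-match point concrete, here's a minimal sketch - illustrative code with a hypothetical controller name, not from the original post - of a Web API 2 controller you could now drop into a project that started from a Web Forms template, with no csproj surgery required:

        // Illustrative example: a Web API 2 controller added alongside
        // Web Forms pages in the same project (controller name is made up).
        using System.Collections.Generic;
        using System.Web.Http;

        public class GreetingsController : ApiController
        {
            // GET api/greetings
            public IEnumerable<string> Get()
            {
                return new[] { "Hello from Web API, inside a Web Forms project!" };
            }
        }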
    Authentication

    In the past, authentication was built into the templates. So, for instance, there was an ASP.NET MVC 4 Intranet Project template which created a new ASP.NET MVC 4 application that was preconfigured for Windows Authentication. All of that authentication stuff was built into each template, so it varied between the stacks, and you couldn't reuse it. You didn't see a lot of changes to the authentication options, since they required big changes to a bunch of project templates. Now, the new project dialog includes a common authentication experience. When you hit the Change Authentication button, you get some common options that work the same way regardless of the template or reference settings you've made. These options work on all ASP.NET frameworks and all hosting environments (IIS, IIS Express, or OWIN for self-host). The default is Individual User Accounts: this is the standard "create a local account, using username / password or OAuth" thing; however, it's all built on the new Identity system. More on that in a second. The one setting that has some configuration to it is Organizational Accounts, which lets you configure authentication using Active Directory, Windows Azure Active Directory, or Office 365.

    Identity

    There's a new identity system. We've taken the best parts of the previous ASP.NET Membership and Simple Membership systems, rolled in a lot of feedback and made big enhancements to support important developer concerns like unit testing and extensibility. I've written long posts about ASP.NET Identity, and I'll do it again. Soon. This is not that post. The short version is that I think we've finally got just the right identity system. Some of my favorite features:

    - There are simple, sensible defaults that work well - you can File / New / Run / Register / Login, and everything works.
    - It supports standard username / password as well as external authentication (OAuth, etc.).
    - It's easy to customize without having to re-implement an entire provider. It's built using pluggable pieces, rather than one large monolithic system.
    - It's built using interfaces like IUser and IRole that allow for unit testing, dependency injection, etc.
    - You can easily add user profile data (e.g. URL, Twitter handle, birthday). You just add properties to your ApplicationUser model and they'll automatically be persisted.
    - Complete control over how the identity data is persisted. By default, everything works with Entity Framework Code First, but it's built to support changes from small (modify the schema) to big (use another ORM, store your data in a document database or in the cloud or in XML or in the EXIF data of your desktop background or whatever).
    - It's configured via OWIN. More on OWIN and Katana later, but the fact that it's built using OWIN means it's portable.

    You can find out more in the Authentication and Identity section of the ASP.NET site (and lots more content will be going up there soon).
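    As an illustration of the profile-data point above - the property names here are hypothetical, not from the original post - extending the generated ApplicationUser class is just a matter of adding properties; with the default Entity Framework Code First persistence, they're stored automatically:

        // Hypothetical profile properties added to the template-generated
        // user class, which derives from IdentityUser.
        using System;
        using Microsoft.AspNet.Identity.EntityFramework;

        public class ApplicationUser : IdentityUser
        {
            public string Url { get; set; }
            public string TwitterHandle { get; set; }
            public DateTime? Birthday { get; set; }
        }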
    New Bootstrap based project templates

    The new project templates are built using Bootstrap 3. Bootstrap (formerly Twitter Bootstrap) is a front-end framework that brings a lot of nice benefits:

    - It's responsive, so your projects will automatically scale to device width using CSS media queries. For example, menus are full size on a desktop browser, but on narrower screens you automatically get a mobile-friendly menu.
    - The built-in Bootstrap styles make your standard page elements (headers, footers, buttons, form inputs, tables, etc.) look nice and modern.
    - Bootstrap is themeable, so you can reskin your whole site by dropping in a new Bootstrap theme. Since Bootstrap is pretty popular across the web development community, this gives you a large and rapidly growing variety of templates (free and paid) to choose from.
    - Bootstrap also includes a lot of very useful things: components (like progress bars and badges), useful glyphicons, and some jQuery plugins for tooltips, dropdowns, carousels, etc.

    Here's a look at how the responsive part works. When the page is full screen, the menu and header are optimized for a wide screen display. When I shrink the page down (this is all based on page width, not user-agent sniffing), the menu turns into a nice mobile-friendly dropdown. For a quick example, I grabbed a new free theme off bootswatch.com. For simple themes, you just need to download the bootstrap.css file and replace the /content/bootstrap.css file in your project. Now when I refresh the page, I've got a new theme.

    Scaffolding

    The big change in scaffolding is that it's one system that works across ASP.NET. You can create a new Empty Web project or Web Forms project and you'll get the Scaffold context menus. For release, we've got MVC 5 and Web API 2 controllers. We had a preview of Web Forms scaffolding in the preview releases, but it wasn't fully baked for RTM. Look for it in a future update, expected pretty soon. The scaffolding system wasn't just changed to work across the ASP.NET frameworks, it's also built to enable future extensibility. That's not in this release, but should also hopefully be out soon.

    Project Readme page

    This is a small thing, but I really like it. When you create a new project, you get a Project_Readme.html page that's added to the root of your project and opens in the Visual Studio built-in browser. I love it. A long time ago, when you created a new project, we just dumped it on you and left you scratching your head about what to do next. Not ideal. Then we started adding a bunch of Getting Started information to the new project templates. That told you what to do next, but you had to delete all of that stuff out of your website. It doesn't belong there. Not ideal. This is a simple HTML file that's not integrated into your project code at all. You can delete it if you want. But it shows a lot of helpful links that are current for the project you just created. In the future, if we add new wacky project types, they can create readme docs with specific information on how to do appropriately wacky things. Side note: I really like that they used the internal browser in Visual Studio to show this content rather than popping open an HTML page in the default browser. I hate that. It's annoying. If you're doing that, I hope you'll stop. What if some unnamed person has 40 or 90 tabs saved in their browser session? When you pop open your "Thanks for installing my Visual Studio extension!" page, all eleventy billion tabs start up and I wish I'd never installed your thing. Be like these guys and pop Visual Studio specific HTML docs open in the Visual Studio browser.

    ASP.NET MVC 5

    The biggest change with ASP.NET MVC 5 is that it's no longer a separate project type. It integrates well with the rest of ASP.NET. In addition to that and the other common features we've already looked at (Bootstrap templates, Identity, authentication), here's what's new for ASP.NET MVC.

    Attribute routing

    ASP.NET MVC now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your routes by annotating your actions and controllers. This supports some pretty complex, customized routing scenarios, and it allows you to keep your route information right with your controller actions if you'd like.
    Here's a controller that includes an action whose method name is Hiding, but I've used attribute routing to map it to /spaghetti/with-nesting/where-is-waldo:

        public class SampleController : Controller
        {
            [Route("spaghetti/with-nesting/where-is-waldo")]
            public string Hiding()
            {
                return "You found me!";
            }
        }

    I enable that in my RouteConfig.cs, and I can use it in conjunction with my other MVC routes like this:

        public class RouteConfig
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
                routes.MapMvcAttributeRoutes();
                routes.MapRoute(
                    name: "Default",
                    url: "{controller}/{action}/{id}",
                    defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
                );
            }
        }

    You can read more about Attribute Routing in ASP.NET MVC 5 here.

    Filter enhancements

    There are two new additions to filters: Authentication Filters and Filter Overrides. Authentication filters are a new kind of filter in ASP.NET MVC that run prior to authorization filters in the ASP.NET MVC pipeline and allow you to specify authentication logic per-action, per-controller, or globally for all controllers. Authentication filters process credentials in the request and provide a corresponding principal. Authentication filters can also add authentication challenges in response to unauthorized requests. Override filters let you change which filters apply to a given action method or controller. Override filters specify a set of filter types that should not be run for a given scope (action or controller). This allows you to configure filters that apply globally but then exclude certain global filters from applying to specific actions or controllers.

    ASP.NET Web API 2

    ASP.NET Web API 2 includes a lot of new features.

    Attribute Routing

    ASP.NET Web API supports the same attribute routing system that's in ASP.NET MVC 5. You can read more about the Attribute Routing features in Web API in this article.

    OAuth 2.0

    ASP.NET Web API picks up OAuth 2.0 support, using security middleware running on OWIN (discussed below). This is great for features like authenticated Single Page Applications.

    OData Improvements

    ASP.NET Web API now has full OData support. That required adding in some of the most powerful operators: $select, $expand, $batch and $value. You can read more about OData operator support in this article by Mike Wasson.

    Lots more

    There's a huge list of other features, including CORS (cross-origin resource sharing), IHttpActionResult, IHttpRequestContext, and more. I think the best overview is in the release notes.

    OWIN and Katana

    I've written about OWIN and Katana recently. I'm a big fan. OWIN is the Open Web Interface for .NET. It's a spec, like HTML or HTTP, so you can't install OWIN. The benefit of OWIN is that it's a community specification, so anyone who implements it can plug into the ASP.NET stack, either as middleware or as a host. Katana is the Microsoft implementation of OWIN. It leverages OWIN to wire up things like authentication, handlers, modules, IIS hosting, etc., so ASP.NET can host OWIN components and Katana components can run in someone else's OWIN implementation. Howard Dierking just wrote a cool article in MSDN magazine describing Katana in depth: Getting Started with the Katana Project. He had an interesting example showing an OWIN based pipeline which leveraged SignalR, ASP.NET Web API and NancyFx components in the same stack. If this kind of thing makes sense to you, that's great. If it doesn't, don't worry, but keep an eye on it.
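    To give a feel for what OWIN middleware looks like in practice, here's a minimal Katana startup sketch - the class and namespace names are illustrative, not from the article:

        // Hypothetical minimal Katana startup. Katana finds the Startup
        // class via the OwinStartup attribute (or by naming convention).
        using System;
        using Microsoft.Owin;
        using Owin;

        [assembly: OwinStartup(typeof(MyApp.Startup))]

        namespace MyApp
        {
            public class Startup
            {
                public void Configuration(IAppBuilder app)
                {
                    // Inline middleware: inspect the request, then call
                    // the next component in the OWIN pipeline.
                    app.Use(async (context, next) =>
                    {
                        Console.WriteLine("Request: " + context.Request.Path);
                        await next();
                    });
                }
            }
        }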
    You're going to see some cool things happen as a result of ASP.NET becoming more and more pluggable.

    Visual Studio Web Tools

    Okay, this stuff's just crazy. Visual Studio has been adding some nice web dev features over the past few years, but they've really cranked it up for this release. Visual Studio is by far my favorite code editor for all web files: CSS, HTML, JavaScript, and lots of popular libraries. Stop thinking of Visual Studio as a big editor that you only use to write back-end code. Stop editing HTML and CSS in Notepad (or Sublime, Notepad++, etc.). Visual Studio starts up in under 2 seconds on a modern computer with an SSD. Misspelling HTML attributes or your CSS classes or jQuery or Angular syntax is stupid. It doesn't make you a better developer, it makes you a silly person who wastes time.

    Browser Link

    Browser Link is a real-time, two-way connection between Visual Studio and all connected browsers. It's only attached when you're running locally, in debug, but it applies to any and all connected browsers, including emulators. You may have seen demos that showed the browsers refreshing based on changes in the editor, and I'll agree that's pretty cool. But it's really just the start. It's a two-way connection, and it's built for extensibility. That means you can write extensions that push information from your running application (in IE, Chrome, a mobile emulator, etc.) back to Visual Studio. Mads and team have shown off some demonstrations where they enabled edit mode in the browser, which updated the source HTML back in Visual Studio. It's also possible to look at how the rendered HTML performs, check for compatibility issues, watch for unused CSS classes - the sky's the limit.

    New HTML editor

    The previous HTML editor had a lot of old code that didn't allow for improvements. The team rewrote the HTML editor to take advantage of the new(ish) extensibility features in Visual Studio, which then allowed them to add in all kinds of features - things like CSS Class and ID IntelliSense (so you type style="" and get a list of classes and IDs for your project), smart indent based on how your document is formatted, JavaScript reference auto-sync, etc. Here's a 3 minute tour from Mads Kristensen.

    Lots more Visual Studio web dev features

    That's just a sampling - there's a ton of great features for JavaScript editing, CSS editing, publishing, and Page Inspector (which shows real-time rendering of your page inside Visual Studio). Here are some more short videos showing those features.

    Lots, lots more

    Okay, that's just a summary, and it's still quite a bit. Head on over to http://asp.net/vnext for more information, and download Visual Studio 2013 now to get started!

    Read the article

  • Scrum in 5 Minutes

    - by Stephen.Walther
    The goal of this blog entry is to explain the basic concepts of Scrum in less than five minutes. You'll learn how Scrum can help a team of developers successfully complete a complex software project.

    Product Backlog and the Product Owner

    Imagine that you are part of a team which needs to create a new website - for example, an e-commerce website. You have an overwhelming amount of work to do. You need to build (or possibly buy) a shopping cart, install an SSL certificate, create a product catalog, create a Facebook page, and at least a hundred other things that you have not thought of yet. According to Scrum, the first thing you should do is create a list. Place the highest priority items at the top of the list and the lower priority items lower in the list. For example, creating the shopping cart and buying the domain name might be high priority items, and creating a Facebook page might be a lower priority item. In Scrum, this list is called the Product Backlog. How do you prioritize the items in the Product Backlog? Different stakeholders in the project might have different priorities. Gary, your division VP, thinks that it is crucial that the e-commerce site has a mobile app. Sally, your direct manager, thinks taking advantage of new HTML5 features is much more important. Multiple people are pulling you in different directions. According to Scrum, it is important that you always designate one person, and only one person, as the Product Owner. The Product Owner is the person who decides what items should be added to the Product Backlog and the priority of the items in the Product Backlog. The Product Owner could be the customer who is paying the bills, the project manager who is responsible for delivering the project, or a customer representative. The critical point is that the Product Owner must always be a single person, and that single person has absolute authority over the Product Backlog.

    Sprints and the Sprint Backlog

    So now the developer team has a prioritized list of items and they can start work. The team starts implementing the first item in the Backlog - the shopping cart - and the team is making good progress. Unfortunately, however, halfway through the work of implementing the shopping cart, the Product Owner changes his mind. The Product Owner decides that it is much more important to create the product catalog before the shopping cart. With some frustration, the team switches their development efforts to focus on implementing the product catalog. However, part way through completing this work, once again the Product Owner changes his mind about the highest priority item. Getting work done when priorities are constantly shifting is frustrating for the developer team, and it results in lower productivity. At the same time, however, the Product Owner needs to have absolute authority over the priority of the items which need to get done. Scrum solves this conflict with the concept of Sprints. In Scrum, a developer team works in Sprints. At the beginning of a Sprint, the developers and the Product Owner agree on the items from the backlog which they will complete during the Sprint. This subset of items from the Product Backlog becomes the Sprint Backlog. During the Sprint, the Product Owner is not allowed to change the items in the Sprint Backlog. In other words, the Product Owner cannot shift priorities on the developer team during the Sprint. Different teams use Sprints of different lengths, such as one-month Sprints, two-week Sprints, and one-week Sprints.
    For high-stress, time-critical projects, teams typically choose shorter Sprints, such as one-week Sprints. For more mature projects, longer one-month Sprints might be more appropriate. A team can pick whatever Sprint length makes sense for them, just as long as the team is consistent. You should pick a Sprint length and stick with it.

    Daily Scrum

    During a Sprint, the developer team needs to have meetings to coordinate their work on completing the items in the Sprint Backlog. For example, the team needs to discuss who is working on what and whether any blocking issues have been discovered. Developers hate meetings (well, sane developers hate meetings). Meetings take developers away from their work of actually implementing stuff, as opposed to talking about implementing stuff. However, a developer team which never has meetings and never coordinates their work also has problems. For example, Fred might get stuck on a programming problem for days and never reach out for help, even though Tom (who sits in the cubicle next to him) has already solved the very same problem. Or both Tom and Fred might have started working on the same item from the Sprint Backlog at the same time. In Scrum, these conflicting needs - limiting meetings but enabling team coordination - are resolved with the idea of the Daily Scrum. The Daily Scrum is a meeting for coordinating the work of the developer team which happens once a day. To keep the meeting short, each developer answers only the following three questions:

    1. What have you done since yesterday?
    2. What do you plan to do today?
    3. Any impediments in your way?

    During the Daily Scrum, developers are not allowed to talk about issues with their cat, do demos of their latest work, or tell heroic stories of programming problems overcome. The meeting must be kept short - typically about 15 minutes. Issues which come up during the Daily Scrum should be discussed in separate meetings which do not involve the whole developer team.

    Stories and Tasks

    Items in the Product or Sprint Backlog - such as building a shopping cart or creating a Facebook page - are often referred to as User Stories or Stories. The Stories are created by the Product Owner and should represent some business need. Unlike the Product Owner, the developer team needs to think about how a Story should be implemented. At the beginning of a Sprint, the developer team takes the Stories from the Sprint Backlog and breaks the Stories into tasks. For example, the developer team might take the Create a Shopping Cart Story and break it into the following tasks:

    - Enable users to add and remove items from the shopping cart
    - Persist the shopping cart to the database between visits
    - Redirect the user to the checkout page when the Checkout button is clicked

    During the Daily Scrum, members of the developer team volunteer to complete the tasks required to implement the next Story in the Sprint Backlog. When a developer talks about what he did yesterday or plans to do today, the developer should be referring to a task. Stories are owned by the Product Owner, and a Story is all about business value. In contrast, tasks are owned by the developer team, and a task is all about implementation details. A Story might take several days or weeks to complete. A task is something which a developer can complete in less than a day. Some teams get lazy about breaking Stories into tasks.
    Neglecting to break Stories into tasks can lead to "Never Ending Stories." If you don't break a Story into tasks, then you can't know how much of a Story has actually been completed, because you don't have a clear idea about the implementation steps required to complete the Story.

    Scrumboard

    During the Daily Scrum, the developer team uses a Scrumboard to coordinate their work. A Scrumboard contains a list of the Stories for the current Sprint, the tasks associated with each Story, and the state of each task. The developer team uses the Scrumboard so everyone on the team can see, at a glance, what everyone is working on. As a developer works on a task, the task moves from state to state, and the state of the task is updated on the Scrumboard. Common task states are ToDo, In Progress, and Done. Some teams include additional task states, such as Needs Review or Needs Testing. Some teams use a physical Scrumboard. In that case, you use index cards to represent the Stories and the tasks, and you tack the index cards onto a physical board. Using a physical Scrumboard has several disadvantages. A physical Scrumboard does not work well with a distributed team - for example, it is hard to share the same physical Scrumboard between Boston and Seattle. Also, generating reports from a physical Scrumboard is more difficult than generating reports from an online Scrumboard.

    Estimating Stories and Tasks

    Stakeholders in a project, the people investing in a project, need to have an idea of how a project is progressing and when the project will be completed. For example, if you are investing in creating an e-commerce site, you need to know when the site can be launched. It is not enough to just say that "the project will be done when it is done," because the stakeholders almost certainly have a limited budget to devote to the project. The people investing in the project cannot determine the business value of the project unless they have an estimate of how long it will take to complete the project. Developers hate to give estimates. The reason that developers hate to give estimates is that the estimates are almost always completely made up. For example, you really don't know how long it takes to build a shopping cart until you finish building a shopping cart, and at that point, the estimate is no longer useful. The problem is that writing code is much more like finding a cure for cancer than building a brick wall. Building a brick wall is very straightforward. After you learn how to add one brick to a wall, you understand everything that is involved in adding a brick to a wall. There is no additional research required and no surprises. If, on the other hand, I assembled a team of scientists and asked them to find a cure for cancer and estimate exactly how long it will take, they would have no idea. The problem is that there are too many unknowns. I don't know how to cure cancer; I need to do a lot of research here, so I cannot even begin to estimate how long it will take. So developers hate to provide estimates, but the Product Owner and other product stakeholders have a legitimate need for estimates. Scrum resolves this conflict by using the idea of Story Points. Different teams use different units to represent Story Points. For example, some teams use shirt sizes, such as Small, Medium, Large, and X-Large. Some teams prefer to use coffee cup sizes, such as Tall, Short, and Grande. Finally, some teams like to use numbers from the Fibonacci series.
    These alternative units are converted into a Story Point value. Regardless of the type of unit which you use to represent Story Points, the goal is the same. Instead of attempting to estimate a Story in hours (which is doomed to failure), you use a much less fine-grained measure of work. A developer team is much more likely to be able to estimate that a Story is Small or X-Large than the exact number of hours required to complete the Story. So you can think of Story Points as a compromise between the needs of the Product Owner and the developer team. When a Sprint starts, the developer team devotes more time to thinking about the Stories in the Sprint and breaks the Stories into tasks. In Scrum, you estimate the work required to complete a Story by using Story Points, and you estimate the work required to complete a task by using hours. The difference between Stories and tasks is that you don't create a task until you are just about ready to start working on it. A task is something that you should be able to complete within a day, so you have a much better chance of providing an accurate estimate of the work required to complete a task than a Story.

    Burndown Charts

    In Scrum, you use Burndown charts to represent the remaining work on a project. You use Release Burndown charts to represent the overall remaining work for a project, and you use Sprint Burndown charts to represent the remaining work for a particular Sprint. You create a Release Burndown chart by calculating the remaining number of uncompleted Story Points for the entire Product Backlog every day. The vertical axis represents Story Points and the horizontal axis represents time. A Sprint Burndown chart is similar to a Release Burndown chart, but it focuses on the remaining work for a particular Sprint. There are two different types of Sprint Burndown charts: you can represent the remaining work in a Sprint either with Story Points or with task hours (the chart in the original post, taken from Wikipedia, uses hours). When each Product Backlog Story is completed, the Release Burndown chart slopes down. When each Story or task is completed, the Sprint Burndown chart slopes down. Burndown charts do not always slope down over time, though. As new work is added to the Product Backlog, the Release Burndown chart slopes up. If new tasks are discovered during a Sprint, the Sprint Burndown chart will also slope up. The purpose of a Burndown chart is to give you a way to track team progress over time. If, halfway through a Sprint, the Sprint Burndown chart is still climbing, then you know that you are in trouble.

    Team Velocity

    Stakeholders in a project always want more work done faster. For example, the Product Owner for the e-commerce site wants the website to launch before tomorrow. Developers tend to be overly optimistic. Rarely do developers acknowledge the physical limitations of reality. So project stakeholders and the developer team often collude to delude themselves about how much work can be done and how quickly. Too many software projects begin in a state of optimism and end in frustration as deadlines zoom by. In Scrum, this problem is overcome by calculating a number called the Team Velocity. The Team Velocity is a measure of the average number of Story Points which a team has completed in previous Sprints.
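    The arithmetic behind Team Velocity is just an average. Here's a minimal sketch of the calculation (illustrative code, not from the original post):

        // Hypothetical helper: average the Story Points the team actually
        // completed in each finished Sprint.
        using System.Collections.Generic;
        using System.Linq;

        public static class Velocity
        {
            public static double Calculate(IEnumerable<int> completedPointsPerSprint)
            {
                return completedPointsPerSprint.Average();
            }
        }

        // Example: Sprints completing 21, 18, and 24 Story Points give a
        // Team Velocity of (21 + 18 + 24) / 3 = 21.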
    Knowing the Team Velocity is important during the Sprint Planning meeting, when the Product Owner and the developer team work together to determine the number of Stories which can be completed in the next Sprint. If you know the Team Velocity, then you can avoid committing to more work than the team has been able to accomplish in the past, and your team is much more likely to complete all of the work required for the next Sprint.

    Scrum Master

    There are three roles in Scrum: the Product Owner, the developer team, and the Scrum Master. I've already discussed the Product Owner. The Product Owner is the one and only person who maintains the Product Backlog and prioritizes the Stories. I've also described the role of the developer team. The members of the developer team do the work of implementing the Stories by breaking the Stories into tasks. The final role, which I have not discussed, is the role of the Scrum Master. The Scrum Master is responsible for ensuring that the team is following the Scrum process. For example, the Scrum Master is responsible for making sure that there is a Daily Scrum meeting and that everyone answers the standard three questions. The Scrum Master is also responsible for removing (non-technical) impediments which the team might encounter. For example, if the team cannot start work until everyone installs the latest version of Microsoft Visual Studio, then the Scrum Master has the responsibility of working with management to get the latest version of Visual Studio as quickly as possible. The Scrum Master can be a member of the developer team. Furthermore, different people can take on the role of the Scrum Master over time. The Scrum Master, however, cannot be the same person as the Product Owner.

    Using SonicAgile

    SonicAgile (SonicAgile.com) is an online tool which you can use to manage your projects using Scrum. You can use the SonicAgile Product Backlog to create a prioritized list of Stories. You can estimate the size of the Stories using different Story Point units, such as shirt sizes and coffee cup sizes. You can use SonicAgile during the Sprint Planning meeting to select the Stories that you want to complete during a particular Sprint. You can configure Sprints to be any length of time. SonicAgile calculates Team Velocity automatically and displays a warning when you add too many Stories to a Sprint. In other words, it warns you when it thinks you are overcommitting in a Sprint. SonicAgile also includes a Scrumboard which displays the list of Stories selected for a Sprint and the tasks associated with each Story. You can drag tasks from one task state to another. Finally, SonicAgile enables you to generate Release Burndown and Sprint Burndown charts. You can use these charts to view the progress of your team. To learn more about SonicAgile, visit SonicAgile.com.

    Summary

    In this post, I described many of the basic concepts of Scrum. You learned how a Product Owner uses a Product Backlog to create a prioritized list of Stories. I explained why work is completed in Sprints, so the developer team can be more productive. I also explained how a developer team uses the Daily Scrum to coordinate their work. You learned how the developer team uses a Scrumboard to see, at a glance, who is working on what and the state of each task. I also discussed Burndown charts. You learned how you can use both Release and Sprint Burndown charts to track team progress in completing a project.
Finally, I described the crucial role of the Scrum Master – the person who is responsible for ensuring that the rules of Scrum are being followed. My goal was not to describe all of the concepts of Scrum. This post was intended to be an introductory overview. For a comprehensive explanation of Scrum, I recommend reading Ken Schwaber’s book Agile Project Management with Scrum: http://www.amazon.com/Agile-Project-Management-Microsoft-Professional/dp/073561993X/ref=la_B001H6ODMC_1_1?ie=UTF8&qid=1345224000&sr=1-1

    Read the article

  • Qt Linking Error

    - by Wallah
    Hi, I configured qt-x11 with the following options:

        ./configure -prefix /iTalk/qtx11 -prefix-install -bindir /iTalk/qtx11-install/bin \
            -libdir /iTalk/qtx11-install/lib -docdir /iTalk/qtx11-install/doc \
            -headerdir /iTalk/qtx11-install/include -datadir /iTalk/qtx11-install/data \
            -examplesdir /iTalk/qtx11-install/examples -demosdir /iTalk/qtx11-install/demos -debug

    Now I am getting the following errors on Fedora Core 6. Can you please tell me where the problem is?

        .obj/debug-shared/qapplication_x11.o: In function `qt_init(QApplicationPrivate*, int, _XDisplay*, unsigned long, unsigned long)':
        /iTalk/QT4/qt/src/gui/kernel/qapplication_x11.cpp:1713: undefined reference to `FcInit'
        .obj/debug-shared/qfontdatabase.o: In function `queryFont':
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1727: undefined reference to `FcFreeTypeQuery'
        .obj/debug-shared/qfontdatabase.o: In function `registerFont':
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1959: undefined reference to `FcConfigGetCurrent'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1963: undefined reference to `FcConfigGetFonts'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1965: undefined reference to `FcConfigAppFontAddFile'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1966: undefined reference to `FcConfigGetFonts'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1985: undefined reference to `FcConfigGetBlanks'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1997: undefined reference to `FcPatternDel'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1998: undefined reference to `FcPatternAddString'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:2001: undefined reference to `FcPatternGetString'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:2006: undefined reference to `FcFontSetAdd'
        .obj/debug-shared/qfontdatabase.o: In function `qt_FcPatternToQFontDef(_FcPattern*, QFontDef const&)':
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:746: undefined reference to `FcPatternGetString'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:751: undefined reference to `FcPatternGetDouble'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:759: undefined reference to `FcPatternGetDouble'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:771: undefined reference to `FcPatternGetInteger'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:776: undefined reference to `FcPatternGetInteger'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:786: undefined reference to `FcPatternGetBool'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:793: undefined reference to `FcPatternGetInteger'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:800: undefined reference to `FcPatternGetInteger'
        .obj/debug-shared/qfontdatabase.o: In function `FcFontSetRemove':
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1573: undefined reference to `FcPatternDestroy'
        .obj/debug-shared/qfontdatabase.o: In function `qt_fontSetForPattern(_FcPattern*, QFontDef const&)':
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1657: undefined reference to `FcFontSort'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1671: undefined reference to `FcPatternGetBool'
        .obj/debug-shared/qfontdatabase.o: In function `qt_addPatternProps(_FcPattern*, int, int, QFontDef const&)':
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1449: undefined reference to `FcPatternAddInteger'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1456: undefined reference to `FcPatternAddInteger'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1459: undefined reference to `FcPatternAddDouble'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1464: undefined reference to `FcPatternAddInteger'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1468: undefined reference to `FcPatternAddBool'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1471: undefined reference to `FcPatternAddBool'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1476: undefined reference to `FcLangSetCreate'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1477: undefined reference to `FcLangSetAdd'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1478: undefined reference to `FcPatternAddLangSet'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1479: undefined reference to `FcLangSetDestroy'
        .obj/debug-shared/qfontdatabase.o: In function `tryPatternLoad':
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1588: undefined reference to `FcPatternDuplicate'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1593: undefined reference to `FcConfigSubstitute'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1594: undefined reference to `FcDefaultSubstitute'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1596: undefined reference to `FcFontMatch'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1606: undefined reference to `FcPatternDuplicate'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1613: undefined reference to `FcPatternGetCharSet'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1615: undefined reference to `FcCharSetHasChar'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1619: undefined reference to `FcPatternGetLangSet'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1621: undefined reference to `FcLangSetHasLang'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1628: undefined reference to `FcPatternDel'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1629: undefined reference to `FcPatternAddBool'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1646: undefined reference to `FcPatternDestroy'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1648: undefined reference to `FcPatternDestroy'
        .obj/debug-shared/qfontdatabase.o: In function `loadFontConfig':
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1023: undefined reference to `FcObjectSetCreate'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1024: undefined reference to `FcPatternCreate'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1037: undefined reference to `FcObjectSetAdd'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1040: undefined reference to `FcFontList'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1041: undefined reference to `FcObjectSetDestroy'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1042: undefined reference to `FcPatternDestroy'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1046: undefined reference to `FcPatternGetString'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1057: undefined reference to `FcPatternGetInteger'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1059: undefined reference to `FcPatternGetInteger'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1061: undefined reference to `FcPatternGetInteger'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1063: undefined reference to `FcPatternGetString'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1065: undefined reference to `FcPatternGetInteger'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1067: undefined reference to `FcPatternGetBool'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1069: undefined reference to `FcPatternGetString'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1074: undefined reference to `FcPatternGetLangSet'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1081: undefined reference to `FcLangSetHasLang'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1100: undefined reference to `FcPatternGetCharSet'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1107: undefined reference to `FcCharSetHasChar'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1116: undefined reference to `FcPatternGetString'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1136: undefined reference to `FcPatternGetInteger'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1153: undefined reference to `FcPatternGetDouble'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1161: undefined reference to `FcFontSetDestroy'
        .obj/debug-shared/qfontdatabase.o: In function `getFcPattern':
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1494: undefined reference to `FcPatternCreate'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1509: undefined reference to `FcPatternAdd'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1516: undefined reference to `FcPatternAddWeak'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1524: undefined reference to `FcPatternAddWeak'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1531: undefined reference to `FcPatternAddInteger'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1533: undefined reference to `FcPatternAddBool'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1535: undefined reference to `FcPatternAddBool'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1539: undefined reference to `FcDefaultSubstitute'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1540: undefined reference to `FcConfigSubstitute'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1541: undefined reference to `FcConfigSubstitute'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1550: undefined reference to `FcPatternAddWeak'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1557: undefined reference to `FcPatternAddWeak'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1564: undefined reference to `FcPatternAddWeak'
        .obj/debug-shared/qfontdatabase.o: In function `loadFc':
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1707: undefined reference to `FcFontSetDestroy'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1716: undefined reference to `FcPatternDestroy'
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:1718: undefined reference to `FcPatternDestroy'
        .obj/debug-shared/qfontdatabase.o: In function `QFontDatabase::removeAllApplicationFonts()':
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:2048: undefined reference to `FcConfigAppFontClear'
        .obj/debug-shared/qfontdatabase.o: In function `QFontDatabase::removeApplicationFont(int)':
        /iTalk/QT4/qt/src/gui/text/qfontdatabase_x11.cpp:2027: undefined reference to `FcConfigAppFontClear'
        .obj/debug-shared/qfontengine_x11.o: In function `qt_x11ft_convert_pattern(_FcPattern*, QByteArray*, int*, bool*)':
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:970: undefined reference to `FcPatternGetString'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:972: undefined reference to `FcPatternGetInteger'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:975: undefined reference to `FcPatternGetBool'
        .obj/debug-shared/qfontengine_x11.o: In function `QFontEngineX11FT':
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:999: undefined reference to `FcPatternGetInteger'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1016: undefined reference to `FcPatternGetInteger'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1041: undefined reference to `FcPatternGetBool'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1077: undefined reference to `FcPatternGetBool'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1106: undefined reference to `FcPatternDestroy'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1112: undefined reference to `FcPatternGetCharSet'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1113: undefined reference to `FcCharSetCopy'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1115: undefined reference to `FcPatternDestroy'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:999: undefined reference to `FcPatternGetInteger'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1016: undefined reference to `FcPatternGetInteger'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1041: undefined reference to `FcPatternGetBool'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1077: undefined reference to `FcPatternGetBool'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1106: undefined reference to `FcPatternDestroy'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1112: undefined reference to `FcPatternGetCharSet'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1113: undefined reference to `FcCharSetCopy'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:1115: undefined reference to `FcPatternDestroy'
        .obj/debug-shared/qfontengine_x11.o: In function `engineForPattern':
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:868: undefined reference to `FcFontMatch'
        .obj/debug-shared/qfontengine_x11.o: In function `QFontEngineMultiFT::loadEngine(int)':
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:929: undefined reference to `FcPatternEqual'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:932: undefined reference to `FcPatternDestroy'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:941: undefined reference to `FcPatternDuplicate'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:951: undefined reference to `FcConfigSubstitute'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:952: undefined reference to `FcDefaultSubstitute'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:956: undefined reference to `FcPatternDestroy'
        .obj/debug-shared/qfontengine_x11.o: In function `~QFontEngineMultiFT':
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:895: undefined reference to `FcPatternDestroy'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:897: undefined reference to `FcPatternDestroy'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:899: undefined reference to `FcFontSetDestroy'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:895: undefined reference to `FcPatternDestroy'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:897: undefined reference to `FcPatternDestroy'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:899: undefined reference to `FcFontSetDestroy'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:895: undefined reference to `FcPatternDestroy'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:897: undefined reference to `FcPatternDestroy'
        /iTalk/QT4/qt/src/gui/text/qfontengine_x11.cpp:899: undefined reference to `FcFontSetDestroy'
        .obj/debug-shared/qfontengine_ft.o: In function `QFontEngineFT::stringToCMap(QChar const*, int, QGlyphLayout*, int*, QFlags) const':
        /iTalk/QT4/qt/src/gui/text/qfontengine_ft.cpp:1546: undefined reference to `FcCharSetHasChar'
        /iTalk/QT4/qt/src/gui/text/qfontengine_ft.cpp:1581: undefined reference to `FcCharSetHasChar'
        .obj/debug-shared/qfontengine_ft.o: In function `QFreetypeFace::release(QFontEngine::FaceId const&)':
        /iTalk/QT4/qt/src/gui/text/qfontengine_ft.cpp:308: undefined reference to `FcCharSetDestroy'
        collect2: ld returned 1 exit status
        make[1]: *** [../../lib/libQtGui.so.4.5.3] Error 1
        make[1]: Leaving directory `/iTalk/QT4/qt/src/gui'
        make: *** [sub-gui-make_default-ordered] Error 2

    Read the article

  • Java Errors with jRuby on Rails on Google App Engine

    - by John Wang
    I followed all of the instructions so far from: http://code.google.com/p/appengine-jruby/wiki/RunningRails and http://gist.github.com/268192. Currently, I'm just trying to get to hello world, and I'm getting these errors when I run dev_appserver.rb:

    238:hello-world jwang392$ dev_appserver.rb .
    => Booting DevAppServer
    => Press Ctrl-C to shutdown server
    => Generating configuration files
    2010-04-08 09:16:51.961 java[411:1707] [Java CocoaComponent compatibility mode]: Enabled
    2010-04-08 09:16:51.964 java[411:1707] [Java CocoaComponent compatibility mode]: Setting timeout for SWT to 0.100000
    Apr 8, 2010 7:17:05 PM com.google.appengine.tools.development.ApiProxyLocalImpl log
    SEVERE: [1270754225387000] javax.servlet.ServletContext log: Warning: error application could not be initialized
    org.jruby.rack.RackInitializationException: no such file to load -- time
      from file:/Users/jwang392/hello-world/WEB-INF/lib/jruby-rack-0.9.6.jar!/jruby/rack/booter.rb:25:in `boot!'
      from file:/Users/jwang392/hello-world/WEB-INF/lib/jruby-rack-0.9.6.jar!/jruby/rack/boot/rack.rb:10
      from file:/Users/jwang392/hello-world/WEB-INF/lib/jruby-rack-0.9.6.jar!/jruby/rack/boot/rack.rb:1:in `load'
      from <script>:1
      at org.jruby.rack.DefaultRackApplicationFactory$4.init(DefaultRackApplicationFactory.java:169)
      at org.jruby.rack.DefaultRackApplicationFactory.newErrorApplication(DefaultRackApplicationFactory.java:118)
      at org.jruby.rack.DefaultRackApplicationFactory.init(DefaultRackApplicationFactory.java:37)
      at org.jruby.rack.SharedRackApplicationFactory.init(SharedRackApplicationFactory.java:26)
      at org.jruby.rack.RackServletContextListener.contextInitialized(RackServletContextListener.java:40)
      at org.mortbay.jetty.handler.ContextHandler.startContext(ContextHandler.java:530)
      at org.mortbay.jetty.servlet.Context.startContext(Context.java:135)
      at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1218)
      at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:500)
      at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:448)
      at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40)
      at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:117)
      at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40)
      at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:117)
      at org.mortbay.jetty.Server.doStart(Server.java:217)
      at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40)
      at com.google.appengine.tools.development.JettyContainerService.startContainer(JettyContainerService.java:188)
      at com.google.appengine.tools.development.AbstractContainerService.startup(AbstractContainerService.java:147)
      at com.google.appengine.tools.development.DevAppServerImpl.start(DevAppServerImpl.java:219)
      at com.google.appengine.tools.development.DevAppServerMain$StartAction.apply(DevAppServerMain.java:162)
      at com.google.appengine.tools.util.Parser$ParseResult.applyArgs(Parser.java:48)
      at com.google.appengine.tools.development.DevAppServerMain.<init>(DevAppServerMain.java:113)
      at com.google.appengine.tools.development.DevAppServerMain.main(DevAppServerMain.java:89)
    Caused by: org.jruby.exceptions.RaiseException: no such file to load -- time
      at (unknown).new(file:/Users/jwang392/hello-world/WEB-INF/lib/jruby-rack-0.9.6.jar!/jruby/rack/booter.rb:25)
      at Kernel.require(file:/Users/jwang392/hello-world/WEB-INF/lib/jruby-rack-0.9.6.jar!/jruby/rack/booter.rb:25)
      at JRuby::Rack::Booter.boot!(file:/Users/jwang392/hello-world/WEB-INF/lib/jruby-rack-0.9.6.jar!/jruby/rack/boot/rack.rb:10)
      at (unknown).(unknown)(file:/Users/jwang392/hello-world/WEB-INF/lib/jruby-rack-0.9.6.jar!/jruby/rack/boot/rack.rb:1)
      at (unknown).(unknown)(file:/Users/jwang392/hello-world/WEB-INF/lib/jruby-rack-0.9.6.jar!/jruby/rack/boot/rack.rb:1)
      at Kernel.load(<script>:1)
      at (unknown).(unknown)(:1)

    The same org.jruby.rack.RackInitializationException ("no such file to load -- time") is then logged twice more -- once as "SEVERE: ... unable to create shared application instance" (via DefaultRackApplicationFactory.getApplication at line 51 and SharedRackApplicationFactory.init at line 27) and once as "SEVERE: ... Error: application initialization failed" (via SharedRackApplicationFactory.init at line 39, ending in "... 19 more") -- each with the same Jetty/DevAppServer frames and the same Caused by trace shown above. After that:

    The server is running at http://localhost:8080/

    I'm at a loss as to what step I may have missed. I checked my Ruby version (1.8.7), Rails version (2.3.5), gem version (1.3.6) and google-appengine version (0.0.10.1), and ran:

    sudo gem install google-appengine
    curl -O http://appengine-jruby.googlecode.com/hg/demos/rails2/rails2_appengine.rb
    ruby rails2_appengine.rb
    sudo gem install rails_dm_datastore
    sudo gem install activerecord-nulldb-adapter

    put this in my config.ru file:

    run lambda { |env| Rack::Response.new('Hello World!').finish }

    and finally ran:

    $ dev_appserver.rb .
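
    The failing require, per the trace, is jruby-rack's booter.rb:25 loading the 'time' stdlib, which points at the embedded JRuby not seeing its standard library rather than at anything Rails-specific. One way to confirm that is a throwaway diagnostic rackup (a sketch of mine, not a step from the wiki; the response body is arbitrary):

    # config.ru -- diagnostic sketch: reproduce the booter's failing require
    begin
      require 'time'   # the same stdlib file booter.rb:25 fails to load
    rescue LoadError
      warn "JRuby stdlib is missing from the load path:"
      $LOAD_PATH.each { |p| warn "  #{p}" }
      raise
    end

    run lambda { |env| [200, {'Content-Type' => 'text/plain'}, ["Hello World!\n"]] }

    If the require fails here too, the generated WEB-INF/lib jars (or re-running the appengine-jruby setup) are the thing to re-check, rather than the config.ru contents.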

    Read the article

  • With a jQuery modal dialog, how do I stop the form values from persisting?

    - by stormist
    (Citing source at: http://jqueryui.com/demos/dialog/#modal-form) As an example, this works great, but each time the form is subsequently opened the user-entered values remain. How can I stop this behavior? (The form will be used multiple times on the same page.)

    <style type="text/css">
    body { font-size: 62.5%; }
    label, input { display:block; }
    input.text { margin-bottom:12px; width:95%; padding: .4em; }
    fieldset { padding:0; border:0; margin-top:25px; }
    h1 { font-size: 1.2em; margin: .6em 0; }
    div#users-contain { width: 350px; margin: 20px 0; }
    div#users-contain table { margin: 1em 0; border-collapse: collapse; width: 100%; }
    div#users-contain table td, div#users-contain table th { border: 1px solid #eee; padding: .6em 10px; text-align: left; }
    .ui-dialog .ui-state-error { padding: .3em; }
    .validateTips { border: 1px solid transparent; padding: 0.3em; }
    </style>

    <script type="text/javascript">
    $(function() {
        // a workaround for a flaw in the demo system (http://dev.jqueryui.com/ticket/4375), ignore!
        $("#dialog").dialog("destroy");

        var name = $("#name"),
            email = $("#email"),
            password = $("#password"),
            allFields = $([]).add(name).add(email).add(password),
            tips = $(".validateTips");

        function updateTips(t) {
            tips.text(t).addClass('ui-state-highlight');
            setTimeout(function() {
                tips.removeClass('ui-state-highlight', 1500);
            }, 500);
        }

        function checkLength(o, n, min, max) {
            if (o.val().length > max || o.val().length < min) {
                o.addClass('ui-state-error');
                updateTips("Length of " + n + " must be between " + min + " and " + max + ".");
                return false;
            } else {
                return true;
            }
        }

        function checkRegexp(o, regexp, n) {
            if (!(regexp.test(o.val()))) {
                o.addClass('ui-state-error');
                updateTips(n);
                return false;
            } else {
                return true;
            }
        }

        $("#dialog-form").dialog({
            autoOpen: false,
            height: 300,
            width: 350,
            modal: true,
            buttons: {
                'Create an account': function() {
                    var bValid = true;
                    allFields.removeClass('ui-state-error');
                    bValid = bValid && checkLength(name, "username", 3, 16);
                    bValid = bValid && checkLength(email, "email", 6, 80);
                    bValid = bValid && checkLength(password, "password", 5, 16);
                    bValid = bValid && checkRegexp(name, /^[a-z]([0-9a-z_])+$/i, "Username may consist of a-z, 0-9, underscores, begin with a letter.");
                    // From jquery.validate.js (by joern), contributed by Scott Gonzalez: http://projects.scottsplayground.com/email_address_validation/
                    bValid = bValid && checkRegexp(email, /^((([a-z]|\d|[!#\$%&'\*\+\-\/=\?\^_`{\|}~]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])+(\.([a-z]|\d|[!#\$%&'\*\+\-\/=\?\^_`{\|}~]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])+)*)|((\x22)((((\x20|\x09)*(\x0d\x0a))?(\x20|\x09)+)?(([\x01-\x08\x0b\x0c\x0e-\x1f\x7f]|\x21|[\x23-\x5b]|[\x5d-\x7e]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(\\([\x01-\x09\x0b\x0c\x0d-\x7f]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF]))))*(((\x20|\x09)*(\x0d\x0a))?(\x20|\x09)+)?(\x22)))@((([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])([a-z]|\d|-|\.|_|~|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])*([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])))\.)+(([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])([a-z]|\d|-|\.|_|~|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])*([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])))\.?$/i, "eg. [email protected]");
                    bValid = bValid && checkRegexp(password, /^([0-9a-zA-Z])+$/, "Password field only allow : a-z 0-9");
                    if (bValid) {
                        $('#users tbody').append('<tr>' +
                            '<td>' + name.val() + '</td>' +
                            '<td>' + email.val() + '</td>' +
                            '<td>' + password.val() + '</td>' +
                            '</tr>');
                        $(this).dialog('close');
                    }
                },
                Cancel: function() {
                    $(this).dialog('close');
                }
            },
            close: function() {
                allFields.val('').removeClass('ui-state-error');
            }
        });

        $('#create-user')
            .button()
            .click(function() {
                $('#dialog-form').dialog('open');
            });
    });
    </script>

    <div class="demo">

    <div id="dialog-form" title="Create new user">
        <p class="validateTips">All form fields are required.</p>
        <form>
        <fieldset>
            <label for="name">Name</label>
            <input type="text" name="name" id="name" class="text ui-widget-content ui-corner-all" />
            <label for="email">Email</label>
            <input type="text" name="email" id="email" value="" class="text ui-widget-content ui-corner-all" />
            <label for="password">Password</label>
            <input type="password" name="password" id="password" value="" class="text ui-widget-content ui-corner-all" />
        </fieldset>
        </form>
    </div>

    <div id="users-contain" class="ui-widget">
        <h1>Existing Users:</h1>
        <table id="users" class="ui-widget ui-widget-content">
            <thead>
                <tr class="ui-widget-header ">
                    <th>Name</th>
                    <th>Email</th>
                    <th>Password</th>
                </tr>
            </thead>
            <tbody>
                <tr>
                    <td>John Doe</td>
                    <td>[email protected]</td>
                    <td>johndoe1</td>
                </tr>
            </tbody>
        </table>
    </div>

    <button id="create-user">Create new user</button>

    </div><!-- End demo -->

    <div class="demo-description">
    <p>Use a modal dialog to require that the user enter data during a multi-step process. Embed form markup in the content area, set the <code>modal</code> option to true, and specify primary and secondary user actions with the <code>buttons</code> option.</p>
    </div><!-- End demo-description -->

    Read the article

  • Impossible to do POSTs with appengine-jruby/RoR: Reflection is not allowed

    - by Joel Cuevas
    I'm trying to build a site with RoR on Google App Engine. I'm using the google-appengine gem (http://appengine-jruby.googlecode.com) and following the instructions in (http://gist.github.com/268192). The problem is that I can't submit ANY form! I've already tried this in two different clean Windows 7 Pro environments and the result is the same. After installing Ruby 1.8.6 (One-Click Installer):

    1. gem update --system
    2. gem install rails
    3. gem install google-appengine
    4. gem install rails_dm_datastore
    5. gem install activerecord-nulldb-adapter
    6. curl -O http://appengine-jruby.googlecode.com/hg/demos/rails2/rails2_appengine.rb
    7. ruby rails2_appengine.rb (previously downloaded)
    8. rails myproj
    9. cd myproj
    10. ruby script/generate dd_model MyModel f1:string f2:float f3:float f4:float f5:integer f6:integer f7:integer -f
    11. ruby script/generate scaffold MyModel f1:string f2:float f3:float f4:float f5:integer f6:integer f7:integer -f --skip-migration
    12. dev_appserver.rb -p 3000 .

    At this point, I manually test the scaffold at (http://localhost:3000/my_models). The index is OK, and I can create a new record with the generated form, but when I try to create a second one I get "java.lang.RuntimeException: DummyDynamicScope should never be used for backref storage" in the console. As far as I've read this is won't-fix behavior in JRuby 1.4.1 that was downgraded to a debug-only warning in 1.5.0, so I proceeded to install the pre-release:

    13. gem install appengine-jruby-jars --pre

    With this, that exception is solved and everything works great... until I move the project to the GAE server:

    14. ruby appcfg.rb update .

    And now, at (http://myproj.appspot.com/my_models), again, the index is fine, and so is the new-record form, but the moment I submit it with valid data I get a 500 error: "java.lang.IllegalAccessException: Reflection is not allowed on public int". As I said, this behavior is not present in the local SDK. In both cases, I'm completely unable to post anything.
    This is what I have right now in the GAE environment:

    Ruby version: 1.8.7 (java)
    RubyGems: disabled
    Rack version: 1.1
    Rails version: 2.3.5
    Action Pack version: 2.3.5
    Active Support version: 2.3.5
    DataMapper version: 0.10.2
    Environment: production
    JRuby Runtime version: 1.5.0.pre
    JRuby-Rack version: 0.9.7
    AppEngine SDK version: Google App Engine/1.3.3
    AppEngine APIs version: 0.0.15

    And these are my installed gems: actionmailer (2.3.5), actionpack (2.3.5), activerecord (2.3.5), activerecord-nulldb-adapter (0.2.0), activeresource (2.3.5), activesupport (2.3.5), addressable (2.1.2), appengine-apis (0.0.15), appengine-jruby-jars (0.0.8.pre, 0.0.7), appengine-rack (0.0.8), appengine-sdk (1.3.3.1), appengine-tools (0.0.12), bundler08 (0.8.5), dm-appengine (0.0.8), dm-ar-finders (0.10.2), dm-core (0.10.2), dm-timestamps (0.10.2), dm-validations (0.10.2), extlib (0.9.14), fxri (0.3.7, 0.3.6), google-appengine (0.0.12), hpricot (0.8.2 x86-mswin32, 0.6 mswin32), jruby-rack (0.9.8, 0.9.7), log4r (1.1.7, 1.0.5), rack (1.1.0, 1.0.1), rails (2.3.5), rails_appengine (0.0.3), rails_dm_datastore (0.2.9), rake (0.8.7, 0.7.3), rubygems-update (1.3.7, 1.3.6), rubyzip (0.9.4), sources (0.0.1), win32-api (1.4.6 x86-mswin32-60, 1.0.4 mswin32), win32-clipboard (0.5.2, 0.4.3), win32-dir (0.3.6, 0.3.2), win32-eventlog (0.5.2, 0.4.6), win32-file (0.6.3, 0.5.4), win32-file-stat (1.3.4, 1.2.7), win32-process (0.6.2, 0.5.3), win32-sapi (0.1.5, 0.1.4), win32-sound (0.4.2, 0.4.1), windows-api (0.4.0, 0.2.0), windows-pr (1.0.9, 0.7.2)

    I'm unable to attach the full logs of the exceptions because of the character limits, but I can provide them on request. Here's an abstract of them:

    DummyDynamicScope (dev and prod envs):

    14-may-2010 7:18:40 com.google.appengine.tools.development.ApiProxyLocalImpl log
    SEVERE: [1273821520195000] javax.servlet.ServletContext log: Application Error
    java.lang.RuntimeException: DummyDynamicScope should never be used for backref storage
      at org.jruby.runtime.scope.DummyDynamicScope.getBackRef(DummyDynamicScope.java:49)
      at org.jruby.RubyRegexp.updateBackRef(RubyRegexp.java:1404)
      at org.jruby.RubyRegexp.updateBackRef(RubyRegexp.java:1396)
      at org.jruby.RubyRegexp.search(RubyRegexp.java:1386)
      at org.jruby.RubyRegexp.op_match(RubyRegexp.java:1301)
      at org.jruby.RubyString.op_match(RubyString.java:1446)
      at org.jruby.RubyString$i_method_1_0$RUBYINVOKER$op_match.call(org/jruby/RubyString$i_method_1_0$RUBYINVOKER$op_match.gen)
      at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodOneOrN.call(JavaMethod.java:721)
      at org.jruby.RubyClass.finvoke(RubyClass.java:472)
      at org.jruby.RubyObject.send(RubyObject.java:1442)
      at org.jruby.RubyObject$i_method_multi$RUBYINVOKER$send.call(org/jruby/RubyObject$i_method_multi$RUBYINVOKER$send.gen)
      at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodZeroOrOneOrTwoOrNBlock.call(JavaMethod.java:276)
      at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:330)
      at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:189)
      at ruby.jit.ruby.C_3a_.Desarrollo.AppEngine.gorgory.WEB_minus_INF.lib.gems_dot_jar.bundler_gems.jruby.$1_dot_8.gems.dm_minus_validations_minus_0_dot_10_dot_2.lib.dm_minus_validations.validators.numeric_validator.validate_with_comparison
      at ruby.jit.ruby.C_3a_.Desarrollo.AppEngine.gorgory.WEB_minus_INF.lib.gems_dot_jar.bundler_gems.jruby.$1_dot_8.gems.dm_minus_validations_minus_0_dot_10_dot_2.lib.dm_minus_validations.validators.numeric_validator.validate_with_comparison
      at org.jruby.internal.runtime.methods.JittedMethod.call(JittedMethod.java:102)
      at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:144)
      at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:280)
      at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:69)
      at org.jruby.ast.FCallManyArgsNode.interpret(FCallManyArgsNode.java:60)
      at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
      at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:229)
      at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:193)
      at org.jruby.RubyClass.finvoke(RubyClass.java:491)
      at org.jruby.RubyObject.send(RubyObject.java:1448)
      at org.jruby.RubyObject$i_method_multi$RUBYINVOKER$send.call(org/jruby/RubyObject$i_method_multi$RUBYINVOKER$send.gen)
      at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodZeroOrOneOrTwoOrThreeOrNBlock.call(JavaMethod.java:293)
      at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:350)
      at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:229)
      at ruby.jit.ruby.C_3a_.Desarrollo.AppEngine.gorgory.WEB_minus_INF.lib.gems_dot_jar.bundler_gems.jruby.$1_dot_8.gems.dm_minus_validations_minus_0_dot_10_dot_2.lib.dm_minus_validations.validators.numeric_validator.validate_with28985350_50
      at ruby.jit.ruby.C_3a_.Desarrollo.AppEngine.gorgory.WEB_minus_INF.lib.gems_dot_jar.bundler_gems.jruby.$1_dot_8.gems.dm_minus_validations_minus_0_dot_10_dot_2.lib.dm_minus_validations.validators.numeric_validator.validate_with28985350_50
      at org.jruby.internal.runtime.methods.JittedMethod.call(JittedMethod.java:221)
      at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:201)
      at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:227)
      at org.jruby.ast.FCallThreeArgNode.interpret(FCallThreeArgNode.java:40)

    Reflection (only prod env):

    Java::JavaLang::SecurityException (java.lang.IllegalAccessException: Reflection is not allowed on public int java.lang.String$CaseInsensitiveComparator.compare(java.lang.String,java.lang.String)):
      com.google.appengine.runtime.Request.process-92563a0605f433ea(Request.java)
      java.lang.reflect.AccessibleObject.setAccessible(AccessibleObject.java:40)
      org.jruby.javasupport.JavaMethod.<init>(JavaMethod.java:176)
      org.jruby.javasupport.JavaMethod.create(JavaMethod.java:183)
      org.jruby.java.invokers.MethodInvoker.createCallable(MethodInvoker.java:23)
      org.jruby.java.invokers.RubyToJavaInvoker.<init>(RubyToJavaInvoker.java:63)
      org.jruby.java.invokers.MethodInvoker.<init>(MethodInvoker.java:13)
      org.jruby.java.invokers.InstanceMethodInvoker.<init>(InstanceMethodInvoker.java:15)
      org.jruby.javasupport.JavaClass$InstanceMethodInvokerInstaller.install(JavaClass.java:339)
      org.jruby.javasupport.JavaClass.installClassMethods(JavaClass.java:723)
      org.jruby.javasupport.JavaClass.setupProxy(JavaClass.java:586)
      org.jruby.javasupport.Java.createProxyClass(Java.java:506)
      org.jruby.javasupport.Java.getProxyClass(Java.java:445)
      org.jruby.javasupport.Java.getInstance(Java.java:354)
      org.jruby.javasupport.JavaUtil.convertJavaToUsableRubyObject(JavaUtil.java:143)
      org.jruby.javasupport.JavaClass$ConstantField.install(JavaClass.java:360)
      org.jruby.javasupport.JavaClass.installClassFields(JavaClass.java:711)
      org.jruby.javasupport.JavaClass.setupProxy(JavaClass.java:585)
      org.jruby.javasupport.Java.createProxyClass(Java.java:506)
      org.jruby.javasupport.Java.getProxyClass(Java.java:445)
      org.jruby.javasupport.Java.getProxyOrPackageUnderPackage(Java.java:885)
      org.jruby.javasupport.Java.get_proxy_or_package_under_package(Java.java:918)
      org.jruby.javasupport.JavaUtilities.get_proxy_or_package_under_package(JavaUtilities.java:54)
      org.jruby.javasupport.JavaUtilities$s_method_2_0$RUBYINVOKER$get_proxy_or_package_under_package.call(org/jruby/javasupport/JavaUtilities$s_method_2_0$RUBYINVOKER$get_proxy_or_package_under_package.gen:65535)
      org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:329)
      org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:188)
      org.jruby.ast.CallTwoArgNode.interpret(CallTwoArgNode.java:59)
      org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
      org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
      org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:113)
      org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:138)
      org.jruby.javasupport.util.RuntimeHelpers$MethodMissingMethod.call(RuntimeHelpers.java:389)
      org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:182)

    What should I do now? Any hint would be welcome. Thanks!
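
    One workaround worth trying (my suggestion, not verified against this exact app): both traces show dm-validations' numeric_validator doing the comparison that ends up touching java.lang.String's case-insensitive comparator, so coercing the numeric params to Ruby numerics before they reach validation may sidestep the reflective call entirely. A sketch against the scaffolded create action (field names f2..f7 are taken from the generator commands above):

    # Sketch only: coerce form input so numeric_validator compares Ruby
    # numerics instead of Java-backed strings.
    def create
      attrs = params[:my_model] || {}
      %w(f2 f3 f4).each { |f| attrs[f] = attrs[f].to_f unless attrs[f].blank? }
      %w(f5 f6 f7).each { |f| attrs[f] = attrs[f].to_i unless attrs[f].blank? }
      @my_model = MyModel.new(attrs)
      if @my_model.save
        redirect_to @my_model
      else
        render :action => 'new'
      end
    end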

    Read the article

  • Inverse Kinematics with OpenGL/Eigen3 : unstable jacobian pseudoinverse

    - by SigTerm
    I'm trying to implement a simple inverse kinematics test using OpenGL, Eigen3 and the "jacobian pseudoinverse" method. The system works fine using the "jacobian transpose" algorithm; however, as soon as I attempt to use the "pseudoinverse", the joints become unstable and start jerking around (eventually they freeze completely, unless I use the "jacobian transpose" fallback computation). I've investigated the issue, and it turns out that in some cases jacobian.transpose()*jacobian has zero determinant and cannot be inverted. However, I've seen other demos on the internet (YouTube) that claim to use the same method and they do not seem to have this problem, so I'm uncertain what the cause of the issue is. Code is attached below:

    *.h:

    struct Ik {
        float targetAngle;
        float ikLength;
        VectorXf angles;
        Vector3f root, target;

        Vector3f jointPos(int ikIndex);
        size_t size() const;
        Vector3f getEndPos(int index, const VectorXf& vec);
        void resize(size_t size);
        void update(float t);
        void render();

        Ik(): targetAngle(0), ikLength(10) {
        }
    };

    *.cpp:

    size_t Ik::size() const {
        return angles.rows();
    }

    Vector3f Ik::getEndPos(int index, const VectorXf& vec) {
        Vector3f pos(0, 0, 0);
        while (true) {
            Eigen::Affine3f t;
            float radAngle = pi*vec[index]/180.0f;
            t = Eigen::AngleAxisf(radAngle, Vector3f(-1, 0, 0))
              * Eigen::Translation3f(Vector3f(0, 0, ikLength));
            pos = t * pos;
            if (index == 0)
                break;
            index--;
        }
        return pos;
    }

    void Ik::resize(size_t size) {
        angles.resize(size);
        angles.setZero();
    }

    void drawMarker(Vector3f p) {
        glBegin(GL_LINES);
        glVertex3f(p[0]-1, p[1], p[2]); glVertex3f(p[0]+1, p[1], p[2]);
        glVertex3f(p[0], p[1]-1, p[2]); glVertex3f(p[0], p[1]+1, p[2]);
        glVertex3f(p[0], p[1], p[2]-1); glVertex3f(p[0], p[1], p[2]+1);
        glEnd();
    }

    void drawIkArm(float length) {
        glBegin(GL_LINES);
        float f = 0.25f;
        glVertex3f(0, 0, length); glVertex3f(-f, -f, 0);
        glVertex3f(0, 0, length); glVertex3f(f, -f, 0);
        glVertex3f(0, 0, length); glVertex3f(f, f, 0);
        glVertex3f(0, 0, length); glVertex3f(-f, f, 0);
        glEnd();
        glBegin(GL_LINE_LOOP);
        glVertex3f(f, f, 0);
        glVertex3f(-f, f, 0);
        glVertex3f(-f, -f, 0);
        glVertex3f(f, -f, 0);
        glEnd();
    }

    void Ik::update(float t) {
        targetAngle += t * pi*2.0f/10.0f;
        while (t > pi*2.0f)
            t -= pi*2.0f;
        target << 0, 8 + 3*sinf(targetAngle), cosf(targetAngle)*4.0f+5.0f;

        Vector3f tmpTarget = target;
        Vector3f targetDiff = tmpTarget - root;
        float l = targetDiff.norm();
        float maxLen = ikLength*(float)angles.size() - 0.01f;
        if (l > maxLen) {
            targetDiff *= maxLen/l;
            l = targetDiff.norm();
            tmpTarget = root + targetDiff;
        }

        Vector3f endPos = getEndPos(size()-1, angles);
        Vector3f diff = tmpTarget - endPos;
        float maxAngle = 360.0f/(float)angles.size();

        for (int loop = 0; loop < 1; loop++) {
            MatrixXf jacobian(diff.rows(), angles.rows());
            jacobian.setZero();
            float step = 1.0f;
            for (int i = 0; i < angles.size(); i++) {
                Vector3f curRoot = root;
                if (i)
                    curRoot = getEndPos(i-1, angles);
                Vector3f axis(1, 0, 0);
                Vector3f n = endPos - curRoot;
                float l = n.norm();
                if (l)
                    n /= l;
                n = n.cross(axis);
                if (l)
                    n *= l*step*pi/180.0f;
                //std::cout << n << "\n";
                for (int j = 0; j < 3; j++)
                    jacobian(j, i) = n[j];
            }
            std::cout << jacobian << std::endl;

            MatrixXf jjt = jacobian.transpose()*jacobian;
            //std::cout << jjt << std::endl;
            float d = jjt.determinant();
            MatrixXf invJ;
            float scale = 0.1f;
            if (!d /*|| true*/) {
                invJ = jacobian.transpose();
                scale = 5.0f;
                std::cout << "fallback to jacobian transpose!\n";
            } else {
                invJ = jjt.inverse()*jacobian.transpose();
                std::cout << "jacobian pseudo-inverse!\n";
            }
            //std::cout << invJ << std::endl;

            VectorXf add = invJ*diff*step*scale;
            //std::cout << add << std::endl;
            float maxSpeed = 15.0f;
            for (int i = 0; i < add.size(); i++) {
                float& cur = add[i];
                cur = std::max(-maxSpeed, std::min(maxSpeed, cur));
            }
            angles += add;
            for (int i = 0; i < angles.size(); i++) {
                float& cur = angles[i];
                if (i)
                    cur = std::max(-maxAngle, std::min(maxAngle, cur));
            }
        }
    }

    void Ik::render() {
        glPushMatrix();
        glTranslatef(root[0], root[1], root[2]);
        for (int i = 0; i < angles.size(); i++) {
            glRotatef(angles[i], -1, 0, 0);
            drawIkArm(ikLength);
            glTranslatef(0, 0, ikLength);
        }
        glPopMatrix();
        drawMarker(target);
        for (int i = 0; i < angles.size(); i++)
            drawMarker(getEndPos(i, angles));
    }

    Any help will be appreciated.
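
    A standard remedy for exactly this failure mode is worth noting here (my addition, not from the question): for a 3xN Jacobian with more than three joints, J^T J is an NxN matrix of rank at most 3, so its determinant is always zero and jjt.inverse() is never valid. The usual forms are J+ = J^T (J J^T)^-1, or better the damped-least-squares variant J^T (J J^T + lambda^2 I)^-1, which also suppresses the jitter near singular poses. A minimal Eigen3 sketch (the damping factor lambda is a tuning knob I've made up):

    // Sketch: damped-least-squares pseudoinverse step for a 3xN IK Jacobian.
    #include <Eigen/Dense>

    Eigen::VectorXf dlsStep(const Eigen::MatrixXf& J,     // 3 x N Jacobian
                            const Eigen::Vector3f& diff,  // target - endPos
                            float lambda = 0.5f)          // assumed damping factor
    {
        // J*J^T is 3x3; adding lambda^2*I makes it symmetric positive
        // definite, hence always invertible - no transpose fallback needed.
        Eigen::Matrix3f JJt = J * J.transpose()
                            + lambda * lambda * Eigen::Matrix3f::Identity();
        // delta = J^T (J J^T + lambda^2 I)^-1 * diff
        return J.transpose() * JJt.ldlt().solve(diff);
    }

    Substituting this for the jjt/invJ computation in Ik::update() would remove the singular inversion (the likely source of the jerking) and the determinant test along with it.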

    Read the article

  • PHP statements, HTML and RSS

    - by poindexter
    Alrighty, I've got another little bit of code that I'm wrestling through. I'm building a conditional sidebar; the goal is to only show blog-related stuff when posts in the "blog" category are being viewed. I've got part of it working, but not the part where I'm trying to bring an RSS feed of the category into the sidebar to show recent posts. It doesn't work, and since I'm a PHP newbie I'm not entirely sure why. Any suggestions or pointers are much appreciated. I'll post the problem section first, and then the entire PHP file second, so you all can see the context for the section that I'm having issues with.

    Problem section:

    echo '<div class="panel iq-news">';
    echo '<h4><span><a href="/category/blog/feed"><img src="/wp-content/themes/iq/images/rss-icon.gif" alt="Subscribe to our feed"/></a></span>IQNavigator Blog</h4>';
    <?php query_posts('category_name=Blog&showposts=2'); if (have_posts()) : ?>
    echo '<ul>';
    <?php while (have_posts()) : the_post(); ?>
    echo '<li><a href="<?php the_permalink();?>"><?php the_title();?> </a></li>';
    <?php endwhile;?>
    echo '</ul>';
    <?php endif;?>
    echo '<div class="twitter">';
    echo '<p id="twitter-updates">';
    <?php twitter_updates();?>
    echo '</p>';
    echo '<p class="text-center"><a href="http://twitter.com/iqnavigator">Follow us on twitter</a></p>';
    echo '</div>';
    echo '</div>';

    The whole darn long statement, for context reasons:

    <div class="sidebar">
    <?php
    if (!is_search() && !is_page('Our Clients') && !is_archive()){
        if($post->post_parent) {
            $children = wp_list_pages("title_li=&child_of=".$post->post_parent."&echo=0&depth=1&exclude=85,87,89,181,97,184");
        } else {
            $children = wp_list_pages("title_li=&child_of=".$post->ID."&echo=0&depth=1&exclude=85,87,89,181,97,184");
        }
        if ($children) { ?>
            <div class="panel links subnav">
            <h3>In This Section</h3>
            <ul class="subnav">
            <?php echo $children; ?>
            </ul>
            <p>&nbsp;</p>
            </div>
        <?php }
    }
    if(is_page('Our Clients') || in_category('Our Clients') || is_category('Our Clients')) {
        echo '<div class="panel links subnav">';
        echo '<h3>In This Section</h3>';
        echo '<ul class="subnav">';
        wp_list_categories('child_of=21&title_li=');
        echo '</ul>';
        echo '<p>&nbsp;</p>';
        echo '</div>';
    }
    else if (in_category('Blog')) {
        //PUT YOUR CODE HERE
        // echo get_page_content(34);
        echo '<div class="panel featured-resource">';
        echo '<h4>Blog Contributors</h4>';
        echo '<ul class"subnav">';
        echo '<li><a href="/company/executive-team/john-f-martin/">John Martin</a></li>';
        echo '<li><a href="/company/executive-team/kieran-brady/">Kieran Brady</a></li>';
        echo '<li><a href="/company/executive-team/art-knapp/">Art Knapp</a></li>';
        echo '</ul>';
        echo '</div>';
        echo '<div class="panel iq-news">';
        echo '<h4><span><a href="/category/blog/feed"><img src="/wp-content/themes/iq/images/rss-icon.gif" alt="Subscribe to our feed"/></a></span>IQNavigator Blog</h4>';
        <?php query_posts('category_name=Blog&showposts=2'); if (have_posts()) : ?>
        echo '<ul>';
        <?php while (have_posts()) : the_post(); ?>
        echo '<li><a href="<?php the_permalink();?>"><?php the_title();?> </a></li>';
        <?php endwhile;?>
        echo '</ul>';
        <?php endif;?>
        echo '<div class="twitter">';
        echo '<p id="twitter-updates">';
        <?php twitter_updates();?>
        echo '</p>';
        echo '<p class="text-center"><a href="http://twitter.com/iqnavigator">Follow us on twitter</a></p>';
        echo '</div>';
        echo '</div>';
        //END CODE HERE
    }
    if (!is_page('Resources')) { ?>
        <div class="panel featured-resource">
        <h4>Featured Resource</h4>
        <div class="embed">
        <?php
        $custom_fields = get_post_custom();
        $featured_video_code = $custom_fields['Featured Video Code'];
        if($featured_video_code) {
            foreach ( $featured_video_code as $key => $value ) { $the_code = $value; }
            $featured_video_link = $custom_fields['Featured Video Link'];
            foreach ( $featured_video_link as $key => $value ) { $the_link = $value; }
            $featured_video_text = $custom_fields['Featured Video Text'];
            foreach ( $featured_video_text as $key => $value ) { $the_text = $value; }
            if($the_code) { echo $the_code; }
            if($the_text) {
                echo '<ul>';
                echo '<li>';
                if($the_link) {
                    echo '<a href="' . $the_link . '" class="video" target="_blank">' . $the_text . '</a>';
                } else {
                    echo $the_text;
                }
                echo '</li>';
                echo '</ul>';
            }
        } ?>
        + Visit Resource Center
        <div class="clr"></div>
        <div class="blue-bars">
        <a href="<?php bloginfo('template_directory');?>/more-info.php" class="more-info" rel="facebox">Request More Info</a>
        <a href="<?php bloginfo('template_directory');?>/resource-form.php?id=701000000009E" class="view-demos" rel="facebox">Schedule a Demo</a>
        </div>
        </div>

    <div id="content">
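
    For reference, the immediate problem with the snippet above is that it opens new <?php tags inside code that is already executing as PHP, which is a parse error. A corrected version of the loop (my reconstruction, using the returning WordPress template tags get_permalink()/get_the_title() in place of the echoing the_permalink()/the_title()) stays inside the existing PHP block:

    // Inside the existing else if (in_category('Blog')) { ... } block --
    // no nested <?php ... ?> tags needed.
    echo '<div class="panel iq-news">';
    echo '<h4><span><a href="/category/blog/feed"><img src="/wp-content/themes/iq/images/rss-icon.gif" alt="Subscribe to our feed"/></a></span>IQNavigator Blog</h4>';

    query_posts('category_name=Blog&showposts=2');
    if (have_posts()) {
        echo '<ul>';
        while (have_posts()) {
            the_post();
            echo '<li><a href="' . get_permalink() . '">' . get_the_title() . '</a></li>';
        }
        echo '</ul>';
    }
    wp_reset_query(); // restore the main query after query_posts()

    echo '<div class="twitter">';
    echo '<p id="twitter-updates">';
    twitter_updates();
    echo '</p>';
    echo '<p class="text-center"><a href="http://twitter.com/iqnavigator">Follow us on twitter</a></p>';
    echo '</div>';
    echo '</div>';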

    Read the article

  • Capturing and Transforming ASP.NET Output with Response.Filter

    - by Rick Strahl
    During one of my Handlers and Modules sessions at DevConnections this week, one of the attendees asked a question that I didn't have an immediate answer for. Basically he wanted to capture response output completely and then apply some filtering to the output – effectively injecting some additional content into the page AFTER the page had completely rendered. Specifically, the output should be captured from anywhere – not just a page – and have this content injected into the page. Some time ago I posted some code that allows you to capture ASP.NET Page output by overriding the Render() method, capturing the HtmlTextWriter() and reading its content, modifying the rendered data as text, then writing it back out. I've actually used this approach on a few occasions and it works fine for ASP.NET pages. But this obviously won't work outside of the Page class environment, and it's not really generic – you have to create a custom page class in order to handle the output capture. [updated 11/16/2009 – updated ResponseFilterStream implementation and a few additional notes based on comments]

    Enter Response.Filter

    However, ASP.NET includes a Response.Filter which can be used – well – to filter output. Basically Response.Filter is a stream through which the OutputStream is piped back to the Web Server (indirectly). As content is written into the Response object, the filter stream receives the appropriate Stream commands like Write, Flush and Close, as well as read operations, although for a Response.Filter those are uncommon to be hit. The Response.Filter can be programmatically replaced at runtime, which allows you to effectively intercept all output generation that runs through ASP.NET.

    A common Example: Dynamic GZip Encoding

    A rather common use of Response.Filter is hooking up code-based, dynamic GZip compression for requests, which is dead simple: apply a GZipStream (or DeflateStream) to Response.Filter. The following generic routines can be used very easily to detect GZip capability of the client and compress response output with a single line of code:

    WebUtils.GZipEncodePage();

    which is handled with a few lines of reusable code and a couple of static helper methods:

    /// <summary>
    /// Sets up the current page or handler to use GZip through a Response.Filter
    /// IMPORTANT:
    /// You have to call this method before any output is generated!
    /// </summary>
    public static void GZipEncodePage()
    {
        HttpResponse Response = HttpContext.Current.Response;
        if (IsGZipSupported())
        {
            string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
            if (AcceptEncoding.Contains("deflate"))
            {
                Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                                          System.IO.Compression.CompressionMode.Compress);
                Response.AppendHeader("Content-Encoding", "deflate");
            }
            else
            {
                Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                                          System.IO.Compression.CompressionMode.Compress);
                Response.AppendHeader("Content-Encoding", "gzip");
            }
        }
        // Allow proxy servers to cache encoded and unencoded versions separately
        Response.AppendHeader("Vary", "Content-Encoding");
    }

    /// <summary>
    /// Determines if GZip is supported
    /// </summary>
    /// <returns></returns>
    public static bool IsGZipSupported()
    {
        string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
        if (!string.IsNullOrEmpty(AcceptEncoding) &&
            (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
            return true;
        return false;
    }

    GZipStream and DeflateStream are streams that are assigned to Response.Filter, and by doing so apply the appropriate compression on the active Response.

    Response.Filter content is chunked

    So implementing a Response.Filter effectively requires only that you implement a custom stream and handle the Write() method to capture Response output as it's written. At first blush this seems very simple – you capture the output in Write, transform it and write out the transformed content in one pass. And that indeed works for small amounts of content. But the problem is that output is written in small buffer chunks (a little less than 16k it appears) rather than in a single Write() call into the stream, which makes perfect sense: ASP.NET streams data back to IIS in smaller chunks to minimize memory usage en route. Unfortunately, this also makes it more difficult to implement filtering routines, since you don't directly get access to all of the response content. That is problematic especially if those filtering routines require you to look at the ENTIRE response in order to transform or capture the output, as is needed for the solution the gentleman in my session asked for. So in order to address this, a slightly different approach is required: one that captures all the Write() buffers passed into a cached stream and then makes the stream available only when it's complete and ready to be flushed.

    As I was thinking about the implementation, I also thought back to the few instances when I've used Response.Filter implementations. Each time I had to create a new Stream subclass and build my custom functionality, but in the end each implementation did the same thing – capturing output and transforming it. I thought there should be an easier way to do this by creating a re-usable Stream class that can handle the stream transformations that are common to Response.Filter implementations.

    Creating a semi-generic Response Filter Stream Class

    What I ended up with is a ResponseFilterStream class that provides a handful of events that allow you to capture and/or transform Response content.
    The class implements a subclass of Stream and then overrides Write() and Flush() to handle the capturing and transformation operations. By exposing events, it's easy to hook up capture or transformation operations via single, focused methods. ResponseFilterStream exposes the following events:

    CaptureStream, CaptureString
    Capture the output only and provide either a MemoryStream or String with the final page output. Capture is hooked to the Flush() operation of the stream.

    TransformStream, TransformString
    Allow you to transform the complete response output with events that receive a MemoryStream or String respectively, which you can modify and return back as a return value. The transformed output is then written back out in a single chunk to the response output stream. These events capture all output internally first, then write the entire buffer into the response.

    TransformWrite, TransformWriteString
    Allow you to transform the Response data as it is written, in its original chunk size, in the Stream's Write() method. Unlike TransformStream/TransformString, which operate on the complete output, these events only see the current chunk of data written. This is more efficient as there's no caching involved, but it can cause problems when content you search for is split across multiple chunks.

    Using this implementation, creating a custom Response.Filter transformation becomes as simple as the following code. To hook up the Response.Filter using the MemoryStream version event:

    ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
    filter.TransformStream += filter_TransformStream;
    Response.Filter = filter;

    and the event handler to do the transformation:

    MemoryStream filter_TransformStream(MemoryStream ms)
    {
        Encoding encoding = HttpContext.Current.Response.ContentEncoding;
        string output = encoding.GetString(ms.ToArray());
        output = FixPaths(output);
        ms = new MemoryStream(output.Length);
        byte[] buffer = encoding.GetBytes(output);
        ms.Write(buffer, 0, buffer.Length);
        return ms;
    }

    private string FixPaths(string output)
    {
        string path = HttpContext.Current.Request.ApplicationPath;
        // override root path wonkiness
        if (path == "/")
            path = "";
        output = output.Replace("\"~/", "\"" + path + "/").Replace("'~/", "'" + path + "/");
        return output;
    }

    The idea of the event handler is that you can do whatever you want to the stream and return back a stream – either the same one that's been modified or a brand new one – which is then sent back as the final response. The above code can be simplified even more by using the string version events, which handle the stream-to-string conversions for you:

    ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
    filter.TransformString += filter_TransformString;
    Response.Filter = filter;

    and the event handler to do the transformation, calling the same FixPaths method shown above:

    string filter_TransformString(string output)
    {
        return FixPaths(output);
    }

    The events for capturing output and for capturing and transforming chunks work in a very similar way. By using events to handle the transformations, ResponseFilterStream becomes a reusable component, and we don't have to create a new stream class or subclass an existing Stream-based class. By the way, the example used here is kind of a cool trick: it transforms "~/" expressions inside the final generated HTML output – even in plain HTML markup, not just server controls – into the appropriate application-relative path, in the same way that ResolveUrl would do.
    So you can write plain old HTML like this:

    <a href="~/default.aspx">Home</a>

    and have it turned into:

    <a href="/myVirtual/default.aspx">Home</a>

    without having to use an ASP.NET control like Hyperlink or Image, or having to constantly use:

    <img src="<%= ResolveUrl("~/images/home.gif") %>" />

    in MVC applications (which frankly is one of the most annoying things about MVC, especially given the path hell that extension-less and endpoint-less URLs impose). I can't take credit for this idea: while discussing the Response.Filter issues on Twitter, I got a hint from Dylan Beattie, who pointed me at one of his examples which does something similar. I thought the idea was cool enough to use as an example for future demos of Response.Filter functionality in ASP.NET the next time I do the Modules and Handlers talk (which was great fun BTW). How practical this is is debatable, however, since there's definitely some overhead to using a Response.Filter in general, and especially to one that caches the output and then re-writes it later. Make sure to test for performance any time you use a Response.Filter hookup and make sure it doesn't end up killing perf on you. You've been warned :-}.

    How does ResponseFilterStream work?

    The big win of this implementation IMHO is that it's a reusable component – for implementation there's no new class and no subclassing; you simply attach to an event and implement an event handler method with a straightforward signature to retrieve the stream or string you're interested in. The implementation is based on a subclass of Stream, as is required in order to handle the Response.Filter requirements. What's different from other implementations I've seen in various places is that it supports capturing output as a whole, to allow retrieving the full response output for capture or modification. The exceptions are the TransformWrite and TransformWriteString events, which operate only on the active chunk of data written by the Response. For captured output, the Write() method captures output into an internal MemoryStream that is cached until writing is complete. So Write() is called when ASP.NET writes to the Response stream, but the filter doesn't pass the Write on immediately to the underlying stream. The data is cached, and only when the Flush() method is called to finalize the Stream's output do we actually send the cached stream off for transformation (if the events are hooked up) and THEN finally write out the returned content in one big chunk. Here's the implementation of ResponseFilterStream:

    /// <summary>
    /// A semi-generic Stream implementation for Response.Filter with
    /// an event interface for handling Content transformations via
    /// Stream or String.
    /// <remarks>
    /// Use with care for large output as this implementation copies
    /// the output into a memory stream and so increases memory usage.
    /// </remarks>
    /// </summary>
    public class ResponseFilterStream : Stream
    {
        /// <summary>
        /// The original stream
        /// </summary>
        Stream _stream;

        /// <summary>
        /// Current position in the original stream
        /// </summary>
        long _position;

        /// <summary>
        /// Stream that original content is read into
        /// and then passed to TransformStream function
        /// </summary>
        MemoryStream _cacheStream = new MemoryStream(5000);

        /// <summary>
        /// Internal pointer that keeps track of the size
        /// of the cacheStream
        /// </summary>
        int _cachePointer = 0;

        public ResponseFilterStream(Stream responseStream)
        {
            _stream = responseStream;
        }

        /// <summary>
        /// Determines whether the stream is captured
        /// </summary>
        private bool IsCaptured
        {
            get
            {
                if (CaptureStream != null || CaptureString != null ||
                    TransformStream != null || TransformString != null)
                    return true;
                return false;
            }
        }

        /// <summary>
        /// Determines whether the Write method is outputting data immediately
        /// or delaying output until Flush() is fired.
        /// </summary>
        private bool IsOutputDelayed
        {
            get
            {
                if (TransformStream != null || TransformString != null)
                    return true;
                return false;
            }
        }

        /// <summary>
        /// Event that captures Response output and makes it available
        /// as a MemoryStream instance. Output is captured but won't
        /// affect Response output.
        /// </summary>
        public event Action<MemoryStream> CaptureStream;

        /// <summary>
        /// Event that captures Response output and makes it available
        /// as a string. Output is captured but won't affect Response output.
        /// </summary>
        public event Action<string> CaptureString;

        /// <summary>
        /// Event that allows you to transform the stream as each chunk of
        /// the output is written in the Write() operation of the stream.
        /// This means that it's possible/likely that the input
        /// buffer will not contain the full response output but only
        /// one of potentially many chunks.
        ///
        /// This event is called as part of the filter stream's Write()
        /// operation.
        /// </summary>
        public event Func<byte[], byte[]> TransformWrite;

        /// <summary>
        /// Event that allows you to transform the response stream as
        /// each chunk of byte[] output is written during the stream's write
        /// operation. This means it's possible/likely that the string
        /// passed to the handler only contains a portion of the full
        /// output. Typical buffer chunks are around 16k a piece.
        ///
        /// This event is called as part of the stream's Write operation.
        /// </summary>
        public event Func<string, string> TransformWriteString;

        /// <summary>
        /// This event allows capturing and transformation of the entire
        /// output stream by caching all write operations and delaying final
        /// response output until Flush() is called on the stream.
        /// </summary>
        public event Func<MemoryStream, MemoryStream> TransformStream;

        /// <summary>
        /// Event that can be hooked up to handle Response.Filter
        /// Transformation. Passed a string that you can modify and
        /// return back as a return value. The modified content
        /// will become the final output.
        /// </summary>
        public event Func<string, string> TransformString;

        protected virtual void OnCaptureStream(MemoryStream ms)
        {
            if (CaptureStream != null)
                CaptureStream(ms);
        }

        private void OnCaptureStringInternal(MemoryStream ms)
        {
            if (CaptureString != null)
            {
                string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray());
                OnCaptureString(content);
            }
        }

        protected virtual void OnCaptureString(string output)
        {
            if (CaptureString != null)
                CaptureString(output);
        }

        protected virtual byte[] OnTransformWrite(byte[] buffer)
        {
            if (TransformWrite != null)
                return TransformWrite(buffer);
            return buffer;
        }

        private byte[] OnTransformWriteStringInternal(byte[] buffer)
        {
            Encoding encoding = HttpContext.Current.Response.ContentEncoding;
            string output = OnTransformWriteString(encoding.GetString(buffer));
            return encoding.GetBytes(output);
        }

        private string OnTransformWriteString(string value)
        {
            if (TransformWriteString != null)
                return TransformWriteString(value);
            return value;
        }

        protected virtual MemoryStream OnTransformCompleteStream(MemoryStream ms)
        {
            if (TransformStream != null)
                return TransformStream(ms);
            return ms;
        }

        /// <summary>
        /// Allows transforming of strings
        ///
        /// Note this handler is internal and not meant to be overridden
        /// as the TransformString event has to be hooked up in order
        /// for this handler to even fire, to avoid the overhead of string
        /// conversion on every pass through.
        /// </summary>
        /// <param name="responseText"></param>
        /// <returns></returns>
        private string OnTransformCompleteString(string responseText)
        {
            if (TransformString != null)
                responseText = TransformString(responseText);
            return responseText;
        }

        /// <summary>
        /// Wrapper method for OnTransformString that handles
        /// stream to string and vice versa conversions
        /// </summary>
        /// <param name="ms"></param>
        /// <returns></returns>
        internal MemoryStream OnTransformCompleteStringInternal(MemoryStream ms)
        {
            if (TransformString == null)
                return ms;
            //string content = ms.GetAsString();
            string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray());
            content = TransformString(content);
            byte[] buffer = HttpContext.Current.Response.ContentEncoding.GetBytes(content);
            ms = new MemoryStream();
            ms.Write(buffer, 0, buffer.Length);
            //ms.WriteString(content);
            return ms;
        }

        public override bool CanRead
        {
            get { return true; }
        }

        public override bool CanSeek
        {
            get { return true; }
        }

        public override bool CanWrite
        {
            get { return true; }
        }

        public override long Length
        {
            get { return 0; }
        }

        public override long Position
        {
            get { return _position; }
            set { _position = value; }
        }

        public override long Seek(long offset, System.IO.SeekOrigin direction)
        {
            return _stream.Seek(offset, direction);
        }

        public override void SetLength(long length)
        {
            _stream.SetLength(length);
        }

        public override void Close()
        {
            _stream.Close();
        }

        /// <summary>
        /// Override flush by writing out the cached stream data
        /// </summary>
        public override void Flush()
        {
            if (IsCaptured && _cacheStream.Length > 0)
            {
                // Check for transform implementations
                _cacheStream = OnTransformCompleteStream(_cacheStream);
                _cacheStream = OnTransformCompleteStringInternal(_cacheStream);

                OnCaptureStream(_cacheStream);
                OnCaptureStringInternal(_cacheStream);

                // write the stream back out if output was delayed
                if (IsOutputDelayed)
                    _stream.Write(_cacheStream.ToArray(), 0, (int)_cacheStream.Length);

                // Clear the cache once we've written it out
                _cacheStream.SetLength(0);
            }

            // default flush behavior
            _stream.Flush();
        }

        public override int Read(byte[] buffer, int offset, int count)
        {
            return _stream.Read(buffer, offset, count);
        }

        /// <summary>
        /// Overridden to capture output written by ASP.NET and cached
        /// into a cached stream that is written out later when Flush()
        /// is called.
        /// </summary>
        /// <param name="buffer"></param>
        /// <param name="offset"></param>
        /// <param name="count"></param>
        public override void Write(byte[] buffer, int offset, int count)
        {
            if (IsCaptured)
            {
                // copy to holding buffer only - we'll write out later
                _cacheStream.Write(buffer, 0, count);
                _cachePointer += count;
            }

            // just transform this buffer
            if (TransformWrite != null)
                buffer = OnTransformWrite(buffer);
            if (TransformWriteString != null)
                buffer = OnTransformWriteStringInternal(buffer);

            if (!IsOutputDelayed)
                _stream.Write(buffer, offset, buffer.Length);
        }
    }

    The key features are the events and corresponding OnXXX methods that handle the event hookups, and the Write() and Flush() methods of the stream implementation. All the rest of the members tend to be plain-jane passthrough stream implementation code without much consequence. I do love the way Action<T> and Func<T> make it so easy to create the event signatures for the various events – sweet.

    A few Things to consider

    Performance
    Response.Filter is not great for performance in general, as it adds another layer of indirection to the ASP.NET output pipeline, and this implementation in particular adds a memory hit as it basically duplicates the response output into the cached memory stream, which is necessary since you may have to look at the entire response. If you have large pages in particular, this can cause potentially serious memory pressure in your server application. So be careful of wholesale adoption of this (or other) Response.Filters. Make sure to do some performance testing to ensure it's not killing your app's performance.

    Response.Filter works everywhere
    A few questions came up in comments and discussion about capturing ALL output hitting the site, and yes, you can definitely do that by assigning a Response.Filter inside of a module. If you do this, however, you'll want to be very careful and decide which content you actually want to capture, especially in IIS 7, which passes ALL content – including static images/CSS etc. – through the ASP.NET pipeline. So it is important to filter only on what you're looking for – like the page extension or, maybe more effectively, the Response.ContentType.

    Response.Filter Chaining
    Originally I thought that filter chaining doesn't work at all due to a bug in the stream implementation code. But it's quite possible to assign multiple filters to the Response.Filter property.
    So the following actually works to both compress the output and apply the transformed content:

    WebUtils.GZipEncodePage();
    ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
    filter.TransformString += filter_TransformString;
    Response.Filter = filter;

    However, the following does not work, resulting in invalid content encoding errors:

    ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
    filter.TransformString += filter_TransformString;
    Response.Filter = filter;
    WebUtils.GZipEncodePage();

    In other words, multiple Response filters can work together, but it depends entirely on the implementation whether they can be chained, or in which order they can be chained. In this case the GZip/Deflate stream filter apparently relies on the original content length of the output and chokes when the content is modified afterwards. But if the compression is attached first, it works fine – as unintuitive as that may seem.

    Resources: Download example code · Capture Output from ASP.NET Pages

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET
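
    Following up the "works everywhere" note above, here is a minimal sketch of the module hookup (my addition, not code from the post; the module name and the .aspx-extension filter are assumptions):

    using System;
    using System.Web;

    // Hypothetical module: attaches a ResponseFilterStream to .aspx requests.
    // Register under <httpModules> (classic) or <system.webServer>/<modules>
    // (IIS 7 integrated mode).
    public class ResponseFilterModule : IHttpModule
    {
        public void Init(HttpApplication app)
        {
            app.BeginRequest += (sender, e) =>
            {
                HttpContext context = app.Context;

                // In IIS 7 integrated mode everything flows through here,
                // including static files - filter on the extension (or, once
                // it is known, on Response.ContentType) so images and CSS are
                // not needlessly buffered.
                if (context.Request.Path.EndsWith(".aspx", StringComparison.OrdinalIgnoreCase))
                {
                    ResponseFilterStream filter = new ResponseFilterStream(context.Response.Filter);
                    filter.TransformString += output => output; // plug a real transform in here
                    context.Response.Filter = filter;
                }
            };
        }

        public void Dispose() { }
    }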

    Read the article

  • Conflict between two Javascripts (MailChimp validation etc. scripts & jQuery hSlides.js)

    - by Brian
    I have two scripts running on the same page: jQuery.hSlides.js (http://www.jesuscarrera.info/demos/hslides/) and a custom script used for MailChimp list-signup integration. The hSlides panel can be seen in effect here: http://theatricalbellydance.com. I've turned off the MailChimp script because it was conflicting with the hSlides script, causing it to fail completely (as seen here: http://theatricalbellydance.com/home2/). Can someone tell me what could be done to the hSlides script to stop the conflict with the MailChimp script?

    The MailChimp script:

    var fnames = new Array();
    var ftypes = new Array();
    fnames[0] = 'EMAIL';     ftypes[0] = 'email';
    fnames[3] = 'MMERGE3';   ftypes[3] = 'text';
    fnames[1] = 'FNAME';     ftypes[1] = 'text';
    fnames[2] = 'LNAME';     ftypes[2] = 'text';
    fnames[4] = 'MMERGE4';   ftypes[4] = 'address';
    fnames[6] = 'MMERGE6';   ftypes[6] = 'number';
    fnames[9] = 'MMERGE9';   ftypes[9] = 'text';
    fnames[5] = 'MMERGE5';   ftypes[5] = 'text';
    fnames[7] = 'MMERGE7';   ftypes[7] = 'text';
    fnames[8] = 'MMERGE8';   ftypes[8] = 'text';
    fnames[10] = 'MMERGE10'; ftypes[10] = 'text';
    fnames[11] = 'MMERGE11'; ftypes[11] = 'text';
    fnames[12] = 'MMERGE12'; ftypes[12] = 'text';

    var err_style = '';
    try {
        err_style = mc_custom_error_style;
    } catch (e) {
        err_style = 'margin: 1em 0 0 0; padding: 1em 0.5em 0.5em 0.5em; background: rgb(255, 238, 238) none repeat scroll 0% 0%; font-weight: bold; float: left; z-index: 1; width: 80%; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial; color: rgb(255, 0, 0);';
    }

    var mce_jQuery = jQuery.noConflict();

    mce_jQuery(document).ready(function ($) {
        var options = {
            errorClass: 'mce_inline_error',
            errorElement: 'div',
            errorStyle: err_style,
            onkeyup: function () {},
            onfocusout: function () {},
            onblur: function () {}
        };
        var mce_validator = mce_jQuery("#mc-embedded-subscribe-form").validate(options);
        options = {
            url: 'http://theatricalbellydance.us1.list-manage.com/subscribe/post-json?u=1d127e7630ced825cb1a8b5a9&id=9f12d2a6bb&c=?',
            type: 'GET',
            dataType: 'json',
            contentType: "application/json; charset=utf-8",
            beforeSubmit: function () {
                mce_jQuery('#mce_tmp_error_msg').remove();
                mce_jQuery('.datefield', '#mc_embed_signup').each(function () {
                    var txt = 'filled';
                    var fields = new Array();
                    var i = 0;
                    mce_jQuery(':text', this).each(function () {
                        fields[i] = this;
                        i++;
                    });
                    mce_jQuery(':hidden', this).each(function () {
                        if (fields[0].value == 'MM' && fields[1].value == 'DD' && fields[2].value == 'YYYY') {
                            this.value = '';
                        } else if (fields[0].value == '' && fields[1].value == '' && fields[2].value == '') {
                            this.value = '';
                        } else {
                            this.value = fields[0].value + '/' + fields[1].value + '/' + fields[2].value;
                        }
                    });
                });
                return mce_validator.form();
            },
            success: mce_success_cb
        };
        mce_jQuery('#mc-embedded-subscribe-form').ajaxForm(options);
    });

    function mce_success_cb(resp) {
        mce_jQuery('#mce-success-response').hide();
        mce_jQuery('#mce-error-response').hide();
        if (resp.result == "success") {
            mce_jQuery('#mce-' + resp.result + '-response').show();
            mce_jQuery('#mce-' + resp.result + '-response').html(resp.msg);
            mce_jQuery('#mc-embedded-subscribe-form').each(function () {
                this.reset();
            });
        } else {
            var index = -1;
            var msg;
            try {
                var parts = resp.msg.split(' - ', 2);
                if (parts[1] == undefined) {
                    msg = resp.msg;
                } else {
                    i = parseInt(parts[0]);
                    if (i.toString() == parts[0]) {
                        index = parts[0];
                        msg = parts[1];
                    } else {
                        index = -1;
                        msg = resp.msg;
                    }
                }
            } catch (e) {
                index = -1;
                msg = resp.msg;
            }
            try {
                if (index == -1) {
                    mce_jQuery('#mce-' + resp.result + '-response').show();
                    mce_jQuery('#mce-' + resp.result + '-response').html(msg);
                } else {
                    err_id = 'mce_tmp_error_msg';
                    html = '<div id="' + err_id + '" style="' + err_style + '"> ' + msg + '</div>';
                    var input_id = '#mc_embed_signup';
                    var f = mce_jQuery(input_id);
                    if (ftypes[index] == 'address') {
                        input_id = '#mce-' + fnames[index] + '-addr1';
                        f = mce_jQuery(input_id).parent().parent().get(0);
                    } else if (ftypes[index] == 'date') {
                        input_id = '#mce-' + fnames[index] + '-month';
                        f = mce_jQuery(input_id).parent().parent().get(0);
                    } else {
                        input_id = '#mce-' + fnames[index];
                        f = mce_jQuery().parent(input_id).get(0);
                    }
                    if (f) {
                        mce_jQuery(f).append(html);
                        mce_jQuery(input_id).focus();
                    } else {
                        mce_jQuery('#mce-' + resp.result + '-response').show();
                        mce_jQuery('#mce-' + resp.result + '-response').html(msg);
                    }
                }
            } catch (e) {
                mce_jQuery('#mce-' + resp.result + '-response').show();
                mce_jQuery('#mce-' + resp.result + '-response').html(msg);
            }
        }
    }

    The hSlides script:

    /*
     * hSlides (1.0) // 2008.02.25 // <http://plugins.jquery.com/project/hslides>
     *
     * REQUIRES jQuery 1.2.3+ <http://jquery.com/>
     *
     * Copyright (c) 2008 TrafficBroker <http://www.trafficbroker.co.uk>
     * Licensed under GPL and MIT licenses
     *
     * hSlides is a horizontal accordion navigation, sliding the panels around to reveal one of interest.
     *
     * Sample Configuration:
     * // this is the minimum configuration needed
     * $('#accordion').hSlides({
     *     totalWidth: 730,
     *     totalHeight: 140,
     *     minPanelWidth: 87,
     *     maxPanelWidth: 425
     * });
     *
     * Config Options:
     * // Required configuration
     * totalWidth: Total width of the accordion // default: 0
     * totalHeight: Total height of the accordion // default: 0
     * minPanelWidth: Minimum width of the panel (closed) // default: 0
     * maxPanelWidth: Maximum width of the panel (opened) // default: 0
     * // Optional configuration
     * midPanelWidth: Middle width of the panel (centered) // default: 0
     * speed: Speed for the animation // default: 500
     * easing: Easing effect for the animation. Other than 'swing' or 'linear' must be provided by plugin // default: 'swing'
     * sensitivity: Sensitivity threshold (must be 1 or higher) // default: 3
     * interval: Milliseconds for onMouseOver polling interval // default: 100
     * timeout: Milliseconds delay before onMouseOut // default: 300
     * eventHandler: Event to open panels: click or hover. The hover option requires the hoverIntent plugin <http://cherne.net/brian/resources/jquery.hoverIntent.html> // default: 'click'
     * panelSelector: HTML element storing the panels // default: 'li'
     * activeClass: CSS class for the active panel // default: none
     * panelPositioning: top -> first panel on the bottom and next on the top; any other value -> first panel on the top and next to the bottom // default: 'top'
     * // Callback functions. Inside them, we can refer to the panel with $(this).
     * onEnter: Function raised when the panel is activated. // default: none
     * onLeave: Function raised when the panel is deactivated. // default: none
     *
     * We can override the defaults with:
     * $.fn.hSlides.defaults.easing = 'easeOutCubic';
     *
     * @param settings An object with configuration options
     * @author Jesus Carrera <[email protected]>
     */
    (function($) {
        $.fn.hSlides = function(settings) {
            // override default configuration
            settings = $.extend({}, $.fn.hSlides.defaults, settings);
            // for each accordion
            return this.each(function() {
                var wrapper = this;
                var panelLeft = 0;
                var panels = $(settings.panelSelector, wrapper);
                var panelPositioning = 1;
                if (settings.panelPositioning != 'top') {
                    panelLeft = ($(settings.panelSelector, wrapper).length - 1) * settings.minPanelWidth;
                    panels = $(settings.panelSelector, wrapper).reverse();
                    panelPositioning = -1;
                }
                // necessary styles for the wrapper
                $(this).css('position', 'relative').css('overflow', 'hidden').css('width', settings.totalWidth).css('height', settings.totalHeight);
                // set the initial position of the panels
                var zIndex = 0;
                panels.each(function() {
                    // necessary styles for the panels
                    $(this).css('position', 'absolute').css('left', panelLeft).css('zIndex', zIndex).css('height', settings.totalHeight).css('width', settings.maxPanelWidth);
                    zIndex++;
                    // if this panel is the one activated by default, set it as active and move the next (to show this one)
                    if ($(this).hasClass(settings.activeClass)) {
                        $.data($(this)[0], 'active', true);
                        if (settings.panelPositioning != 'top') {
                            panelLeft = ($(settings.panelSelector, wrapper).index(this) + 1) * settings.minPanelWidth - settings.maxPanelWidth;
                        } else {
                            panelLeft = panelLeft + settings.maxPanelWidth;
                        }
                    } else {
                        // check if we are centering and some panel is active
                        // this is why we can't add/remove the active class in the callbacks: positioning the panels if we have one active
                        if (settings.midPanelWidth && $(settings.panelSelector, wrapper).hasClass(settings.activeClass) == false) {
                            panelLeft = panelLeft + settings.midPanelWidth * panelPositioning;
                        } else {
                            panelLeft = panelLeft + settings.minPanelWidth * panelPositioning;
                        }
                    }
                });
                // iterates through the panels setting the active one and changing the positions
                var movePanels = function() {
                    // index of the new active panel
                    var activeIndex = $(settings.panelSelector, wrapper).index(this);
                    // iterate all panels
                    panels.each(function() {
                        // deactivate if it is the active one
                        if ($.data($(this)[0], 'active') == true) {
                            $.data($(this)[0], 'active', false);
                            $(this).removeClass(settings.activeClass).each(settings.onLeave);
                        }
                        // set position of current panel
                        var currentIndex = $(settings.panelSelector, wrapper).index(this);
                        panelLeft = settings.minPanelWidth * currentIndex;
                        // if the panel is next to the active one, we need to add the opened width
                        if ((currentIndex * panelPositioning) > (activeIndex * panelPositioning)) {
                            panelLeft = panelLeft + (settings.maxPanelWidth - settings.minPanelWidth) * panelPositioning;
                        }
                        // animate
                        $(this).animate({left: panelLeft}, settings.speed, settings.easing);
                    });
                    // activate the new active panel
                    $.data($(this)[0], 'active', true);
                    $(this).addClass(settings.activeClass).each(settings.onEnter);
                };
                // center the panels if configured
                var centerPanels = function() {
                    var panelLeft = 0;
                    if (settings.panelPositioning != 'top') {
                        panelLeft = ($(settings.panelSelector, wrapper).length - 1) * settings.minPanelWidth;
                    }
                    panels.each(function() {
                        $(this).removeClass(settings.activeClass).animate({left: panelLeft}, settings.speed, settings.easing);
                        if ($.data($(this)[0], 'active') == true) {
                            $.data($(this)[0], 'active', false);
                            $(this).each(settings.onLeave);
                        }
                        panelLeft = panelLeft + settings.midPanelWidth * panelPositioning;
                    });
                };
                // event handling
                if (settings.eventHandler == 'click') {
                    $(settings.panelSelector, wrapper).click(movePanels);
                } else {
                    var configHoverPanel = {
                        sensitivity: settings.sensitivity,
                        interval: settings.interval,
                        over: movePanels,
                        timeout: settings.timeout,
                        out: function() {}
                    };
                    var configHoverWrapper = {
                        sensitivity: settings.sensitivity,
                        interval: settings.interval,
                        over: function() {},
                        timeout: settings.timeout,
                        out: centerPanels
                    };
                    $(settings.panelSelector, wrapper).hoverIntent(configHoverPanel);
                    if (settings.midPanelWidth != 0) {
                        $(wrapper).hoverIntent(configHoverWrapper);
                    }
                }
            });
        };

        // invert the order of the jQuery elements
        $.fn.reverse = function() {
            return this.pushStack(this.get().reverse(), arguments);
        };

        // default settings
        $.fn.hSlides.defaults = {
            totalWidth: 0,
            totalHeight: 0,
            minPanelWidth: 0,
            maxPanelWidth: 0,
            midPanelWidth: 0,
            speed: 500,
            easing: 'swing',
            sensitivity: 3,
            interval: 100,
            timeout: 300,
            eventHandler: 'click',
            panelSelector: 'li',
            activeClass: false,
            panelPositioning: 'top',
            onEnter: function() {},
            onLeave: function() {}
        };
    })(jQuery);
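
    A note on the likely cause: the MailChimp script calls jQuery.noConflict(), which releases the global $ alias, so any hSlides initialization code written against $ at the top level of a later script will fail once both scripts are on the page. One common workaround is to pass jQuery into a wrapper function so $ stays bound locally. This is only a minimal sketch: the #accordion selector and the dimensions are taken from the plugin's own sample configuration above, not from the live page, so substitute the page's real values.

    // Keep $ valid locally even after another script calls jQuery.noConflict().
    (function ($) {
        // Run once the DOM is ready.
        $(function () {
            // Hypothetical selector and sizes, copied from the plugin's sample config.
            $('#accordion').hSlides({
                totalWidth: 730,
                totalHeight: 140,
                minPanelWidth: 87,
                maxPanelWidth: 425
            });
        });
    })(jQuery);

    Because $ here is the wrapper's parameter rather than the global, this initialization keeps working whether it loads before or after the noConflict() call.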

    Read the article

  • Auto complete from database using CodeIgniter (Active Record)

    - by Ralph David Abernathy
    I have a form on my website in which one is able to submit a cat. The form contains inputs such as "Name" and "Gender", but I am just trying to get the autocomplete to work with the "Name" field. Here is what my jQuery looks like:

    $(document).ready(function() {
        $("#tags").autocomplete({
            source: '/Anish/auto_cat'
        });
    });

    Here is what my model looks like:

    public function auto_cat($search_term)
    {
        $this->db->like('name', $search_term);
        $response = $this->db->get('anish_cats')->result_array();
        // var_dump($response);die;
        return $response;
    }
    }

    Here is my controller:

    public function auto_cat()
    {
        $search_term = $this->input->get('term');
        $cats = $this->Anish_m->auto_cat($search_term);
    }

    And here is my view:

    <head>
        <link rel="stylesheet" href="http://code.jquery.com/ui/1.10.3/themes/smoothness/jquery-ui.css" />
        <script src="http://code.jquery.com/jquery-1.9.1.js"></script>
        <script src="http://code.jquery.com/ui/1.10.3/jquery-ui.js"></script>
        <link rel="stylesheet" href="/resources/demos/style.css" />
    </head>

    <h1>Anish's Page</h1>

    <form action="/Anish/create" method="POST">
        <div class="ui-widget">
            <label for="tags">Name</label><input id="tags" type="text" name="name">
        </div>
        <div>
            <label>Age</label><input type="text" name="age">
        </div>
        <div>
            <label>Gender</label><input type="text" name="gender">
        </div>
        <div>
            <label>Species</label><input type="text" name="species">
        </div>
        <div>
            <label>Eye Color</label><input type="text" name="eye_color">
        </div>
        <div>
            <label>Color</label><input type="text" name="color">
        </div>
        <div>
            <label>Description</label><input type="text" name="description">
        </div>
        <div>
            <label>marital status</label><input type="text" name="marital_status">
        </div>
        <br>
        <button type="submit" class="btn btn-block btn-primary span1">Add cat</button>
    </form>

    <br/><br/><br/><br/>

    <table class="table table-striped table-bordered table-hover">
        <thead>
            <tr>
                <th>Name</th>
                <th>Gender</th>
                <th>Age</th>
                <th>Species</th>
                <th>Eye Color</th>
                <th>Color</th>
                <th>Description</th>
                <th>Marital Status</th>
                <th>Edit</th>
                <th>Delete</th>
            </tr>
        </thead>
        <tbody>
            <?php foreach ($cats as $cat):?>
            <tr>
                <td><?php echo ($cat['name']);?><br/></td>
                <td><?php echo ($cat['gender']);?><br/></td>
                <td><?php echo ($cat['age']);?><br/></td>
                <td><?php echo ($cat['species']);?><br/></td>
                <td><?php echo ($cat['eye_color']);?><br/></td>
                <td><?php echo ($cat['color']);?><br/></td>
                <td><?php echo ($cat['description']);?><br/></td>
                <td><?php echo ($cat['marital_status']);?><br/></td>
                <td>
                    <form action="/Anish/edit" method="post">
                        <input type="hidden" value="<?php echo ($cat['id']);?>" name="Anish_id_edit">
                        <button class="btn btn-block btn-info">Edit</button>
                    </form>
                </td>
                <td>
                    <form action="/Anish/delete" method="post">
                        <input type="hidden" value="<?php echo ($cat['id']);?>" name="Anish_id">
                        <button class="btn btn-block btn-danger">Delete</button>
                    </form>
                </td>
            </tr>
            <?php endforeach;?>
        </tbody>
    </table>

    I am stuck.
    In my console, I am able to see this output when I type the letter 'a' if I uncomment the var_dump in my model:

    array(4) {
        [0]=> array(9) { ["id"]=> string(2) "13" ["name"]=> string(5) "Anish" ["gender"]=> string(4) "Male" ["age"]=> string(2) "20" ["species"]=> string(3) "Cat" ["eye_color"]=> string(5) "Brown" ["color"]=> string(5) "Black" ["description"]=> string(7) "Awesome" ["marital_status"]=> string(1) "0" }
        [1]=> array(9) { ["id"]=> string(2) "16" ["name"]=> string(5) "Anish" ["gender"]=> string(2) "fe" ["age"]=> string(2) "23" ["species"]=> string(2) "fe" ["eye_color"]=> string(2) "fe" ["color"]=> string(2) "fe" ["description"]=> string(2) "fe" ["marital_status"]=> string(1) "1" }
        [2]=> array(9) { ["id"]=> string(2) "17" ["name"]=> string(1) "a" ["gender"]=> string(1) "a" ["age"]=> string(1) "4" ["species"]=> string(1) "a" ["eye_color"]=> string(1) "a" ["color"]=> string(1) "a" ["description"]=> string(1) "a" ["marital_status"]=> string(1) "0" }
        [3]=> array(9) { ["id"]=> string(2) "18" ["name"]=> string(4) "Matt" ["gender"]=> string(6) "Female" ["age"]=> string(2) "80" ["species"]=> string(6) "ferret" ["eye_color"]=> string(4) "blue" ["color"]=> string(4) "pink" ["description"]=> string(5) "Chill" ["marital_status"]=> string(1) "0" }
    }
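
    One thing the code above never does is send the matches back to the browser: jQuery UI's autocomplete widget calls the source URL with a term query parameter and expects a JSON array in the response (plain strings, or objects with label/value keys), but the auto_cat controller assigns $cats and then outputs nothing. Below is a minimal sketch of the missing step, reusing the question's own controller and model names:

    public function auto_cat()
    {
        $search_term = $this->input->get('term');
        $cats = $this->Anish_m->auto_cat($search_term);

        // jQuery UI autocomplete expects a JSON array in the response body,
        // so collect just the cat names and echo them as JSON.
        $suggestions = array();
        foreach ($cats as $cat) {
            $suggestions[] = $cat['name'];
        }
        echo json_encode($suggestions);
    }

    With that in place, typing 'a' should surface the four names shown in the var_dump output above as dropdown suggestions.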

    Read the article
