Search Results

Search found 2379 results on 96 pages for 'dom traversal'.

Page 79 of 96

  • Dart and NetBeans IDE 7.4

    - by Geertjan
    Here's the start of Dart in NetBeans IDE. Basic Dart editing support is done and on saving a Dart file the related JavaScript files are automatically generated. In the context of an HTML5 application in NetBeans IDE, that gives you deep integration with the embedded browser and, even better, Chrome, as well as Chrome Developer Tools. Below, notice that the "Sunflower Spectacular" H1 element is selected (click the image to enlarge it to get a better view), which is therefore highlighted in the live DOM view in the bottom left, as well as in the CSS Styles window in the top right, from where the CSS styles can be edited and from where the related files can be opened in the IDE. Identical features are available for Chrome, as well as on Android and iOS. And if you like that, watch this YouTube movie showing how Chrome Developer Tools integration can fit directly into the workflow below. Anyone want to help get this plugin further? What's needed: Much deeper Dart editing support, i.e., right now only very basic syntax coloring is provided, i.e., an ANTLR lexer is integrated into the NetBeans syntax coloring infrastructure. Parsing, error checking, code completion, and some small code templates are needed. A new panel is needed in the Project Properties dialog on NetBeans HTML5 projects for enabling Dart (i.e., similar to enabling Cordova), at which point the "dart.js" file and other Dart artifacts should be added to the project, so that a Dart project is immediately generated and the application should be immediately deployable. Whenever changes are made to a Dart file, Dart should run in the background to create the Dart artifacts in some hidden way, so that the user doesn't see all the Dart artifacts as is currently the case. Some way of recognizing Dart projects (there's a YAML file as an identifier) and creating NetBeans HTML5 projects from that, i.e., from Dart projects outside the IDE. I think that's all... The official Dart Editor is based on Eclipse and requires a massive download of heaps of Eclipse bundles. Compare that to the NetBeans equivalent, which is a very small "HTML5 and PHP" bundle (60 MB), available here, together with the above small Dart plugin. Plus, when you look at how NetBeans IDE integrates with a bunch of Google-oriented projects, i.e., Chrome, Chrome Developer Tools, and Android (via Cordova), that's a pretty interesting toolbox for anyone using Dart. And bear in mind that ANTLRWorks, Microchip, and heaps of other organizations have built and are building their tools on top of NetBeans!

    Read the article

  • More Fun with C# Iterators and Generators

    - by James Michael Hare
    In my last post, I talked quite a bit about iterators and how they can be really powerful tools for filtering a list of items down to a subset of items.  This had both pros and cons over returning a full collection, which, in summary, were:   Pros: If traversal is only partial, does not have to visit rest of collection. If evaluation method is costly, only incurs that cost on elements visited. Adds little to no garbage collection pressure.    Cons: Very slight performance impact if you know caller will always consume all items in collection. And as we saw in the last post, that con for the cost was very, very small and only really became evident on very tight loops consuming very large lists completely.    One of the key items to note, though, is the garbage!  In the traditional (return a new collection) method, if you have a 1,000,000 element collection, and wish to transform or filter it in some way, you have to allocate space for that copy of the collection.  That is, say you have a collection of 1,000,000 items and you want to double every item in the collection.  Well, that means you have to allocate a collection to hold those 1,000,000 items to return, which is a lot especially if you are just going to use it once and toss it.   Iterators, though, don't have this problem.  Each time you visit the node, it would return the doubled value of the node (in this example) and not allocate a second collection of 1,000,000 doubled items.  Do you see the distinction?  In both cases, we're consuming 1,000,000 items.  But in one case we pass back each doubled item which is just an int (for example's sake) on the stack and in the other case, we allocate a list containing 1,000,000 items which then must be garbage collected.   So iterators in C# are pretty cool, eh?  Well, here's one more thing a C# iterator can do that a traditional "return a new collection" transformation can't!   It can return **unbounded** collections!   I know, I know, that smells a lot like an infinite loop, eh?  Yes and no.  Basically, you're relying on the caller to put the bounds on the list, and as long as the caller doesn't you keep going.  Consider this example:   public static class Fibonacci {     // returns the infinite fibonacci sequence     public static IEnumerable<int> Sequence()     {         int iteration = 0;         int first = 1;         int second = 1;         int current = 0;         while (true)         {             if (iteration++ < 2)             {                 current = 1;             }             else             {                 current = first + second;                 second = first;                 first = current;             }             yield return current;         }     } }   Whoa, you say!  Yes, that's an infinite loop!  What the heck is going on there?  Yes, that was intentional.  Would it be better to have a fibonacci sequence that returns only a specific number of items?  Perhaps, but that wouldn't give you the power to defer the execution to the caller.   The beauty of this function is it is as infinite as the sequence itself!  The fibonacci sequence is unbounded, and so is this method.  It will continue to return fibonacci numbers for as long as you ask for them.  Now that's not something you can do with a traditional method that would return a collection of ints representing each number.  In that case you would eventually run out of memory as you got to higher and higher numbers.  This method, though, never runs out of memory.   
Now, that said, you do have to know when you use it that it is an infinite collection and bound it appropriately.  Fortunately, Linq provides a lot of these extension methods for you!   Let's say you only want the first 10 fibonacci numbers:       foreach(var fib in Fibonacci.Sequence().Take(10))     {         Console.WriteLine(fib);     }   Or let's say you only want the fibonacci numbers that are less than 100:       foreach(var fib in Fibonacci.Sequence().TakeWhile(f => f < 100))     {         Console.WriteLine(fib);     }   So, you see, one of the nice things about iterators is their power to work with virtually any size (even infinite) collections without adding the garbage collection overhead of making new collections.    You can also do fun things like this to make a more "fluent" interface for for loops:   // A set of integer generator extension methods public static class IntExtensions {     // Begins counting to inifity, use To() to range this.     public static IEnumerable<int> Every(this int start)     {         // deliberately avoiding condition because keeps going         // to infinity for as long as values are pulled.         for (var i = start; ; ++i)         {             yield return i;         }     }     // Begins counting to infinity by the given step value, use To() to     public static IEnumerable<int> Every(this int start, int byEvery)     {         // deliberately avoiding condition because keeps going         // to infinity for as long as values are pulled.         for (var i = start; ; i += byEvery)         {             yield return i;         }     }     // Begins counting to inifity, use To() to range this.     public static IEnumerable<int> To(this int start, int end)     {         for (var i = start; i <= end; ++i)         {             yield return i;         }     }     // Ranges the count by specifying the upper range of the count.     public static IEnumerable<int> To(this IEnumerable<int> collection, int end)     {         return collection.TakeWhile(item => item <= end);     } }   Note that there are two versions of each method.  One that starts with an int and one that starts with an IEnumerable<int>.  This is to allow more power in chaining from either an existing collection or from an int.  This lets you do things like:   // count from 1 to 30 foreach(var i in 1.To(30)) {     Console.WriteLine(i); }     // count from 1 to 10 by 2s foreach(var i in 0.Every(2).To(10)) {     Console.WriteLine(i); }     // or, if you want an infinite sequence counting by 5s until something inside breaks you out... foreach(var i in 0.Every(5)) {     if (someCondition)     {         break;     }     ... }     Yes, those are kinda play functions and not particularly useful, but they show some of the power of generators and extension methods to form a fluid interface.   So what do you think?  What are some of your favorite generators and iterators?

    Read the article

  • Developing a Cost Model for Cloud Applications

    - by BuckWoody
    Note - please pay attention to the date of this post. As much as I attempt to make the information below accurate, the nature of distributed computing means that components, units and pricing will change over time. The definitive costs for Microsoft Windows Azure and SQL Azure are located here, and are more accurate than anything you will see in this post: http://www.microsoft.com/windowsazure/offers/  When writing software that is run on a Platform-as-a-Service (PaaS) offering like Windows Azure / SQL Azure, one of the questions you must answer is how much the system will cost. I will not discuss the comparisons between on-premise costs (which are nigh impossible to calculate accurately) versus cloud costs, but instead focus on creating a general model for estimating costs for a given application. You should be aware that there are (at this writing) two billing mechanisms for Windows and SQL Azure: “Pay-as-you-go” or consumption, and “Subscription” or commitment. Conceptually, you can consider the former a pay-as-you-go cell phone plan, where you pay by the unit used (at a slightly higher rate) and the latter as a standard cell phone plan where you commit to a contract and thus pay lower rates. In this post I’ll stick with the pay-as-you-go mechanism for simplicity, which should be the maximum cost you would pay. From there you may be able to get a lower cost if you use the other mechanism. In any case, the model you create should hold. Developing a good cost model is essential. As a developer or architect, you’ll most certainly be asked how much something will cost, and you need to have a reliable way to estimate that. Businesses and Organizations have been used to paying for servers, software licenses, and other infrastructure as an up-front cost, and power, people to the systems and so on as an ongoing (and sometimes not factored) cost. When presented with a new paradigm like distributed computing, they may not understand the true cost/value proposition, and that’s where the architect and developer can guide the conversation to make a choice based on features of the application versus the true costs. The two big buckets of use-types for these applications are customer-based and steady-state. In the customer-based use type, each successful use of the program results in a sale or income for your organization. Perhaps you’ve written an application that provides the spot-price of foo, and your customer pays for the use of that application. In that case, once you’ve estimated your cost for a successful traversal of the application, you can build that into the price you charge the user. It’s a standard restaurant model, where the price of the meal is determined by the cost of making it, plus any profit you can make. In the second use-type, the application will be used by a more-or-less constant number of processes or users and no direct revenue is attached to the system. A typical example is a customer-tracking system used by the employees within your company. In this case, the cost model is often created “in reverse” - meaning that you pilot the application, monitor the use (and costs) and that cost is held steady. This is where the comparison with an on-premise system becomes necessary, even though it is more difficult to estimate those on-premise true costs. For instance, do you know exactly how much cost the air conditioning is because you have a team of system administrators? 
This may sound trivial, but that, along with the insurance for the building, the wiring, and every other part of the system is in fact a cost to the business. There are three primary methods that I’ve been successful with in estimating the cost. None are perfect, all are demand-driven. The general process is to lay out a matrix of: components units cost per unit and then multiply that times the usage of the system, based on which components you use in the program. That sounds a bit simplistic, but using those metrics in a calculation becomes more detailed. In all of the methods that follow, you need to know your application. The components for a PaaS include computing instances, storage, transactions, bandwidth and in the case of SQL Azure, database size. In most cases, architects start with the first model and progress through the other methods to gain accuracy. Simple Estimation The simplest way to calculate costs is to architect the application (even UML or on-paper, no coding involved) and then estimate which of the components you’ll use, and how much of each will be used. Microsoft provides two tools to do this - one is a simple slider-application located here: http://www.microsoft.com/windowsazure/pricing-calculator/  The other is a tool you download to create an “Return on Investment” (ROI) spreadsheet, which has the advantage of leading you through various questions to estimate what you plan to use, located here: https://roianalyst.alinean.com/msft/AutoLogin.do?d=176318219048082115  You can also just create a spreadsheet yourself with a structure like this: Program Element Azure Component Unit of Measure Cost Per Unit Estimated Use of Component Total Cost Per Component Cumulative Cost               Of course, the consideration with this model is that it is difficult to predict a system that is not running or hasn’t even been developed. Which brings us to the next model type. Measure and Project A more accurate model is to actually write the code for the application, using the Software Development Kit (SDK) which can run entirely disconnected from Azure. The code should be instrumented to estimate the use of the application components, logging to a local file on the development system. A series of unit and integration tests should be run, which will create load on the test system. You can use standard development concepts to track this usage, and even use Windows Performance Monitor counters. The best place to start with this method is to use the Windows Azure Diagnostics subsystem in your code, which you can read more about here: http://blogs.msdn.com/b/sumitm/archive/2009/11/18/introducing-windows-azure-diagnostics.aspx This set of API’s greatly simplifies tracking the application, and in fact you can use this information for more than just a cost model. After you have the tracking logs, you can plug the numbers into ay of the tools above, which should give a representative cost or in some cases a unit cost. The consideration with this model is that the SDK fabric is not a one-to-one comparison with performance on the actual Windows Azure fabric. Those differences are usually smaller, but they do need to be considered. Also, you may not be able to accurately predict the load on the system, which might lead to an architectural change, which changes the model. This leads us to the next, most accurate method for a cost model. 
Sample and Estimate Using standard statistical and other predictive math, once the application is deployed you will get a bill each month from Microsoft for your Azure usage. The bill is quite detailed, and you can export the data from it to do analysis, and using methods like regression and so on project out into the future what the costs will be. I normally advise that the architect also extrapolate a unit cost from those metrics as well. This is the information that should be reported back to the executives that pay the bills: the past cost, future projected costs, and unit cost “per click” or “per transaction”, as your case warrants. The challenge here is in the model itself - statistical methods are not foolproof, and the larger the sample (in this case I recommend the entire population, not a smaller sample) is key. References and Tools Articles: http://blogs.msdn.com/b/patrick_butler_monterde/archive/2010/02/10/windows-azure-billing-overview.aspx http://technet.microsoft.com/en-us/magazine/gg213848.aspx http://blog.codingoutloud.com/2011/06/05/azure-faq-how-much-will-it-cost-me-to-run-my-application-on-windows-azure/ http://blogs.msdn.com/b/johnalioto/archive/2010/08/25/10054193.aspx http://geekswithblogs.net/iupdateable/archive/2010/02/08/qampa-how-can-i-calculate-the-tco-and-roi-when.aspx   Other Tools: http://cloud-assessment.com/ http://communities.quest.com/community/cloud_tools
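    A minimal sketch of the Simple Estimation matrix described above, expressed as code rather than a spreadsheet; the component names, rates and usage figures are placeholders, not current Azure prices (use the pricing links above or your actual bill for real numbers):

        using System;
        using System.Collections.Generic;

        class CostLine
        {
            public string AzureComponent { get; set; }   // e.g. compute instance, storage, bandwidth
            public string UnitOfMeasure { get; set; }    // e.g. instance-hours, GB-months, transactions
            public decimal CostPerUnit { get; set; }     // placeholder rate, not a real price
            public decimal EstimatedUnits { get; set; }  // estimated monthly usage
            public decimal Total { get { return CostPerUnit * EstimatedUnits; } }
        }

        class CostModel
        {
            static void Main()
            {
                var lines = new List<CostLine>
                {
                    new CostLine { AzureComponent = "Compute (small instance)", UnitOfMeasure = "instance-hours", CostPerUnit = 0.12m, EstimatedUnits = 720 },
                    new CostLine { AzureComponent = "Storage",                  UnitOfMeasure = "GB-months",      CostPerUnit = 0.15m, EstimatedUnits = 50 },
                    new CostLine { AzureComponent = "Bandwidth (egress)",       UnitOfMeasure = "GB",             CostPerUnit = 0.15m, EstimatedUnits = 100 }
                };

                decimal cumulative = 0;
                foreach (var line in lines)
                {
                    cumulative += line.Total;
                    Console.WriteLine("{0,-28} {1,8:C} x {2,6} = {3,10:C}",
                        line.AzureComponent, line.CostPerUnit, line.EstimatedUnits, line.Total);
                }
                Console.WriteLine("Estimated monthly total: {0:C}", cumulative);

                // Unit cost "per click": divide by the expected number of successful traversals.
                decimal expectedTransactions = 10000;
                Console.WriteLine("Unit cost per transaction: {0:C4}", cumulative / expectedTransactions);
            }
        }

    The same figures drop straight into the spreadsheet columns listed above (Unit of Measure, Cost Per Unit, Estimated Use of Component, Total Cost Per Component, Cumulative Cost).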

    Read the article

  • Using the jQuery UI Library in a MVC 3 Application to Build a Dialog Form

    - by ChrisD
    Using a simulated dialog window is a nice way to handle inline data editing. The jQuery UI has a UI widget for a dialog window that makes it easy to get up and running with it in your application. With the release of ASP.NET MVC 3, Microsoft included the jQuery UI scripts and files in the MVC 3 project templates for Visual Studio. With the release of the MVC 3 Tools Update, Microsoft implemented the inclusion of those with NuGet as packages. That means we can get up and running using the latest version of the jQuery UI with minimal effort. To the code! Another that might interested you about JQuery Mobile and ASP.NET MVC 3 with C#. If you are starting with a new MVC 3 application and have the Tools Update then you are a NuGet update and a <link> and <script> tag away from adding the jQuery UI to your project. If you are using an existing MVC project you can still get the jQuery UI library added to your project via NuGet and then add the link and script tags. Assuming that you have pulled down the latest version (at the time of this publish it was 1.8.13) you can add the following link and script tags to your <head> tag: < link href = "@Url.Content(" ~ / Content / themes / base / jquery . ui . all . css ")" rel = "Stylesheet" type = "text/css" /> < script src = "@Url.Content(" ~ / Scripts / jquery-ui-1 . 8 . 13 . min . js ")" type = "text/javascript" ></ script > The jQuery UI library relies upon the CSS scripts and some image files to handle rendering of its widgets (you can choose a different theme or role your own if you like). Adding these to the stock _Layout.cshtml file results in the following markup: <!DOCTYPE html> < html > < head >     < meta charset = "utf-8" />     < title > @ViewBag.Title </ title >     < link href = "@Url.Content(" ~ / Content / Site . css ")" rel = "stylesheet" type = "text/css" />     <link href="@Url.Content("~/Content/themes/base/jquery.ui.all.css")" rel="Stylesheet" type="text/css" />     <script src="@Url.Content("~/Scripts/jquery-1.5.1.min.js")" type="text/javascript"></script>     <script src="@Url.Content("~/Scripts/modernizr-1.7.min . js ")" type = "text/javascript" ></ script >     < script src = "@Url.Content(" ~ / Scripts / jquery-ui-1 . 8 . 13 . min . js ")" type = "text/javascript" ></ script > </ head > < body >     @RenderBody() </ body > </ html > Our example will involve building a list of notes with an id, title and description. Each note can be edited and new notes can be added. The user will never have to leave the single page of notes to manage the note data. The add and edit forms will be delivered in a jQuery UI dialog widget and the note list content will get reloaded via an AJAX call after each change to the list. To begin, we need to craft a model and a data management class. We will do this so we can simulate data storage and get a feel for the workflow of the user experience. The first class named Note will have properties to represent our data model. namespace Website . Models {     public class Note     {         public int Id { get ; set ; }         public string Title { get ; set ; }         public string Body { get ; set ; }     } } The second class named NoteManager will be used to set up our simulated data storage and provide methods for querying and updating the data. We will take a look at the class content as a whole and then walk through each method after. using System . Collections . ObjectModel ; using System . Linq ; using System . Web ; namespace Website . 
Models {     public class NoteManager     {         public Collection < Note > Notes         {             get             {                 if ( HttpRuntime . Cache [ "Notes" ] == null )                     this . loadInitialData ();                 return ( Collection < Note >) HttpRuntime . Cache [ "Notes" ];             }         }         private void loadInitialData ()         {             var notes = new Collection < Note >();             notes . Add ( new Note                           {                               Id = 1 ,                               Title = "Set DVR for Sunday" ,                               Body = "Don't forget to record Game of Thrones!"                           });             notes . Add ( new Note                           {                               Id = 2 ,                               Title = "Read MVC article" ,                               Body = "Check out the new iwantmymvc.com post"                           });             notes . Add ( new Note                           {                               Id = 3 ,                               Title = "Pick up kid" ,                               Body = "Daughter out of school at 1:30pm on Thursday. Don't forget!"                           });             notes . Add ( new Note                           {                               Id = 4 ,                               Title = "Paint" ,                               Body = "Finish the 2nd coat in the bathroom"                           });             HttpRuntime . Cache [ "Notes" ] = notes ;         }         public Collection < Note > GetAll ()         {             return Notes ;         }         public Note GetById ( int id )         {             return Notes . Where ( i => i . Id == id ). FirstOrDefault ();         }         public int Save ( Note item )         {             if ( item . Id <= 0 )                 return saveAsNew ( item );             var existingNote = Notes . Where ( i => i . Id == item . Id ). FirstOrDefault ();             existingNote . Title = item . Title ;             existingNote . Body = item . Body ;             return existingNote . Id ;         }         private int saveAsNew ( Note item )         {             item . Id = Notes . Count + 1 ;             Notes . Add ( item );             return item . Id ;         }     } } The class has a property named Notes that is read only and handles instantiating a collection of Note objects in the runtime cache if it doesn't exist, and then returns the collection from the cache. This property is there to give us a simulated storage so that we didn't have to add a full blown database (beyond the scope of this post). The private method loadInitialData handles pre-filling the collection of Note objects with some initial data and stuffs them into the cache. Both of these chunks of code would be refactored out with a move to a real means of data storage. The GetAll and GetById methods access our simulated data storage to return all of our notes or a specific note by id. The Save method takes in a Note object, checks to see if it has an Id less than or equal to zero (we assume that an Id that is not greater than zero represents a note that is new) and if so, calls the private method saveAsNew . If the Note item sent in has an Id , the code finds that Note in the simulated storage, updates the Title and Description , and returns the Id value. The saveAsNew method sets the Id , adds it to the simulated storage, and returns the Id value. 
The increment of the Id is simulated here by getting the current count of the note collection and adding 1 to it. The setting of the Id is the only other chunk of code that would be refactored out when moving to a different data storage approach. With our model and data manager code in place we can turn our attention to the controller and views. We can do all of our work in a single controller. If we use a HomeController , we can add an action method named Index that will return our main view. An action method named List will get all of our Note objects from our manager and return a partial view. We will use some jQuery to make an AJAX call to that action method and update our main view with the partial view content returned. Since the jQuery AJAX call will cache the call to the content in Internet Explorer by default (a setting in jQuery), we will decorate the List, Create and Edit action methods with the OutputCache attribute and a duration of 0. This will send the no-cache flag back in the header of the content to the browser and jQuery will pick that up and not cache the AJAX call. The Create action method instantiates a new Note model object and returns a partial view, specifying the NoteForm.cshtml view file and passing in the model. The NoteForm view is used for the add and edit functionality. The Edit action method takes in the Id of the note to be edited, loads the Note model object based on that Id , and does the same return of the partial view as the Create method. The Save method takes in the posted Note object and sends it to the manager to save. It is decorated with the HttpPost attribute to ensure that it will only be available via a POST. It returns a Json object with a property named Success that can be used by the UX to verify everything went well (we won't use that in our example). Both the add and edit actions in the UX will post to the Save action method, allowing us to reduce the amount of unique jQuery we need to write in our view. The contents of the HomeController.cs file: using System . Web . Mvc ; using Website . Models ; namespace Website . Controllers {     public class HomeController : Controller     {         public ActionResult Index ()         {             return View ();         }         [ OutputCache ( Duration = 0 )]         public ActionResult List ()         {             var manager = new NoteManager ();             var model = manager . GetAll ();             return PartialView ( model );         }         [ OutputCache ( Duration = 0 )]         public ActionResult Create ()         {             var model = new Note ();             return PartialView ( "NoteForm" , model );         }         [ OutputCache ( Duration = 0 )]         public ActionResult Edit ( int id )         {             var manager = new NoteManager ();             var model = manager . GetById ( id );             return PartialView ( "NoteForm" , model );         }         [ HttpPost ]         public JsonResult Save ( Note note )         {             var manager = new NoteManager ();             var noteId = manager . Save ( note );             return Json ( new { Success = noteId > 0 });         }     } } The view for the note form, NoteForm.cshtml , looks like so: @model Website . Models . Note @using ( Html . BeginForm ( "Save" , "Home" , FormMethod . Post , new { id = "NoteForm" })) { @Html . Hidden ( "Id" ) < label class = "Title" >     < span > Title < /span><br / >     @Html . TextBox ( "Title" ) < /label> <label class="Body">     <span>Body</ span >< br />     @Html . 
TextArea ( "Body" ) < /label> } It is a strongly typed view for our Note model class. We give the <form> element an id attribute so that we can reference it via jQuery. The <label> and <span> tags give our UX some structure that we can style with some CSS. The List.cshtml view is used to render out a <ul> element with all of our notes. @model IEnumerable < Website . Models . Note > < ul class = "NotesList" >     @foreach ( var note in Model )     {     < li >         @note . Title < br />         @note . Body < br />         < span class = "EditLink ButtonLink" noteid = "@note.Id" > Edit < /span>     </ li >     } < /ul> This view is strongly typed as well. It includes a <span> tag that we will use as an edit button. We add a custom attribute named noteid to the <span> tag that we can use in our jQuery to identify the Id of the note object we want to edit. The view, Index.cshtml , contains a bit of html block structure and all of our jQuery logic code. @ {     ViewBag . Title = "Index" ; } < h2 > Notes < /h2> <div id="NoteListBlock"></ div > < span class = "AddLink ButtonLink" > Add New Note < /span> <div id="NoteDialog" title="" class="Hidden"></ div > < script type = "text/javascript" >     $ ( function () {         $ ( "#NoteDialog" ). dialog ({             autoOpen : false , width : 400 , height : 330 , modal : true ,             buttons : {                 "Save" : function () {                     $ . post ( "/Home/Save" ,                         $ ( "#NoteForm" ). serialize (),                         function () {                             $ ( "#NoteDialog" ). dialog ( "close" );                             LoadList ();                         });                 },                 Cancel : function () { $ ( this ). dialog ( "close" ); }             }         });         $ ( ".EditLink" ). live ( "click" , function () {             var id = $ ( this ). attr ( "noteid" );             $ ( "#NoteDialog" ). html ( "" )                 . dialog ( "option" , "title" , "Edit Note" )                 . load ( "/Home/Edit/" + id , function () { $ ( "#NoteDialog" ). dialog ( "open" ); });         });         $ ( ".AddLink" ). click ( function () {             $ ( "#NoteDialog" ). html ( "" )                 . dialog ( "option" , "title" , "Add Note" )                 . load ( "/Home/Create" , function () { $ ( "#NoteDialog" ). dialog ( "open" ); });         });         LoadList ();     });     function LoadList () {         $ ( "#NoteListBlock" ). load ( "/Home/List" );     } < /script> The <div> tag with the id attribute of "NoteListBlock" is used as a container target for the load of the partial view content of our List action method. It starts out empty and will get loaded with content via jQuery once the DOM is loaded. The <div> tag with the id attribute of "NoteDialog" is the element for our dialog widget. The jQuery UI library will use the title attribute for the text in the dialog widget top header bar. We start out with it empty here and will dynamically change the text via jQuery based on the request to either add or edit a note. This <div> tag is given a CSS class named "Hidden" that will set the display:none style on the element. Since our call to the jQuery UI method to make the element a dialog widget will occur in the jQuery document ready code block, the end user will see the <div> element rendered in their browser as the page renders and then it will hide after that jQuery call. 
Adding the display:hidden to the <div> element via CSS will ensure that it is never rendered until the user triggers the request to open the dialog. The jQuery document load block contains the setup for the dialog node, click event bindings for the edit and add links, and a call to a JavaScript function called LoadList that handles the AJAX call to the List action method. The .dialog() method is called on the "NoteDialog" <div> element and the options are set for the dialog widget. The buttons option defines 2 buttons and their click actions. The first is the "Save" button (the text in quotations is used as the text for the button) that will do an AJAX post to our Save action method and send the serialized form data from the note form (targeted with the id attribute "NoteForm"). Upon completion it will close the dialog widget and call the LoadList to update the UX without a redirect. The "Cancel" button simply closes the dialog widget. The .live() method handles binding a function to the "click" event on all elements with the CSS class named EditLink . We use the .live() method because it will catch and bind our function to elements even as the DOM changes. Since we will be constantly changing the note list as we add and edit we want to ensure that the edit links get wired up with click events. The function for the click event on the edit links gets the noteid attribute and stores it in a local variable. Then it clears out the HTML in the dialog element (to ensure a fresh start), calls the .dialog() method and sets the "title" option (this sets the title attribute value), and then calls the .load() AJAX method to hit our Edit action method and inject the returned content into the "NoteDialog" <div> element. Once the .load() method is complete it opens the dialog widget. The click event binding for the add link is similar to the edit, only we don't need to get the id value and we load the Create action method. This binding is done via the .click() method because it will only be bound on the initial load of the page. The add button will always exist. Finally, we toss in some CSS in the Content/Site.css file to style our form and the add/edit links. . ButtonLink { color : Blue ; cursor : pointer ; } . ButtonLink : hover { text - decoration : underline ; } . Hidden { display : none ; } #NoteForm label { display:block; margin-bottom:6px; } #NoteForm label > span { font-weight:bold; } #NoteForm input[type=text] { width:350px; } #NoteForm textarea { width:350px; height:80px; } With all of our code in place we can do an F5 and see our list of notes: If we click on an edit link we will get the dialog widget with the correct note data loaded: And if we click on the add new note link we will get the dialog widget with the empty form: The end result of our solution tree for our sample:

    Read the article

  • Opening cursor files in a graphics editor?

    - by sdaau
    I'm looking at /usr/share/icons/DMZ-White/cursors, and there is: $ tree -s /usr/share/icons/DMZ-White/ /usr/share/icons/DMZ-White/ +-- [ 4096] cursors ¦   +-- [ 14] 00008160000006810000408080010102 -> v_double_arrow ... ¦   +-- [ 5] 9d800788f1b08800ae810202380a0822 -> hand2 ¦   +-- [ 8] arrow -> left_ptr ¦   +-- [ 15776] bd_double_arrow ¦   +-- [ 15776] bottom_left_corner ¦   +-- [ 15776] bottom_right_corner ¦   +-- [ 15776] bottom_side ... ... a bunch of files without extension, that GIMP cannot open. Is there an editor where these files can be opened - or at least a converter to something like .png? I can note that ImageMagick display also failed to open these files... Found also Gursor Maker - Cursor Editor for X11/GTK+; got the CVS code from SourceForge - it still uses Numeric (the old name of numpy), so to run it, you'll have to do: #from Numeric import * from numpy import * ... in xcurio.py, curxp.py, gimp.py, colorfunc.py - and comment the #from xml.dom.ext.reader import Sax2 in lsproj.py. With that, I got it running 11.04: ... but cannot get any files to open? So I thought I should grep for paths, nothing much came up - and when I looked into cursordefs.py, I simply had to paste this: CURSOR_ICON = gtk.gdk.pixbuf_new_from_xpm_data([ "10 16 3 1", " c None", ". c #000000", "+ c #FFFFFF", ".. ", ".+. ", ".++. ", ".+++. ", ".++++. ", ".+++++. ", ".++++++. ", ".+++++++. ", ".++++++++.", ".+++++....", ".++.++. ", ".+. .++. ", ".. .++. ", " .++. ", " .++. ", " .. "]) Heh :) In any case, doesn't look like it will be much usable on newer Ubuntus, unfortunately... Just tested XMC plugin as well - on 11.04, has to be built from source (from the link in the accepted answer); the requirements on my system resolved to: sudo apt-get install libgimp2.0-dev libglib2.0-0-dbg libglib2.0-0-refdbg libglib2.0-cil-dev libgtk2.0-0-dbg libgtk2.0-cil-dev ... after that, the configure/make procedure in the INSTALL file works. Note that this plugin is a bit "sneaky": ... that is, you should use "All files" (as there are no extensions); cursor previews at first will not be rendered. Then open one cursor file; after it has been opened, then there is a preview in the File/Open dialog; but other than that, it works fine...
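    A quick aside for anyone hitting the same wall: the extension-less files are Xcursor images, which the stock file command will identify; GIMP and ImageMagick don't read the format out of the box, so conversion needs a dedicated tool. The xcur2png name below is an assumption about what is packaged for your release:

        # Confirm what the extension-less files actually are
        file /usr/share/icons/DMZ-White/cursors/left_ptr
        # expected output is something like "X11 cursor"

        # xcursorgen (part of the standard X utilities) only goes the other way, PNG -> cursor.
        # For cursor -> PNG, a converter such as xcur2png can dump each frame, if it is
        # available for your release (assumption):
        # xcur2png /usr/share/icons/DMZ-White/cursors/left_ptr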

    Read the article

  • Visual WebGui's XAML based programming for web developers

    - by Webgui
    While ASP.NET provides an event base approach it is completely dismissed when working with AJAX and the richness of the server is lost and replaced with JavaScript programming and couple with a very high security risk. Visual WebGui reinstates the power of the server to AJAX development and provides a statefull yet scalable, server centric architecture that provides the benefits and user productivity of AJAX with the security and developer productivity we had before AJAX stormed into our lives. "When I first came up with the concept of Visual WebGui , I was frustrated by the fragile and complex nature of developing web applications. The contrast in productivity between working in a fully OOP compiled environment vs. scripting even today, with JQuery, Dojo and such, is still huge. Even today the greatest sponsor of JavaScript programming, Google, is offering a framework to avoid JavaScript using Java that compiles to JavaScript (GWT). So I decided to find a way to abstract the complexity or rather delegate the complex job to enable developers to concentrate on the “What” instead of the “How” and embraced the Form based approach," said Guy Peled the inventor of Visual WebGui. Although traditional OOP development still rules the enterprise, the differences between web sites and web applications have blurred and so did the differences between classic developers and web developers. As a result, we now see declarative languages in desktop / backend development environments (WPF / WF) and we see OOP, gaining more and more power in web development (ASP.NET MVC / ASP.NET DOM). However, what has not changed is enterprise need for security, development ROI, reach, highly responsive and interactive UIs and scalability. The advantages that declarative languages and 'on demand' compilation provide over classic development are mostly the flexibility and a more readable initialize component it offers which is what Gizmox is aspiring to do by replacing the designer initialize component with XAML code. The code in this new project template will be compiled on demand using the build provider mechanism ASP.NET has. This means that the performance hit is only on the first request and after that the performance is the same as a prebuilt solution. This will allow the flexibility of a dynamically updated sites and the power of fully blown enterprise applications over web. You can also use prebuilt features available in ASP.NET to enjoy both worlds in production. VWG XAML implementation (VWG Sites) will be the first truly compliable XAML implementation as Microsoft implemented Silverlight and WPF as a runtime markup interpretation opposed to the ASP.NET markup implementation which is compiled to CLR code once. We have chosen to implement the VWG Sites parser as a different way to create CLR code that provides greater performance over the reflection alternative. VWG Sites will also be the first server side XAML UI engine which, while giving the power of XAML, it will not require any plug-ins or installations on the client side. Short demo video of VWG Sites markup. There is also a live sample available here.

    Read the article

  • Will HTML5 make Silverlight redundant?

    - by Laila
    One of the great features of Adobe AIR v2 that was launched this month was its support for some of the 2008 draft of HTML5. The HTML5 specification was started in 2004, but the full spec will probably not be approved by W3C until around 2022. One might have thought that it would take years yet from now to reach the point where any browsers were remotely HTML5-compliant, but enough of HTML5 is published and agreed to make a lot of it possible, and Safari and Adobe have got there thanks to Apple's open-source WebKit. The race for HTML 5 has been fuelled by the demand by Apple and Google for advanced graphics, typography, animations and transitions without having to rely on third party browser plug-ins such as Adobe Flash or Silverlight. There is good reason for this haste: Flash doesn't support touch-devices and has been slow in supporting hardware video decoders such as H.264. There is a strong requirement to do all that Flash can do in an open-standards way. Those with proprietary solutions remain sniffy. In AIR 2, Adobe pointedly disables the HTML5 <video> and <audio> tags that allow basic playing of media content, saying that the specification is not final and there is still no standard for the supported formats, and adding that Safari implements a 'disjoint set' of codecs. Microsoft also has little interest in HTML 5 as it has so much invested in Silverlight. Google stands to gain from Adobe AIR for Android as it will allow a lot of applications to be migrated easily to the platform, so it sees Apple's war on Flash as a way of gaining market share. Why do we care? It is because HTML5/CSS3 provides facilities far beyond HTML4, bringing the reality of browser-based applications a lot closer. Probably most generally useful is the advanced typography: Safari and AIR both already support a way of reflowing text in a container across an arbitrary number of columns; page-specific fonts can also be specified. Then there is 2D drawing, video, transitions, local storage, AJAX navigation and mutable DOM prototypes. HTML5 is likely to provide base functionality that is required, but it is too early to be certain that it will render Flash, Silverlight or JavaFX obsolete. In the meantime, Adobe AIR provides the best vehicle for developing HTML5/CSS3 applications without a twinge of worry about browser incompatibilities. Cheers, Laila

    Read the article

  • What shall I include in a 10 week web technologies course?

    - by Iain
    In September I will be teaching a university module on web technologies. This session will be available to 1st year (freshman) students who don't necessarily have any programming knowledge or know how the web works. In the 2nd semester I will be teaching Flash, which is my specialism, so I know exactly what I am going to teach, but in the 1st semester I will be teaching them web standards technologies - HTML, CSS, JS, jQuery, PHP and MySQL. Where I need advice is how to proportion the emphasis for each part, and which parts of each technology to cover. Another real issue I'm struggling with is how much of the bad old ways should I teach them? Do they need to know about bold as well as strong, etc. UPDATE: based, on your feedback I will only be teaching the latest version of everything - CSS3, HTML5 etc. I'm not sure exactly how long the semester will be but I'm guessing about 10-12 weeks. Each session is a 2 hour lab. Obviously there's only so much I can cover in that time and it will be up to the students to go a research this stuff properly on W3 schools etc. My ideas so far were: Lesson 0 - Course intro and overview of the current tech landscape. What is out there, what will we be learning, what won't we. What is a web server, URL etc. Looking at different example websites and discussing how they work. Lesson 1 - HTML basics (head, body, title, img, table, a, lists, h1, strong etc) Lesson 2 - CSS for styling and layout - fonts, webfonts, float etc Lesson 3 - Intro to programming JS (variables, loops, conditionals, functions) Lesson 4 - more JS programming fundamentals, DOM manipulation Lesson 5 - jQuery - making things fly about and look cool Lesson 6 - XML and Ajax Lesson 7 - PHP basics - syntax, server-side principles Lesson 8 - PHP and MySQL - forms, logins, saving user info Lesson 9 - don't know Lesson 10 - don't know Please let me know if you think this is the right order, what have I missed, how to use any spare sessions etc. Thanks :) UPDATE BASED ON RESPONSES: Thanks for all your responses - some great stuff. To be absolutely clear, this is not a computer science course, it is a practical module on a creative technology course. The emphasis definitely has to be on making cool things work rather than understanding how the backbone of the internet works. That can come later, if the students are interested. At the end of the module I would like the students to be able to produce a web page or pages that does something cool, using some or all of the technologies I cover. Many of these topics are of course far beyond the scope of a 2 hour session, however I do not have the option of reducing the syllabus, I will just have to explain what the technology does and encourage the student to research it in their own time.

    Read the article

  • Is this a pattern? Should it be?

    - by Arkadiy
    The following is more of a statement than a question - it describes something that may be a pattern. The question is: is this a known pattern? Or, if it's not, should it be? I've had a situation where I had to iterate over two dissimilar multi-layer data structures and copy information from one to the other. Depending on particular use case, I had around eight different kinds of layers, combined in about eight different combinations: A-B-C B-C A-C D-E A-D-E and so on After a few unsuccessful attempts to factor out the repetition of per-layer iteration code, I realized that the key difficulty in this refactoring was the fact that the bottom level needed access to data gathered at higher levels. To explicitly accommodate this requirement, I introduced IterationContext class with a number of get() and set() methods for accumulating the necessary information. In the end, I had the following class structure: class Iterator { virtual void iterateOver(const Structure &dataStructure1, IterationContext &ctx) const = 0; }; class RecursingIterator : public Iterator { RecursingIterator(const Iterator &below); }; class IterateOverA : public RecursingIterator { virtual void iterateOver(const Structure &dataStructure1, IterationContext &ctx) const { // Iterate over members in dataStructure1 // locate corresponding item in dataStructure2 (passed via context) // and set it in the context // invoke the sub-iterator }; class IterateOverB : public RecursingIterator { virtual void iterateOver(const Structure &dataStructure1, IterationContext &ctx) const { // iterate over members dataStructure2 (form context) // set dataStructure2's item in the context // locate corresponding item in dataStructure2 (passed via context) // invoke the sub-iterator }; void main() { class FinalCopy : public Iterator { virtual void iterateOver(const Structure &dataStructure1, IterationContext &ctx) const { // copy data from structure 1 to structure 2 in the context, // using some data from higher levels as needed } } IterationContext ctx(dateStructure2); IterateOverA(IterateOverB(FinalCopy())).iterate(dataStructure1, ctx); } It so happens that dataStructure1 is a uniform data structure, similar to XML DOM in that respect, while dataStructure2 is a legacy data structure made of various structs and arrays. This allows me to pass dataStructure1 outside of the context for convenience. In general, either side of the iteration or both sides may be passed via context, as convenient. The key situation points are: complicated code that needs to be invoked in "layers", with multiple combinations of layer types possible at the bottom layer, the information from top layers needs to be visible. The key implementation points are: use of context class to access the data from all levels of iteration complicated iteration code encapsulated in implementation of pure virtual function two interfaces - one aware of underlying iterator, one not aware of it. use of const & to simplify the usage syntax.
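    For readers trying to picture the shape of this, here is a stripped-down, compilable sketch of the same idea; Structure, IterationContext and the transform in FinalCopy are invented placeholders rather than the poster's real types:

        #include <iostream>
        #include <vector>

        struct Structure { std::vector<int> items; };      // plays the role of dataStructure1

        struct IterationContext {                           // accumulates data gathered at higher levels
            IterationContext(std::vector<int> &out) : currentA(0), output(&out) {}
            int currentA;
            std::vector<int> *output;                       // plays the role of dataStructure2
        };

        class Iterator {
        public:
            virtual ~Iterator() {}
            virtual void iterateOver(const Structure &s, IterationContext &ctx) const = 0;
        };

        class RecursingIterator : public Iterator {
        public:
            explicit RecursingIterator(const Iterator &below) : below_(below) {}
        protected:
            const Iterator &below_;                         // the layer underneath this one
        };

        // "Layer A": record what this level saw in the context, then hand off downwards.
        class IterateOverA : public RecursingIterator {
        public:
            explicit IterateOverA(const Iterator &below) : RecursingIterator(below) {}
            virtual void iterateOver(const Structure &s, IterationContext &ctx) const {
                for (size_t i = 0; i < s.items.size(); ++i) {
                    ctx.currentA = s.items[i];
                    below_.iterateOver(s, ctx);
                }
            }
        };

        // Bottom layer: only here does the copy happen, using data set by the levels above.
        class FinalCopy : public Iterator {
        public:
            virtual void iterateOver(const Structure &, IterationContext &ctx) const {
                ctx.output->push_back(ctx.currentA * 10);   // arbitrary transform for the demo
            }
        };

        int main() {
            Structure src;
            src.items.push_back(1);
            src.items.push_back(2);
            src.items.push_back(3);

            std::vector<int> dst;
            IterationContext ctx(dst);
            IterateOverA(FinalCopy()).iterateOver(src, ctx);  // dst now holds 10 20 30

            for (size_t i = 0; i < dst.size(); ++i)
                std::cout << dst[i] << ' ';
            std::cout << std::endl;
            return 0;
        }

    Stacking another layer is then just IterateOverA(IterateOverB(FinalCopy())), with each level reading whatever its parents stored in the context.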

    Read the article

  • Which httpd.conf and php.ini files does plesk use

    - by Saif Bechan
    If I disable the include_path in /etc/php.ini, I can still see that there is an include path loaded from somewhere. I want to know where this file is. If I use phpinfo(); on my domain I see that there is an include path /usr/share/pear:/usr/share/php, even though in the php.ini I used: ;inlucde_path=".:" In the section for additional php.ini files this is the list: /etc/php.d/curl.ini, /etc/php.d/dbase.ini, /etc/php.d/dom.ini, /etc/php.d/gd.ini, /etc/php.d/imap.ini, /etc/php.d/json.ini, /etc/php.d/mbstring.ini, /etc/php.d/mysql.ini, /etc/php.d/mysqli.ini, /etc/php.d/pdo.ini, /etc/php.d/pdo_mysql.ini, /etc/php.d/pdo_sqlite.ini, /etc/php.d/wddx.ini, /etc/php.d/xmlreader.ini, /etc/php.d/xmlwriter.ini, /etc/php.d/xsl.ini, /etc/php.d/zip.ini I have checked all these files but I can find no reference to pear anywhere. That said, the /etc/httpd/conf/httpd.conf file is puzzling to me too. If I check this file I can see the following values are defined: DocumentRoot "/var/www/html" <Directory /> Order Deny,Allow Deny from all Options None AllowOverride None </Directory> <Directory "/var/www/html"> Options None AllowOverride None Order allow,deny Allow from all </Directory> The strange thing is that I don't even use this directory, so where does Plesk get its httpd.conf file from? I want to check if everything is OK, because my vhost.conf is not working. I don't know if I should change these values or not, or where I can find the values Plesk uses. I hope someone can help me with this.
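    One note that may explain part of the mystery: when include_path is commented out of every ini file, PHP falls back to its compile-time default, which on stock RHEL/CentOS builds typically includes /usr/share/pear:/usr/share/php — so the value may not be coming from any file at all. A few stock commands to see which files are actually read (paths are examples):

        # Ask PHP itself which ini files it parses (the CLI is shown here; the Apache
        # module reports the same "Loaded Configuration File" fields in phpinfo())
        php -i | grep -i "loaded configuration file"
        php -i | grep -i "additional .ini files"

        # Look for anything overriding include_path in the places Apache/Plesk read
        grep -ri "include_path" /etc/php.ini /etc/php.d/ /etc/httpd/conf/ /etc/httpd/conf.d/ 2>/dev/null

        # On many Plesk versions (an assumption about your layout) the per-domain Apache
        # config it generates lives next to your vhost.conf, e.g.:
        ls /var/www/vhosts/example.com/conf/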

    Read the article

  • mdadm cron job sends email that cron has run

    - by Andrew
    I've got an Ubuntu 8.04 server using mdadm to create several RAID1 arrays. I created /etc/cron.hourly/mdadm as follows: #! /bin/sh set -e mdadm --monitor /dev/md0 /dev/md3 /dev/md4 --oneshot (Yes, the array numbers are not sequential, and I'm not using --scan because I have a degraded array that may or may not have been used as swap and that I can't delete, but I think that's a separate issue. If it's the underlying cause of this, I need to fix it.) mdadm sends me email (configured in /etc/mdadm/mdadm.conf) on DegradedArray etc. events. This is the desired behaviour. What is not desired, and what I can't work out, is why cron is sending me (relatively pointless) emails, via an alias in /etc/aliases: From: root@<hostname> (Cron Daemon) To: root@<hostname> Subject: Cron <root@<hostname>> cd / && run-parts --report /etc/cron.hourly Content-Type: text/plain; charset=ANSI_X3.4-1968 X-Cron-Env: <SHELL=/bin/sh> X-Cron-Env: <PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin> X-Cron-Env: <HOME=/root> X-Cron-Env: <LOGNAME=root> Message-Id: <id@hostname> Date: Fri, 7 May 2010 13:17:01 +0930 (CST) /etc/cron.hourly/mdadm: mdadm: Monitor using email address "<root_alias@domain>" from config file I've got a dozen other servers behaving correctly (mdadm sends email, cron doesn't) with identical /etc/crontab files: # /etc/crontab: system-wide crontab # <snip comments> SHELL=/bin/sh PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin # m h dom mon dow user command 17 * * * * root cd / && run-parts --report /etc/cron.hourly <snip anacron jobs> Should I simply remove the --report, or is there something else in my cron config somewhere causing this?
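    For what it's worth, the extra mail usually just means the cron job produced output: cron mails anything a job prints, and the "Monitor using email address" line is mdadm's informational start-up message rather than an alert. A hedged sketch of a quieter version of the script, assuming you still want mdadm's own DegradedArray mails:

        #! /bin/sh
        set -e
        # Silence mdadm's routine start-up chatter so cron has nothing to mail;
        # real events are still reported via MAILADDR in /etc/mdadm/mdadm.conf.
        # If the message turns out to arrive on stderr, add 2>&1 as well, at the
        # cost of also hiding genuine error output from cron.
        mdadm --monitor /dev/md0 /dev/md3 /dev/md4 --oneshot > /dev/null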

    Read the article

  • Apache2 Modpython : IOError: Write failed, client closed connection.

    - by llazzaro
    This is the error : [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] mod_python (pid=9528, interpreter='realpage.com', phase='PythonHandler', handler='django.core.handlers.modpython'): Application error [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] ServerName: 'realpage.dom' [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] DocumentRoot: '/htdocs' [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] URI: '/' [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] Location: '/' [Mon Mar 01 12:19:50 2010] [error] [client XXX.XX.248.60] Directory: None [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] Filename: '/htdocs' [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] PathInfo: '/' [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] Traceback (most recent call last): [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1537, in HandlerDispatch\n default=default_handler, arg=req, silent=hlist.silent) [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1229, in _process_target\n result = _execute_target(config, req, object, arg) [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1128, in _execute_target\n result = object(arg) [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] File "/usr/lib/python2.5/site-packages/django/core/handlers/modpython.py", line 228, in handler\n return ModPythonHandler()(req) [Mon Mar 01 12:19:50 2010] [error] [client XXX.XXX.248.60] File "/usr/lib/python2.5/site-packages/django/core/handlers/modpython.py", line 220, in call\n req.write(chunk) [Mon Mar 01 12:19:50 2010] [error] [client XXX.XX.248.60] IOError: Write failed, client closed connection. Please! I am sure you need more information in order to find the bug, please tell me what and how to get it. The error is throwing every time!

    Read the article

  • Apache Won't Restart After Compiling PHP with Postgres

    - by gonzofish
    I've compiled PHP (v5.3.1) with Postgres using the following configure: ./configure \ --build=x86_64-redhat-linux-gnu \ --host=x86_64-redhat-linux-gnu \ --target=x86_64-redhat-linux-gnu \ --program-prefix= \ --prefix=/usr/ \ --exec-prefix=/usr/ \ --bindir=/usr/bin/ \ --sbindir=/usr/sbin/ \ --sysconfdir=/etc \ --datadir=/usr/share \ --includedir=/usr/include/ \ --libdir=/usr/lib64 \ --libexecdir=/usr/libexec \ --localstatedir=/var \ --sharedstatedir=/usr/com \ --mandir=/usr/share/man \ --infodir=/usr/share/info \ --cache-file=../config.cache \ --with-libdir=lib64 \ --with-config-file-path=/etc \ --with-config-file-scan-dir=/etc/php.d \ --with-pic \ --disable-rpath \ --with-pear \ --with-pic \ --with-bz2 \ --with-exec-dir=/usr/bin \ --with-freetype-dir=/usr \ --with-png-dir=/usr \ --with-xpm-dir=/usr \ --enable-gd-native-ttf \ --with-t1lib=/usr \ --without-gdbm \ --with-gettext \ --without-gmp \ --with-iconv \ --with-jpeg-dir=/usr \ --with-openssl \ --with-zlib \ --with-layout=GNU \ --enable-exif \ --enable-ftp \ --enable-magic-quotes \ --enable-sockets \ --enable-sysvsem \ --enable-sysvshm \ --enable-sysvmsg \ --with-kerberos \ --enable-ucd-snmp-hack \ --enable-shmop \ --enable-calendar \ --with-libxml-dir=/usr \ --enable-xml \ --with-system-tzdata \ --with-mime-magic=/usr/share/file/magic \ --with-apxs2=/usr/sbin/apxs \ --with-mysql=/usr/include/mysql \ --without-gd \ --with-dom=/usr/include/libxml2/libxml \ --disable-dba \ --without-unixODBC \ --disable-pdo \ --enable-xmlreader \ --enable-xmlwriter \ --without-sqlite \ --without-sqlite3 \ --disable-phar \ --enable-fileinfo \ --enable-json \ --without-pspell \ --disable-wddx \ --with-curl=/usr/include/curl \ --enable-posix \ --with-mcrypt \ --enable-mbstring \ --with-pgsql=/mnt/mv/pgsql I'm using Postgres 8.4.0 and Apache 2.2.8; I have the following line in my Apache conf file: LoadModule php5_module /usr/lib64/httpd/modules/libphp5.so And when I attempt to restart Apache, I get the following error message: Starting httpd: httpd: Syntax error on line 205 of /etc/httpd/conf/httpd.conf: Cannot load /usr/lib64/httpd/modules/libphp5.so into server: /usr/lib64/httpd/modules/libphp5.so: undefined symbol: lo_import_with_oid Now, I know that this is a problem with Postgres with PHP because lo_import_with_oid is a function in the Postgres source which allows the importing of large objects; also, if I remove the --with-pgsql option, PHP and Apache get along great. I've scoured the Internet looking for answers all day, but to no avail. Does anyone have ANY insight into what is causing my problems.
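    For context, lo_import_with_oid first appeared in the libpq shipped with PostgreSQL 8.4, so this undefined-symbol error usually means Apache is resolving an older system libpq at runtime than the 8.4 headers the module was compiled against. Some hedged checks; the library paths below are guesses based on the configure line above:

        # Which libpq does the freshly built module resolve at runtime?
        ldd /usr/lib64/httpd/modules/libphp5.so | grep libpq

        # Does that copy of libpq actually export the missing symbol?
        nm -D /usr/lib64/libpq.so* 2>/dev/null | grep lo_import_with_oid
        nm -D /mnt/mv/pgsql/lib/libpq.so* 2>/dev/null | grep lo_import_with_oid

        # If the 8.4 libpq lives under the custom prefix, tell the dynamic linker about
        # it so it wins over the older distro copy (the path here is an assumption):
        echo "/mnt/mv/pgsql/lib" > /etc/ld.so.conf.d/pgsql84.conf
        ldconfig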

    Read the article

  • Apache + mod_fcgid + perl = error 500

    - by f-aminov
    Hi guys! I'm trying to set up Apache 2.2 with mod_fcgid and libapache2-mod-perl2, with no luck. I've created an fcgi-bin directory in the root directory of my website and put a test.fcgi file there with the following content:

    #!/usr/bin/perl
    use CGI;
    print "This is test.fcgi!\n";

    While trying to access it via http://www.website.dom/fcgi-bin/test.fcgi I get error 500 (Internal Server Error). Here is my vhost config:

    <VirtualHost 95.131.29.226:8080>
    ServerName website.com
    DocumentRoot /var/www/data/website.com
    SuexecUserGroup user group
    ServerAlias www.website.com
    AddType application/x-httpd-php .php .php3 .php4 .php5 .phtml
    <Directory "/var/www/data/website.com/fcgi-bin/">
    Options +ExecCGI
    Allow from all
    Order allow,deny
    AddHandler fcgid-script .fcgi
    </Directory>
    </VirtualHost>

    fcgid.conf:

    <IfModule mod_fcgid.c>
    AddHandler fcgid-script .fcgi
    SocketPath /var/lib/apache2/fcgid/sock
    IdleTimeout 3600
    ProcessLifeTime 7200
    MaxProcessCount 8
    DefaultMaxClassProcessCount 2
    IPCConnectTimeout 8
    IPCCommTimeout 60
    </IfModule>

    SuExec log:

    [2010-04-06 03:02:47]: uid: (500/equ) gid: (502/equ) cmd: test.fcgi

    Apache error log:

    test! test!
    [Tue Apr 06 03:02:51 2010] [notice] mod_fcgid: process /var/www/data/website.com/fcgi-bin/test.fcgi(26267) exit(communication error), terminated by calling exit(), return code: 0
    [Tue Apr 06 03:02:53 2010] [notice] mod_fcgid: process /var/www/data/website.com/fcgi-bin/test.fcgi(26261) exit(server exited), terminated by calling exit(), return code: 0

    I have no clue why I'm getting error 500, but when I run the file from the console ($ perl /var/www/data/website.com/fcgi-bin/test.fcgi) everything works fine without any errors. Any suggestions on how to solve this problem would be greatly appreciated. Thank you!
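
    One quick check, as a hedged sketch (the sudo user below is assumed to be the SuexecUserGroup user from the vhost): mod_fcgid treats the script's output as a CGI response, so it must begin with a header block and a blank line before any body; running the script by hand and inspecting its first lines shows whether the 500 is simply a malformed-headers problem.

        # Run the script the way the server would and look at the first lines it prints
        sudo -u user perl /var/www/data/website.com/fcgi-bin/test.fcgi | head -n 3

        # A valid response begins with something like:
        #   Content-type: text/plain
        #   (blank line)
        #   This is test.fcgi!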

    Read the article

  • "Unable to initialize module" fileinfo php-pecl-Fileinfo.x86_64

    - by Myers Network
    I have a brand new server that I am trying to get set up. This is a 64-bit machine on which I cannot install "fileinfo" or "memcache". I have uninstalled these and reinstalled them using yum and pecl with no luck. Yum installs fine with no errors, but then I get errors when running PHP. pecl, from what I can tell, is only installing 32-bit and does not put anything in the lib64 directory. Here is my output from php -v:

    PHP Warning: PHP Startup: fileinfo: Unable to initialize module
    Module compiled with module API=20050922, debug=0, thread-safety=0
    PHP compiled with module API=20060613, debug=0, thread-safety=0
    These options need to match in Unknown on line 0
    PHP Warning: PHP Startup: memcache: Unable to initialize module
    Module compiled with module API=20050922, debug=0, thread-safety=0
    PHP compiled with module API=20060613, debug=0, thread-safety=0
    These options need to match in Unknown on line 0
    PHP 5.2.14 (cli) (built: Aug 12 2010 16:03:48)
    Copyright (c) 1997-2010 The PHP Group
    Zend Engine v2.2.0, Copyright (c) 1998-2010 Zend Technologies

    Here is some other system info in case you need it.

    uname: Linux server.actham.us 2.6.18-194.26.1.el5 #1 SMP Tue Nov 9 12:54:20 EST 2010 x86_64 x86_64 x86_64 GNU/Linux

    php -m:

    PHP Warning: PHP Startup: fileinfo: Unable to initialize module
    Module compiled with module API=20050922, debug=0, thread-safety=0
    PHP compiled with module API=20060613, debug=0, thread-safety=0
    These options need to match in Unknown on line 0
    PHP Warning: PHP Startup: memcache: Unable to initialize module
    Module compiled with module API=20050922, debug=0, thread-safety=0
    PHP compiled with module API=20060613, debug=0, thread-safety=0
    These options need to match in Unknown on line 0
    [PHP Modules] bz2 calendar ctype curl date dbase dom exif filter ftp gd gettext gmp hash iconv imap json ldap libxml mbstring mcrypt mysql mysqli openssl pcntl pcre PDO pdo_mysql pdo_sqlite readline Reflection session shmop SimpleXML sockets SPL standard tokenizer wddx xml xmlreader xmlrpc xmlwriter xsl zip zlib
    [Zend Modules]

    Any help would be greatly appreciated, thanks.
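
    A hedged sketch of one way out of the API mismatch (package and path names below are assumptions based on a stock CentOS 5 layout): API=20050922 is the PHP 5.1 module ABI, so the prebuilt .so files were compiled for a different PHP than the 5.2.14 that is running; rebuilding the extensions with the phpize and php-config of the running PHP produces 64-bit modules with the matching API.

        # Rebuild fileinfo against the running PHP; the same recipe applies to memcache
        pecl download Fileinfo
        tar xzf Fileinfo-*.tgz && cd Fileinfo-*/
        /usr/bin/phpize                                   # must be the phpize shipped with PHP 5.2.14
        ./configure --with-php-config=/usr/bin/php-config
        make && make install                              # installs into the extension_dir of the running PHP
        php -v                                            # the "Unable to initialize module" warning should be gone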

    Read the article

  • PHP - Centos OpenSSL error

    - by mabbs
    I'm currently having a problem with OpenSSL on my CentOS 6.5 server. It ran perfectly fine until Sunday. I checked the error_log and saw this error:

    PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/openssl.so' - /usr/lib64/php/modules/openssl.so: cannot open shared object file: No such file or directory in Unknown on line 0

    I tried phpinfo(); and found that openssl is enabled. I tried php -m and it returned:

    [PHP Modules] bz2 calendar Core ctype curl date dom ereg exif fileinfo filter ftp gd gettext gmp hash iconv interbase json libxml mbstring mcrypt memcache mysql mysqli openssl pcntl pcre PDO PDO_Firebird pdo_mysql pdo_sqlite Phar pspell readline Reflection session shmop SimpleXML snmp sockets SPL sqlite3 standard tokenizer wddx xml xmlreader xmlrpc xmlwriter xsl zip zlib

    UPDATE: this is what I got from rpm -qa | grep php, just like what Mike suggested:

    php-php-gettext-1.0.11-3.el6.noarch
    php-mcrypt-5.3.3-3.el6.x86_64
    php-interbase-5.3.3-3.el6.x86_64
    php-pdo-5.3.3-27.el6_5.1.x86_64
    php-5.3.3-27.el6_5.1.x86_64
    php-mysql-5.3.3-27.el6_5.1.x86_64
    php-snmp-5.3.3-27.el6_5.1.x86_64
    php-gd-5.3.3-27.el6_5.1.x86_64
    php-xml-5.3.3-27.el6_5.1.x86_64
    php-pear-1.9.4-4.el6.noarch
    php-pecl-memcache-3.0.5-4.el6.x86_64
    phpMyAdmin-3.5.8.2-1.el6.noarch
    php-common-5.3.3-27.el6_5.1.x86_64
    php-cli-5.3.3-27.el6_5.1.x86_64
    php-devel-5.3.3-27.el6_5.1.x86_64
    php-mbstring-5.3.3-27.el6_5.1.x86_64
    php-xmlrpc-5.3.3-27.el6_5.1.x86_64
    php-pspell-5.3.3-27.el6_5.1.x86_64
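
    A rough diagnostic sketch (the paths are taken from the warning above; nothing else is from the post): since php -m still lists openssl while the error_log complains about a missing openssl.so, it helps to find which .ini file is asking for that shared object and whether any installed package is actually supposed to provide it.

        ls -l /usr/lib64/php/modules/openssl.so       # does the file exist at all?
        grep -rn openssl /etc/php.ini /etc/php.d/     # which config file tries to load it?
        yum provides '*/php/modules/openssl.so'       # is any package supposed to own that path?
        rpm -V php php-common                         # verify nothing from the stock packages has gone missing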

    Read the article

  • "Unable to initialize module" fileinfo php-pecl-Fileinfo.x86_64

    - by Myers Network
    I have a brand new server server that I am trying to get setup up. This is a 64 bit machine that I can not install "fileinfo" or "memcache". I have uninstalled these and reinstalled them using yum and pecl with no luck. Yum install fine "no error" but then get error when running php. pecl from what I can tell is only installing 32bit. Does not put anything in the lib64 directory. Here is my output from php -v: PHP Warning: PHP Startup: fileinfo: Unable to initialize module Module compiled with module API=20050922, debug=0, thread-safety=0 PHP compiled with module API=20060613, debug=0, thread-safety=0 These options need to match in Unknown on line 0 PHP Warning: PHP Startup: memcache: Unable to initialize module Module compiled with module API=20050922, debug=0, thread-safety=0 PHP compiled with module API=20060613, debug=0, thread-safety=0 These options need to match in Unknown on line 0 PHP 5.2.14 (cli) (built: Aug 12 2010 16:03:48) Copyright (c) 1997-2010 The PHP Group Zend Engine v2.2.0, Copyright (c) 1998-2010 Zend Technologies Here is some other system info incase you need it uname: Linux server.actham.us 2.6.18-194.26.1.el5 #1 SMP Tue Nov 9 12:54:20 EST 2010 x86_64 x86_64 x86_64 GNU/Linux php -m: PHP Warning: PHP Startup: fileinfo: Unable to initialize module Module compiled with module API=20050922, debug=0, thread-safety=0 PHP compiled with module API=20060613, debug=0, thread-safety=0 These options need to match in Unknown on line 0 PHP Warning: PHP Startup: memcache: Unable to initialize module Module compiled with module API=20050922, debug=0, thread-safety=0 PHP compiled with module API=20060613, debug=0, thread-safety=0 These options need to match in Unknown on line 0 [PHP Modules] bz2 calendar ctype curl date dbase dom exif filter ftp gd gettext gmp hash iconv imap json ldap libxml mbstring mcrypt mysql mysqli openssl pcntl pcre PDO pdo_mysql pdo_sqlite readline Reflection session shmop SimpleXML sockets SPL standard tokenizer wddx xml xmlreader xmlrpc xmlwriter xsl zip zlib [Zend Modules] Any help would be greatly appreciated, thanks....

    Read the article

  • How to convert a raw disk image to a copy-on-write image based on another image for use with kvm and

    - by Jean-Paul Calderone
    I have a virtual Windows machine running on kvm. Presently it has a 90GB raw disk image. I would like to clone this VM without having to keep two copies of the 90GB raw disk image around. It seems like a good approach for doing this is to make two new qcow or qcow2 images based on the original.

    First I converted the raw image to a qcow2 image:

    qemu-img convert -O qcow2 basewindowsxp.img basewindowsxp.qcow2

    Then I tried creating a new image backed by this:

    qemu-img create -F qcow2 -f qcow2 -b `pwd`/basewindowsxp.qcow2 windowsxp-1.qcow2

    Then I used virt-manager to point the original VM at windowsxp-1.qcow2. However, when I try to start up the VM in this new configuration, virt-manager reports an error:

    Traceback (most recent call last):
      File "/usr/share/virt-manager/virtManager/engine.py", line 588, in run_domain vm.startup()
      File "/usr/share/virt-manager/virtManager/domain.py", line 150, in startup self._backend.create()
      File "/usr/lib/python2.6/dist-packages/libvirt.py", line 300, in create if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
    libvirtError: internal error unable to start guest: qemu: could not open disk image /var/lib/libvirt/images/windowsxp-1.qcow2

    The error suggests that the filename was misspecified or that the filesystem permissions are too restrictive, but neither of these is the case:

    $ ls -l /var/lib/libvirt/images/windowsxp-1.qcow2
    -rwxrwxrwx 1 root root 262144 2010-05-27 08:32 /var/lib/libvirt/images/windowsxp-1.qcow2

    Why won't virt-manager start this vm?
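
    A hedged sketch of how the overlay could be recreated and sanity-checked (same file names as above; the backing_fmt option and the AppArmor check are assumptions that depend on the qemu version and on the host using AppArmor, as Ubuntu does): the overlay must record a backing file path that qemu can resolve when libvirt starts the guest, and security policy is a common reason for "could not open disk image" even when the Unix permissions look fine.

        # Create the overlay next to the base image, recording the backing format explicitly
        cd /var/lib/libvirt/images
        qemu-img create -f qcow2 -o backing_file=basewindowsxp.qcow2,backing_fmt=qcow2 windowsxp-1.qcow2

        # Confirm the overlay points at a readable backing file
        qemu-img info windowsxp-1.qcow2

        # If the host uses AppArmor, denials against the image files show up in the kernel log
        dmesg | grep -i apparmor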

    Read the article

  • Should I partition my main table with 2 millions rows?

    - by domribaut
    Hi, I am a developer and need some DBA advice. We are starting to get performance problems with a MSSQL 2005 database. The visible effect of the incidents is mainly high CPU on the server, but operations reported that it was also draining resources from the SAN (not always). The main source of the issues is almost certainly in some application, but I am wondering if we should partition some of the main tables anyway in order to relieve the I/O pressure.

    The database is about 60GB in one file. The main table (Order) has 2.1 million rows and 215 columns (but none is huge). We have an integer as the PK, so it should be OK to define a partition function. Will we gain anything with partitioning? Will partitioned indexes buy us anything?

    Here are some more facts about the DB and the table:

    database_name  database_size  unallocated space
    My_base        57173.06 MB    79.74 MB

    reserved        data            index_size     unused
    29 444 808 KB   26 577 320 KB   2 845 232 KB   22 256 KB

    name   rows       reserved       data           index_size     unused
    Order  2 097 626  4 403 832 KB   2 756 064 KB   1 646 080 KB   1688 KB

    Thanks for any advice,
    Dom

    Read the article

  • IRC server connecting to another server

    - by Oxinabox
    I'm setting up an IRC server using IRC-Hybrid. I want my server to connect to another server, so that people on my server can connect to channels on that other server. I know this can be done; the GIMP IRC network, for example, is the same as the GNOME IRC network. My ircd.conf contains the following:

    connect {
    name = "aabstractname";
    host = "128.64.2.1;
    send_password = "somepass";
    accept_password = "somepass";
    encrypted = no;
    port = 6667;
    class = "server";
    autoconn = yes;
    compressed = yes;
    fakename = "irc.sd.dom.asn.au";
    };

    So when I run /etc/init.d/ircd-hybrid restart, it should connect to 128.64.2.1, but the log on 128.64.2.1 doesn't show anything. Do I need an entry on the host 128.64.2.1? I can't find any documentation for ircd.conf; I'd really like that documentation so I can check that all my settings are right.

    Read the article

  • Firefox window disappears

    - by Lord Torgamus
    Now this is odd. At some point in the last half hour, my Firefox window disappeared. I didn't notice, as I was working in another program at the time. No Firefox icon shows up with Alt-Tab, and no Firefox listing shows up under the Applications tab in the task manager. There is a Firefox entry under the Processes tab. Normally, I probably wouldn't have noticed, just opened Firefox up again, but I'm listening to an Internet radio station and the stream never stopped. When I did open a new Firefox window, it showed up in the Task Manager's applications tab. I'm running Windows XP, and my Firefox has the add-ons Adblock Plus, BetterPrivacy, Cert Viewer Plus, DOM Inspector, Firebug, Greasemonkey, Java Quick Starter, Live HTTP headers, Microsoft .NET Framework Assistant, NoScript, WebDeveloper and XPather. The radio station is Slacker; it's never given me any trouble before, and I've been using it for months. I don't think there was anything unusual in my open tabs; just a few static pages at non-sketchy sites like Java APIs, plus GMail and the aforementioned Slacker. Googling brought up a handful of similar-but-not-quite-the-same errors, none of which had useful resolutions. Does anyone know how to bring that window back and/or prevent this from happening again?

    Read the article

  • Weekly cron executing 2 times:

    - by yes123
    Hi guys, I have placed a .sh file that runs a PHP script weekly. The script should run only once, but every Sunday it runs twice: at 1:30 am and at 6:50 am. Is there any way to fix this?

    Add1: /etc/cron.weekly/cronweek:

    #!/bin/bash
    /usr/bin/php -f /home/my/path/to/script/cronweek.php

    Add2: crontab file:

    # /etc/crontab: system-wide crontab
    # Unlike any other crontab you don't have to run the `crontab'
    # command to install the new version when you edit this file
    # and files in /etc/cron.d. These files also have username fields,
    # that none of the other crontabs do.
    SHELL=/bin/sh
    PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
    # m h dom mon dow user command
    17 * * * * root cd / && run-parts --report /etc/cron.hourly
    25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
    47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
    52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
    # */1 * * * * root /usr/local/rtm/bin/rtm 35 > /dev/null 2> /dev/null
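
    A rough way to hunt down the second trigger (a hedged sketch; the paths are the usual Debian/Ubuntu locations and may differ on other distributions): when a cron.weekly job fires at two different times, the usual culprits are a second copy of the job in some other crontab, or anacron replaying the cron.weekly batch in addition to the 47 6 * * 7 entry above.

        # Find every scheduler entry that mentions the script
        grep -rn cronweek /etc/crontab /etc/cron.d /etc/cron.daily /etc/cron.weekly /etc/anacrontab 2>/dev/null
        grep -rn cronweek /var/spool/cron/crontabs/ 2>/dev/null

        # anacron keeps per-batch timestamps; a recent date here means it ran the weekly jobs too
        cat /var/spool/anacron/cron.weekly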

    Read the article

  • Run a shell script using cron

    - by Blanca
    Hi! I have this FeedIndexer.sh:

    #!/bin/sh
    java -jar FeedIndexer.jar

    It just runs FeedIndexer.jar, which is in the same directory as the .sh. I would like to run it using crontab, so I did this:

    # /etc/crontab: system-wide crontab
    # Unlike any other crontab you don't have to run the `crontab'
    # command to install the new version when you edit this file
    # and files in /etc/cron.d. These files also have username fields,
    # that none of the other crontabs do.
    SHELL=/bin/sh
    PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
    # m h dom mon dow user command
    17 * * * * root cd / && run-parts --report /etc/cron.hourly
    25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
    47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
    52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
    01 01 * * * root run-parts --report /home/slosada/workspace/FeedIndexer/target/FeedIndexer.sh
    #

    But I don't know how to run it. Have I made any mistake? Thank you!
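
    A hedged sketch of the crontab entry I would try instead (the paths are the ones from the post): run-parts expects a directory of executables rather than a single file, and the relative FeedIndexer.jar in the script only resolves if the job first changes into the script's directory.

        # Make the script executable once
        chmod +x /home/slosada/workspace/FeedIndexer/target/FeedIndexer.sh

        # /etc/crontab entry: cd into the target directory so "java -jar FeedIndexer.jar" finds the jar
        01 01 * * * root cd /home/slosada/workspace/FeedIndexer/target && ./FeedIndexer.sh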

    Read the article

  • error when I use GWT RPC

    - by Sebe
    Hello everyone... I have a problem with Eclipse when I use an RPC.. If I use a single method call it's all in the right direction but if I add a new method to handle the server I get the following error: com.google.gwt.core.client.JavaScriptException: (null): null at com.google.gwt.dev.shell.BrowserChannelServer.invokeJavascript(BrowserChannelServer.java:237) at com.google.gwt.dev.shell.ModuleSpaceOOPHM.doInvoke(ModuleSpaceOOPHM.java:126) at com.google.gwt.dev.shell.ModuleSpace.invokeNative(ModuleSpace.java:561) at com.google.gwt.dev.shell.ModuleSpace.invokeNativeBoolean(ModuleSpace.java:184) at com.google.gwt.dev.shell.JavaScriptHost.invokeNativeBoolean(JavaScriptHost.java:35) at com.google.gwt.user.client.rpc.impl.RpcStatsContext.isStatsAvailable(RpcStatsContext.java) at com.google.gwt.user.client.rpc.impl.RequestCallbackAdapter.onResponseReceived(RequestCallbackAdapter.java:221) at com.google.gwt.http.client.Request.fireOnResponseReceived(Request.java:287) at com.google.gwt.http.client.RequestBuilder$1.onReadyStateChange(RequestBuilder.java:395) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.google.gwt.dev.shell.MethodAdaptor.invoke(MethodAdaptor.java:103) at com.google.gwt.dev.shell.MethodDispatch.invoke(MethodDispatch.java:71) at com.google.gwt.dev.shell.OophmSessionHandler.invoke(OophmSessionHandler.java:157) at com.google.gwt.dev.shell.BrowserChannelServer.reactToMessagesWhileWaitingForReturn(BrowserChannelServer.java:326) at com.google.gwt.dev.shell.BrowserChannelServer.invokeJavascript(BrowserChannelServer.java:207) at com.google.gwt.dev.shell.ModuleSpaceOOPHM.doInvoke(ModuleSpaceOOPHM.java:126) at com.google.gwt.dev.shell.ModuleSpace.invokeNative(ModuleSpace.java:561) at com.google.gwt.dev.shell.ModuleSpace.invokeNativeObject(ModuleSpace.java:269) at com.google.gwt.dev.shell.JavaScriptHost.invokeNativeObject(JavaScriptHost.java:91) at com.google.gwt.core.client.impl.Impl.apply(Impl.java) at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:214) at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.google.gwt.dev.shell.MethodAdaptor.invoke(MethodAdaptor.java:103) at com.google.gwt.dev.shell.MethodDispatch.invoke(MethodDispatch.java:71) at com.google.gwt.dev.shell.OophmSessionHandler.invoke(OophmSessionHandler.java:157) at com.google.gwt.dev.shell.BrowserChannelServer.reactToMessages(BrowserChannelServer.java:281) at com.google.gwt.dev.shell.BrowserChannelServer.processConnection(BrowserChannelServer.java:531) at com.google.gwt.dev.shell.BrowserChannelServer.run(BrowserChannelServer.java:352) at java.lang.Thread.run(Thread.java:619) Can I have more services in an asynchronous call right? Where am I wrong? 
This is my implementation MyService: package de.vogella.gwt.helloworld.client; import com.google.gwt.user.client.rpc.RemoteService; public interface MyService extends RemoteService { //chiamo i metodi presenti sul server public void creaXML(String nickname,String pass,String email2,String gio,String mes, String ann); public void setWeb(String userCorrect,String query, String titolo,String snippet,String url); } MyServiceAsync package de.vogella.gwt.helloworld.client; import com.google.gwt.user.client.rpc.AsyncCallback; public interface MyServiceAsync { void creaXML(String nickname,String pass,String email2,String gio,String mes, String ann,AsyncCallback<Void> callback); void setWeb(String userCorrect,String query, String titolo,String snippet,String url, AsyncCallback<Void> callback); } RPCService: package de.vogella.gwt.helloworld.client; import com.google.gwt.core.client.GWT; import com.google.gwt.user.client.rpc.AsyncCallback; import com.google.gwt.user.client.rpc.ServiceDefTarget; import com.google.gwt.user.client.ui.FlexTable; public class RPCService implements MyServiceAsync { MyServiceAsync service = (MyServiceAsync) GWT.create(MyService.class); ServiceDefTarget endpoint = (ServiceDefTarget) service; public RPCService() { endpoint.setServiceEntryPoint(GWT.getModuleBaseURL() + "rpc"); } public void creaXML(String nickname,String pass,String email2,String gio,String mes, String ann,AsyncCallback callback) { service.creaXML(nickname, pass, email2, gio, mes, ann, callback); } public void setWeb(String userCorrect,String query, String titolo,String snippet,String url,AsyncCallback callback) { service.setWeb(userCorrect,query, titolo,snippet,url,callback); } } MyServiceImpl package de.vogella.gwt.helloworld.server; import java.io.*; import org.w3c.dom.*; import org.xml.sax.SAXException; import javax.xml.parsers.DocumentBuilder; import javax.xml.parsers.DocumentBuilderFactory; import javax.xml.parsers.ParserConfigurationException; import javax.xml.transform.*; import javax.xml.transform.dom.DOMSource; import javax.xml.transform.stream.StreamResult; import de.vogella.gwt.helloworld.client.MyService; import com.google.gwt.user.client.ui.FlexTable; import com.google.gwt.user.server.rpc.RemoteServiceServlet; import com.google.gwt.xml.client.Element; import com.google.gwt.xml.client.NodeList; public class MyServiceImpl extends RemoteServiceServlet implements MyService { //metodo che inserisce il nuovo iscritto public void creaXML(String nickname,String pass,String email2,String gio,String mes, String ann){ ....... } public void setWeb(String userCorrect,String query, String titolo,String snippet,String url) { ..... } In the app in client-side I do RPCService rpc2 = New RPCService() rpc2.setWeb(..,...,...,...,callback); and RPCService rpc = New RPCService() rpc.creaXML(..,...,...,...,callback); (in other posizions in the code...) and.. 
AsyncCallback callback = new AsyncCallback() { public void onFailure(Throwable caught) { Window.alert("Failure!"); } public void onSuccess(Object result) { Window.alert("Successoooooo"); } }; Web.xml: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd"> <web-app> <!-- Servlets --> <!-- Default page to serve --> <welcome-file-list> <welcome-file>De_vogella_gwt_helloworld.html</welcome-file> </welcome-file-list> <servlet> <servlet-name>rPCImpl</servlet-name> <servlet-class>de.vogella.gwt.helloworld.server.MyServiceImpl</servlet-class> </servlet> <servlet-mapping> <servlet-name>rPCImpl</servlet-name> <url-pattern>/de_vogella_gwt_helloworld/rpc</url-pattern> </servlet-mapping> </web-app> Thank you all for your attention Sebe

    Read the article

  • Convert flyout menu to respond onclick vs mouseover

    - by Scott B
    The code below creates a nifty flyout menu action on a nested list item sequence. The client has called and wants the change the default behavior in which the flyouts are triggered by mouseover, so that you have to click to trigger a flyout. Ideally, I would just like to modify this code so that you click on a small icon (plus/minus) that sits to the right of the menu item if it has child menus. Can someone give me a bit of guidance on what bits I'd need to change to accomplish this? /* a few sniffs to circumvent known browser bugs */ var sUserAgent = navigator.userAgent.toLowerCase(); var isIE=document.all?true:false; var isNS4=document.layers?true:false; var isOp=(sUserAgent.indexOf('opera')!=-1)?true:false; var isMac=(sUserAgent.indexOf('mac')!=-1)?true:false; var isMoz=(sUserAgent.indexOf('mozilla/5')!=-1&&sUserAgent.indexOf('opera')==-1&&sUserAgent.indexOf('msie')==-1)?true:false; var isNS6=(sUserAgent.indexOf('netscape6')!=-1&&sUserAgent.indexOf('opera')==-1&&sUserAgent.indexOf('msie')==-1)?true:false; var dom=document.getElementById?true:false; /* sets time until menus disappear in milliseconds */ var iMenuTimeout=1500; var aMenus=new Array; var oMenuTimeout; var iMainMenusLength=0; /* the following boolean controls the z-index property if needed */ /* if is only necessary if you have multiple mainMenus in one file that are overlapping */ /* set bSetZIndeces to true (either here or in the HTML) and the main menus will have a z-index set in descending order so that preceding ones can overlap */ /* the integer iStartZIndexAt controls z-index of the first main menu */ var bSetZIndeces=true; var iStartZIndexAt=1000; var aMainMenus=new Array; /* load up the submenus */ function loadMenus(){ if(!dom)return; var aLists=document.getElementsByTagName('ul'); for(var i=0;i<aLists.length;i++){ if(aLists[i].className=='navMenu')aMenus[aMenus.length]=aLists[i]; } var aAnchors=document.getElementsByTagName('a'); var aItems = new Array; for(var i=0;i<aAnchors.length;i++){ // if(aAnchors[i].className=='navItem')aItems[aItems.length] = aAnchors[i]; aItems[aItems.length] = aAnchors[i]; } var sMenuId=null; var oParentMenu=null; var aAllElements=document.body.getElementsByTagName("*"); if(isIE)aAllElements=document.body.all; /* loop through navItem and navMenus and dynamically assign their IDs */ /* each relies on it's parent's ID being set before it */ for(var i=0;i<aAllElements.length;i++){ if(aAllElements[i].className.indexOf('x8menus')!=-1){ /* load up main menus collection */ if(bSetZIndeces)aMainMenus[aMainMenus.length]=aAllElements[i]; } // if(aAllElements[i].className=='navItem'){ if(aAllElements[i].tagName=='A'){ oParentMenu = aAllElements[i].parentNode.parentNode; if(!oParentMenu.childMenus) oParentMenu.childMenus = new Array; oParentMenu.childMenus[oParentMenu.childMenus.length]=aAllElements[i]; if(aAllElements[i].id==''){ if(oParentMenu.className=='x8menus'){ aAllElements[i].id='navItem_'+iMainMenusLength; //alert(aAllElements[i].id); iMainMenusLength++; }else{ aAllElements[i].id=oParentMenu.id.replace('Menu','Item')+'.'+oParentMenu.childMenus.length; } } } else if(aAllElements[i].className=='navMenu'){ oParentItem = aAllElements[i].parentNode.firstChild; aAllElements[i].id = oParentItem.id.replace('Item','Menu'); } } /* dynamically set z-indeces of main menus so they won't underlap */ for(var i=aMainMenus.length-1;i>=0;i--){ aMainMenus[i].style.zIndex=iStartZIndexAt-i; } /* set menu item properties */ for(var i=0;i<aItems.length;i++){ sMenuId=aItems[i].id; 
sMenuId='navMenu_'+sMenuId.substring(8,sMenuId.lastIndexOf('.')); /* assign event handlers */ /* eval() used here to avoid syntax errors for function literals in Netscape 3 */ eval('aItems[i].onmouseover=function(){modClass(true,this,"activeItem");window.clearTimeout(oMenuTimeout);showMenu("'+sMenuId+'");};'); eval('aItems[i].onmouseout=function(){modClass(false,this,"activeItem");window.clearTimeout(oMenuTimeout);oMenuTimeout=window.setTimeout("hideMenu(\'all\')",iMenuTimeout);}'); eval('aItems[i].onfocus=function(){this.onmouseover();}'); eval('aItems[i].onblur=function(){this.onmouseout();}'); //aItems[i].addEventListener("keydown",function(){keyNav(this,event);},false); } var sCatId=0; var oItem; for(var i=0;i<aMenus.length;i++){ /* assign event handlers */ /* eval() used here to avoid syntax errors for function literals in Netscape 3 */ eval('aMenus[i].onmouseover=function(){window.clearTimeout(oMenuTimeout);}'); eval('aMenus[i].onmouseout=function(){window.clearTimeout(oMenuTimeout);oMenuTimeout=window.setTimeout("hideMenu(\'all\')",iMenuTimeout);}'); sCatId=aMenus[i].id; sCatId=sCatId.substring(8,sCatId.length); oItem=document.getElementById('navItem_'+sCatId); if(oItem){ if(!isOp && !(isMac && isIE) && oItem.parentNode)modClass(true,oItem.parentNode,"hasSubMenu"); else modClass(true,oItem,"hasSubMenu"); /* assign event handlers */ eval('oItem.onmouseover=function(){window.clearTimeout(oMenuTimeout);showMenu("navMenu_'+sCatId+'");}'); eval('oItem.onmouseout=function(){window.clearTimeout(oMenuTimeout);oMenuTimeout=window.clearTimeout(oMenuTimeout);oMenuTimeout=window.setTimeout(\'hideMenu("navMenu_'+sCatId+'")\',iMenuTimeout);}'); eval('oItem.onfocus=function(){window.clearTimeout(oMenuTimeout);showMenu("navMenu_'+sCatId+'");}'); eval('oItem.onblur=function(){window.clearTimeout(oMenuTimeout);oMenuTimeout=window.clearTimeout(oMenuTimeout);oMenuTimeout=window.setTimeout(\'hideMenu("navMenu_'+sCatId+'")\',iMenuTimeout);}'); //oItem.addEventListener("keydown",function(){keyNav(this,event);},false); } } } /* this will append the loadMenus function to any previously assigned window.onload event */ /* if you reassign this onload event, you'll need to include this or execute it after all the menus are loaded */ function newOnload(){ if(typeof previousOnload=='function')previousOnload(); loadMenus(); } var previousOnload; if(window.onload!=null)previousOnload=window.onload; window.onload=newOnload; /* show menu and hide all others except ancestors of the current menu */ function showMenu(sWhich){ var oWhich=document.getElementById(sWhich); if(!oWhich){ hideMenu('all'); return; } var aRootMenus=new Array; aRootMenus[0]=sWhich var sCurrentRoot=sWhich; var bHasParentMenu=false; if(sCurrentRoot.indexOf('.')!=-1){ bHasParentMenu=true; } /* make array of this menu and ancestors so we know which to leave exposed */ /* ex. 
from ID string "navMenu_12.3.7.4", extracts menu levels ["12.3.7.4", "12.3.7", "12.3", "12"] */ while(bHasParentMenu){ if(sCurrentRoot.indexOf('.')==-1)bHasParentMenu=false; aRootMenus[aRootMenus.length]=sCurrentRoot; sCurrentRoot=sCurrentRoot.substring(0,sCurrentRoot.lastIndexOf('.')); } for(var i=0;i<aMenus.length;i++){ var bIsRoot=false; for(var j=0;j<aRootMenus.length;j++){ var oThisItem=document.getElementById(aMenus[i].id.replace('navMenu_','navItem_')); if(aMenus[i].id==aRootMenus[j])bIsRoot=true; } if(bIsRoot && oThisItem)modClass(true,oThisItem,'hasSubMenuActive'); else modClass(false,oThisItem,'hasSubMenuActive'); if(!bIsRoot && aMenus[i].id!=sWhich)modClass(false,aMenus[i],'showMenu'); } modClass(true,oWhich,'showMenu'); var oItem=document.getElementById(sWhich.replace('navMenu_','navItem_')); if(oItem)modClass(true,oItem,'hasSubMenuActive'); } function hideMenu(sWhich){ if(sWhich=='all'){ /* loop backwards b/c WinIE6 has a bug with hiding display of an element when it's parent is already hidden */ for(var i=aMenus.length-1;i>=0;i--){ var oThisItem=document.getElementById(aMenus[i].id.replace('navMenu_','navItem_')); if(oThisItem)modClass(false,oThisItem,'hasSubMenuActive'); modClass(false,aMenus[i],'showMenu'); } }else{ var oWhich=document.getElementById(sWhich); if(oWhich)modClass(false,oWhich,'showMenu'); var oThisItem=document.getElementById(sWhich.replace('navMenu_','navItem_')); if(oThisItem)modClass(false,oThisItem,'hasSubMenuActive'); } } /* add or remove element className */ function modClass(bAdd,oElement,sClassName){ if(bAdd){/* add class */ if(oElement.className.indexOf(sClassName)==-1)oElement.className+=' '+sClassName; }else{/* remove class */ if(oElement.className.indexOf(sClassName)!=-1){ if(oElement.className.indexOf(' '+sClassName)!=-1)oElement.className=oElement.className.replace(' '+sClassName,''); else oElement.className=oElement.className.replace(sClassName,''); } } return oElement.className; /* return new className */ } //document.body.addEventListener("keydown",function(){keyNav(event);},true); function setBubble(oEvent){ oEvent.bubbles = true; } function keyNav(oElement,oEvent){ alert(oEvent.keyCode); window.status=oEvent.keyCode; return false; }

    Read the article

< Previous Page | 75 76 77 78 79 80 81 82 83 84 85 86  | Next Page >