Search Results

Search found 35970 results on 1439 pages for 'javascript performance'.


  • Advantages of using pure JavaScript over JQuery

    - by Shivan Dragon
    What are the advantages of using JavaScript-only versus using jQuery-only? I have limited experience with JavaScript and jQuery coding. I've added bits and snippets of each to HTML pages, but I've mostly coded server-side stuff in other languages. I've noticed that while you can theoretically do the same things using either of the two approaches (and of course you can even mix 'em up in the same project), there seems to be a tendency to always start using jQuery from the very beginning, no matter what the project demands are. So I'm simply wondering: are there any specific benefits to not using jQuery-only, but instead just plain old JavaScript? I know this looks like a non-question, because it can be said that "there's no definite answer" or "it can be debated forever", but I'm actually hoping for concrete answers such as "you can do this in one approach and you cannot do it with the other". EDIT: As per scrwtp's comment, I'm not referring just to the DOM-handling part. My question is rather: jQuery is a library. For JavaScript. What I find strange about this library, as opposed to other libraries for other languages, is that in jQuery's case it seems to be designed to be usable exclusively, without needing to touch JavaScript directly. This is as opposed to, let's say, Hibernate and SQL, where even though the library (or rather framework in this case, but I think the analogy still applies) handles a lot of aspects, you still get to use SQL when using it, at least for some fringe cases. However, in the jQuery and JavaScript case, you could do anything you do with JavaScript using only jQuery (or at least that's how it seems to me).
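
    For illustration, a minimal sketch of the same DOM operation written both ways; the selector and class name here are made up:

        // jQuery:
        $('.item').addClass('active');

        // Plain JavaScript (modern browsers):
        document.querySelectorAll('.item').forEach(function (el) {
            el.classList.add('active');
        });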

    Read the article

  • How often is software speed evident in the eyes of customers?

    - by rwong
    In theory, customers should be able to feel software performance improvements first-hand. In practice, sometimes the improvements are not noticeable enough, such that in order to monetize the improvements, it is necessary to use quotable performance figures in marketing to attract customers. We already know the difference between perceived performance (GUI latency, etc.) and server-side performance (machines, networks, infrastructure, etc.). How often do programmers need to go the extra length to "write up" performance analyses for which the audience is not fellow programmers, but managers and customers?

    Read the article

  • Performance: recursion vs. iteration in Javascript

    - by mastazi
    I have read some articles recently (e.g. http://dailyjs.com/2012/09/14/functional-programming/) about the functional aspects of JavaScript and the relationship between Scheme and JavaScript (the latter was influenced by the former, which is a functional language, while the O-O aspects are inherited from Self, which is a prototype-based language). However, my question is more specific: I was wondering if there are metrics on the performance of recursion vs. iteration in JavaScript. I know that in some languages (where iteration performs better by design) the difference is minimal because the interpreter/compiler converts recursion into iteration; however, I guess that this is probably not the case for JavaScript, since it is, at least partially, a functional language.
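
    As a rough way to measure this yourself, here is a micro-benchmark sketch; the function names and sizes are arbitrary, the results vary widely by engine, and most engines do not convert recursion into iteration (no tail-call optimization), so the recursive version also risks a stack overflow for large inputs:

        function sumRecursive(n) {
            return n === 0 ? 0 : n + sumRecursive(n - 1);
        }

        function sumIterative(n) {
            var total = 0;
            for (var i = 1; i <= n; i++) total += i;
            return total;
        }

        var N = 10000; // small enough to keep the recursive version inside the call stack

        console.time('recursive');
        for (var r = 0; r < 1000; r++) sumRecursive(N);
        console.timeEnd('recursive');

        console.time('iterative');
        for (var t = 0; t < 1000; t++) sumIterative(N);
        console.timeEnd('iterative');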

    Read the article

  • Auditing front end performance on web application

    - by user1018494
    I am currently trying to performance-tune the UI of a company web application. The application is only ever going to be accessed by staff, so the speed of the connection between server and client will always be considerably better than over the internet. I have been using performance-auditing tools such as YSlow and Google Chrome's profiling tool to try to highlight areas worth investigating. However, these tools are written with the internet in mind. For example, the current suggestions from a Google Chrome audit of the application are as follows:

    Network Utilization
    - Combine external CSS (Red warning)
    - Combine external JavaScript (Red warning)
    - Enable gzip compression (Red warning)
    - Leverage browser caching (Red warning)
    - Leverage proxy caching (Amber warning)
    - Minimise cookie size (Amber warning)
    - Parallelize downloads across hostnames (Amber warning)
    - Serve static content from a cookieless domain (Amber warning)

    Web Page Performance
    - Remove unused CSS rules (Amber warning)
    - Use normal CSS property names instead of vendor-prefixed ones (Amber warning)

    Are any of these bits of advice totally redundant given the connection speed and usage pattern? The users will be using the application frequently throughout the day, so it doesn't matter if the initial hit is large (when they first visit the page and build their cache), so long as a minimal amount of work is done on future page views. For example, is it worth the effort of combining all of our CSS and JavaScript files? It may speed up the initial page view, but how much of a difference will it really make on subsequent page views throughout the working day? I've tried searching for this, but all I keep coming up with is the standard internet-facing performance advice. Any advice on what to focus my performance-tweaking efforts on in this scenario, or other auditing-tool recommendations, would be much appreciated.
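
    Of the warnings above, "leverage browser caching" is probably the one that pays off most for frequent repeat visitors. As a minimal sketch of what that can look like, assuming a Node/Express server (the paths and lifetime are made up; the same cache headers can be set in any server stack):

        var express = require('express');
        var app = express();

        // Give static assets a long cache lifetime; repeat page views during the
        // working day are then served from the browser cache instead of the network.
        app.use('/static', express.static('public', { maxAge: '30d' }));

        app.listen(8080);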

    Read the article

  • Help w/ iPad 1 performance for tile-based DOM Javascript game

    - by butr0s
    I've made a 2D tile-based game with DOM/JavaScript. For each level, the map data is loaded and parsed, then lots of tile elements are drawn onto a larger "map" element. The map is inside a container that hides overflow, so I can move the map element around by positioning it absolutely. Works a treat on desktop browsers and my iPad 2. My problem is that performance is really bad on iPad 1. The performance hit is directly related to all the tile elements in my map, because when I remove or reduce the number of tiles drawn, performance improves. Optimizing my collision-detection loop has no effect. My first thought was to batch groups of tiles into containers, then hide/show them based on proximity to the player; however, this still causes a huge hiccup when the player moves and a new group of tiles is displayed (offscreen). Actually removing the out-of-sight elements from the DOM, then re-adding them as necessary, is no faster. Anyone know of any tips that might speed up DOM performance here? My map is 1920 x 1920 pixels, so as far as I know it should be within the WebKit texture limit on iOS 5/iPad. The map is being moved with CSS3 transforms, and I've picked all the other obvious low-hanging fruit.
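
    For reference, the batching idea described above might look roughly like this; the chunk size, data shapes, and names are illustrative rather than taken from the poster's code:

        var CHUNK_SIZE = 480; // px; e.g. each chunk container holds a 15x15 block of 32px tiles

        function updateVisibleChunks(player, chunks) {
            chunks.forEach(function (chunk) {
                var near = Math.abs(chunk.x - player.x) < CHUNK_SIZE &&
                           Math.abs(chunk.y - player.y) < CHUNK_SIZE;
                // display:none drops the chunk from the render tree without a DOM
                // removal, which is usually cheaper than detaching and re-appending nodes
                chunk.el.style.display = near ? '' : 'none';
            });
        }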

    Read the article

  • The importance of Design Patterns with Javascript, NodeJs et al

    - by Lewis
    With JavaScript appearing to be the ubiquitous programming language of the web over the next few years, new frameworks popping up every five minutes, and event-driven programming taking the lead both server- and client-side: do you, as a JavaScript developer, consider the traditional design patterns as important, or less important, than they have been with other languages/environments? Please name the top three design patterns you, as a JavaScript developer, use regularly, and give an example of how they have helped in your JavaScript development.
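
    As one concrete illustration of the kind of pattern at stake, here is the module pattern, which turns up constantly in JavaScript codebases (the counter example is made up):

        var counter = (function () {
            var count = 0; // private state, captured by closure

            return {
                increment: function () { return ++count; },
                reset: function () { count = 0; }
            };
        })();

        counter.increment(); // 1
        counter.increment(); // 2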

    Read the article

  • How to use javascript functions, defined in some external *.js file in browser's javascript console?

    - by Dmytro Tsiniavsky
    I would like to know whether it is possible to save a file such as simplemath.js containing a simple function like function ADD(a, b) { return a + b; }, open Opera's (or some other browser's) JavaScript console, somehow include this simplemath.js file, call ADD(2, 5), and get the result in the console, or execute JavaScript code on the current web page and manipulate its content. How can I do that? How can I use JavaScript functions from external files in a web browser's JavaScript console?
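
    One common approach, sketched here with a made-up URL, is to inject a script tag from the console and call the function once it has loaded:

        var s = document.createElement('script');
        s.src = 'http://example.com/simplemath.js';
        s.onload = function () {
            console.log(ADD(2, 5)); // 7, once simplemath.js has finished loading
        };
        document.head.appendChild(s);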

    Read the article

  • Javascript click function not working

    - by Nabe
    I need to null the value in a text box on click. Currently I have written code as such:

        <div class="keyword_non">
          <h1>Keywords : <a class="someClass question_off" title="Keywords "></a></h1>
          <h2><input type="text" name="kw1" value="one" /></h2>
          <h2><input type="text" name="kw2" value="two" /></h2>
          <h2><input type="text" name="kw3" value="three" /></h2>
          <h2><input type="text" name="kw4" value="four" /></h2>
        </div>
        <script type="text/javascript" src="/functions/javascript/custom/non_profit_edit.js"></script>
        <script type="text/javascript" src="/functions/javascript/jquery-1.7.1.min.js"></script>
        <script type="text/javascript" src="/functions/javascript/custom/jquery-ui-1.8.7.custom.min.js"></script>

    Inside non_profit_edit.js I have written:

        $(document).ready(function(){
          $(".kw1").click(function() { $(".kw1").val(" "); });
          $(".kw2").click(function() { $(".kw2").val(" "); });
          $(".kw3").click(function() { $(".kw3").val(" "); });
          $(".kw4").click(function() { $(".kw4").val(" "); });
        });

    But right now it's not working properly... Is this a browser issue or an error in the code?
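
    For what it's worth, two plausible culprits stand out in the snippet above: non_profit_edit.js is included before jQuery itself (so $ is undefined when it runs), and the handlers select by class (.kw1) while the markup only gives the inputs a name attribute. After moving the jQuery include above non_profit_edit.js, a hedged sketch of the selector fix:

        $(document).ready(function () {
            // matches all four inputs whose name starts with "kw"
            $('input[name^="kw"]').click(function () {
                $(this).val('');
            });
        });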

    Read the article

  • SQL SERVER – Server Side Paging in SQL Server 2011 Performance Comparison

    - by pinaldave
    Earlier, I wrote about SQL SERVER – Server Side Paging in SQL Server 2011 – A Better Alternative. I got many emails asking for a performance analysis of paging, so here is a quick one. The real challenge of paging is all the unnecessary IO reads from the database. Network traffic is one of the reasons paging has become a very expensive operation; I have seen many legacy applications where the complete resultset is brought back to the application and paging is done there. As you read earlier, SQL Server 2011 offers a better alternative to this age-old solution. This article is divided into two parts:

    Test 1: Performance comparison of two different pages using the SQL Server 2011 method. In this test, we analyze the performance of two different pages, where one is at the beginning of the table and the other is at its end.

    Test 2: Performance comparison of two different pages using a CTE (the earlier solution from SQL Server 2005/2008) and the new method of SQL Server 2011. We will explore this in the next article.

    This article tackles Test 1 first.

    Test 1: Retrieving a page from two different locations in the table. Run the following T-SQL script and compare the performance:

        SET STATISTICS IO ON;
        USE AdventureWorks2008R2
        GO
        DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5
        SELECT *
        FROM Sales.SalesOrderDetail
        ORDER BY SalesOrderDetailID
        OFFSET @PageNumber*@RowsPerPage ROWS
        FETCH NEXT 10 ROWS ONLY
        GO

        USE AdventureWorks2008R2
        GO
        DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100
        SELECT *
        FROM Sales.SalesOrderDetail
        ORDER BY SalesOrderDetailID
        OFFSET @PageNumber*@RowsPerPage ROWS
        FETCH NEXT 10 ROWS ONLY
        GO

    You will notice that when we read the page from the beginning of the table, far fewer database pages are read than when the page is read from the end of the table. This is very interesting: as the OFFSET changes, page IO increases or decreases accordingly. In the typical search-engine case, people mostly read the first few pages, which means IO only increases as users navigate deeper. I am really impressed, because using the new method of SQL Server 2011, page IO is much lower when the first few pages of the navigation are read.

    Test 2: Retrieving a page from two different locations in the table and comparing to earlier versions. In this test, we compare the queries of Test 1 with the earlier solution via a Common Table Expression (CTE), which we used in SQL Server 2005 and SQL Server 2008.

    Test 2 A: Page early in the table

        -- Test with pages early in table
        USE AdventureWorks2008R2
        GO
        DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5
        ;WITH CTE_SalesOrderDetail AS (
            SELECT *, ROW_NUMBER() OVER (ORDER BY SalesOrderDetailID) AS RowNumber
            FROM Sales.SalesOrderDetail PC)
        SELECT *
        FROM CTE_SalesOrderDetail
        WHERE RowNumber >= @PageNumber*@RowsPerPage+1
        AND RowNumber <= (@PageNumber+1)*@RowsPerPage
        ORDER BY SalesOrderDetailID
        GO

        SET STATISTICS IO ON;
        USE AdventureWorks2008R2
        GO
        DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5
        SELECT *
        FROM Sales.SalesOrderDetail
        ORDER BY SalesOrderDetailID
        OFFSET @PageNumber*@RowsPerPage ROWS
        FETCH NEXT 10 ROWS ONLY
        GO

    Test 2 B: Page later in the table

        -- Test with pages later in table
        USE AdventureWorks2008R2
        GO
        DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100
        ;WITH CTE_SalesOrderDetail AS (
            SELECT *, ROW_NUMBER() OVER (ORDER BY SalesOrderDetailID) AS RowNumber
            FROM Sales.SalesOrderDetail PC)
        SELECT *
        FROM CTE_SalesOrderDetail
        WHERE RowNumber >= @PageNumber*@RowsPerPage+1
        AND RowNumber <= (@PageNumber+1)*@RowsPerPage
        ORDER BY SalesOrderDetailID
        GO

        SET STATISTICS IO ON;
        USE AdventureWorks2008R2
        GO
        DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100
        SELECT *
        FROM Sales.SalesOrderDetail
        ORDER BY SalesOrderDetailID
        OFFSET @PageNumber*@RowsPerPage ROWS
        FETCH NEXT 10 ROWS ONLY
        GO

    From the resultset, it is very clear that with the earlier solution the pages read are always much higher than with the new technique introduced in SQL Server 2011, even when we don't retrieve all the data to the screen. If you look carefully at both comparisons, the page IO is much lower with the new SQL Server 2011 technique, whether we read the page from the beginning of the table or from its end. I consider this a big improvement, as paging is one of the most-used features in the typical application. The solution introduced in SQL Server 2011 is very elegant because it also improves the performance of the query and, at large, of the database.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Free Online Performance Tuning Event

    - by Andrew Kelly
      On June 9th 2010 I will be showing several sessions related to performance tuning for SQL Server and they are the best kind because they are free :).  So mark your calendars. Here is the event info and URL: June 29, 2010 - 10:00 am - 3:00 pm Eastern SQL Server is the platform for business. In this day-long free virtual event, well-known SQL Server performance expert Andrew Kelly will provide you with the tools and knowledge you need to stay on top of three key areas related to peak performance...(read more)

    Read the article

  • SQL Server – Learning SQL Server Performance: Indexing Basics – Interview of Vinod Kumar by Pinal Dave

    - by pinaldave
    Recently I wrote a blog post about Learning SQL Server Performance: Indexing Basics, and I received lots of requests to share some insight into the course. Every single time performance is discussed, indexes are mentioned along with it. In recent times, data and application complexity has been continuously growing. The demand from organizations for faster query response, performance, and scalability is increasing, and developers and DBAs now need to write efficient code to achieve this. When we developed the course, we made sure that it remains practical and demo-heavy instead of just theories on the subject. Vinod Kumar and I often thought about this and realized that a practical understanding of indexes is very important. One cannot master every single aspect of indexes; however, there is a minimum level of expertise one should gain if performance is a concern. Here is a 200-second interview of Vinod Kumar that I took right after completing the course. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology, Video

    Read the article

  • Understanding Performance Profiling Targets

    In this sample chapter from his upcoming book, Paul Glavich explains performance metrics and walks us through the steps needed to establish meaningful performance targets. He covers many metrics such as "time to first byte" and explains why you should add some contingency into your estimated performance requirements.

    Read the article

  • System Wide Performance Sanity Check Procedures

    - by user702295
    Do you need to boost your overall implementation performance? Do you need direction to pinpoint possible performance opportunities? Are you looking for a general performance guide? Try MOS note 69565.1. This paper describes a holistic methodology that defines a systematic approach to resolving complex application performance problems. It has been used successfully on many critical accounts. The 'end-to-end' tuning approach encompasses the client, network, and database, and has proven far more effective than isolated tuning exercises. It has been used to define and measure targets to ensure success. Even though it was last checked for relevance on 13-Oct-2008, the procedure is still very valuable. Regards!

    Read the article

  • Common Javascript mistakes that severely affect performance?

    - by melee
    At a recent UI/UX MeetUp that I attended, I gave some feedback on a website that used JavaScript (jQuery) for its interaction and UI. The animations and manipulation were fairly simple, but the performance on a decent computer was horrific. It reminded me of a lot of sites/programs that I've seen with the same issue, where certain actions just absolutely destroy performance. It is mostly in (or at least more noticeable in) situations where JavaScript is almost serving as a Flash replacement. This is in stark contrast to some of the webapps I have used that have far more JavaScript and functionality but run very smoothly (COGNOS by IBM is one I can think of off the top of my head). I'd love to know some of the common issues that aren't considered when developing JS that will kill the performance of a site.
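
    One classic offender, shown as an illustrative sketch (the selector and sizes are made up), is forcing synchronous layout inside a loop by interleaving style writes with layout reads:

        var items = Array.prototype.slice.call(document.querySelectorAll('.item'));

        // Slow: every offsetHeight read forces the browser to recalculate layout,
        // because a style write precedes it ("layout thrashing").
        items.forEach(function (el) {
            el.style.height = (el.offsetHeight + 10) + 'px';
        });

        // Faster alternative: batch all the reads, then all the writes.
        var heights = items.map(function (el) { return el.offsetHeight; });
        items.forEach(function (el, i) {
            el.style.height = (heights[i] + 10) + 'px';
        });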

    Read the article

  • Building Performance Metrics into ASP.NET MVC Applications

    When you're instrumenting an ASP.NET MVC or Web API application to monitor its performance while it is running, it makes sense to use custom performance counters. There are plenty of tools available that read performance counter data, report on it, and create alerts based on it. You can then plot application metrics against all sorts of server and workstation metrics. This way, there will always be the right data to guide your tuning efforts.

    Read the article

  • JavaScript parser in JavaScript

    - by emk
    I need to add some lightweight syntactic sugar to JavaScript source code, and process it using a JavaScript-based build system. Are there any open source JavaScript parsers written in JavaScript? And are they reasonably fast when run on top of V8 or a similar high-performance JavaScript implementation? Thank you for any pointers you can provide!
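
    For what it's worth, parsers in this space do exist (Esprima and Acorn are well-known examples); here is a minimal sketch using Acorn, whose speed on V8 you would still want to measure for yourself:

        var acorn = require('acorn');

        var ast = acorn.parse('function add(a, b) { return a + b; }', { ecmaVersion: 5 });
        console.log(ast.body[0].type); // "FunctionDeclaration"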

    Read the article

  • How can I tell which page is creating a high-CPU-load httpd process?

    - by Greg
    I have a LAMP server (CentOS-based MediaTemple (DV) Extreme with 2GB RAM) running a customized WordPress+bbPress combination. At about 30k pageviews per day the server is starting to groan. It stumbled earlier today for about 5 minutes when there was an influx of traffic. Even under normal conditions I can see that the virtual server is sometimes at 90%+ CPU load. Using top I can often see 5-7 httpd processes that are each using 15-30% (and sometimes even 50%) CPU. Before we do a big optimization pass (our use of MySQL is probably the culprit) I would love to find the pages that are the main offenders and deal with them first. Is there a way that I can find out which specific requests were responsible for the most CPU-hungry httpd processes? I have found a lot of info on optimization in general, but nothing on this specific question. Secondly, I know there are a million variables, but if you have any insight on whether we should be at the boundaries of performance with a single dedicated virtual server and a site of this size, I would love to hear your opinion. Should we be thinking about moving to a more powerful server, or should we be focused on optimization on the current server?

    Read the article

  • VM Tuning to enhance performance

    - by Tiffany Walker
    vm.bdflush = 100 1200 128 512 15 5000 500 1884 2
    vm.dirty_ratio = 20
    vm.min_free_kbytes = 300000

    Does this mean that the most dirty data that can be held in RAM is 20%, and that there will always be 300MB of RAM that Linux cannot use to cache files? What I am trying to do is ensure that there is always room left for services to spawn and use RAM. I have 8GB of RAM and am hosting websites with PHP, so I want to have more free RAM on standby instead of finding myself with only 50MB of RAM free.

    Read the article

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    I submitted this to Stack Overflow (here) but realised it should really be on Server Fault, so apologies for the incorrect and duplicate posting. OK, so I've just been on a SQL Server course where we discussed the usage scenarios of multiple filegroups and files over local RAID and local disks, but we didn't touch SAN scenarios, so my question is as follows. I currently have a 250 GB database running on SQL Server 2005, where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention, and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same issue of disk contention that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning. So in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act on each file separately. Which leads me to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you.

    Read the article

  • KVM Slow performance on XP Guest

    - by Gregg Leventhal
    The system is very slow to do anything, even browsing a local folder, and the CPU sits at 100% frequently. The guest is XP 32-bit. The host is Scientific Linux 6.2 with libvirt 0.10; the guest XP OS shows an ACPI Multiprocessor HAL, and virtIO drivers for NIC and SCSI are installed. cpuinfo on the host:

        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 42
        model name      : Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz
        stepping        : 7
        cpu MHz         : 3200.000
        cache size      : 8192 KB
        physical id     : 0
        siblings        : 8
        core id         : 0
        cpu cores       : 4
        apicid          : 0
        initial apicid  : 0
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 13
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid
        bogomips        : 6784.93
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 36 bits physical, 48 bits virtual
        power management:

    The domain XML:

        <memory unit='KiB'>4194304</memory>
        <currentMemory unit='KiB'>4194304</currentMemory>
        <vcpu placement='static' cpuset='0'>1</vcpu>
        <os>
          <type arch='x86_64' machine='rhel6.3.0'>hvm</type>
          <boot dev='hd'/>
        </os>
        <features>
          <acpi/>
          <apic/>
          <pae/>
        </features>
        <cpu mode='custom' match='exact'>
          <model fallback='allow'>SandyBridge</model>
          <vendor>Intel</vendor>
          <feature policy='require' name='vme'/>
          <feature policy='require' name='tm2'/>
          <feature policy='require' name='est'/>
          <feature policy='require' name='vmx'/>
          <feature policy='require' name='osxsave'/>
          <feature policy='require' name='smx'/>
          <feature policy='require' name='ss'/>
          <feature policy='require' name='ds'/>
          <feature policy='require' name='tsc-deadline'/>
          <feature policy='require' name='dtes64'/>
          <feature policy='require' name='ht'/>
          <feature policy='require' name='pbe'/>
          <feature policy='require' name='tm'/>
          <feature policy='require' name='pdcm'/>
          <feature policy='require' name='ds_cpl'/>
          <feature policy='require' name='xtpr'/>
          <feature policy='require' name='acpi'/>
          <feature policy='require' name='monitor'/>
          <feature policy='force' name='sse'/>
          <feature policy='force' name='sse2'/>
          <feature policy='force' name='sse4.1'/>
          <feature policy='force' name='sse4.2'/>
          <feature policy='force' name='ssse3'/>
          <feature policy='force' name='x2apic'/>
        </cpu>
        <clock offset='localtime'>
          <timer name='rtc' tickpolicy='catchup'/>
        </clock>
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>restart</on_crash>
        <devices>
          <emulator>/usr/libexec/qemu-kvm</emulator>
          <disk type='file' device='disk'>
            <driver name='qemu' type='qcow2' cache='none'/>
            <source file='/var/lib/libvirt/images/Server-10-9-13.qcow2'/>
            <target dev='vda' bus='virtio'/>
            <alias name='virtio-disk0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
          </disk>

    Read the article

  • Performance Monitor (perfmon) showing some unusual statistics

    - by Param
    Recently I thought I would use perfmon.msc to monitor the process utilization of a remote computer, but I am faced with a peculiar situation. Please see the print-screen below. I have selected three computers -- QDIT049, QDIT199V6 & QNIVN014. Please observe the Processor Time % which I have marked with a red circle. How can it be more than 100%? The total Processor Time can never go above 100%, am I right? If I am right, then why is the Processor Time % showing 200%? Please let me know how this is possible, or where I have made a mistake. Thanks & Regards, Param

    Read the article

  • Can google “see” this custom javascript code which displays links from an external site to mine

    - by webmasters
    I have JavaScript code on my site which displays links from another site. This is what I have in my source before the page has loaded: <script language="JavaScript" type="text/javascript">showLink(1);</script> This is what I have copied from my source after the page has loaded: <script language="JavaScript" type="text/javascript">showLink(1);</script><a rel="nofollow" target="_blank" class="anc" href="http://x5.external_site.net/sc/out.php?s=5483&amp;o=http%3A%2F%2Fwww.bluetooth.com">Bluetooth Devices</a> Can Google see this link?

    Read the article

  • extjs - 'Store is undefined'

    - by Jamie
    Hi all, I'm pretty sure this is a trivial problem and I'm just being a bit stupid. Your help would be hugely appreciated. In controls/dashboard.js I have:

        Ext.ill.WCSS.controls.dashboard = {
            xtype: 'portal',
            region: 'center',
            margins: '35 5 5 0',
            items: [{
                columnWidth: 1,
                style: 'padding:10px',
                items: [{
                    title: 'My Cluster Jobs',
                    layout: 'fit',
                    html: "test"
                }]
            }, {
                columnWidth: 1,
                style: 'padding:10px',
                items: [{
                    title: 'All Cluster Jobs',
                    iconCls: 'icon-queue',
                    html: "test",
                    items: new Ext.grid.GridPanel({
                        title: 'Cluster Job Queue',
                        store: Ext.ill.WCSS.stores.dashboardClusterJobs,
                        width: 791,
                        height: 333,
                        frame: true,
                        loadMask: true,
                        stateful: false,
                        autoHeight: true,
                        stripeRows: true,
                        floating: false,
                        footer: false,
                        collapsible: false,
                        animCollapse: false,
                        titleCollapse: false,
                        columns: [
                            { xtype: 'gridcolumn', header: 'Job ID', sortable: true, resizable: true, width: 100, dataIndex: 'JB_job_number', fixed: false },
                            { xtype: 'gridcolumn', header: 'Priority', sortable: true, resizable: true, width: 100, dataIndex: 'JAT_prio', fixed: false },
                            { xtype: 'gridcolumn', header: 'User', sortable: true, resizable: true, width: 100, dataIndex: 'JB_owner' },
                            { xtype: 'gridcolumn', header: 'State', sortable: true, resizable: true, width: 100, dataIndex: 'state' },
                            { xtype: 'gridcolumn', header: 'Date Submitted', sortable: true, resizable: true, width: 100, dataIndex: 'JAT_start_time' },
                            { xtype: 'gridcolumn', header: 'Queue', sortable: true, resizable: true, width: 100, dataIndex: 'queue_name' },
                            { xtype: 'gridcolumn', header: 'CPUs', sortable: true, resizable: true, width: 100, dataIndex: 'slots' }
                        ],
                        bbar: {
                            xtype: 'paging',
                            store: 'storeClusterQueue',
                            displayInfo: true,
                            refreshText: 'Retrieving queue status...',
                            emptyMsg: 'No jobs to retrieve',
                            id: 'clusterQueuePaging'
                        }
                    })
                }]
            }]
        };

    Simple enough; note the reference to Ext.ill.WCSS.stores.dashboardClusterJobs. So in stores/dashboard.js I just have this:

        Ext.ill.WCSS.stores.dashboardClusterJobs = new Ext.data.XmlStore({
            storeId: 'storeClusterJobs',
            record: 'job_list',
            autoLoad: true,
            url: 'joblist.xml',
            idPath: 'job_info',
            remoteSort: false,
            fields: [
                { name: 'JB_job_number' },
                { name: 'JAT_prio' },
                { name: 'JB_name' },
                { name: 'JB_owner' },
                { name: 'state' },
                { name: 'JAT_start_time' },
                { name: 'slots' },
                { name: 'queue_name' }
            ]
        });

    I run the code and I get 'store is undefined' :S It's confusing me a lot. All of the javascripts have been included in the correct order, i.e.

        <script type="text/javascript" src="/js/portal.js"></script>
        <script type="text/javascript" src="/js/stores/dashboard.js"></script>
        <script type="text/javascript" src="/js/controls/dashboard.js"></script>

    Thanks guys!

    Read the article
