Search Results

Search found 3140 results on 126 pages for 'creation'.


  • How to position an element so that it does not flow off the visible screen

    - by rjray
    I am creating pseudo-tooltips on a page that has a lot of "a" and "span" elements that have these tips associated with them. Everything in the creation of the element is fine, and it displays fine. However, since this is a page with a lot of data, as you get towards the bottom of the visual area the tooltips start to flow past the bottom edge of the window. My initial attempt to compensate for this with window.innerWidth/innerHeight didn't come out too well. I'm using jQuery for DOM manipulation (but not jQuery UI). Given the event itself, and the height and width of the tooltip (which I can get with getBoundingClientRect()), how can I position this element so that the bottom of the tooltip is never below the edge of the window?

    Read the article

  • Recommendations for 'C' Project architecture guidelines?

    - by SiegeX
    Now that I've got my head wrapped around the C language to the point where I feel proficient enough to write clean code, I'd like to focus my attention on project architecture guidelines. I'm looking for a good resource that covers the following topics: how to create an interface that promotes code maintainability and is extensible for future upgrades; library creation guidelines (for example, when should I consider static vs. dynamic libraries, and how do I properly design an ABI to cope with either one); header files: what to partition out and when, with examples of when to use a 1:1 vs. a 1:many .h-to-.c mapping; and anything you feel I missed that is important when attempting to architect a new C project. Ideally, I'd like to see some example projects ranging from small to large and see how the architecture changes depending on project size, function or customer. What resource(s) would you recommend for such topics?

    Read the article

  • Guice creates Swing components outside of UI thread problem?

    - by Boris Pavlovic
    I'm working on a Java Swing application with Google Guice as an IoC container. Things are working pretty well, but there is one UI problem: when the standard L&F is replaced with the Pushing Pixels Substance L&F, the application fails to run, because Guice creates the Swing components outside of the UI thread. Is there a way to tell Guice to create Swing components on the UI thread? Maybe I should create custom providers that return Swing components only after SwingUtilities.invokeAndWait(Runnable) has created them. I don't like the idea of running the whole application in the UI thread, but maybe it's just the perfect solution.
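
    A minimal sketch of the custom-provider idea mentioned above, assuming Guice's standard com.google.inject.Provider interface; the component type (JPanel) and the module are only illustrative, not taken from the question:

        import com.google.inject.AbstractModule;
        import com.google.inject.Provider;
        import javax.swing.JPanel;
        import javax.swing.SwingUtilities;

        // Builds the Swing component on the Event Dispatch Thread before Guice hands it out.
        public class EdtPanelProvider implements Provider<JPanel> {
            @Override
            public JPanel get() {
                final JPanel[] holder = new JPanel[1];
                try {
                    if (SwingUtilities.isEventDispatchThread()) {
                        holder[0] = new JPanel();              // already on the EDT
                    } else {
                        SwingUtilities.invokeAndWait(new Runnable() {
                            public void run() {
                                holder[0] = new JPanel();      // construct on the EDT
                            }
                        });
                    }
                } catch (Exception e) {
                    throw new RuntimeException("Could not create component on the EDT", e);
                }
                return holder[0];
            }
        }

        // Hypothetical module wiring the provider in:
        class UiModule extends AbstractModule {
            @Override
            protected void configure() {
                bind(JPanel.class).toProvider(EdtPanelProvider.class);
            }
        }

    The same pattern could be wrapped in a generic provider for any component type; whether that is better than simply bootstrapping the injector from within the EDT is a judgment call.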

    Read the article

  • GUI Design - Role selection with two lists, selected roles on the right, or left list?

    - by subes
    Hi, imagine two lists used for selecting roles during new-user creation in an administrator front end. One list holds all available roles and the other holds the selected roles. Between the lists are buttons that move elements from one list to the other, so the layout is horizontal: [List] [Buttons] [List]. Now to the question: should the selected elements be in the left or the right list? Intuitively, some people I asked say that the selected elements have to be in the right list. But other people follow a guideline that says the more important stuff in a front end has to be on the left side; they also think the selected elements are more important than the non-selected ones, so the selected ones have to be on the left. What's your opinion?

    Read the article

  • Generate SQL Server Express database from Entity Framework 4 model

    - by Cranialsurge
    I am able to auto-generate a SQL Server CE 4.0 *.sdf file using code-first generation as explained by Scott Guthrie here. The connection string for the same is as follows: <add name="NerdDinners" providerName="System.Data.SqlServerCe.4.0" connectionString="data source=|DataDirectory|NerdDinner.sdf"/> However if I try to generate an mdf instead using the following connection string, it fails to do so with the following error - "The provider did not return a ProviderManifestToken string.". <add name="NerdDinners" providerName="System.Data.SqlClient" connectionString="data source=|DataDirectory|NerdDinner.mdf"/> Even directly hooking into a SQLEXPRESS instance using the following connection string fails <add name="NerdDinners" providerName="System.Data.SqlClient" connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=NerdDinner;Integrated Security=True"/> Does EF 4 only support SQL CE 4.0 for database creation from a model for now or am I doing something wrong here?

    Read the article

  • Is dependency injection possible for JSP beans?

    - by kazanaki
    This may be a long-shot question. I am working on an application that is based on JSP/JavaScript only (without a web framework!). Is there a way to have dependency injection for JSP beans? By JSP beans I mean beans defined like this: <jsp:useBean id="cart" scope="session" class="session.Carts" /> Is there a way/library/hack to intercept the bean creation so that when "cart" is referenced for the first time, some sort of injection takes place? Can I define somewhere a "listener" for JSP beans (like you can do for JSF beans, for example)? I am free to do anything I want in the back end, but I cannot add a web framework in the front end (don't ask!)
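
    One possible hack, sketched below under the assumption of a Servlet 3.0 container: register an HttpSessionAttributeListener, which fires when <jsp:useBean scope="session"> stores the bean in the session, and wire the dependencies in by hand at that point. The attribute name and the Carts class come from the question; the actual wiring call is left as a comment because it depends on your back end:

        import javax.servlet.annotation.WebListener;
        import javax.servlet.http.HttpSessionAttributeListener;
        import javax.servlet.http.HttpSessionBindingEvent;

        // Fires when <jsp:useBean id="cart" scope="session"> puts the bean in the session,
        // giving a hook to inject dependencies without touching the JSPs.
        @WebListener
        public class CartInjectionListener implements HttpSessionAttributeListener {
            @Override
            public void attributeAdded(HttpSessionBindingEvent event) {
                if ("cart".equals(event.getName()) && event.getValue() instanceof session.Carts) {
                    session.Carts cart = (session.Carts) event.getValue();
                    // wire dependencies into 'cart' here, by hand or by asking
                    // whatever container you use in the back end to inject it
                }
            }

            @Override
            public void attributeRemoved(HttpSessionBindingEvent event) { }

            @Override
            public void attributeReplaced(HttpSessionBindingEvent event) { }
        }

    This keeps the front end untouched, at the cost of doing the wiring manually per bean type.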

    Read the article

  • How to write stored procedures to separate files with mysqldump?

    - by Jader Dias
    The mysqldump option --tab=path writes the creation script of each table to a separate file, but I can't find the stored procedures anywhere except in the screen dump. I need the stored procedures in separate files as well. The current solution I am working on is to split the screen dump programmatically. Is there an easier way? The code I am using so far is:
        mysqldump -p$PASSWORD --routines --skip-dump-date --no-create-info --no-data --skip-opt $DATABASE > $BACKUP_PATH/$DATABASE.sql
        mysqldump -p$PASSWORD --tab=$BACKUP_PATH --skip-dump-date --no-data --skip-opt $DATABASE

    Read the article

  • How to efficiently SELECT rows from database table based on selected set of values

    - by Chau Chee Yang
    I have a transaction table of 1 million rows. The table has a field named "Code" that keeps the customer's ID, and there are about 10,000 different customer codes. I have a GUI that lets the user render a report from the transaction table, and the user may select an arbitrary number of customers. I used the IN operator first, and it works for a few customers: SELECT * FROM TRANS_TABLE WHERE CODE IN ('...', '...', '...') I quickly ran into a problem when I selected a few thousand customers, because there is a limit on the IN operator. An alternative is to create a temporary table with a single CODE field, inject the selected customer codes into it with INSERT statements, and then use: SELECT A.* FROM TRANS_TABLE A INNER JOIN TEMP B ON (A.CODE=B.CODE) This works nicely for huge selections. However, there is performance overhead in creating the temporary table, running the INSERTs and dropping the temporary table. Are you aware of a better solution to handle this situation?

    Read the article

  • Implementing a two-way communication between Microsoft Dynamics CRM and 3rd party app

    - by CxDoo
    I need to implement bi-directional communication between Microsoft Dynamics CRM and a 3rd-party server. The ideal scenario is as follows: the user tries to create an entity in CRM; in a pre-create hook a 3rd-party library function (or web service, or whatever) is called, filled with the relevant info, which tries to create the respective entity on the server; if the call fails, creation fails in CRM; if the call succeeds, the entity is created in CRM AND additional fields are filled with return values from the call. More specifically, I want to do something like this when the user tries to create a new entity instance:
        try {
            ExternalWebService.CreateTrade(ref TradeInfo info); // this was initialized on the external server
            myCRM_Trade_Entity.SerialNo = info.SerialNo;
            CreateNew(myCRM_Trade_Entity);
        } catch (whatever) {
            fail;
        }
    What would be the suggested way to do this? I am new to Dynamics and have read about Workflows and Plugins, but am not sure how I should do this properly.

    Read the article

  • runtime loading of ValidateAntiForgeryToken Salt value

    - by p.campbell
    Consider an ASP.NET MVC application using the Salt parameter of the [ValidateAntiForgeryToken] attribute. The scenario is that the app will be used by many customers, and it's not terribly desirable to have the Salt known at compile time. The current strategy is to locate the Salt value in web.config: [ValidateAntiForgeryToken(Salt = Config.AppSalt)] // Config.AppSalt is a static property that reads web.config. This leads to a compile-time error saying that the Salt must be a constant: An attribute argument must be a constant expression, typeof expression or array creation expression of an attribute parameter type. How can I modify the application to allow runtime loading of the Salt, so that the app doesn't have to be re-salted and recompiled for each customer? Consider that the Salt won't change frequently, if at all, thereby removing the concern of invalidating form submissions.

    Read the article

  • ASP.Net MVC elegant UI and ModelBinder authorization

    - by SDReyes
    We know authorization is a cross-cutting concern, and we do everything we can to avoid mixing business logic into our views. But I still have not found an elegant way to filter UI components (e.g. widgets, form elements, tables, etc.) by the current user's roles without contaminating the view with business logic. The same applies to model binding. Example form: Product Creation, with fields Name, Price and Discount. The Administrator role is allowed to see and modify the Name, Price and Discount fields; the Administrator assistant role is allowed to see and modify only the Name and Price. The fields shown to each role are different, and model binding needs to ignore the Discount field for the 'Administrator assistant' role. How would you do it?

    Read the article

  • How to create an anonymous proxy?

    - by Rakesh Juyal
    I want to create an anonymous proxy. I googled it and even found some tutorials, but those were in PHP. If somebody has a tutorial on anonymous proxy creation in Java, please post it here, or simply let me know what approach I should follow to create an anonymous proxy. [I will be using Tomcat, if that matters for your answer.] Thanks. Edit: I guess I was not clear in stating what I require. Actually I am trying to develop a site like 'http://proxyug.com/'. If none of you were getting what I asked, then it certainly means such sites are not known as 'proxy servers'; they must be called something else. :)
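
    For what it's worth, the core of such a site is just a servlet that fetches a page on the user's behalf and relays it back, so the target site only ever sees the server's address. A rough, heavily simplified sketch for Tomcat follows; the servlet name and the ?url parameter are made up for illustration, and it does no URL rewriting, header forwarding, validation or error handling:

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Minimal web-proxy servlet: fetches ?url=... on behalf of the client
        // and streams the upstream response back.
        public class ProxyServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                String target = req.getParameter("url");   // e.g. ?url=http://example.com/
                HttpURLConnection conn = (HttpURLConnection) new URL(target).openConnection();
                resp.setContentType(conn.getContentType());
                try (InputStream in = conn.getInputStream(); OutputStream out = resp.getOutputStream()) {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        out.write(buf, 0, n);               // relay the upstream bytes
                    }
                }
            }
        }

    A real site in this space also has to rewrite the links inside the returned HTML so that subsequent clicks keep going through the proxy.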

    Read the article

  • Creating a data driven web front end quickly

    - by Ilya
    Over the last few years, I can't count how many web front ends I've had to create over a relatively simple database schema to facilitate data entry. I have to imagine that someone out there has written a framework I can use to simplify the creation of these kinds of simple GUIs. Doing a quick Google search, the following look like the key players in .NET: the ASP.NET Dynamic Data framework, SubSonic, and Naked Objects for .NET. Has anyone worked with any of these and have any preferences? More importantly, are there other frameworks that would be good to evaluate in this space?

    Read the article

  • ASP.NET MVC Session usage

    - by Ben
    Currently I am using ViewData or TempData for object persistence in my ASP.NET MVC application. However, in a few cases where I am storing objects into ViewData through my base controller class, I am hitting the database on every request (when ViewData["whatever"] == null). It would be good to persist these in something with a longer lifespan, namely session. Similarly, in an order processing pipeline, I don't want things like Order to be saved to the database on creation. I would rather populate the object in memory and then, when the order gets to a certain state, save it. So it would seem that session is the best place for this? Or would you recommend that, in the case of the order, I retrieve it from the database on each request rather than using session? Thoughts and suggestions appreciated. Thanks, Ben

    Read the article

  • ASP.NET MVC based CMS - dynamic generation of form helpers

    - by user252160
    I am working on an ASP.NET MVC based CMS that presents a rather extreme case. The system must allow the user to add custom content types based on different fields, and for every field one can add options and validations. The thing is that everything is stored in a complex DB and extracted at runtime using LINQ. I am pretty fresh with ASP.NET MVC, so the following dilemma came to mind: how should I build the content creation view so that the form helpers are not predefined in the view code but are loaded based on the type of the field? Do I have to create a factory class that checks the value of the type property of the field and then returns a helper based on that, or is there a better way to do it? This one seems pretty rigid to me, because any time I make a change in the Fieldtypes table, I will have to make sure to create a check for that new type too.

    Read the article

  • Self host WCF (Windows Communication Foundation) Ajax services

    - by Wayne Lo
    I am having trouble understanding how to expose WCF services through JavaScript. Here is what I found after days of research:
    1. Exposing WCF services through JavaScript, but not self-hosted: http://msdn.microsoft.com/en-us/library/bb472488.aspx In this example it requires the creation of a .svc file: <%@ServiceHost language="c#" Debug="true" Service="Microsoft.Samples.XmlAjaxService.CalculatorService" Factory="System.ServiceModel.Activation.WebServiceHostFactory" %>
    2. Self-hosting WCF Ajax services, but not exposing the services through JavaScript: http://msdn.microsoft.com/en-us/library/bb943471%28v=VS.100%29.aspx
    Please help. Thanks.

    Read the article

  • ASP.net SessionState Error in Design Mode

    - by stringo0
    I'm getting a weird error in the design view of a user creation page, on two controls: Error Creating Control - wCreateUser. Session state can only be used when enableSessionState is set to true, either in a configuration file or in the Page directive. (There's some more.) I've done both of these, but I'm still getting the error in design mode. The controls work fine when compiled, and on the live site; this is just in the Visual Web Developer 2010 design view for the page. Any ideas as to how I can resolve this? Thanks!

    Read the article

  • Reverting CoreData data

    - by ndg
    I have an NSTableView which is populated via a Core Data-backed NSArrayController. Users are able to edit any field they choose within the NSTableView. When they select the rows that they have modified and press a button, the data is sent to a third-party web service. Provided the web service accepts the updated values, I want to commit those values to my persistent store. If, however, the web service returns an error (or simply fails to respond), I want the edited fields to revert to their original values. To complicate matters, I have a number of other editable controls, backed by Core Data, which do not need to resort to this behaviour. I believe the solution to this problem revolves around the creation of a secondary managed object context, which I would use only for values edited within that particular NSTableView. But I'm confused as to how the two MOCs would interact with each other. What's the best solution to this problem?

    Read the article

  • mdadm: Win7-install created a boot partition on one of my RAID6 drives. How to rebuild?

    - by EXIT_FAILURE
    My problem happened when I attempted to install Windows 7 on its own SSD. The Linux OS I used, which has knowledge of the software RAID system, is on an SSD that I disconnected prior to the install. This was so that Windows (or I) wouldn't inadvertently mess it up. However, and in retrospect foolishly, I left the RAID disks connected, thinking that Windows wouldn't be so ridiculous as to mess with an HDD that it sees as just unallocated space. Boy was I wrong! After copying over the installation files to the SSD (as expected and desired), it also created an ntfs partition on one of the RAID disks. Both unexpected and totally undesired!

    I changed out the SSDs again, and booted up in Linux. mdadm didn't seem to have any problem assembling the array as before, but if I tried to mount the array, I got the error message:

        mount: wrong fs type, bad option, bad superblock on /dev/md0,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try dmesg | tail or so

    dmesg:

        EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)!
        EXT4-fs (md0): group descriptors corrupted!

    I then used qparted to delete the newly created ntfs partition on /dev/sdd so that it matched the other three /dev/sd{b,c,e}, and requested a resync of my array with

        echo repair > /sys/block/md0/md/sync_action

    This took around 4 hours, and upon completion, dmesg reports:

        md: md0: requested-resync done.

    A bit brief after a 4-hour task, though I'm unsure as to where other log files exist (I also seem to have messed up my sendmail configuration). In any case: no change reported according to mdadm, everything checks out. mdadm -D /dev/md0 still reports:

        Version : 1.2
        Creation Time : Wed May 23 22:18:45 2012
        Raid Level : raid6
        Array Size : 3907026848 (3726.03 GiB 4000.80 GB)
        Used Dev Size : 1953513424 (1863.02 GiB 2000.40 GB)
        Raid Devices : 4
        Total Devices : 4
        Persistence : Superblock is persistent
        Update Time : Mon May 26 12:41:58 2014
        State : clean
        Active Devices : 4
        Working Devices : 4
        Failed Devices : 0
        Spare Devices : 0
        Layout : left-symmetric
        Chunk Size : 4K
        Name : okamilinkun:0
        UUID : 0c97ebf3:098864d8:126f44e3:e4337102
        Events : 423

        Number   Major   Minor   RaidDevice   State
           0       8       16        0        active sync   /dev/sdb
           1       8       32        1        active sync   /dev/sdc
           2       8       48        2        active sync   /dev/sdd
           3       8       64        3        active sync   /dev/sde

    Trying to mount it still reports:

        mount: wrong fs type, bad option, bad superblock on /dev/md0,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try dmesg | tail or so

    and dmesg:

        EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)!
        EXT4-fs (md0): group descriptors corrupted!

    I'm a bit unsure where to proceed from here, and trying stuff "to see if it works" is a bit too risky for me. This is what I suggest I should attempt to do: tell mdadm that /dev/sdd (the one that Windows wrote into) isn't reliable anymore, pretend it is newly re-introduced to the array, and reconstruct its content based on the other three drives. I also could be totally wrong in my assumptions, that the creation of the ntfs partition on /dev/sdd and subsequent deletion has changed something that cannot be fixed this way. My question: help, what should I do? If I should do what I suggested, how do I do that?

    From reading documentation, etc., I would think maybe:

        mdadm --manage /dev/md0 --set-faulty /dev/sdd
        mdadm --manage /dev/md0 --remove /dev/sdd
        mdadm --manage /dev/md0 --re-add /dev/sdd

    However, the documentation examples suggest /dev/sdd1, which seems strange to me, as there is no partition there as far as Linux is concerned, just unallocated space. Maybe these commands won't work without it. Maybe it makes sense to mirror the partition table of one of the other RAID devices that weren't touched, before --re-add. Something like:

        sfdisk -d /dev/sdb | sfdisk /dev/sdd

    Bonus question: why would the Windows 7 installation do something so st...potentially dangerous?

    Update: I went ahead and marked /dev/sdd as faulty, and removed it (not physically) from the array:

        # mdadm --manage /dev/md0 --set-faulty /dev/sdd
        # mdadm --manage /dev/md0 --remove /dev/sdd

    However, attempting to --re-add was disallowed:

        # mdadm --manage /dev/md0 --re-add /dev/sdd
        mdadm: --re-add for /dev/sdd to /dev/md0 is not possible

    --add was fine:

        # mdadm --manage /dev/md0 --add /dev/sdd

    mdadm -D /dev/md0 now reports the state as clean, degraded, recovering, and /dev/sdd as spare rebuilding. /proc/mdstat shows the recovery progress:

        md0 : active raid6 sdd[4] sdc[1] sde[3] sdb[0]
              3907026848 blocks super 1.2 level 6, 4k chunk, algorithm 2 [4/3] [UU_U]
              [>....................]  recovery =  2.1% (42887780/1953513424) finish=348.7min speed=91297K/sec

    nmon also shows expected output:

        ¦sdb    0%   87.3    0.0| >                                     |¦
        ¦sdc   71%  109.1    0.0|RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR > |¦
        ¦sdd   40%    0.0   87.3|WWWWWWWWWWWWWWWWWWWW                 > |¦
        ¦sde    0%   87.3    0.0|>                                      ||

    It looks good so far. Crossing my fingers for another five+ hours :)

    Update 2: The recovery of /dev/sdd finished, with dmesg output:

        [44972.599552] md: md0: recovery done.
        [44972.682811] RAID conf printout:
        [44972.682815]  --- level:6 rd:4 wd:4
        [44972.682817]  disk 0, o:1, dev:sdb
        [44972.682819]  disk 1, o:1, dev:sdc
        [44972.682820]  disk 2, o:1, dev:sdd
        [44972.682821]  disk 3, o:1, dev:sde

    Attempting mount /dev/md0 reports:

        mount: wrong fs type, bad option, bad superblock on /dev/md0,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try dmesg | tail or so

    and in dmesg:

        [44984.159908] EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)!
        [44984.159912] EXT4-fs (md0): group descriptors corrupted!

    I'm not sure what to do now. Suggestions?
    Output of dumpe2fs /dev/md0:

        dumpe2fs 1.42.8 (20-Jun-2013)
        Filesystem volume name:   Atlas
        Last mounted on:          /mnt/atlas
        Filesystem UUID:          e7bfb6a4-c907-4aa0-9b55-9528817bfd70
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    user_xattr acl
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              244195328
        Block count:              976756712
        Reserved block count:     48837835
        Free blocks:              92000180
        Free inodes:              243414877
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      791
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8192
        Inode blocks per group:   512
        RAID stripe width:        2
        Flex block group size:    16
        Filesystem created:       Thu May 24 07:22:41 2012
        Last mount time:          Sun May 25 23:44:38 2014
        Last write time:          Sun May 25 23:46:42 2014
        Mount count:              341
        Maximum mount count:      -1
        Last checked:             Thu May 24 07:22:41 2012
        Check interval:           0 (<none>)
        Lifetime writes:          4357 GB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      e177a374-0b90-4eaa-b78f-d734aae13051
        Journal backup:           inode blocks
        dumpe2fs: Corrupt extent header while reading journal super block

    Read the article

  • In Django, how to create tables from an SQL file when syncdb is run

    - by Sidney
    Hi, how do I make syncdb execute SQL queries (for table creation) defined by me, rather than generating the tables automatically? I'm looking for this solution because some particular models in my app represent SQL table views over a legacy database table. So I've created their SQL views in my Django DB like this: CREATE VIEW legacy_series AS SELECT * FROM legacy.series; I have a reverse-engineered model that represents the above view/legacy table. But whenever I run syncdb, I have to create all the views first by running SQL scripts; otherwise syncdb simply creates tables for them (if a view is not found). How do I make syncdb run the above-mentioned SQL?

    Read the article

  • File.mkdir is not working and I can't understand why

    - by gotch4
    Hello, I have this brief snippet:
        String target = baseFolder.toString() + entryName;
        target = target.substring(0, target.length() - 1);
        File targetdir = new File(target);
        if (!targetdir.mkdirs()) {
            throw new Exception("Errore nell'estrazione del file zip");
        }
    It doesn't matter whether I leave off the last char (which is usually a slash); it's done this way to work on both Unix and Windows. The path is actually obtained from the URI of the base folder, as you can see from baseFolder.toString() (baseFolder is of type URI and is correct). The base folder actually exists. I can't debug this because all I get is true or false from mkdirs(), with no other explanation. The weird thing is that baseFolder was created with mkdir as well, and in that case it works. Now I'm under Windows; the value of target just before the creation of targetdir is "file:/C:/Users/dario/jCommesse/jCommesseDB". If I cut and paste it (without the last entry) into Windows Explorer, it works...
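
    A guess at the underlying issue, with a minimal sketch: baseFolder.toString() yields a URI string ("file:/C:/..."), and new File(String) treats the "file:" scheme as part of the path rather than stripping it, so mkdirs() fails. Building the File from the URI itself avoids the string surgery; the concrete base path and entry name below are only illustrative:

        import java.io.File;
        import java.net.URI;

        public class MkdirExample {
            public static void main(String[] args) throws Exception {
                URI baseFolder = new File("C:/Users/dario/jCommesse/").toURI(); // assumed base folder
                String entryName = "jCommesseDB/";                              // assumed zip entry name

                // resolve the entry against the base URI, then let File(URI) strip the scheme
                File targetdir = new File(baseFolder.resolve(entryName));
                if (!targetdir.exists() && !targetdir.mkdirs()) {
                    throw new Exception("Errore nell'estrazione del file zip");
                }
                System.out.println("Created: " + targetdir.getAbsolutePath());
            }
        }

    The File(URI) constructor handles platform separators on both Unix and Windows, so the manual trailing-slash trimming also becomes unnecessary.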

    Read the article

  • iOS - conditional compilation (xcode)

    - by sol
    I have created an additional iPad target for what was originally an iPhone app. From the Apple docs: "In nearly all cases, you will want to define a new view controller class to manage the iPad version of your application interface, especially if that interface is at all different from your iPhone interface. You can use conditional compilation to coordinate the creation of the different view controllers." But they don't give any example or detail of what conditional compilation is. Can anyone give an example? And where would I do this?

    Read the article

  • How to change the layout of the CreateUserWizard control?

    - by Bazzz
    How do I change just the layout (template) of the CreateUserWizard control programmatically? I would like to define another layout (not using the horrid table) but continue to use all the event handling and user creation of the CreateUserWizard control. Just for reference, the following code doesn't work and produces an unexpected result that doesn't represent my template at all; the "InstantiateIn" method of the ITemplate is never called.
        public partial class b : System.Web.UI.Page
        {
            protected void Page_Init(object sender, EventArgs e)
            {
                CreateUserWizard createUserWizard = new CreateUserWizard();
                createUserWizard.CreateUserStep.ContentTemplate = new Template();
                Panel1.Controls.Add(createUserWizard);
            }
        }

        public class Template : ITemplate
        {
            void ITemplate.InstantiateIn(Control container)
            {
                container.Controls.Add(new TextBox() { ID = "UserName" });
                container.Controls.Add(new TextBox() { ID = "Password" });
                container.Controls.Add(new TextBox() { ID = "ConfirmPassword" });
                container.Controls.Add(new TextBox() { ID = "Email" });
                container.Controls.Add(new PlaceHolder() { ID = "ErrorMessage" });
            }
        }

    Read the article

  • Generate non-identity primary key

    - by MikeWyatt
    My workplace doesn't use identity columns or GUIDs for primary keys. Instead, we retrieve "next IDs" from a table as needed and increment the value for each insert. Unfortunately for me, LINQ to SQL appears to be optimized around using identity columns, so I need to query and update the "NextId" table whenever I perform an insert. For simplicity, I do this immediately upon creating the new object. Since all operations between the creation of the data context and the call to SubmitChanges are part of one transaction, do I need to create a separate data context for retrieving next IDs? Each time I need an ID, I have to query and update a table inside a transaction to prevent multiple apps from grabbing the same value. Is a separate data context the only way, or is there something better I could try?

    Read the article

  • How can the javascript plugin architecture in raphael/jquery be done?

    - by TimDog
    I'm looking for a barebones JavaScript example that demonstrates how the plugin architecture works in large JavaScript libraries (such as Raphael or jQuery). In either case, you build plugins by ensuring your custom plugin follows this pattern: jQuery.fn.pluginName. So assume I have a library:
        myLibrary = (function() {
            //my fancy javascript code
            return function() {
                //my return object
            };
        });
    How would fn be incorporated into the above myLibrary object to ensure that the resulting plugin is callable? I instantiate myLibrary like so:
        var lib = new myLibrary();
    And now I have included a reference to my plugin in my page:
        myLibrary.fn.simplePlugin = function() {
            //more fancy code
        }
    So finally, I can just call:
        lib.simplePlugin();
    Basically, what magic is actually occurring when the .fn is used during the creation of the plugin?

    Read the article
