Search Results

Search found 15389 results on 616 pages for 'external js'.


  • Node.js Lockstep Multiplayer Architecture

    - by Wakaka
    Background
    I'm using the lockstep model for a multiplayer Node.js/Socket.IO game with a client-server architecture. User input (mouse or keypress) is parsed on the client into commands like 'attack' and 'move', which are sent to the server and scheduled to be executed on a certain tick. This is in contrast to sending state data to clients, which I don't wish to use due to bandwidth issues. Each tick, the server sends each client the list of commands for that tick (possibly empty). The server and all clients then process the commands and simulate that tick in exactly the same way. With Node.js this is actually quite simple thanks to code sharing between server and client: I just put the deterministic simulator in the /shared folder, which both server and client can run. The server-side simulation is required so that there is an authoritative version of the simulation which clients cannot alter.
    Problem
    Now, the game has many entity classes, like Unit, Item, Tree etc. Entities are created in the simulator. However, for each class, some methods are shared and some are client-specific. For instance, the Unit class has an addHp method, which is shared. It also has methods like getSprite (gets the image of the entity), isVisible (checks if the unit can be seen by the client), onDeathInClient (does a bunch of client-only things when the unit dies, like adding announcements) and isMyUnit (a quick check that the client owns the unit). Up till now, I have been piling all the client functions into the shared Unit class and adding a this.game.isServer() check where necessary. For instance, when the unit dies, it calls if (!this.game.isServer()) { this.onDeathInClient(); }. This approach has worked well enough so far in terms of functionality, but as the codebase grows, this style of coding seems a little strange. Firstly, the client code is clearly not shared, and yet it lives in the /shared folder. Secondly, client-specific variables for each entity are also instantiated on the server entity (like unit.sprite), which causes problems when the server cannot instantiate them (it has no Image class like the browser does). So my question is: is there a better way to organize the client code, or is this a common way of doing things for lockstep multiplayer games? I can think of a possible workaround, but it has its own problems.
    Possible workaround (with problems)
    I could use JavaScript mixins that are only applied in the browser. Thus, in /shared/unit.js I would have this code at the end: if (typeof exports !== 'undefined') module.exports = Unit; else mixin(Unit, LocalUnit); Then /client/localunit.js would define an object LocalUnit holding the client-side methods for Unit. Now, I already have a publish-subscribe system in place for events in the simulator. To remove the this.game.isServer() checks, I could publish entity-specific events whenever I want the client to do something. For instance, I would call this.publish('Death') in /shared/unit.js and this.subscribe('Death', this.onDeathInClient) in /client/localunit.js. But this would make the simulator's event-listener lists differ between server and client, and if I ever want to clear all events subscribed by the shared simulator, I can't.
    Of course, I could create two event subscription systems - one client-specific and one shared - but then the publish() method would have to do if (!this.game.isServer()) { this.publishOnClient(event); }. All in all, the workaround off the top of my head seems pretty complicated for something as simple as separating client and shared code. So I wonder if there is an established, simpler method for better code organization, ideally specific to Node.js games.
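
    For what it's worth, a minimal sketch of the environment-gated mixin described above, with the 'Death' wiring done client-side (mixin(), LocalUnit.initLocal() and the file load order are assumptions; publish()/subscribe() stand in for the poster's existing pub-sub system):

        // client/localunit.js - client-only code; in the browser this file is
        // loaded before shared/unit.js so that mixin() and LocalUnit exist
        function mixin(Target, methods) {          // any copy-properties helper works
          for (var name in methods) Target.prototype[name] = methods[name];
        }
        var LocalUnit = {
          initLocal: function () {                 // exists only on the client
            this.subscribe('Death', this.onDeathInClient);
          },
          onDeathInClient: function () { /* announcements, sprite cleanup, ... */ }
        };

        // shared/unit.js - deterministic simulation only
        function Unit(game) {
          this.game = game;
          this.hp = 100;
          if (this.initLocal) this.initLocal();    // no-op on the server
        }
        Unit.prototype.addHp = function (amount) {
          this.hp += amount;
          if (this.hp <= 0) this.publish('Death'); // replaces the isServer() check
        };

        if (typeof module !== 'undefined' && module.exports) {
          module.exports = Unit;                   // Node.js server: shared class as-is
        } else {
          mixin(Unit, LocalUnit);                  // browser: graft on client-only methods
        }

    The shared file never mentions the client methods by name; the server simply never has them on the prototype.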

  • Real performance of node.js

    - by uther.lightbringer
    I've got a question concerning Node.js performance. There are quite a lot of "benchmarks" and a lot of fuss about the great performance of Node.js. But how does it stand up in the real world, not just processing empty requests at high speed? Could someone try to compare this scenario: a Java (or equivalent) server running an application with complex business logic between receiving a request and sending the response. How would Node.js deal with that? If a lot of JavaScript processing were needed on the server side, is Node.js really so fast at executing JavaScript that it stands a chance against more heavyweight competitors?

  • Knockout.js mapping plugin with require.js

    - by Ravi
    What is the standard way of loading the mapping plugin with require.js? Below is my config.js (require.js config file):

        require.config({
            // Initialize the application with the main application file.
            deps: ["app"],
            paths: {
                // JavaScript folders.
                libs: "lib",
                plugins: "lib/plugin",
                templates: "../templates",
                // Libraries.
                jquery: "lib/jquery-1.7.2.min",
                underscore: "lib/lodash",
                text: "text",
                order: "order",
                knockout: "lib/knockout",
                knockoutmapping: "lib/plugin/knockout-mapping"
            },
            shim: {
                underscore: {
                    exports: '_'
                },
                knockout: {
                    deps: ["jquery"],
                    exports: "knockout"
                }
            }
        });

    In my view model:

        define(['knockout', 'knockoutmapping'], function (ko, mapping) {
            // ...
        });

    However, mapping is not bound to ko.mapping. Any pointers/suggestions would be appreciated. Thanks, Ravi
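
    One likely fix, assuming the knockout-mapping file is a plain (non-AMD) build, is to shim it the same way underscore is shimmed, so it loads after knockout and exports the global it attaches (a sketch, not a confirmed answer):

        require.config({
            paths: {
                knockout: "lib/knockout",
                knockoutmapping: "lib/plugin/knockout-mapping"
            },
            shim: {
                knockout: { deps: ["jquery"], exports: "knockout" },
                knockoutmapping: {
                    deps: ["knockout"],       // make sure knockout loads first
                    exports: "ko.mapping"     // the global the non-AMD build creates
                }
            }
        });

    If the build of knockout-mapping is AMD-aware, the shim is ignored and it instead require()s the module id 'knockout' itself, so that path entry must exist either way.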

  • a server side mustache.js example using node.js

    - by onecoder4u
    I'm looking for an example of using mustache.js with node.js. Here is my example, but it is not working: Mustache is undefined. I'm using mustache.js from the master branch.

        var sys = require('sys');
        var m = require("./mustache");

        var view = {
          title: "Joe",
          calc: function() {
            return 2 + 4;
          }
        };

        var template = "{{title}} spends {{calc}}";

        var html = Mustache().to_html(template, view);
        sys.puts(html);
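
    For comparison, a minimal working sketch against the npm-published mustache package (rather than the master-branch file the poster is loading by relative path):

        // npm install mustache
        var Mustache = require('mustache');

        var view = {
          title: "Joe",
          calc: function () { return 2 + 4; }
        };

        // render() is the package's documented API; it is a plain function on the
        // exported object, not a constructor, so no Mustache() call is needed
        var html = Mustache.render("{{title}} spends {{calc}}", view);
        console.log(html); // "Joe spends 6"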

  • Plex Media Server: Won't find media External Hard Drive

    - by grayed01
    I am attempting to turn an older PC into a home media server with Ubuntu 12.04 using Plex Media Server. I have a newer WD 2 TB external USB hard drive with all of my media on it, and I cannot for the life of me figure out why Plex will not recognize the files on it. Ubuntu shows the drive and the files, and lets me play and view them; Plex shows the drive under the name "External", but when I click it, it is 100% empty and shows nothing to add. I can access the files on the external through file sharing just fine on my Windows computers, but I would love to be able to use Plex for streaming to our Roku. I am fairly new to Ubuntu; I have used Plex with the same drive on Windows and it worked fine. I have read multiple articles on this and nothing seems to do the trick. How can I solve this?

  • Node.js as a custom (streaming) upload handler for Django

    - by Gijs
    I want to build an upload-centric app using Django. One way to do this is with nginx's upload module (non-blocking), but it has its problems. Node.js is supposed to be a good candidate for this type of application. But how can I make Node.js act as an upload_handler() for Django (http://docs.djangoproject.com/en/1.1/topics/http/file-uploads/#modifying-upload-handlers-on-the-fly)? I'm not sure where to look for examples.
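
    A bare-bones sketch of the usual shape of this setup - Node accepts the upload stream and hands Django a reference to the stored file afterwards. Every name, port and URL here is made up for illustration, and real multipart parsing (e.g. with a library such as formidable) is omitted:

        var http = require('http');
        var fs = require('fs');
        var path = require('path');

        http.createServer(function (req, res) {
          if (req.method !== 'POST') { res.statusCode = 405; return res.end(); }

          // Stream the raw body straight to disk - nothing is buffered in memory.
          // Assumes /tmp/uploads already exists.
          var file = path.join('/tmp/uploads', Date.now() + '.upload');
          req.pipe(fs.createWriteStream(file));

          req.on('end', function () {
            // Notify the Django app where the file landed (hypothetical endpoint).
            http.get('http://localhost:8000/uploads/done?file=' + encodeURIComponent(file));
            res.end('ok');
          });
        }).listen(3000);

    The point is that Node owns the slow, concurrent upload connections, while Django only ever sees a quick internal callback.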

  • Laptop screen blank after login when external monitor is not connected

    - by Ramon Suarez
    Ubuntu does not automatically switch back to the only monitor present when booting after disconnecting the external monitor. Here's a video showing what happens: I get to the login window and everything looks OK, then I type my password, the desktop image shows up and... everything goes blank. It does not happen when I just log in as a guest.

    When possible I work with my laptop connected to an external screen via the VGA port. The problem comes when I boot the computer without that secondary screen connected: The login screen comes up OK. After login the screen goes black, but I can hear the login sound. If I hit Ctrl + Alt + Backspace and log in again it is sometimes fixed, but not always. If I log in as a different user everything is OK; then I log in as my user and sometimes it works. To have a screen at all I have to plug in a monitor. Although I have turned on the laptop display with that monitor on, if I reboot it goes blank again after login, even if I turn off the external monitor before turning off the computer.

    I've managed to get my screen back with my username after going into recovery mode, but only sometimes. Failsafe would not load past the second screen asking me what I wanted to do (no mouse to click with nor working keyboard). My computer is an LDLC Aurore BB1-i5 -8 -S1.

    Which configuration file keeps the information about the monitors used by Displays under lightdm, and where is it? I guess if I could edit it I might have a chance :) One of the things I tried, following a solution in another post, was removing my monitors.xml file, but it did not work and I don't know how to create a good one that I could use now.

    When doing DISPLAY=:0 xrandr I get:

        Screen 0: minimum 320 x 200, current 320 x 200, maximum 8192 x 8192
        LVDS1 connected (normal left inverted right x axis y axis)
           1366x768       60.0 +
           1360x768       59.8     60.0
           1024x768       60.0
           800x600        60.3     56.2
           640x480        59.9
        VGA1 disconnected (normal left inverted right x axis y axis)
        HDMI1 disconnected (normal left inverted right x axis y axis)
        DP1 disconnected (normal left inverted right x axis y axis)

    This is the full dmesg after activating sudo xdiagnose as Bryce suggested. (If you tell me the relevant parts I will paste them here.) When connecting the external monitor, only the external one will work, although I can see using Displays that the computer thinks both are working. I've asked the question in Launchpad but it keeps expiring without any feedback. In my opinion Ubuntu should be able to detect automatically that there is no external monitor present and switch to the laptop monitor. There's a similar question here, but it does not apply to my case: External monitor set as primary even when disconnected from laptop.

    Update: For clarification, the problem happens only with my user, and only once I log in. I even get to see the screensaver for about a second, and then it goes blank. I tried Bryce's example (see his answer below), but it did not work. The info I get from tty1 with DISPLAY=:0 xrandr is the same output as above.

  • redirecting in node.js behind mod_rewrite proxy

    - by chmanie
    I have a node.js application running behind an Apache mod_rewrite proxy configured in a .htaccess file like this:

        RewriteCond %{HTTP_HOST} =mydomain.com [OR]
        RewriteCond %{HTTP_HOST} =www.mydomain.com
        RewriteRule (.*) http://localhost:3000/$1 [QSA,P]

    When I now do a redirect (e.g. Express' res.redirect()) inside my node.js application (which runs on port 3000), the user is always redirected to http://localhost:3000/ (which is in fact exactly what is defined above, but not the desired behaviour). Is there any way around this?
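
    mod_proxy forwards the hostname the browser actually asked for in the X-Forwarded-Host header, so one workaround (a sketch; the route and target path are made up) is to build the redirect from that header instead of the proxied Host:

        app.get('/old', function (req, res) {
          // Prefer the original hostname over the proxy's localhost:3000.
          var host = req.header('x-forwarded-host') || req.header('host');
          res.redirect('http://' + host + '/new');
        });

    Where the server config allows it, setting ProxyPreserveHost On on the Apache side achieves the same thing without code changes, since Node then sees the real Host header.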

  • External monitor problems with Asus UX32VD in 12.04

    - by rsilva
    I've posted a video showing one of the problems I'm facing with my UX32VD in regard to external monitors. As you can see, as soon as I plug in the external monitor, the laptop screen goes black and the external monitor displays a single colour (the colour changes between red, green and blue). However, this does not always happen. For instance, if I reboot the laptop with the external monitor already plugged in, I get to use it, but not the laptop screen: in the Displays settings the laptop screen is not even listed. Other times it just works - the two screens, without any problems. I tried with two different external monitors and all these apparently random issues repeat; I even tried using an HDMI cable instead. Same random behaviour. This question seems to cover one of these issues, but the answer did not help me. I was forced to use the 3.5.2 kernel in order to use the Intel graphics card, as otherwise the Gallium driver was used. I also have Bumblebee installed.

  • External display resolution issue

    - by Steven
    I'm new to Ubuntu. I've 'dabbled' in the past but Windows was annoying me too much so I finally made the change. One thing I use my laptop for is outputting the screen onto an external monitor. In Windows this usually works fine (but when there is an issue, it's a pain to resolve and has crashed my computer on many occasions). In Ubuntu it's a very simple case of "plug and play" because it displays the content immediately with no major problems. However, I can only set the resolution of the external display to 4:3 formats, whereas the external monitor is 16:9, so the picture is stretched. In Windows there are loads of resolutions to choose from. If I disable the laptop's screen, it still doesn't allow me to display the correct resolution externally (on VGA). Further to that, if I close the laptop it keeps the external screen as it is, but when I reopen the laptop, the laptop screen doesn't turn back on until I restart. If it's a video I am watching, I can use VLC to alter the aspect ratio so the video looks fine, but it's quite an annoying workaround. I used the terminal to find my graphics controller, which is "Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller [8086:2a42] (rev 07)" (not great, I know... the laptop was a free gift with my mobile!) Any idea how I can get the resolution on the external monitor to be correct?

  • Can't authenticate mobile client with node.js (using passport.js)

    - by Pazinio
    I'm trying to build a CRUD application with node.js as the back-end API (Express), plus a web app (Backbone) and a mobile client (native Android) as front-ends. (I'm a node.js beginner.) My server solution is based on the great tutorial 'easy-node-authentication'. In my Android app I have managed to get the user's Google token after completing the authentication step with the Google Plus SDK (the mobile app talks to Google Plus directly). I'm trying to understand the right, elegant way to re-use that Google token to authenticate my Android user again through their Google Plus account, to ensure the mobile client holds a real token, and then add a new entry (id, token, email, name) to my users table in my node back-end. The question is: what should my next step be if I want to keep my back-end unchanged? Should I send a GET request with the token as a cookie to /auth/google? Maybe to /auth/google/callback? Another URL? Does this make sense at all? Please note: I'm aware that the 'easy-node-auth' solution mentioned above is based on sessions and cookies. Having said that, I'm still trying to understand if there is a convenient way to integrate both (Android and node), as it works well for my web app and node. Thanks in advance.
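
    A common pattern (a sketch only - the route is hypothetical, it assumes body-parsing middleware, and it bypasses passport's browser redirect flow entirely) is a dedicated mobile endpoint that verifies the token against Google's tokeninfo endpoint and then finds or creates the user:

        var https = require('https');

        app.post('/auth/google/mobile', function (req, res) {
          var token = req.body.token; // sent by the Android app

          https.get('https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=' +
                    encodeURIComponent(token), function (gres) {
            var body = '';
            gres.on('data', function (d) { body += d; });
            gres.on('end', function () {
              var info = JSON.parse(body);
              if (info.error) return res.send(401);
              // info.user_id / info.email identify the Google account;
              // find-or-create the users-table row here, then issue a
              // session cookie or an API token for the mobile client.
              res.send(200);
            });
          });
        });

    The /auth/google and /auth/google/callback routes stay untouched for the web app; the mobile client simply never goes through them.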

  • General directions on developing a server side control system for JS/Canvas Action RPG

    - by Billy Ninja
    Well, yesterday I asked about anti-cheat in JS, and confirmed what I kind of already knew: it's just not possible. Now I want to measure roughly how hard it is to implement server-side checking that is agnostic to client input and does not hurt the game experience too much. I don't want to waste too many resources on this, since it's initially going to be a single-player game into which I may want to introduce some kind of ranking or trading system later on; I'd rather deliver more cool game features instead. I also don't want to have to guarantee super-fast server responses to keep the game lag-free. I'd rather go with looser, discrete control of key variables and instances - for example, store the user's actions in a FIFO buffer on the client and push those actions to the server gradually. I'd love to see an elegant, generic solution that I could plug into my client game-logic root (not having to scatter checks everywhere in my client JS) and a few classes on the Node.js server that could handle it, without having to mirror/describe all of my game entities a second time on the server.
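
    A minimal sketch of that client-side FIFO idea (transport and event names are invented for illustration; Socket.IO is assumed since it fits a JS/Canvas + Node stack):

        // client: record actions locally, flush them to the server in batches
        var socket = io.connect();            // assumes the Socket.IO client library
        var actionBuffer = [];

        function recordAction(type, data) {
          actionBuffer.push({ t: Date.now(), type: type, data: data });
        }

        setInterval(function flush() {
          if (actionBuffer.length === 0) return;
          socket.emit('actions', actionBuffer.splice(0)); // send and empty the buffer
        }, 2000); // push every 2s - this latency never blocks gameplay

        // server (Node.js): validate asynchronously, flag impossible sequences
        var io = require('socket.io').listen(3000);
        io.sockets.on('connection', function (socket) {
          socket.on('actions', function (batch) {
            batch.forEach(function (a) {
              // e.g. check cooldowns, movement speed, item counts against a
              // coarse server-side model rather than a full mirrored simulation
            });
          });
        });

    Because validation is decoupled from the render loop, the server can lag behind by seconds and merely revoke rewards or flag accounts when a batch fails the sanity checks.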

  • Compressing/compacting messages over websocket on Node.js

    - by icelava
    We have a websocket implementation (Node.js/Sock.js) that exchanges data as JSON strings. As our use cases grow, so has the size of the data transmitted across the wire. The websocket protocol does not natively offer any compression feature, so in order to reduce the size of our messages we'd have to do something about the serialisation ourselves. There appear to be a variety of LZW implementations in JavaScript, some of which leave me unsure - given my weak understanding of low-level encodings - whether they are suitable only for in-browser use or also for transmission across the wire. More importantly, all of them seem to impose a noticeable performance drag when JavaScript is the engine doing the compression/decompression work, which is not desirable for mobile devices. Looking instead at other forms of compact serialisation: MessagePack does not appear to have any active support in JavaScript itself; BSON does not have a JavaScript implementation; and an alternative BISON project that I tested does not deserialise everything back to the original values (large numbers), and it does not look like any further development will happen there either. What are some other options others have explored for Node.js?
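
    On the server side at least, Node's built-in zlib module can compress outgoing frames cheaply (a sketch; conn.write is assumed to be a SockJS connection, and the browser would still need a JavaScript inflater, which is exactly the client-side CPU cost described above):

        var zlib = require('zlib');

        function sendCompressed(conn, message) {
          var json = JSON.stringify(message);
          zlib.deflate(json, function (err, buf) {
            if (err) return conn.write(json);     // fall back to plain JSON
            conn.write(buf.toString('base64'));   // SockJS frames are text-only,
          });                                     // so binary must be base64-wrapped
        }

    Note the base64 wrapping claws back about a third of the savings; whether the remaining win beats the decompression cost on a phone is an empirical question.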

  • js function causing all other functions not to work inside js file

    - by Camran
    In Safari 4 and all Internet Explorer browsers, whenever I try to call a function inside a JavaScript file which contains the function below, that first function isn't called. So calling function1 will not work if function2 is inside the same .js file - explanation? Here is the code which causes the problem. Whenever I remove this function, everything works fine and all functions work, so this function is causing the problem:

        function addOption(selectbox, value, text, class, id_nr) {
            var optn = document.createElement("OPTION");
            optn.text = text;
            optn.value = value;
            optn.id = value;
            if (class == 1) {
                optn.className = "nav_option_main";
            }
            selectbox.options.add(optn);
        }

    Any ideas why? Thanks
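
    The likely culprit: class is a reserved word in JavaScript, and using it as a parameter name is a syntax error in stricter parsers (IE, older Safari), which kills every function in the file. Renaming the parameter fixes it, as in this sketch:

        function addOption(selectbox, value, text, className, id_nr) {
            var optn = document.createElement("OPTION");
            optn.text = text;
            optn.value = value;
            optn.id = value;
            if (className == 1) {          // renamed from the reserved word "class"
                optn.className = "nav_option_main";
            }
            selectbox.options.add(optn);
        }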

  • Best way to use Cradle with Express.js (CouchDB, Node.js)

    - by Costa
    I'm building my website (http://tedxgramercy.jit.su) with express.js, and so far I've been using node's http.request method to access couch, which has been cool - I've learned lots about how http, couch, and node work, which is awesome. Anyway, I'm thinking of moving over to cradle now (let me know if you have a strong opinion about this) and I'd like to know the "right" way to set this up. Should I:
    1. require() cradle and make a new connection to my db in each separate route? or
    2. create my database connection once, and then just share that connection by require()ing it in each route? (If so, how do I do that?)
    Thanks!!!
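
    Node's module cache makes option 2 nearly free: a module is evaluated once per process, so a small db.js can create the connection and every route that require()s it gets the same object. A sketch using cradle's documented connection API (the database name and routes are made up):

        // db.js - evaluated once, then cached by require()
        var cradle = require('cradle');
        module.exports = new (cradle.Connection)().database('mysite');

        // routes/posts.js - same database instance everywhere
        var db = require('../db');

        exports.list = function (req, res) {
          db.view('posts/all', function (err, rows) {  // a hypothetical design doc
            if (err) return res.send(500);
            res.json(rows);
          });
        };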

  • How to write reusable code in node.js

    - by lortabac
    I am trying to understand how to design node.js applications, but it seems there is something I can't grasp about asynchronous programming. Let's say my application needs to access a database. In a synchronous environment I would implement a data access class with a read() method returning an associative array. In node.js, because code is executed asynchronously, this method can't return a value, so, after execution, it will have to "do" something as a side effect. It will then contain some code which does something other than just reading data. Let's suppose I want to call this method multiple times, each time with a different success callback. Since the callback is included in the method itself, I can't find a clean way to do this without either duplicating the method or specifying all possible callbacks in a long switch statement. What is the proper way to handle this problem? Am I approaching it the wrong way?
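
    The conventional answer is to make the callback a parameter rather than part of the method body, so the data-access code stays generic and each caller supplies its own continuation. A sketch (the db.query driver call and the two consumers are invented for illustration):

        // data access stays generic - it knows nothing about its callers
        function readUsers(callback) {
          db.query('SELECT * FROM users', function (err, rows) {
            callback(err, rows);
          });
        }

        // each call site passes a different "success callback"
        readUsers(function (err, users) {
          if (err) return console.error(err);
          renderUserList(users);   // one caller renders a page...
        });

        readUsers(function (err, users) {
          if (err) return console.error(err);
          exportCsv(users);        // ...another exports a file
        });

    No switch statement and no duplication: the asynchronous read() is the exact analogue of the synchronous "returns an array" method, with the return value delivered to a function argument instead.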

  • Dual-boot computer won't boot without external hard drive

    - by FrankP
    I have Ubuntu loaded on my external HDD. I tried to unplug the external drive so that Windows would run as the default OS when the computer turns on, but it gives me an error. I need to know how to stop my computer, at boot, from saying Error: no such device: (a whole bunch of numbers and letters) followed by grub rescue>_. If I plug the external HDD in and let Ubuntu run the boot process, it gives me a list of OSs/HDDs to choose from, and Windows 7 is there. The only problem is that I want Windows to be my default OS, not the other way around. P.S. I have found that I dislike Ubuntu because I can't even figure out how to install the programs needed to start learning Ruby on Rails, so installing it was a waste of my time, in my opinion. Now that I have it on the external hard drive I will leave it installed, though; I just don't want to have to keep that external drive plugged in to my computer all the time. Thank you a ton to whoever can help me!

    Update: Thank you for the detailed instructions. I am doing my best to follow you, and it makes sense when I read it, but Rescatux is not doing what you said it would: none of the options you said would appear are there. On my screen there are 4 options when MBR runs; none look familiar, and when I picked the best option based on my educated guesses it said success. I tried to restart my computer and it said Please insert windows recovery disc and hit enter. The problem is I don't have the Windows recovery disc - I bought my computer from a local computer tech who loads Windows on it for you, and Sunday is my only free day to run it over to him. I think I have just wrecked my computer in the process of this attempted fix: Windows now refuses to boot WITH or WITHOUT the HDD. Please help, this is getting out of hand.

  • External hard drive issue

    - by Blind Fish
    I am running Ubuntu 12.04 and I have a 500GB external hard drive, formatted as ext4, which I have used for about a year to store batches of data that I am sifting through. About once a week I move a chunk of data from the external drive to my internal drive for processing. As I do that, I delete the data from the external drive and empty the trash, thereby clearing up space on the external and also preventing myself from grabbing stuff I've already sorted through. The vast majority of this is text files, but there are a few .jpegs and .mp4s thrown in, if that matters. Anyway, this has been working without a hitch for nearly a year. But today I plugged in the drive and hit an odd issue. I was able to move folders from the external over to my internal drive with no problem, but when I tried to delete those files I was unable to do so. I kept getting the "unable to send file to trash, do you wish to delete permanently?" message. I clicked yes / delete all, but no files were actually deleted; it just sat there. Even worse, while the system was trying in vain to delete those files, I was unable to move more files over. In short, I was stuck. I cancelled the operation, unmounted and remounted the drive, and then I had an even bigger problem. I have a spreadsheet of all the different folders on there (exactly 818), and yet when I opened the drive in Nautilus, it found only 512 folders. So I had 306 missing folders, yet my free space was unchanged. Immediately I suspected the drive might be corrupted. I went into Disk Utility and ran the check disk option; it said the drive was NOT clean, but offered no remedy or option to repair. I went back into the drive in Nautilus and once again attempted to delete the files I had already moved. Same issue. I clicked on Details and it said it was unable to create a trashing file (I/O error). I've looked, and it's not a permissions issue: the drive and all the folders in it belong to me, and they are all read/write. I've started running badblocks on the drive, but it looks like that is going to take several hours to complete. Any ideas?

  • Implementing a JS templating engine with current PHP project

    - by SeanWM
    I'm currently working on a PHP project and quickly realizing how useful a templating engine would be. I have a few tables whose rows are written out via a PHP loop. Is it possible to use just a JS templating engine (like Handlebars.js) to also work with these tables? For example:

        $arr = array('red', 'green', 'blue');

        echo '<table>';
        foreach ($arr as $value) {
            echo '<tr><td>' . $value . '</td></tr>';
        }
        echo '</table>';

    Now I want to add a column via an AJAX call using a JS templating engine. Is this possible? Or do I have to use a templating engine on both the server side and the client side?
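
    Mixing the two is possible: PHP renders the initial table, and a client-side template renders only the rows added later. A sketch using Handlebars' documented compile API with jQuery for the AJAX call (the endpoint, table id and row shape are made up):

        // compile once; the template source could also live in a <script> tag
        var rowTemplate = Handlebars.compile('<tr><td>{{value}}</td></tr>');

        // fetch new rows and append them to the PHP-rendered table
        $.getJSON('/colors/new', function (rows) {
          rows.forEach(function (row) {          // e.g. row = { value: 'yellow' }
            $('#color-table').append(rowTemplate(row));
          });
        });

    The trade-off is that the row markup now exists twice, once in PHP and once in the Handlebars string, which is exactly why some projects move to a shared (Mustache-style) template rendered by both sides.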

  • How to write loosely coupled classes in node.js

    - by lortabac
    I am trying to understand how to design node.js applications, but it seems there is something I can't grasp about asynchronous programming. Let's say my application needs to access a database. In a synchronous environment I would implement a data access class with a read() method returning an associative array. In node.js, because code is executed asynchronously, this method can't return a value, so, after execution, it will have to "do" something as a side effect. It will then contain at least one line of extraneous code which has nothing to do with data access. Multiply this across all methods and all classes and you will very soon have an unmanageable "code soup". What is the proper way to handle this problem? Am I approaching it the wrong way?

  • gzip specific files

    - by byTheDrop
    For some reason these files are not being gzipped on my Apache server; Chrome's network tab shows this. Is there a specific directive I can add to .htaccess to compress these files?

    Compressing the following resources with gzip could reduce their transfer size by about two thirds (~680.45KB):

        adae8bc4c3cb52cbe22358aaced87a72.css could save ~607B
        css_f91fa8d73b5e7661d6dcf9e58395e533.css could save ~59.54KB
        jquery.min.js could save ~37.27KB
        drupal.js could save ~6.15KB
        auto_image_handling.js could save ~6.72KB
        lightbox.js could save ~29.38KB
        superfish.js could save ~2.42KB
        jquery.bgiframe.min.js could save ~1011B
        jquery.hoverIntent.minified.js could save ~1.05KB
        nice_menus.js could save ~581B
        panels.js could save ~531B
        jquery.pngFix.js could save ~2.98KB
        jquery.cycle.all.min.js could save ~20.20KB
        views_slideshow.js could save ~8.76KB
        views_slideshow.js could save ~9.02KB
        wanderlust_custom_videos.js could save ~598B
        wl_helper.js could save ~777B
        extlink.js could save ~2.88KB
        cufon-yui.js could save ~11.89KB
        googleanalytics.js could save ~1.48KB
        swfobject.js could save ~6.65KB
        jquery.jcarousel.min.js could save ~10.19KB
        jcarousel.js could save ~6.01KB
        Akzidenz_Grotesk_BE_Super_800.font.js could save ~14.27KB
        Akzidenz_Grotesk_BE_Bold_700.font.js could save ~12.96KB
        Akzidenz_Grotesk_BE_Cn_400.font.js could save ~13.39KB
        SuperCondensed_500.font.js could save ~24.40KB
        FuturaBold_700.font.js could save ~26.19KB
        Futura_500.font.js could save ~57.70KB
        SuperGroteskB_500.font.js could save ~23.86KB
        jquery.cookie.js could save ~1.25KB
        wanderlust.js could save ~1.69KB
        sliderbottom.js could save ~442B
        jcarousellite_1.0.1.min.js could save ~4.60KB
        jcarousellite_control.js could save ~224B
        sitesdropdown.js could save ~1.09KB
        widgets.js could save ~50.13KB
        cufon-drupal.js could save ~599B
        swfobject_api.js could save ~348B
        ga.js could save ~24.02KB
        all.js could save ~124.67KB
        tweet_button.1347008535.html could save ~38.43KB
        xd_arbiter.php could save ~16.80KB
        xd_arbiter.php could save ~16.80KB

  • yui compressor maven plugin doesnt compress the js files

    - by hanumant
    I am using the YUI Compressor to compress the JS files in my web app. I have configured the plugin as indicated on the yui compressor maven plugin site. This is the pom plugin configuration:

        <plugin>
            <groupId>net.sf.alchim</groupId>
            <artifactId>yuicompressor-maven-plugin</artifactId>
            <version>0.7.1</version>
            <executions>
                <execution>
                    <phase>compile</phase>
                    <goals>
                        <goal>jslint</goal>
                        <goal>compress</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <failOnWarning>true</failOnWarning>
                <nosuffix>true</nosuffix>
                <force>true</force>
                <aggregations>
                    <aggregation>
                        <!-- remove files after aggregation (default: false) -->
                        <removeIncluded>false</removeIncluded>
                        <!-- insert new line after each concatenation (default: false) -->
                        <insertNewLine>false</insertNewLine>
                        <output>${project.basedir}/${webcontent.dir}/js/compressedAll.js</output>
                        <!-- files to include, path relative to output's directory or absolute path -->
                        <!--inputDir>base directory for non absolute includes, default to parent dir of output</inputDir-->
                        <includes>
                            <include>**/autocomplete.js</include>
                            <include>**/calendar.js</include>
                            <include>**/dialogs.js</include>
                            <include>**/download.js</include>
                            <include>**/folding.js</include>
                            <include>**/jquery-1.4.2.min.js</include>
                            <include>**/jquery.bgiframe.min.js</include>
                            <include>**/jquery.loadmask.js</include>
                            <include>**/jquery.printelement-1.1.js</include>
                            <include>**/jquery.tablesorter.mod.js</include>
                            <include>**/jquery.tablesorter.pager.js</include>
                            <include>**/jquery.dialogs.plugin.js</include>
                            <include>**/jquery.ui.autocomplete.js</include>
                            <include>**/jquery.validate.js</include>
                            <include>**/jquery-ui-1.8.custom.min.js</include>
                            <include>**/languageDropdown.js</include>
                            <include>**/messages.js</include>
                            <include>**/print.js</include>
                            <include>**/tables.js</include>
                            <include>**/tabs.js</include>
                            <include>**/uwTooltip.js</include>
                        </includes>
                        <!-- files to exclude, path relative to output's directory -->
                    </aggregation>
                </aggregations>
            </configuration>
            <dependencies>
                <dependency>
                    <groupId>rhino</groupId>
                    <artifactId>js</artifactId>
                    <scope>compile</scope>
                    <version>1.6R5</version>
                </dependency>
                <dependency>
                    <groupId>org.apache.maven</groupId>
                    <artifactId>maven-plugin-api</artifactId>
                    <version>2.0.7</version>
                    <scope>provided</scope>
                </dependency>
                <dependency>
                    <groupId>org.apache.maven</groupId>
                    <artifactId>maven-project</artifactId>
                    <version>2.0.7</version>
                    <scope>provided</scope>
                </dependency>
                <dependency>
                    <groupId>net.sf.retrotranslator</groupId>
                    <artifactId>retrotranslator-runtime</artifactId>
                    <version>1.2.9</version>
                    <scope>runtime</scope>
                </dependency>
            </dependencies>
        </plugin>

    And here is the log at compress time:

        These will use the artifact files already in the core ClassRealm instead, to allow them to be included in PluginDescriptor.getArtifacts().
        [DEBUG] Configuring mojo 'net.sf.alchim:yuicompressor-maven-plugin:0.7.1:jslint'
        [DEBUG] (f) failOnWarning = true
        [DEBUG] (f) jswarn = true
        [DEBUG] (f) outputDirectory = C:\test\target\classes
        [DEBUG] (f) project = MavenProject: com.test.test1:test2:19-SNAPSHOT @ C:\test\pom.xml
        [DEBUG] (f) resources = [Resource {targetPath: null, filtering: false, FileSet {directory: C:\test\src, PatternSet [includes: {}, excludes: {**/*.class, **/*.java, site/*}]}}]
        [DEBUG] (f) sourceDirectory = C:\test\src\..\js
        [DEBUG] (f) warSourceDirectory = C:\test\src\main\webapp
        [DEBUG] (f) webappDirectory = C:\test\target\test2-19-SNAPSHOT
        [DEBUG] -- end configuration --
        [INFO] [yuicompressor:jslint {execution: default}]
        [INFO] nb warnings: 0, nb errors: 0
        [DEBUG] Configuring mojo 'net.sf.alchim:yuicompressor-maven-plugin:0.7.1:compress' --
        [DEBUG] (f) removeIncluded = false
        [DEBUG] (f) insertNewLine = false
        [DEBUG] (f) output = C:\test\WebContent\js\compressedAll.js
        [DEBUG] (f) includes = [**/autocomplete.js, **/calendar.js, **/dialogs.js, **/download.js, **/folding.js, **/jquery-1.4.2.min.js, **/jquery.bgiframe.min.js, **/jquery.loadmask.js, **/jquery.printelement-1.1.js, **/jquery.tablesorter.mod.js, **/jquery.tablesorter.pager.js, **/jquery.dialogs.plugin.js, **/jquery.ui.autocomplete.js, **/jquery.validate.js, **/jquery-ui-1.8.custom.min.js, **/languageDropdown.js, **/messages.js, **/print.js, **/tables.js, **/tabs.js, **/uwTooltip.js]
        [DEBUG] (f) aggregations = [net.sf.alchim.mojo.yuicompressor.Aggregation@65646564]
        [DEBUG] (f) disableOptimizations = false
        [DEBUG] (f) encoding = Cp1252
        [DEBUG] (f) failOnWarning = true
        [DEBUG] (f) force = true
        [DEBUG] (f) gzip = false
        [DEBUG] (f) jswarn = true
        [DEBUG] (f) linebreakpos = 0
        [DEBUG] (f) nomunge = false
        [DEBUG] (f) nosuffix = true
        [DEBUG] (f) outputDirectory = C:\test\target\classes
        [DEBUG] (f) preserveAllSemiColons = false
        [DEBUG] (f) project = MavenProject: com.test.test1:test2:19-SNAPSHOT @ C:\test\pom.xml
        [DEBUG] (f) resources = [Resource {targetPath: null, filtering: false, FileSet {directory: C:\test\src, PatternSet [includes: {}, excludes: {**/*.class, **/*.java, site/*}]}}]
        [DEBUG] (f) sourceDirectory = C:\test\src\..\js
        [DEBUG] (f) statistics = true
        [DEBUG] (f) suffix = -min
        [DEBUG] (f) warSourceDirectory = C:\test\src\main\webapp
        [DEBUG] (f) webappDirectory = C:\test\target\test2-19-SNAPSHOT
        [DEBUG] -- end configuration --
        [INFO] [yuicompressor:compress {execution: default}]
        [INFO] generate aggregation : C:\test\WebContent\js\compressedAll.js
        [INFO] compressedAll.js (407505b)
        [INFO] nb warnings: 0, nb errors: 0
        [DEBUG] Configuring mojo 'org.apache.maven.plugins:maven-resources-plugin:2.2:testResources' --
        [DEBUG] (f) filters = []
        [DEBUG] (f) outputDirectory = C:\test\target\test-classes
        [DEBUG] (f) project = MavenProject: com.test.test1:test2:19-SNAPSHOT @ C:\test\pom.xml
        [DEBUG] (f) resources = [Resource {targetPath: null, filtering: false, FileSet {directory: C:\test\test, PatternSet [includes: {}, excludes: {**/*.class, **/*.java}]}}]
        [DEBUG] -- end configuration --

    The problem is that the files are getting aggregated into one file, but without compression. The page linked above uses version 1.1, whereas the plugin version I am using is 0.7.1 - is this going to make any difference? Can someone tell me what is wrong here? PS: I have obfuscated some text in the log to comply with policy at my firm, so you may find it mismatched in places.

  • External JS usage in FBML - Cannot access external script

    - by santhakr
    Hi, I am trying to create a small Facebook app and associate it with a fan page in a tab. I am trying to include an external JavaScript file in my page and call a method on a button click event. Below is part of the code:

        <script language="Javascript" src="http://mysite.com/fb.js"></script>
        <input type="button" value="Click....." onClick="javascript:showDialog();" />

    The content of fb.js is as below:

        function showDialog() {
            new Dialog().showMessage('Dialog', 'Button clicked');
        }

    When I load the tab in my fan page, it shows an error "Cannot allow external script", whereas when I load the canvas URL [http://apps.facebook.com/...] directly and click on the button, it works (shows the dialog). Does a script include work only on the canvas and not on the profile page? I have another question, though: initially I had the script src as a relative path, but it errored out with the same message - "Cannot allow external script". Can't I use relative paths for external scripts?

  • Laptop with Intel Graphics: external VGA monitor only gets signal on boot (no "hot plugging")

    - by iveand
    I am able to get an external VGA monitor (or projector) to work if I start my laptop with it connected. However, if I start the laptop with it disconnected, there is no signal on the external. The Displays screen shows the external and thinks it is active, but no signal is being sent to it. This has been a persistent problem since 10.04 (I am now on 12.04, each upgrade hoping something would improve). I should note that even when it works (starting with the display connected), Displays still says the monitor is "unknown" (but it sends the signal). For the correct resolution to display, I have had to add a few xrandr lines for my monitor to my .xprofile file; otherwise resolution is limited to the default 1024x768. So resolution issues can be worked around, but the main issue is that the external doesn't get a signal unless the machine is started with it connected.

    I have tried: adding i915.modeset=1 to grub (also i965.modeset=1, since someone posted that this helped even though lshw shows i915), and adding the following PPA and doing a dist-upgrade: sudo add-apt-repository ppa:xorg-edgers/ppa

    Here are the details. Laptop: Toshiba Tecra M10. lspci listing for video:

        00:02.0 VGA compatible controller [0300]: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller [8086:2a42] (rev 07)

    sudo lshw -C video listing:

        *-display:0
             description: VGA compatible controller
             product: Mobile 4 Series Chipset Integrated Graphics Controller
             vendor: Intel Corporation
             physical id: 2
             bus info: pci@0000:00:02.0
             version: 07
             width: 64 bits
             clock: 33MHz
             capabilities: msi pm vga_controller bus_master cap_list rom
             configuration: driver=i915 latency=0
             resources: irq:46 memory:ff400000-ff7fffff memory:e0000000-efffffff ioport:cff8(size=8)
        *-display:1 UNCLAIMED
             description: Display controller
             product: Mobile 4 Series Chipset Integrated Graphics Controller
             vendor: Intel Corporation
             physical id: 2.1
             bus info: pci@0000:00:02.1
             version: 07
             width: 64 bits
             clock: 33MHz
             capabilities: pm bus_master cap_list
             configuration: latency=0
             resources: memory:ffc00000-ffcfffff

    "System Info" shows my graphics as: Mobile Intel® GM45 Express Chipset x86/MMX/SSE2
