Search Results

Search found 5929 results on 238 pages for 'node webkit'.


  • How does Google Web Starter Kit serve adaptive images for mobile?

    - by 5argon
    My website weirdly (in a good way) serves smaller images when viewed on mobile. I wanted to know what causes this? As far as I know this is not the default behaviour, so I think it must be Google Web Starter Kit's doing. Here is the debug information when debugging on a device: all images become 231 B in size no matter how large they actually are. (When debugging on desktop, the size varies.) I tried using Google Web Starter Kit (https://github.com/google/web-starter-kit) recently. The tools in it are built on Ruby, Node.js, SASS and Gulp to help you 'build' the website. Before the build you get automatic reload because the Gulp script watches all files for you. When you build, it runs various tools to minify HTML, CSS and compress images. According to this page https://developers.google.com/web/fundamentals/tools/build/build_site the gulp-imagemin plugin was used. So I guess imagemin is doing the mobile optimization for me? What kind of compression can serve an automatically resized image on mobile? And why is the size 231 B? Is this related to my screen size?

    Read the article

  • Data structure for bubble shooter game

    - by SundayMonday
    I'm starting to make a bubble shooter game for a mobile OS. Assume this is just the basic "three or more same-color bubbles that touch, pop" rule, where all bubbles that are separated from their group fall/pop. What data structures are common for storing the bubbles? I've considered using an undirected, connected graph where each node is a bubble. This seems like it could help answer the question "which bubbles (if any) should fall now?" after some arbitrary bubbles are popped and the corresponding nodes are removed from the graph. I think the answer is that all bubbles that were just disconnected from the graph should fall. However the graph approach might be overkill, so I'm not sure. Another consideration for the data structure is collision detection. Perhaps being able to grab a list of neighboring bubbles in constant time for a particular "bubble slot" is useful. So the collision detection would be something like "the moving bubble is closest to slot ij; the neighbors of slot ij are bubbles a, b, c; the moving bubble is sufficiently close to bubble b; hence the moving bubble should come to rest in slot ij". A game like this could probably be made with a relatively crude grid structure as the primary data structure. However it seems like answering "which bubbles (if any) should fall now?" would be trickier with this data structure.
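    For what it's worth, the "which bubbles should fall?" question can be answered on a plain grid, without a separate graph object, by flood-filling from the anchored top row after each pop; anything the fill never reaches is floating and should drop. Below is a rough Java sketch of that idea. The grid representation and the fixed neighbor offsets are my own simplifying assumptions (a real hex-packed grid needs parity-dependent offsets), not something from the question.

        import java.util.ArrayDeque;
        import java.util.Deque;

        public class FloatingBubbleFinder {

            // Returns a mask of slots still connected to the top row; bubbles outside
            // the mask have been cut off and should fall. Grid cells are null when empty.
            static boolean[][] findAnchored(String[][] grid) {
                int rows = grid.length, cols = grid[0].length;
                boolean[][] anchored = new boolean[rows][cols];
                Deque<int[]> stack = new ArrayDeque<>();

                // Seed the search with every bubble in the top (anchored) row.
                for (int c = 0; c < cols; c++) {
                    if (grid[0][c] != null) {
                        anchored[0][c] = true;
                        stack.push(new int[] {0, c});
                    }
                }

                // Assumed neighbor offsets; a real staggered hex grid would pick
                // different offsets for odd and even rows.
                int[][] offsets = {{-1, 0}, {-1, 1}, {0, -1}, {0, 1}, {1, 0}, {1, 1}};

                while (!stack.isEmpty()) {
                    int[] cell = stack.pop();
                    for (int[] d : offsets) {
                        int r = cell[0] + d[0], c = cell[1] + d[1];
                        if (r >= 0 && r < rows && c >= 0 && c < cols
                                && grid[r][c] != null && !anchored[r][c]) {
                            anchored[r][c] = true;
                            stack.push(new int[] {r, c});
                        }
                    }
                }
                return anchored;
            }
        }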

    Read the article

  • Using Behavior Trees and Events together

    - by weichsem
    I am beginning to work with behavior trees and am unsure how events should be handled within the tree. Let's say we have a space game where the player is dogfighting with a handful of other ships, some friendly, some not. The player destroys a ship and the rest of the hostile ships should then start to retreat. How should the shipWasDestroyed event affect the other ships' behavior trees so that they start running the retreat behavior? One way I could think of doing this is to have all the conditions I care about be high-level nodes that effectively change the ship's state. This would mean I'd have to check all of these state-change conditions on every frame the behavior tree is run, even if they are very rare occurrences. I'd prefer not to do this for performance and complexity reasons. From looking at the Halo papers on behavior trees, it seems that they handled this by dynamically placing nodes into the tree when the event occurred. It seems like calculating where the new node should go could be problematic depending on the current state of the running behavior. How is this normally handled?
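    For illustration, one common middle ground between polling every condition each frame and splicing nodes into the tree is to let the event handler write a flag to a shared blackboard and have a single cheap condition node gate the retreat subtree on that flag. The sketch below only shows the shape of that approach; the class and method names (Blackboard, tick, onShipWasDestroyed) are invented for the example and are not taken from any particular engine or from the Halo papers.

        import java.util.HashMap;
        import java.util.Map;

        // Event handlers write facts here; the tree only reads them.
        class Blackboard {
            private final Map<String, Object> facts = new HashMap<>();
            void set(String key, Object value) { facts.put(key, value); }
            boolean flag(String key) { return Boolean.TRUE.equals(facts.get(key)); }
        }

        interface BehaviorNode {
            enum Status { SUCCESS, FAILURE, RUNNING }
            Status tick(Blackboard bb);
        }

        // Cheap condition node: gates the retreat subtree on a flag set by the event.
        class AllyDestroyedCondition implements BehaviorNode {
            public Status tick(Blackboard bb) {
                return bb.flag("allyDestroyed") ? Status.SUCCESS : Status.FAILURE;
            }
        }

        class ShipAI {
            private final Blackboard blackboard = new Blackboard();

            Blackboard blackboard() { return blackboard; }

            // Called by the game's event system, outside the tree's tick.
            void onShipWasDestroyed() {
                blackboard.set("allyDestroyed", true);
            }
        }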

    Read the article

  • What is the basic loadout for an open source web developer?

    - by DeveloperDon
    Thus far, I have mainly been an embedded developer, but I am interested in having the flexibility to do mobile and web development as well. I think my tools should include the following, but probably a lot more. LAMP stack. Java IDEs like Eclipse and IntelliJ. JS frameworks like Dojo, Node.js, AngularJS (is it better to mix or commit to one?). Cloud solutions like EC2 and Azure (again, OK to mix or better to commit to one?). Google APIs. Continuous integration server. Source control tools, with Git for new work and SVN, CVS, and others for imports. FTP server. Unit test runners. Bug trackers. OOAD modeling tools or plug-ins? Graphic design tools? Hosting services. XML / JSON / other markup? Content management, SEO? I am also interested to know if there are tools where it might be better to mix, match, or support all available (maybe for source control) and others where the full focus should be on one (maybe Java vs. C# or Windows vs. Linux vs. MacOS). Perhaps some of these questions need the context of whether the projects will be greenfield (just pick a favorite) or maintenance (no choice, each project continues its legacy, sometimes with poor tools).

    Read the article

  • Using SQL tables for storing user created level stats. Is there a better way?

    - by Ivan
    I am developing a racing game in which players can create their own tracks and upload them to a server. Players will be able to compare their best track times to their friends' and see world records. I was going to generate a table for each track submitted, to store the best times of each player who plays the track. However, I can't predict how many will be uploaded and I imagine too many tables might cause problems, or is this a valid method? I considered saving each player's best times in a string in a single table field like so: level1:00.45;level2:00.43;level3:00.12. If I did this I wouldn't need a separate table for each level (each level could just have a row in a 'WorldRecords' table). However, this just causes another problem because the text would eventually reach the limit for varchar length. I also considered storing the times data in XML files. This would avoid database issues, and server disk space can be increased if needed. But I imagine this would be very slow: to update one player's best time on one level, I would have to check every node in the file to find their time record to update. Apologies for the wall of text. Any suggestions would be appreciated.
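    For comparison, the usual relational approach here is a single times table keyed by (track, player) rather than one table per track, which sidesteps both the table-count and the varchar-length problems. A hedged sketch of what that could look like from Java follows; the table and column names are invented, and the upsert syntax shown is MySQL-specific, so it would need adjusting for whatever database is actually in use.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public class BestTimeStore {

            // One row per (track, player); no per-track tables needed.
            private static final String CREATE_SQL =
                "CREATE TABLE IF NOT EXISTS best_times ("
              + "  track_id  INT NOT NULL,"
              + "  player_id INT NOT NULL,"
              + "  best_ms   INT NOT NULL,"
              + "  PRIMARY KEY (track_id, player_id))";

            // Keep only the fastest time for this player on this track (MySQL syntax).
            private static final String UPSERT_SQL =
                "INSERT INTO best_times (track_id, player_id, best_ms) VALUES (?, ?, ?) "
              + "ON DUPLICATE KEY UPDATE best_ms = LEAST(best_ms, VALUES(best_ms))";

            public void saveTime(Connection conn, int trackId, int playerId, int timeMs)
                    throws SQLException {
                try (PreparedStatement ps = conn.prepareStatement(UPSERT_SQL)) {
                    ps.setInt(1, trackId);
                    ps.setInt(2, playerId);
                    ps.setInt(3, timeMs);
                    ps.executeUpdate();
                }
            }
        }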

    Read the article

  • Alternatives to Pessimistic Locking in Cluster Applications

    - by amphibient
    I am researching alternatives to database-level pessimistic locking to achieve transaction isolation in a cluster of Java applications going against the same database. Synchronizing concurrent access in the application tier is clearly not a solution in the present configuration because the same database transaction can be invoked from multiple JVMs concurrently. Currently, we are subject to occasional race conditions which, due to the optimistic locking we have in place via Hibernate, cause a StaleObjectStateException and data loss. I have a moderately large transaction within the scope of my refactoring project. Let's describe it as updating one top-level table row and then making various related inserts and/or updates to several of its child entities. I would like to ensure exclusive access to the top-level table row and all of the children to be affected, but I would like to stay away from pessimistic locking at the database level, mostly for performance reasons. We use Hibernate for ORM. Does it make sense to stand up a single (perhaps synchronous) message queue application into which this method could be moved to ensure synchronized access, as opposed to each cluster node running its own, which is a clear race-condition hazard? I am mentioning this approach even though I am not confident in it, because both the top-level table row and its children could also be updated from other system calls, not just the mentioned transaction. So I am seeking to design a solution where the top-level table row and its children will all somehow be pseudo-locked (exclusive transaction isolation), but at the application and not the database level. I am open to ideas and suggestions; I understand this is not a very cut-and-dried challenge.
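    As one possible point of comparison (not a recommendation from the post): a frequently used alternative to blocking at the database is to keep the optimistic locking and simply retry the whole unit of work when Hibernate reports a conflict, which leaves the hot rows unlocked while still converging under contention. The sketch below uses plain Hibernate session and transaction APIs; the retry count, the ParentUpdater callback type and the session handling are all simplifications invented for the example.

        import org.hibernate.Session;
        import org.hibernate.SessionFactory;
        import org.hibernate.StaleObjectStateException;
        import org.hibernate.Transaction;

        public class RetryingWriter {

            private static final int MAX_ATTEMPTS = 3;
            private final SessionFactory sessionFactory;

            public RetryingWriter(SessionFactory sessionFactory) {
                this.sessionFactory = sessionFactory;
            }

            // Re-runs the whole unit of work when optimistic locking detects a conflict.
            public void updateParentAndChildren(long parentId, ParentUpdater work) {
                for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
                    Session session = sessionFactory.openSession();
                    Transaction tx = session.beginTransaction();
                    try {
                        work.apply(session, parentId); // reload and modify entities inside this session
                        tx.commit();
                        return;
                    } catch (StaleObjectStateException e) {
                        tx.rollback();
                        if (attempt == MAX_ATTEMPTS) {
                            throw e; // give up and surface the conflict
                        }
                    } finally {
                        session.close();
                    }
                }
            }

            // Hypothetical callback type representing the existing transaction body.
            public interface ParentUpdater {
                void apply(Session session, long parentId);
            }
        }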

    Read the article

  • Problem inserting pre-ordered values to quadtree

    - by Lucas Corrêa Feijó
    There's this string containing B, W and G (standing for black, white and gray). It defines the pre-order traversal of the quadtree I want to build. I wrote this method for filling the QuadTree class I also wrote, but it doesn't work: it inserts correctly until it reaches a point in the algorithm where it needs to "return". I use the mathematical quadrant convention for insertion order: northeast, northwest, southwest and southeast. A quadtree has all its leaves black or white and all the other nodes gray. The node used in the tree is called Node4T and has a char as data and 4 references to other nodes like itself, called, respectively, NE, NW, SW and SE.

        public void insert(char val) {
            if (root == null) {
                root = new Node4T(val);
            } else {
                insert(root, val);
            }
        }

        public void insert(Node4T n, char c) {
            if (n.data == 'G') { // possible to insert
                if (n.NE == null) { n.NE = new Node4T(c); return; } else { insert(n.NE, c); }
                if (n.NW == null) { n.NW = new Node4T(c); return; } else { insert(n.NW, c); }
                if (n.SW == null) { n.SW = new Node4T(c); return; } else { insert(n.SW, c); }
                if (n.SE == null) { n.SE = new Node4T(c); return; } else { insert(n.SE, c); }
            } else {
                // impossible to insert
            }
        }

    The input "GWGGWBWBGBWBWGWBWBWGGWBWBWGWBWBGBWBWGWWWB" is fed one char at a time to the insert method, and then the pre-order print method is called; the result is "GWGGWBWBWBWGWBWBW". How do I make it import the string correctly? Ignore the string-reading process; assume the method is called and it has to do its job.
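    For what it's worth, the usual culprit in this kind of pre-order insertion is that the recursive call cannot tell its caller whether the value was actually placed, so the caller keeps falling through to its remaining children even after a deeper call has already consumed the value. Below is a hedged sketch of an insert that propagates success back up; it is a rewrite of the method above meant to slot into the same class (reusing the post's Node4T fields), not the original author's own fix.

        // Returns true once the value has been placed somewhere in this subtree.
        public boolean insert(Node4T n, char c) {
            if (n.data != 'G') {
                return false; // B and W are leaves; nothing can hang below them
            }
            if (n.NE == null) { n.NE = new Node4T(c); return true; }
            if (insert(n.NE, c)) { return true; }
            if (n.NW == null) { n.NW = new Node4T(c); return true; }
            if (insert(n.NW, c)) { return true; }
            if (n.SW == null) { n.SW = new Node4T(c); return true; }
            if (insert(n.SW, c)) { return true; }
            if (n.SE == null) { n.SE = new Node4T(c); return true; }
            return insert(n.SE, c);
        }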

    Read the article

  • Building a template engine - starting point

    - by Anirudh
    We're building a Django-based project with a template component. This component will be separate from the project as such and can be Django/Python, Node, Java or whatever works. The template has to be rendered into HTML. The templates will contain references to objects with properties that are defined in the DB, say, a Bus. For example, it could be something like [object type="vehicle" weight="heavy"], and it would have to pull a random object from the DB fulfilling the criteria type="vehicle" weight="heavy" (bus/truck/jet) and then substitute that tag with an image, say, of a Bus. It would also have to be able to handle some processing. For example:

        What is [X type="integer" lte="10"] + [Y type="integer" lte="10"]
        [option X+Y correct_ans="true"]
        [option X-Y correct_ans="false"]
        [option X+Y+1 correct_ans="false"]

    The engine would be expected to fill in a random integer value <= 10 for X and Y and show radio boxes for each of the options. It would also have to store the fact that the first option is the correct answer. Does it make sense to write something from scratch? Or is it better to use an existing templating system (like Django's own) as a starting point? Any suggestions on how I can approach this?
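    Whichever engine ends up underneath, the custom [object ...] tags themselves are straightforward to pick apart with a regular expression before handing the attribute pairs to whatever performs the database lookup and substitution. The Java snippet below only illustrates that parsing step; the tag grammar is inferred from the examples above (and deliberately simplified), and the class name is made up.

        import java.util.HashMap;
        import java.util.Map;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class TagScanner {

            // Matches tags such as [object type="vehicle" weight="heavy"].
            private static final Pattern TAG =
                Pattern.compile("\\[(\\w+)((?:\\s+\\w+=\"[^\"]*\")*)\\s*\\]");
            private static final Pattern ATTR = Pattern.compile("(\\w+)=\"([^\"]*)\"");

            public static void main(String[] args) {
                String template = "Here is a picture: [object type=\"vehicle\" weight=\"heavy\"]";
                Matcher tag = TAG.matcher(template);
                while (tag.find()) {
                    String name = tag.group(1);            // e.g. "object"
                    Map<String, String> attrs = new HashMap<>();
                    Matcher attr = ATTR.matcher(tag.group(2));
                    while (attr.find()) {
                        attrs.put(attr.group(1), attr.group(2));
                    }
                    // A real engine would now query the DB for a random row matching
                    // attrs and substitute the tag with the rendered result.
                    System.out.println(name + " -> " + attrs);
                }
            }
        }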

    Read the article

  • Towards Ultra-Reusability for ADF - Adaptive Bindings

    - by Duncan Mills
    The task flow mechanism embodies one of the key value propositions of the ADF Framework, its primary contribution being the componentization of your applications and, implicitly, the introduction of a re-use culture, particularly in large applications. However, what if we could do more? How could we make task flows even more re-usable than they are today? Well, one great technique is to take advantage of a feature that is already present in the framework, a feature which I will call, for want of a better name, "adaptive bindings".

    What's an adaptive binding? Well, consider a simple use case. I have several screens within my application which display tabular data and are all essentially identical; the only difference is that they happen to be based on different data collections (View Objects, Bean collections, whatever) and have a different set of columns. Apart from that, however, they happen to be identical: same toolbar, same key functions and so on. So wouldn't it be nice if I could have a single parametrized task flow to represent that type of UI and reuse it? Hold on, you say, great idea; however, to do that we'd run into problems. Each different collection that I want to display needs different entries in the pageDef file, and:

    - I want to continue to use the ADF Bindings mechanism rather than dropping back to passing the whole collection into the taskflow
    - If I do use bindings, there is no way I want to have to declare iterators and tree bindings for every possible collection that I might want the flow to handle

    Ah, joy! I reply, no need to panic, you can just use adaptive bindings.

    Defining an Adaptive Binding

    It's easiest to explain with a simple before-and-after use case. Here's a basic pageDef definition for our familiar Departments table:

        <executables>
          <iterator Binds="DepartmentsView1" DataControl="HRAppModuleDataControl" RangeSize="25"
                    id="DepartmentsView1Iterator"/>
        </executables>
        <bindings>
          <tree IterBinding="DepartmentsView1Iterator" id="DepartmentsView1">
            <nodeDefinition DefName="oracle.demo.model.vo.DepartmentsView" Name="DepartmentsView10">
              <AttrNames>
                <Item Value="DepartmentId"/>
                <Item Value="DepartmentName"/>
                <Item Value="ManagerId"/>
                <Item Value="LocationId"/>
              </AttrNames>
            </nodeDefinition>
          </tree>
        </bindings>

    Here's the adaptive version:

        <executables>
          <iterator Binds="${pageFlowScope.voName}" DataControl="HRAppModuleDataControl" RangeSize="25"
                    id="TableSourceIterator"/>
        </executables>
        <bindings>
          <tree IterBinding="TableSourceIterator" id="GenericView">
            <nodeDefinition Name="GenericViewNode"/>
          </tree>
        </bindings>

    You'll notice three changes here:

    - Most importantly, you'll see that the hard-coded View Object name that formerly populated the iterator Binds attribute is gone and has been replaced by an expression (${pageFlowScope.voName}). This, of course, is key: we can pass a parameter to the task flow, telling it exactly what VO to instantiate to populate this table!
    - I've changed the IDs of the iterator and the tree binding, simply to reflect that they are now re-usable.
    - The tree binding itself has been simplified and the node definition is now empty. What this effectively means is that the #{node} map exposed through the tree binding will expose every attribute of the underlying iterator's collection - neat!

    (Kudos to Eugene Fedorenko at this point, who reminded me that this was even possible in his excellent "deep dive" session at OpenWorld this year.)

    Using the adaptive binding in the UI

    Now that we have a parametrized binding, we have to make changes in the UI as well, first of all to reflect the new ID that we've assigned to the binding (of course), but also to change the column list from being a fixed, known list to being a generic, metadata-driven set:

        <af:table value="#{bindings.GenericView.collectionModel}" rows="#{bindings.GenericView.rangeSize}"
                  fetchSize="#{bindings.GenericView.rangeSize}"
                  emptyText="#{bindings.GenericView.viewable ? 'No data to display.' : 'Access Denied.'}"
                  var="row" rowBandingInterval="0"
                  selectedRowKeys="#{bindings.GenericView.collectionModel.selectedRow}"
                  selectionListener="#{bindings.GenericView.collectionModel.makeCurrent}"
                  rowSelection="single" id="t1">
          <af:forEach items="#{bindings.GenericView.attributeDefs}" var="def">
            <af:column headerText="#{bindings.GenericView.labels[def.name]}" sortable="true"
                       sortProperty="#{def.name}" id="c1">
              <af:outputText value="#{row[def.name]}" id="ot1"/>
            </af:column>
          </af:forEach>
        </af:table>

    Of course you are not constrained to a simple read-only table here. It's a normal tree binding and iterator that you are using behind the scenes, so you can do all the usual things, but you can see the value of using ADFBC as the back-end model, as you have the rich pantheon of UI hints to use to derive things like labels (and validators and converters...).

    One Final Twist

    To finish on a high note, I wanted to point out that you can take this even further and achieve the ultra-reusability I promised. Here's the new version of the pageDef iterator; see if you can notice the subtle change?

        <iterator Binds="${pageFlowScope.voName}" DataControl="${pageFlowScope.dataControlName}" RangeSize="25"
                  id="TableSourceIterator"/>

    Yes, as well as parametrizing the collection (VO) name, we can also parametrize the name of the data control. So your task flow can graduate from being re-usable within an application to being truly generic. So if you have some really common patterns within your app, you can wrap them up and reuse them across multiple developments without having to dictate data control names or connection names. This also demonstrates the importance of interacting with data only via the binding layer APIs. If you keep any code in the task flow generic in that way, you can deal with data from multiple types of data controls, not just one flavour. Enjoy!

    Read the article

  • XPath & XML EDI B2B

    - by PearlFactory
    GoodToGo :) The best XML editor is Altova XMLSpy 2011. http://www.torrenthound.com/hash/bfdbf55baa4ca6f8e93464c9a42cbd66450bb950/torrent-info/Altova-XMLSpy-Enterprise-Edition-SP1-2011-v13-0-1-0-h33t-com-Full For whatever reason Piratebay has trojans and other nasties; search on torrent.eu for Altova XMLSpy Enterprise Edition SP1 2011 v13.0.1.0. Also, if you like the product, purchase it in a commercial environment. Any well-structured/complex XML can be parsed at the speed of light using XPath queries rather than the C# objects XPathNodeIterator and others. Never do loops or generics or whatever high-level language technology; use the power of XPath. I'll use a simple do/while as an example. We could have many different techniques all achieving the same result. Instead of:

        xmlNI2 = xmlNav.Select("/p:BookShop");
        if (xmlNI2.Count != 0)
        {
            while (xmlNI2.MoveNext())
            {
                string aNode = xmlNI2.SelectSingleNode("Book", nsmgr);
                if (aNode == "The Book I am after")
                    Console.WriteLine("Found My Book");
            }
        }

    this lengthy, cumbersome task can be achieved with a simple XPath query:

        Console.WriteLine((xmlNavg.SelectSingleNode("/p:BookShop/Book[.='The Book I am aFter ']", nsmgr)).Value.ToString());

    Use the power of the parser and eliminate the middleman (C#/MSIL/JIT etc.). To get started fast and use the parser as outlined: 1) open the XML and go to Grid Mode; 2) select the XPATH tab on the bottom viewer/window as shown. From here you get IntelliSense and can quickly learn how to navigate/find the data using XPath. A key component of navigation with XPath is the "../" command. This basically says: from where I am now, go up one level. With XPath all commands are cumulative, i.e. you can search for a book title at the 2nd level of the XML and from there traverse 15 layers to paragraphs or words on a page, with expression validation occurring throughout this process (so in essence you may have arrived at a node within the XML and have met 15 conditions along the way). Given 1-2 days with XMLSpy and XPath you unlock a technology that is super fast and simple to use. XML is a core component of what lies under the hood of so many technologies, so it is no wonder that you want to be able to go down to the atomic level to achieve the result you want. Justin. P.S. For a long time I saw XML as slow and a bit boring, but now I'm converted.

    Read the article

  • What went wrong with my curl install?

    - by Danjah
    I'm fresher than the prince to Linux; I've been following the instructions here: http://chrisfulstow.com/running-node-js-on-windows-with-virtualbox-and-ubuntu (the link tells what I am generally trying to do). I'm all up and running in VBox and am at the curl install part; I may have done the curl part a week ago, I forget. So I ran this command anyway:

        danjah@danjah-VirtualBox:~$ sudo apt-get install curl

    Result:

        [sudo] password for danjah:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        curl is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

    Then:

        $ curl http://npmjs.org/install.sh | sudo npm_install=rc sh

    Result:

        fetching: {
        gzip: stdin: unexpected end of file
        /bin/tar: Child returned status 1
        /bin/tar: Error is not recoverable: exiting now
        It failed

    Should I be concerned? How can I test curl? How can I avoid these situations? Perhaps there's a generic way of checking to see if I've already installed packages/etc? Case-specific answers and general advice most appreciated. cheers, d

    Read the article

  • Central renderer for a given scene

    - by Loggie
    When creating a central rendering system for all game objects in a given scene, I am trying to work out the best way to go about passing the scene to the render system to be rendered. If I have a scene managed by an arbitrary structure (e.g., an octree, BSP tree, quadtree, kd-tree, etc.), what is the best way to pass this to the render system? The obvious problem is that if simply given the root node of the structure, the render system would require intrinsic knowledge of the structure in order to traverse it. My solution to this is to clip all objects outside the frustum in the scene manager and then create a list of the objects which are left and pass this simple list to the render system, be it an array, a vector, a linked list, etc. (This would be a structure required by the render system as a means of knowing which objects should be rendered.) The list would of course attempt to minimise OpenGL state changes by grouping objects that require the same rendering operations to be performed on them. I have been thinking a lot about this and started searching various terms on here and followed any additional information/links, but I have not really found a definitive answer. The case may be that there is no definitive answer, but I would appreciate some advice and tips. My question is: is this a reasonable solution to the problem? Are there any improvements that I could make? Are there any caveats I should know about? Side question: Am I right in assuming that octrees, BSP trees, etc. are all forms of BVH?
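    As a small illustration of the hand-off being described, the structure passed from the scene manager to the renderer can be as simple as a flat list of already-culled items carrying a state key, sorted so that objects sharing GL state are drawn together; the renderer then needs no knowledge of the octree or BSP behind it. The Java sketch below is an assumption-laden simplification: the stateKey hash and the plain Object renderable stand in for whatever the real engine uses.

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;

        // The flat structure handed from the scene manager to the renderer:
        // already frustum-culled, sorted so that state changes are minimized.
        class RenderQueue {

            static class RenderItem {
                final int stateKey;      // e.g. shader/texture/material hash
                final Object renderable; // whatever the renderer knows how to draw
                RenderItem(int stateKey, Object renderable) {
                    this.stateKey = stateKey;
                    this.renderable = renderable;
                }
            }

            private final List<RenderItem> items = new ArrayList<>();

            void submit(int stateKey, Object renderable) {
                items.add(new RenderItem(stateKey, renderable));
            }

            // Sorting by state key batches objects that share GL state together,
            // so the renderer stays ignorant of the spatial structure behind it.
            List<RenderItem> sortedForDrawing() {
                items.sort(Comparator.comparingInt((RenderItem i) -> i.stateKey));
                return items;
            }

            void clear() { items.clear(); }
        }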

    Read the article

  • Problem with wake after suspend using USB remote.

    - by Bod
    Hi, I'm a Linux newbie looking for some help. I'm currently setting up an XBMC HTPC using a laptop and 10.10, and all works great except for waking from suspend using the power button on the remote. Suspend from the remote works fine, as does resuming using the power button on the laptop. I've checked /proc/acpi/wakeup, which initially showed the following:

        Device  S-state   Status     Sysfs node
        C096      S5    *disabled    pci:0000:00:1e.0
        C0F1      S3    *disabled    pci:0000:00:1d.0
        C0F8      S3    *disabled    pci:0000:00:1d.1
        C0F9      S3    *disabled    pci:0000:00:1d.2
        C0FA      S3    *disabled    pci:0000:00:1d.3
        C0FB      S3    *disabled    pci:0000:00:1d.7
        C102      S5    *disabled    pci:0000:00:1c.0
        C22B      S5    *disabled    pci:0000:08:00.0
        C115      S5    *disabled    pci:0000:00:1c.2
        C22C      S5    *disabled
        C118      S5    *disabled    pci:0000:00:1c.3
        C22C      S5    *disabled

    I've since configured the above so that the S3 devices are enabled. I've confirmed that they are the correct devices using lspci:

        00:1d.0 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #1 (rev 01)
        00:1d.1 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 (rev 01)
        00:1d.2 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 (rev 01)
        00:1d.3 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 (rev 01)
        00:1d.7 USB Controller: Intel Corporation N10/ICH 7 Family USB2 EHCI Controller (rev 01)

    None of this has worked, unfortunately, and I'm now stuck. It simply refuses to wake up from the remote. The USB receiver shows no activity LED while suspended. Suspend/resume from the remote works fine from Windows 7, so I know the laptop is OK with it. Any ideas? I need to get this sorted to gain Wife Approval for this system. Thanks, Bod.

    Read the article

  • Ubuntu 12.04 slow boot on ASUS, attached with dmesg and bootchart

    - by stanleyhunk
    I heard that Ubuntu can boot in around 30 seconds, but mine takes more than 60 seconds every time it boots. I also read in some forums that I need to post the dmesg and bootchart output to identify which process is slowing down the boot; as I'm not an expert in Ubuntu and wish to learn more about it, I humbly ask any pro here to teach me how. My laptop specs:

        Model : ASUS K45VS
        RAM : 8MB
        CPU : Intel(R) Core(TM) i7-3630QM CPU @ 2.40GHz x 8
        Graphic Card : nVidia GeForce GT 645M
        HDD : 750GB
        OS : Single boot Ubuntu 12.04LTS
        System.uname : Linux 3.8.0-39-generic #58~precise1-Ubuntu SMP Fri May 2 21:33:40 UTC 2014 x86_64
        System.release : Ubuntu 12.04.4 LTS
        System.kernel.options : BOOT_IMAGE=/boot/vmlinuz-3.8.0-39-generic root=UUID=c8a71503-bce8-406c-9a5f-5aa8284f5c7c ro quiet splash

    My dmesg (which highlights the huge time frame gap):

        [ 30.772656] cgroup: libvirtd (1961) created nested cgroup for controller "memory" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
        [ 30.772659] cgroup: "memory" requires setting use_hierarchy to 1 on the root.
        [ 30.772683] cgroup: libvirtd (1961) created nested cgroup for controller "devices" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
        [ 30.772710] cgroup: libvirtd (1961) created nested cgroup for controller "blkio" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
        [ 32.140335] nvidia 0000:01:00.0: irq 46 for MSI/MSI-X
        [ 32.505619] ACPI Error: Field [TMPB] at 1081344 exceeds Buffer [ROM1] size 262144 (bits) (20121018/dsopcode-236)
        [ 32.505624] ACPI Error: Method parse/execution failed [\_SB_.PCI0.PEG0.PEGP._ROM] (Node ffff880224e91f00), AE_AML_BUFFER_LIMIT (20121018/psparse-537)
        [ 802.034422] audit_printk_skb: 69 callbacks suppressed
        [ 802.034428] type=1400 audit(1400914804.392:35): apparmor="DENIED" operation="capable" parent=1 profile="/usr/sbin/cupsd" pid=1683 comm="cupsd" pid=1683 comm="cupsd" capability=36 capname="block_suspend"
        [ 1581.300901] type=1400 audit(1400915584.816:36): apparmor="DENIED" operation="capable" parent=1 profile="/usr/sbin/cupsd" pid=1683 comm="cupsd" pid=1683 comm="cupsd" capability=36 capname="block_suspend"

    My Bootchart.png: looking forward to learning how to improve both my booting time and my knowledge. Thanks in advance :)

    Read the article

  • Mixing JavaFX, HTML 5, and Bananas with the NetBeans Platform

    - by Geertjan
    The banana in the image below can be dragged. Whenever the banana is dropped, the current date is added to the viewer. What's interesting is that the banana, and the viewer that contains it, is defined in HTML 5, with the help of a JavaScript and CSS file. The HTML 5 file is embedded within the JavaFX browser, while the JavaFX browser is embedded within a NetBeans TopComponent class. The only really interesting thing is how drop events of the banana, which is defined within JavaScript, are communicated back into the Java class. Here's how: in the Java class, parse the HTML's DOM tree to locate the node of interest and then set a listener on it. (In this particular case, the event listener adds the current date to the InstanceContent which is in the Lookup.) Here's the crucial bit of code:

        WebView view = new WebView();
        view.setMinSize(widthDouble, heightDouble);
        view.setPrefSize(widthDouble, heightDouble);
        final WebEngine webengine = view.getEngine();
        URL url = getClass().getResource("home.html");
        webengine.load(url.toExternalForm());
        webengine.getLoadWorker().stateProperty().addListener(
            new ChangeListener() {
                @Override
                public void changed(ObservableValue ov, State oldState, State newState) {
                    if (newState == State.SUCCEEDED) {
                        Document document = (Document) webengine.executeScript("document");
                        EventTarget banana = (EventTarget) document.getElementById("banana");
                        banana.addEventListener("click", new MyEventListener(), true);
                    }
                }
            });

    It seems very weird to me that I need to specify "click" as a string. I actually wanted the drop event, but couldn't figure out what the arbitrary string was for that. Which is exactly why strings suck in this context. Many thanks to Martin Kavuma from the Technical University of Eindhoven, who I met today and who inspired me to go down this interesting trail.

    Read the article

  • Presentation Plugin for NetBeans IDE 7.2

    - by Geertjan
    I got some excellent help from Mark Stephens, who is from IDR Solutions, which produces JPedal. Using the LGPL version of JPedal, and code provided by Mark, it's now possible to right-click the node that appears in the Presentation Window, after which, using a file browser (to locate a file on disk) or a URL (a very simple check is done: the URL must start with "http" and end with "pdf"), you can now open PDF files as images (thanks to conversion from PDF to images done by JPedal) in NetBeans IDE, typically (I imagine) for presentation purposes. Note that you should consider the plugin to be in "alpha" state. But, despite that, I've had good results. Try it and use the URL below as a control test (since it works fine for me), which produces the result shown above:

        http://edu.netbeans.org/contrib/slides/netbeans-platform/presentation-4-actions.pdf

    However, for some PDFs the plugin doesn't work, and I don't know why yet (trying to figure it out with Mark), resulting in this stack trace:

        java.lang.ArrayIndexOutOfBoundsException: 8
            at org.jpedal.objects.acroforms.formData.SwingData.completeField(Unknown Source)
            at org.jpedal.objects.acroforms.rendering.DefaultAcroRenderer.createField(Unknown Source)
            at org.jpedal.objects.acroforms.rendering.DefaultAcroRenderer.createDisplayComponentsForPage(Unknown Source)
            at org.jpedal.PDFtoImageConvertor.convert(Unknown Source)
            at org.jpedal.PdfDecoder.getPageAsImage(Unknown Source)
            at org.jpedal.PdfDecoder.getPageAsImage(Unknown Source)

    Here's the location of the plugin; install it into NetBeans IDE 7.2. Feedback is very welcome: http://plugins.netbeans.org/plugin/44525

    Read the article

  • Persisting simple tree with (Fluent-)NHibernate leads to System.InvalidCastException

    - by fudge
    Hi there, there seems to be a problem with recursive data structures and (Fluent-)NHibernate, or it's just me being a complete moron... Here's the tree:

        public class SimpleNode
        {
            public SimpleNode ()
            {
                this.Children = new List<SimpleNode> ();
            }

            public virtual SimpleNode Parent { get; private set; }
            public virtual List<SimpleNode> Children { get; private set; }

            public virtual void setParent (SimpleNode parent)
            {
                parent.AddChild (this);
                Parent = parent;
            }

            public virtual void AddChild (SimpleNode child)
            {
                this.Children.Add (child);
            }

            public virtual void AddChildren (IEnumerable<SimpleNode> children)
            {
                foreach (var child in children)
                {
                    AddChild (child);
                }
            }
        }

    The mapping:

        public class SimpleNodeEntity : ClassMap<SimpleNode>
        {
            public SimpleNodeEntity ()
            {
                Id (x => x.Id);
                References (x => x.Parent).Nullable ();
                HasMany (x => x.Children).Not.LazyLoad ().Inverse ().Cascade.All ().KeyNullable ();
            }
        }

    Now, whenever I try to save a node, I get this:

        System.InvalidCastException: Cannot cast from source type to destination type.
            at (wrapper dynamic-method) SimpleNode. (object,object[],NHibernate.Bytecode.Lightweight.SetterCallback)
            at NHibernate.Bytecode.Lightweight.AccessOptimizer.SetPropertyValues (object,object[])
            at NHibernate.Tuple.Entity.PocoEntityTuplizer.SetPropertyValuesWithOptimizer (object,object[])

    My setup: Mono 2.8.1 (on OSX), NHibernate 2.1.2, FluentNHibernate 1.1.0

    Read the article

  • Web Crawler for Learning Topics on Wikipedia

    - by Chris Okyen
    When I want to learn a vast topic on Wikipedia, I don't know where to start. For instance, say I want to learn about binary stars; I then have to know other things linked on that page, and the links on all the linked pages, and so on, for the specified number of levels. I want to write a web crawler like HTTracker or something similar that will display a hierarchy of the links on a certain page and the links on those linked pages. I wish to use as much prewritten code as possible. Here is an example, pretending we are bending the rules by grabbing links from only the first sentence of each page; the example archives and "processes" two levels deep. The page is Ternary operation.

    The first level:

    - In mathematics a ternary operation is an N-ary operation.

    The second level:

    - Under Mathematics: Mathematics (from Greek μάθημα máthema, "knowledge, study, learning") is the abstract study of topics encompassing quantity, structure, space, change and others; it has no generally accepted definition.
    - Under N-ary: In logic, mathematics, and computer science, the arity of a function or operation is the number of arguments or operands that the function takes.
    - Under Operation: In its simplest meaning in mathematics and logic, an operation is an action or procedure which produces a new value from one or more input values.

    I need some way to determine what order to approach all these wiki pages in, to learn the concept (in this case ternary operations)... Following along with this example, one way to show the path to read would be a printout like so: it shows that the first sentence of the Mathematics page doesn't link to the first sentence of any pages linked on the ternary page two levels deep. (Please tell me how I should explain this.) In other words, the child node of the top page's first sentence, ternary_operation, does not have any child nodes that reference the children of the top page's other children nodes, N-ary and operation. Thus it is safe to read this first. Since N-ary has a link to operations, we should read the operation page second and finally read the N-ary page last. Again, I wish to use as much prewritten code as possible, and was wondering what language to use and what would be the simplest way to go about doing this if there isn't already something out there? Thank you!
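    What the post is describing, reading a page only after the pages its first sentence depends on, amounts to a topological sort of the crawled link graph. A rough Java sketch of that ordering step follows (Kahn's algorithm over an adjacency map); the crawling and first-sentence extraction are omitted, the method and variable names are invented for the example, and pages caught in a dependency cycle simply never become "ready" and are left out of the result.

        import java.util.ArrayDeque;
        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.Deque;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class ReadingOrder {

            // deps maps a page title to the pages its first sentence links to,
            // i.e. the pages that should be read before it.
            static List<String> order(Map<String, List<String>> deps) {
                Map<String, Integer> pending = new HashMap<>();       // unread prerequisites per page
                Map<String, List<String>> dependents = new HashMap<>(); // page -> pages waiting on it

                for (Map.Entry<String, List<String>> e : deps.entrySet()) {
                    pending.putIfAbsent(e.getKey(), 0);
                    for (String d : e.getValue()) {
                        pending.merge(e.getKey(), 1, Integer::sum);
                        dependents.computeIfAbsent(d, k -> new ArrayList<>()).add(e.getKey());
                        pending.putIfAbsent(d, 0);
                    }
                }

                Deque<String> ready = new ArrayDeque<>();
                pending.forEach((page, count) -> { if (count == 0) ready.add(page); });

                List<String> result = new ArrayList<>();
                while (!ready.isEmpty()) {
                    String page = ready.poll();
                    result.add(page);
                    for (String next : dependents.getOrDefault(page, Collections.emptyList())) {
                        if (pending.merge(next, -1, Integer::sum) == 0) ready.add(next);
                    }
                }
                return result; // prerequisite pages come before the pages that cite them
            }
        }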

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part IV)

    So finally we get to the fun part: the fruits of all of our middle-tier/back-end labors of generating classes to interface with an XML data source, which the previous posts were about, can now be presented quickly and easily to an end user. I think. We'll see. We'll be using a WPF window to display all of the various MFL information that we've collected in the two XML files, and we'll provide a means of adding, updating and deleting each of these entities using as little code as possible. Additionally, I would like to dig into the performance of this solution as well as the flexibility of it if we were to modify the underlying XML schema. So first things first, let's create a WPF project and include our XML data in a data folder within. On the main window, we'll drag out the following controls:

    - A combo box to contain all of the teams
    - A list box to show the players of the selected team, along with add/delete player buttons
    - A text box tied to the selected player's name, with a save button to save any changes made to the player name
    - A combo box of all the available positions, tied to the currently selected player's position
    - A data grid tied to the statistics of the currently selected player, with add/delete statistic buttons

    This monstrosity of a form and its associated project will look like this (don't forget to reference the DataFoundation project from the Presentation project). To get to the visual data binding, as we learned in a previous post, you have to first make sure the project containing your bindable classes is compiled. Do so, and then open the Data Sources pane to add a reference to the Teams and Positions classes in the DataFoundation project. Why only Team and Position? Well, we will get to Players from Teams, and Statistics from Players, so no need to make an interface for them, as we'll see in a second. As for Positions, we'll need a way to bind the dropdown to ALL positions; they don't appear underneath any of the other classes, so we need to reference it directly. After adding these guys, expand every node in your Data Sources pane and see how the Team node allows you to drill into Players and then Statistics. This is why there was no need to bring in a reference to those classes for the UI we are designing. Now for the seriously hard work of binding all of our controls to the correct data sources. Drag the following items from the Data Sources pane to the specified control on the window design canvas:

    - Team.Name > Teams combo box
    - Team.Players.Name > Players list box
    - Team.Players.Name > Player name text box
    - Team.Players.Statistics > Statistics data grid
    - Position.Name > Positions combo box

    That is it! Really? Well, no, not really; there is one caveat here in that the Positions combo box is not bound to the selected player's position. To do so, we will apply a binding to the position combo box's SelectedValue to point to the current player's PositionId value. That should do the trick; now all we need to worry about is loading the actual data. Sadly, it appears as if we will need to drop to code in order to invoke our IO methods to load all teams and positions. At least Visual Studio kindly created the stubs for us to do so; ultimately the code should look like this. Note the weirdness with the InitializeDataFiles call: that is my current means of telling an IO where to load the data for each of the entities. I haven't thought of a more intuitive way than that yet, but do note that all data is loaded from Teams.xml except for positions, which are loaded from Lookups.xml.
    I think that may be all we need to do to at least load all of the data; let's run it and see. Yay! All of our glorious data is being displayed! Er, wait, what's up with the position dropdown? Why is it red? Let's select the RB and see if everything updates. Crap, the position didn't update to reflect the selected player, but everything else did. Where did we go wrong in binding the position to the selected player? Thinking about it a bit and comparing it to how traditional data binding works, I realize that we never set the value member (or some similar property) to tell the control to join the Id of the source (positions) to the position Id of the player. I don't see a similar property to that on the combo box control, but I do see a property named SelectedValuePath that might be it, so I set it to Id and run the app again. Hey, all right! No red box around the positions combo box. Unfortunately, selecting the RB does not update the dropdown to point to Runningback. Hmmm. Now what could it be? Maybe the problem is that we are loading teams before we are loading positions, so when it binds position Id, all of the positions aren't loaded yet. I went to the code-behind and switched things so position loads first, and no dice. Same result when I run. Why? WHY? OK, OK, calm down, take a deep breath. Get something with caffeine or sugar (preferably both) and think rationally. OK, a gigantic chocolate chip cookie and a Mountain Dew chaser have never let me down in the past, so don't fail me now! Ah ha! Of course! I didn't even have to finish the Mountain Dew and I think I've got it: Data Context. By default, when setting the selected value binding for the dropdown, the data context was list_team. I don't even know what the heck list_team is; we want it to be bound to our team players view source resource instead. Running it now and selecting the various players: done and done. Everything read and bound, thank you caffeine and sugar! Oh, and thank you Visual Studio 2010. Let's wire up some of those buttons now. There has got to be a better way to do this, but it works for now. What the add player button does is add a new player object to the currently selected team. Unfortunately, I couldn't get the new object to automatically show up in the players list (something about not using an observable collection, gotta look into this), so I just save the change immediately and reload the screen. Terrible, but it works. Let's go after something easier: the save button. By default, as we type in new text for the player's name, it is showing up in the list box as updated. Cool! Why couldn't my add new player logic do that? Anyway, the save button should be as simple as invoking MFL.IO.Save for the selected player, like this:

        MFL.IO.Save((MFL.Player)lbTeamPlayers.SelectedItem, true);

    Surprisingly, that worked on the first try. Let's see if we get as lucky with the Delete player button:

        MFL.IO.Delete((MFL.Player)lbTeamPlayers.SelectedItem);
        Refresh();

    Note the use of the Refresh method again; I can't seem to figure out why updates to the underlying data source are immediately reflected, but adds and deletes are not. That is a problem for another day, and again my hunch is that I should be binding to something more complex than IEnumerable (like an observable collection). Now that an example of the basic CRUD methods is wired up, I want to quickly investigate the performance of this beast.
    I'm going to make a special button to add 30 teams, each with 50 players and 10 seasons' worth of stats. If my math is right, that will end up with 15,000 rows of data, a pretty hefty amount for an XML file. The save of all this new data took a little over a minute, but that is acceptable because we wouldn't typically be saving batches of 15k records, and the resulting XML file size is a little over a megabyte. Not huge, but big enough to see some read performance numbers, or so I thought. It reads this file and renders the first team in under a second. That is unbelievable, but we are lazy loading and the file really wasn't that big. I will increase it to 50 teams with 100 players and 20 seasons each: 100,000 rows. It took a year and a half to save all of that data, and resulted in an 8 megabyte file. Seriously, if you are loading XML files this large, get a freaking database! Despite this, it STILL takes under a second to load and render the first team, which is interesting mostly because I thought that it was loading that entire 8 MB XML file behind the scenes. I have to say that I am quite impressed with the performance of the LINQ to XML approach, particularly since I took no efforts to optimize any of this code and was fairly new to the concept from the start. There might be some merit to this little project after all. Look out SQL Server and Oracle, use XML files instead! Next up, I am going to completely pull the rug out from under the UI and change a number of entities in our model. How well will the code be regenerated? How much effort will be required to tie things back together in the UI?

    Read the article

  • Moving away from PHP and running towards server-side JavaScript [on hold]

    - by Sosukodo
    I've decided to start moving away from PHP, and server-side JavaScript looks like an attractive replacement. However, I'm having a hard time wrapping my head around how others are using Node.js for web applications. I'm currently using Lighttpd with FastCGI PHP. The one thing I like about PHP is that I can "inline" my scripts in the document like so:

        <?php echo 'Hello, World!'; ?>

    My question is: is there any server-side JavaScript solution that I can use in this manner? For instance, I'd love to be able to do this:

        <?js print('Hello, World!'); ?>

    Is there such a thing? I'm not looking for opinions about "which is better". I just want to know what's out there and I'll explore each of them on my own. The important thing is that I'd like to use it like I demonstrated above. Links to the software along with implementation examples will be considered above other answers.

    Read the article

  • Using visitor pattern with large object hierarchy

    - by T. Fabre
    Context: I've been using, with a hierarchy of objects (an expression tree), a "pseudo" visitor pattern (pseudo, as in it does not use double dispatch):

        public interface MyInterface
        {
            void Accept(SomeClass operationClass);
        }

        public class MyImpl : MyInterface
        {
            public void Accept(SomeClass operationClass)
            {
                operationClass.DoSomething();
                operationClass.DoSomethingElse();
                // ... and so on ...
            }
        }

    This design was, however questionable, pretty comfortable, since the number of implementations of MyInterface is significant (~50 or more) and I didn't need to add extra operations. Each implementation is unique (it's a different expression or operator), and some are composites (i.e., operator nodes that contain other operator/leaf nodes). Traversal is currently performed by calling the Accept operation on the root node of the tree, which in turn calls Accept on each of its child nodes, which in turn... and so on. But the time has come when I need to add a new operation, such as pretty printing:

        public class MyImpl : MyInterface
        {
            // Property does not come from MyInterface
            public string SomeProperty { get; set; }

            public void Accept(SomeClass operationClass)
            {
                operationClass.DoSomething();
                operationClass.DoSomethingElse();
                // ... and so on ...
            }

            public void Accept(SomePrettyPrinter printer)
            {
                printer.PrettyPrint(this.SomeProperty);
            }
        }

    I basically see two options: (1) keep the same design, adding a new method for my operation to each derived class, at the expense of maintainability (not an option, IMHO); or (2) use the "true" Visitor pattern, at the expense of extensibility (not an option, as I expect to have more implementations coming along the way...), with about 50+ overloads of the Visit method, each one matching a specific implementation.

    Question: Would you recommend using the Visitor pattern? Is there any other pattern that could help solve this issue?
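    For reference, the "true" Visitor being weighed in option 2 relies on double dispatch: each node implements a one-line Accept that hands itself back to the visitor, so a new operation such as pretty printing becomes one new visitor class instead of a change to every node. The sketch below is in Java rather than the C# of the snippets above, purely to illustrate the mechanism; the node and visitor names are invented.

        interface ExpressionVisitor<R> {
            R visitConstant(Constant node);
            R visitAddition(Addition node);
            // ...one method per concrete node type (~50 in the post's case)
        }

        interface Expression {
            <R> R accept(ExpressionVisitor<R> visitor);
        }

        class Constant implements Expression {
            final double value;
            Constant(double value) { this.value = value; }
            public <R> R accept(ExpressionVisitor<R> visitor) { return visitor.visitConstant(this); }
        }

        class Addition implements Expression {
            final Expression left, right;
            Addition(Expression left, Expression right) { this.left = left; this.right = right; }
            public <R> R accept(ExpressionVisitor<R> visitor) { return visitor.visitAddition(this); }
        }

        // The new operation lives entirely in one class instead of being
        // spread over every node type.
        class PrettyPrinter implements ExpressionVisitor<String> {
            public String visitConstant(Constant node) { return Double.toString(node.value); }
            public String visitAddition(Addition node) {
                return "(" + node.left.accept(this) + " + " + node.right.accept(this) + ")";
            }
        }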

    Read the article

  • Time Zone on WebLogic Server

    - by adejuanc
    In order to configure the time zone with WebLogic Server, use the following JVM startup command:

        -Duser.timezone=<timezone>

    For example, in the Java arguments in the admin console at Environments -> Servers -> Servername -> Server Start tab, configure the startup settings that Node Manager will use to start the particular server. For example:

        -Duser.timezone='America/Arizona'

    There are many different time zones, each with its own code. For a complete list please refer to: http://en.wikipedia.org/wiki/List_of_zoneinfo_time_zones For testing, you can run the following code on WLS with a JSP, a servlet, or by deploying the class:

        import java.util.Calendar;
        import java.util.TimeZone;

        public class TestTimeZone {

            public static void main(String[] args) {
                Calendar calendar = Calendar.getInstance();
                TimeZone timeZone = calendar.getTimeZone();
                System.out.println(" your Current TimeZone is : " + timeZone.getDisplayName());
                System.out.println(" Time Zone id : " + timeZone.getID());
            }
        }

    Read the article

  • Unit testing to prove balanced tree

    - by Darrel Hoffman
    I've just built a self-balancing tree (red-black) in Java (language should be irrelevant for this question though), and I'm trying to come up with a good means of testing that it's properly balanced. I've tested all the basic tree operations, but I can't think of a way to test that it is indeed well and truly balanced. I've tried inserting a large dictionary of words, both pre-sorted and un-sorted. With a balanced tree, those should take roughly the same amount of time, but an unbalanced tree would take significantly longer on the already-sorted list. But I don't know how to go about testing for that in any reasonable, reproducible way. (I've tried doing millisecond tests on these, but there's no noticeable difference - probably because my source data is too small.) Is there a better way to be sure that the tree is really balanced? Say, by looking at the tree after it's created and seeing how deep it goes? (That is, without modifying the tree itself by adding a depth field to each node, which is just wasteful if you don't need it for anything other than testing.)
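    One alternative to timing-based tests is to walk the finished tree and assert the red-black invariants directly: no red node has a red child, and every root-to-null path contains the same number of black nodes. That check is a short recursion and needs no extra fields on the nodes. The sketch below assumes a node type exposing a red flag plus left and right references; those names are placeholders for whatever the real implementation uses.

        public class RedBlackChecker {

            // Placeholder node shape; adapt to the real implementation.
            static class Node {
                boolean red;
                Node left, right;
            }

            // Returns the black-height of the subtree, or throws if an invariant fails.
            static int checkInvariants(Node node) {
                if (node == null) {
                    return 1; // null leaves count as black
                }
                if (node.red && ((node.left != null && node.left.red)
                              || (node.right != null && node.right.red))) {
                    throw new AssertionError("red node has a red child");
                }
                int leftBlackHeight = checkInvariants(node.left);
                int rightBlackHeight = checkInvariants(node.right);
                if (leftBlackHeight != rightBlackHeight) {
                    throw new AssertionError("black heights differ");
                }
                return leftBlackHeight + (node.red ? 0 : 1);
            }

            // In a unit test: build the tree from the sorted word list, then assert
            // that the root is black and that checkInvariants(root) completes normally.
        }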

    Read the article

  • Minimum percentage of free physical memory that Linux requires for optimal performance

    - by csoto
    Recently, we have been getting questions about the percentage of free physical memory that the OS requires for optimal performance, mainly applicable to physical compute nodes. Under normal conditions you may see that on nodes without any application running, the OS takes (for example) between 24 and 25 GB of memory. The Linux system reports free memory in a different way, and most of those 25 GB (in the example) are available for user processes. I.e.:

        Mem: 99191652k total, 23785732k used, 75405920k free, 173320k buffers

    The MOS Doc Id. 233753.1 - "Analyzing Data Provided by '/proc/meminfo'" - explains it (section 4, "Final Remarks"): Free Memory and Used Memory. Estimating the resource usage, especially the memory consumption of processes, is by far more complicated than it looks at first glance. The philosophy is: an unused resource is a wasted resource. The kernel therefore will use as much RAM as it can to cache information from your local and remote filesystems/disks. This builds up over time as reads and writes are done on the system, trying to keep the data stored in RAM as relevant as possible to the processes that have been running on your system. If there is free RAM available, more caching will be performed and thus more memory 'consumed'. However this doesn't really count as resource usage, since this cached memory is available in case some other process needs it. The cache is reclaimed, not at the time of process exit (you might start up another process soon that needs the same data), but upon demand. That said, focusing more specifically on the percentage question: apart from this memory that the OS takes, how much free memory, at minimum, must be available on every node so that they operate normally? The answer is: as a rule of thumb, 80% memory utilization is a good threshold; anything bigger than that should be investigated and remedied.

    Read the article

  • HTML, JavaScript, and CSS in a NetBeans Platform Application

    - by Geertjan
    I broke down the code I used yesterday, to its absolute bare minimum, and then realized I'm not using HTML 5 at all: <html> <head> <link rel="stylesheet" href="style.css" type="text/css" media="all" /> <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.6.3/jquery.min.js"></script> <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jqueryui/1.8.16/jquery-ui.min.js"></script> <script type="text/javascript" src="script.js"></script> </head> <body> <div id="logo"> </div> <div id="infobox"> <h2 id="statustext"/> </div> </body> </html> Here's the script.js file referred to above: $(function(){ var banana = $("#logo"); var statustext = $("#statustext"); var defaulttxt = "Drag the banana!"; var dragtxt = "Dragging the banana!"; statustext.text(defaulttxt); banana.draggable({ drag: function(event, ui){ statustext.text(dragtxt); }, stop: function(event, ui){ statustext.text(defaulttxt); } }); }); And here's the stylesheet: body { background:#3B4D61 repeat 0 0; margin:0; padding:0; } h2 { color:#D1D8DF; display:block; font:bold 15px/10px Tahoma, Helvetica, Arial, Sans-Serif; text-align:center; } #infobox { position:absolute; width:300px; bottom:20px; left:50%; margin-left:-150px; padding:0 20px; background:rgba(0,0,0,0.5); -webkit-border-radius:15px; -moz-border-radius:15px; border-radius:15px; z-index:999; } #logo { position:absolute; width:450px; height:150px; top:40%; left: 30%; background:url(bananas.png) no-repeat 0 0; cursor:move; z-index:700; } However, I've replaced the content of the HTML file with a few of the samples from here, without any problem; in other words, if the HTML 5 canvas were to be needed, it could seamlessly be incorporated into my NetBeans Platform application: https://developer.mozilla.org/en/Canvas_tutorial/Basic_usage

    Read the article
