Search Results

Search found 62912 results on 2517 pages for 'data entry'.

Page 430/2517

  • JavaOne 2012: Nashorn Edition

    As with my JavaOne 2012: OpenJDK Edition post a while back (now updated to reflect the schedule of the talks), I find it convenient to have my JavaOne schedule ordered by subjects of interest. Beside OpenJDK in all its flavors, another subject I find very exciting is Nashorn. I blogged about the various material on Nashorn in the past, and we interviewed Jim Laskey, the Project Lead on Project Nashorn, in the Java Spotlight podcast. So without further ado, here are the JavaOne 2012 talks and BOFs with Nashorn in their title or abstract:

    CON5390 - Nashorn: Optimizing JavaScript and Dynamic Language Execution on the JVM - Monday, Oct 1, 8:30 AM - 9:30 AM
    There are many implementations of JavaScript, meant to run either on the JVM or standalone as native code. Both approaches have their respective pros and cons. The Oracle Nashorn JavaScript project is based on the former approach. This presentation goes through the performance work that has gone on in Oracle’s Nashorn JavaScript project to date in order to make JavaScript-to-bytecode generation for execution on the JVM feasible. It shows that the new invokedynamic bytecode gets us part of the way there but may not quite be enough. What other tricks did the Nashorn project use? The presentation also discusses future directions for increased performance for dynamic languages on the JVM, covering proposed enhancements to both the JVM itself and to the bytecode compiler.

    CON4082 - Nashorn: JavaScript on the JVM - Monday, Oct 1, 3:00 PM - 4:00 PM
    The JavaScript programming language has been experiencing a renaissance of late, driven by the interest in HTML5. Nashorn is a JavaScript engine implemented fully in Java on the JVM. It is based on the Da Vinci Machine (JSR 292) and will be available with JDK 8. This session describes the goals of Project Nashorn, gives a top-level view of how it all works, provides the current status, and demonstrates examples of JavaScript and Java working together.

    BOF4763 - Meet the Nashorn JavaScript Team - Tuesday, Oct 2, 4:30 PM - 5:15 PM
    Come to this session to meet the Oracle JavaScript (Project Nashorn) language team.

    BOF6661 - Nashorn, Node, and Java Persistence - Tuesday, Oct 2, 5:30 PM - 6:15 PM
    With Project Nashorn, developers will have a full and modern JavaScript engine available on the JVM. In addition, they will have support for running Node applications with Node.jar. This unique combination of capabilities opens the door for best-of-breed applications combining Node with Java SE and Java EE. In this session, you’ll learn about Node.jar and how it can be combined with Java EE components such as EclipseLink JPA for rich Java persistence. You’ll also hear about all of Node.jar’s mapping, caching, querying, performance, and scaling features.

    CON10657 - The Polyglot Java VM and Java Middleware - Thursday, Oct 4, 12:30 PM - 1:30 PM
    In this session, Red Hat and Oracle discuss the impact of polyglot programming from their own unique perspectives, examining non-Java languages that utilize Oracle’s Java HotSpot VM. You’ll hear a discussion of topics relating to Ruby, Lisp, and Clojure and the intersection of other languages where they may touch upon individual frameworks and projects, and you’ll get perspectives on JavaScript via the Nashorn Project, an upcoming JavaScript engine, developed fully in Java.

    CON5251 - Putting the Metaobject Protocol to Work: Nashorn’s Java Bindings - Thursday, Oct 4, 2:00 PM - 3:00 PM
    Project Nashorn is Oracle’s new JavaScript runtime in Java 8. Being a JavaScript runtime running on the JVM, it provides integration with the underlying runtime by enabling JavaScript objects to manipulate Java objects, implement Java interfaces, and extend Java classes. Nashorn is invokedynamic-based, and for its Java integration, it does away with the concept of wrapper objects in favor of direct virtual machine linking to Java objects’ methods provided by a metaobject protocol, providing much higher performance than what could be expected from a scripting runtime. This session looks at the details of the integration, a topic of interest to other language implementers on the JVM and a wider audience of developers who want to understand how Nashorn works.

    That's 6 sessions tooting the Nashorn this year at JavaOne, up from 2 last year.

    Read the article

  • QotD: Maurizio Cimadamore on Project Lambda Binary Snapshots

    I'm glad to announce that the first binary snapshots of the lambda repository are available at the following URL: http://jdk8.java.net/lambda/
    As you can imagine, as the implementation of the compiler/libraries is still under heavy development, there are still many rough corners that need to be polished. I'd like to thank you all for all the patience and the valuable feedback provided so far - please keep it coming!
    Maurizio Cimadamore, announcing the Project Lambda binary snapshots on the lambda-dev OpenJDK mailing list.

    Read the article

  • Store VOD wmi data in a database directly or use CQRS?

    - by JD01
    I need to collect Video on Demand bandwidth usage every few minutes (or maybe every few seconds) and store it in a database so users can produce graphs of bandwidth usage over a period of time (a few hours, days, weeks or even possibly months). The sort of data that will be stored is the number of users watching videos, current server bandwidth (Mb/s), multicast bit rate, etc. I am wondering whether CQRS with event sourcing would be a good approach, as I could then rebuild my objects to create different projections (i.e. different graphs/reports etc.), but then again it seems like I am introducing complexity which might not be needed. Or would it be best to just put the data directly in a database (currently Postgres) and query off that? Having thought about it, my table is a form of audit log anyway, so I don't think I need event sourcing at all. Any thoughts?
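    If the answer is "just write it straight to Postgres", the write path can stay very small. A rough sketch in Node.js with the node-postgres (pg) module follows - the table, column names and connection string are invented for the example, not taken from the question:

        // Sketch: append one bandwidth sample per poll interval straight into Postgres.
        // Assumed table:
        //   CREATE TABLE vod_samples (sampled_at timestamptz, viewers int,
        //                             bandwidth_mbps real, multicast_kbps real);
        var pg = require('pg');
        var client = new pg.Client('postgres://user:pass@localhost/vod');

        function recordSample(sample, done) {
          client.query(
            'INSERT INTO vod_samples (sampled_at, viewers, bandwidth_mbps, multicast_kbps) ' +
            'VALUES (now(), $1, $2, $3)',
            [sample.viewers, sample.bandwidthMbps, sample.multicastKbps],
            done);
        }

        client.connect(function (err) {
          if (err) { throw err; }
          recordSample({ viewers: 42, bandwidthMbps: 310.5, multicastKbps: 2500 }, function (err) {
            if (err) { console.error(err); }
            client.end();
          });
        });

    Graphs over hours or months then become plain aggregate queries over sampled_at; event sourcing only starts paying for itself if you expect to replay history into projections you haven't thought of yet.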

    Read the article

  • OpenJDK In The News: Oracle Outlines Roadmap for Java SE and JavaFX at JavaOne 2012

    The OpenJDK Community continues to host the development of the reference implementation of Java SE 8. Weekly developer preview builds of JDK 8 continue to be available from jdk8.java.net.
    OpenJDK continues to thrive with contributions from Oracle, as well as other companies, researchers and individuals. The OpenJDK Web Site Terms of Use was recently updated to allow work on Java Specification Requests (JSRs) for Java SE to take place in the OpenJDK Community, alongside their corresponding reference implementations, so that specification leads can satisfy the new transparency requirements of the Java Community Process (JCP 2.8).
    “The recent decision by the Java SE 8 Expert Group to defer modularity to Java SE 9 will allow us to focus on the highly-anticipated Project Lambda, the Nashorn JavaScript engine, the new Date/Time API, and Type Annotations, along with numerous other performance, simplification, and usability enhancements,” said Georges Saab, vice president, Software Development, Java Platform Group at Oracle. “We are continuing to increase our communication and transparency by developing the reference implementation and the Oracle-led JSRs in the OpenJDK community.”
    Quotes taken from the 14th press release from Oracle mentioning OpenJDK, titled "Oracle Outlines Roadmap for Java SE and JavaFX at JavaOne 2012".

    Read the article

  • How do you choose a programming/data structure/algorithm book?

    - by Fanatic23
    I really should not mention the name of the book, but the first time I read it (during my undergrad days) I almost concluded that data structures was a bad course to pick. Which brings me to the question I am asking here: what makes a programming, data structure or algorithm book tick? Clearly, lucid explanation is one thing. But I also realize that organization of the material is very important, and so are diagrams. What else? Some pointers would obviously help when I hang out in my neighborhood computer book shop the next time.

    Read the article

  • OpenJDK In The News: AMD and Oracle to Collaborate in the OpenJDK Community [..]

    During the JavaOne™ 2012 Strategy Keynote, AMD (NYSE: AMD) announced its participation in OpenJDK™ Project “Sumatra” in collaboration with Oracle and other members of the OpenJDK community to help bring heterogeneous computing capabilities to Java™ for server and cloud environments. The OpenJDK Project “Sumatra” will explore how the Java Virtual Machine (JVM), as well as the Java language and APIs, might be enhanced to allow applications to take advantage of graphics processing unit (GPU) acceleration, either in discrete graphics cards or in high-performance graphics processor cores such as those found in AMD accelerated processing units (APUs).
    “Affirming our plans to contribute to the OpenJDK Project represents the next step towards bringing heterogeneous computing to millions of Java developers and can potentially lead to future developments of new hardware models, as well as server and cloud programming paradigms,” said Manju Hegde, corporate vice president, Heterogeneous Applications and Developer Solutions at AMD. “AMD has an established track record of collaboration with open-software development communities from OpenCL™ to the Heterogeneous System Architecture (HSA) Foundation, and with this initiative we will help further the development of graphics acceleration within the Java community.”
    “We expect our work with AMD and other OpenJDK participants in Project “Sumatra” will eventually help provide Java developers with the ability to quickly leverage GPU acceleration for better performance,” said Georges Saab, vice president, Software Development, Java Platform Group at Oracle. “We hope individuals and other organizations interested in this exciting development will follow AMD's lead by joining us in Project “Sumatra.””
    Quotes taken from the first press release from AMD mentioning OpenJDK, titled "AMD and Oracle to Collaborate in the OpenJDK Community to Explore Heterogeneous Computing for Java".

    Read the article

  • How to get data from a bluetooth device that is not visible?

    - by jonobacon
    I just bought a Fitbit One which includes Bluetooth 4.0 to sync with mobile devices. Currently libfitbit does not include support for bluetooth syncing, so I would like to see how much data I can get out of the device that I can pass onto the libfitbit devs so that they can explore bluetooth support. I ran: hcitool scan which unfortunately did not return any devices. I also used blueman to scan for devices and nothing was found either. Therefore I am assuming that the bluetooth radio in the device is not visible by default. Can anyone recommend any ways to get data out of the device that could be helpful?

    Read the article

  • QotD: Eben Upton on Raspberry Pi Model B Shipping With 512MB of RAM

    One of the most common suggestions we’ve heard since launch is that we should produce a more expensive “Model C” version of Raspberry Pi with extra RAM. This would be useful for people who want to use the Pi as a general-purpose computer, with multiple large applications running concurrently, and would enable some interesting embedded use cases (particularly using Java) which are slightly too heavyweight to fit comfortably in 256MB.
    The downside of this suggestion for us is that we’re very attached to $35 as our highest price point. With this in mind, we’re pleased to announce that from today all Model B Raspberry Pis will ship with 512MB of RAM as standard.
    Eben Upton, a founder and trustee of the Raspberry Pi Foundation, in a blog post announcing the change.

    Read the article

  • Dashboard to aggregate Google Analytics, Facebook, YouTube etc tracking data?

    - by Richard
    I'd like to see as much tracking data as possible about my online presence, in one single dashboard - so views/conversions from Google Analytics data, the performance of my Facebook campaigns via the Insights API, views/clicks from my YouTube campaigns, etc. This could be as simple as a graph with time on the x-axis, and key indicators from each source on the y-axis (conversions from Analytics, likes on Facebook, views on YouTube, etc). The idea is that I can see customer engagement with each source, over time. I can write my own such dashboard easily enough, but I wondered if there was something off-the-shelf that already did this. Apologies if this isn't the right forum for such a question - would appreciate tips for the best place to ask.

    Read the article

  • Food For Tests: 7u12 Build b05, 8 with Lambda Preview b68

    This week brought along new developer preview releases of the JDK and related projects. On the JDK 7 side, the Java™ Platform, Standard Edition 7 Update 12 Developer Preview Releases have been updated to 7u12 Build b05. On the JDK 8 side, as Mike Duigou announced on the lambda-dev mailing list, "A new promotion (b68) of preview binaries for OpenJDK Java 8 with lambda extensions is now available at http://jdk8.java.net/lambda/." Happy testing!

    Read the article

  • Vertex data split into separate buffers or one structure?

    - by kiba2
    Is it better to have all vertex data in one structure like this:

        class MyVertex {
            int x, y, z;
            int u, v;
            int normalx, normaly, normalz;
        };

    Or to have each component (location, normal, texture coordinates) in separate arrays/buffers? To me it always seemed logical to keep the data grouped together in one structure, because they'd always be the same for each instance of a shared vertex, and that seems to be true for things like character models (e.g. the normal should be an average of adjacent normals for smooth lighting). One instance where this doesn't seem to work is other kinds of meshes, like say a cube, where a vertex position is shared between faces but the texture coordinates it needs are different on each face. Does everybody normally keep them separate? Won't this make them less space efficient if there needs to be an instance of texture coordinates and normals for each triangle vertex (they won't be indexed)? Can OpenGL even handle this mixing of indexed (for location) vs non-indexed buffers in the same VBO?
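    For what it's worth, interleaving does not force a choice between one buffer and separate attribute bindings: one buffer plus a stride and a per-attribute byte offset covers it. A minimal sketch in JavaScript/WebGL (the question is about desktop OpenGL, but the layout logic is the same; the attribute names and the already-compiled program object are assumptions of the example):

        // Interleaved layout: x,y,z, nx,ny,nz, u,v per vertex, all in a single buffer.
        // Assumes `gl` is a WebGL context and `program` declares aPosition/aNormal/aTexCoord.
        var data = new Float32Array([
          // x, y, z,   nx, ny, nz,   u, v
             0, 0, 0,    0,  0,  1,   0, 0,
             1, 0, 0,    0,  0,  1,   1, 0,
             0, 1, 0,    0,  0,  1,   0, 1
        ]);
        var vbo = gl.createBuffer();
        gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
        gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW);

        var stride = 8 * 4; // 8 floats per vertex, 4 bytes each
        var aPos = gl.getAttribLocation(program, 'aPosition');
        var aNrm = gl.getAttribLocation(program, 'aNormal');
        var aUV  = gl.getAttribLocation(program, 'aTexCoord');
        gl.enableVertexAttribArray(aPos);
        gl.enableVertexAttribArray(aNrm);
        gl.enableVertexAttribArray(aUV);
        gl.vertexAttribPointer(aPos, 3, gl.FLOAT, false, stride, 0);  // position at offset 0
        gl.vertexAttribPointer(aNrm, 3, gl.FLOAT, false, stride, 12); // normal after 3 floats
        gl.vertexAttribPointer(aUV,  2, gl.FLOAT, false, stride, 24); // uv after 6 floats

    On the indexing point: indexed drawing uses a single index stream for all attributes, so a cube corner that needs different texture coordinates per face has to be duplicated into separate vertices whichever layout you pick; per-attribute index buffers are not part of core OpenGL.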

    Read the article

  • How can I make zaz save its profile data?

    - by RolandiXor - The Ice Man
    I've been playing Zaz recently as a time waster and stress beater, but it seems not to be actively maintained, and is not saving profile data under Ubuntu 12.10. It's getting to be more stressful than fun, because it keeps crashing, and under Unity, GNOME Shell, and KDE (in other words, any OpenGL-enabled WM) it makes the GPU lock up. How can I make it save the profile data, or create a profile that I can manually place my level info in? I'm tired of playing the same levels over and over and not being able to start from the ones I've already passed. I have yet to find any info on fixing this. Any clues?

    Read the article

  • Iomega Home Media Network Hard Drive 1TB Cloud Edition failed, data recovery?

    - by lonbon69
    I have an Iomega Home Media Network Drive Cloud Edition 1TB that started clicking and then displayed a failure LED code (power LED and red LED). I removed the SATA drive, inserted it into an 'All in 1 HDD Docking Station' and connected it to a laptop by USB - the laptop runs Windows 7. The dock is seen as drive E, but I cannot access it and it reports 0% data etc. The drive does spin up when I power the dock. Web searches say the drive has an EXT3 file system and to use Ubuntu to access it. I have now set up a dual-boot laptop but still do not see the drive using Ubuntu. Is there something else I need to do to get it recognised? I really would like to recover the data, any suggestions please?

    Read the article

  • UK Data Breaches Up 10-fold in 10 Years

    - by TATWORTH
    At http://www.v3.co.uk/v3-uk/news/2201863/uk-data-breaches-rocket-by-1-000-percent-over-past-five-years there is an interesting report on the increase in data breaches reported in the UK. A lot of this increase may simply be a change in legislation that has made reporting a statutory obligation.
    Some questions to ask yourself:
    Are server logs checked for untoward activity?
    Do you have a reporting policy if something is amiss?
    Did you design security in from the start of your application design?
    Do you log, for example, failed logons?
    Do you run tools to check for code integrity?
    Is my defense a strategy of defense in depth?
    Do you realise that 60% of hack attacks are internal?
    Whilst SQL injection is a problem that affects practically all application code platforms, within Microsoft applications do you run FxCop? Do you run any of the other free tools for checking?

    Read the article

  • Is a blob more efficient than a varchar for data that can be ANY size?

    - by BillyNair
    When setting up a database I want to use the most efficient data type for potentially fairly long data. Currently my project is to store song titles and thoughts pertaining to each song. Some titles might be 5 characters, others longer than 100 characters, and the thoughts could run pretty long. Is it more efficient to use a varchar set to 8000 or to use a blob? Is using a blob the same as a varchar, in that there is a set size allocated regardless of what it holds? Or is it just a pointer that doesn't really use much space in the table? Is there a certain set size of a blob in KB, or is it expandable?

    Read the article

  • Zelda-style top-down RPG. How to store tile and collision data?

    - by Delerat
    I'm looking to build a Zelda: LTTP style top-down RPG. I've read a lot on the subject and am currently going back and forth on a few solutions. I'm using C#, MonoGame, and Tiled. For my tile maps, these are the choices I can see in front of me:
    - Store each tile as its own array, each one having 3-4 layers, texture/animation, depth, flags, and maybe collision (depending on how I do it). I've read warnings about memory issues going this route, and my biggest map will probably be 160x120 tiles. My average map however will be about 40x30. The number of tiles might cut in half if I decide to double my tile size, which is currently 16x16. This is the most appealing approach for me, as I feel like I would know how to save maps, make changes, and separate it into chunks for collision checks.
    - Store the static parts of my tile map in multiple arrays acting as the different layers, and then just use entities for anything that isn't static. All of the other tile data such as collisions, depth, etc. would be stored in their own layers as well, I guess? This way just seems messy to me though.
    Regardless of which one I choose, I'm also unsure how to plan all of that other tile data. I could write a bunch of code that would know which integer represents what tile and its data, but if I changed a tileset in Tiled and exported it again, all of those integers could potentially change and I'd have to adjust a whole bunch of code.
    My other issue is about how I could do collision. I want to at least support angled collision that slides you around the corners of objects like LTTP does, if not more oddball shapes as well. So do I:
    - Store collision as a flag for binary collision. Could I get this to support angles? Would it be fine to store collision as an integer and have each number represent a certain angle of collision?
    - Store a list of rectangles or other shapes and do collision that way?
    Sorry for the large two-part (three-part?) question. I felt like these needed to be asked together, as I believe each choice influences the other.
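    Purely to make the layered-array option concrete - the question's stack is C#/MonoGame, and every name below is invented for the illustration - one flat array per layer with collision kept as its own layer might look like this in JavaScript:

        // Illustration only: layers indexed as y * width + x, values are tileset indices.
        var map = {
          width: 40,
          height: 30,
          tileSize: 16,
          layers: {
            ground:     new Array(40 * 30),
            decoration: new Array(40 * 30),
            overhead:   new Array(40 * 30)
          },
          // Collision as its own layer: 0 = open, 1 = solid, 2..5 = 45-degree corner slopes.
          collision: new Array(40 * 30)
        };

        function collisionAt(map, tileX, tileY) {
          if (tileX < 0 || tileY < 0 || tileX >= map.width || tileY >= map.height) {
            return 1; // treat out-of-bounds as solid
          }
          return map.collision[tileY * map.width + tileX];
        }

    Keeping collision (and depth, flags, etc.) in small layers of their own, rather than deriving them from tileset ids, is also one way to sidestep the problem of integers shifting when the Tiled tileset is re-exported, since only the visual layers then care about the tileset numbering.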

    Read the article

  • How can I fix a very broken Ubuntu installation without losing data?

    - by jredkai
    Okay guys, I was installing a program (I do not remember the name). When I did sudo apt-get update I was given missing dependencies. It told me to run sudo apt-get install -f, which deleted just about every dependency needed for Ubuntu. Now I cannot log in or anything, and in GRUB it actually says Debian instead of Ubuntu. I have tons of important data in that partition. Can I somehow use the live CD to fix this problem? I mean, reinstall without losing data. Any help is greatly appreciated!!!

    Read the article

  • Does the direction of storage make us bad data citizens?

    - by simonsabin
    My career started at a company where we hardly had email and the network was a 10base2 affair with cables running all around the office. You used floppy disks, and the thought of a GB of data was absurd. You had to look after every byte and only keep what you really needed. Whilst the cost of spinning disks gradually falls, the cost and size of flash storage continue to plummet. The new Crucial SSD is £380 for 1TB, and I can now keep 128GB of data on an SD card the size of my finger. It only costs...(read more)

    Read the article

  • OpenJDK In the News: Oracle Outlines Plans to Make the Future Java During JavaOne 2012 [..]

    Phil Rogers, AMD Corporate Fellow and HSA Foundation President, joined Oracle on stage to discuss Project Sumatra, which was recently approved in the OpenJDK Community. Project Sumatra will explore how Java can be extended to support heterogeneous computing models for improved performance and power consumption.
    Oracle plans to propose Project Nashorn, a new JavaScript engine for the Java Virtual Machine (JVM), later this year in the OpenJDK Community. Oracle expects to enhance Project Nashorn with the support of several other OpenJDK Community contributors, including IBM, Red Hat and Twitter.
    The OpenJDK Community continues to host the development of the reference implementation of Java SE 8. Weekly developer preview builds of JDK 8 continue to be available from jdk8.java.net.
    Quotes taken from the 13th press release from Oracle mentioning OpenJDK, titled "Oracle Outlines Plans to Make the Future Java During JavaOne 2012 Strategy Keynote".

    Read the article

  • Downgrading Mercurial in MacPorts

    Another Mercurial release, another broken extension. Mercurial 2.3 breaks hgforest ... once more. Of course, with open source, the notion of backwards compatibility is often left as an exercise for the curious readers of said source code, so until someone gets around to fixing up hgforest ... once more, to keep up with Mercurial's churn, one way to get hgforest working again is to downgrade to Mercurial 2.2.3, for example. In MacPorts, assuming you have installed Mercurial 2.2.3 before, and it was updated to the broken Mercurial 2.3 version, it's pretty easy to get back to a working state:
    sudo port deactivate mercurial@2.3_1
    sudo port activate mercurial@2.2.3_0

    Read the article

  • How to visualize real time data on Android? [closed]

    - by matarsak
    I want to build an Android app that visualizes real-time data (2D animation). I set up a UDP channel that gets the data; now I want to visualize it. I know that I can use OpenGL ES, but after a few weeks, I don't think that I'm able to learn that. What about Android Processing? Could it be used for an extensive visualization task like this, or is it limited in some way? I've heard it's not hard to learn. Any other options?

    Read the article

  • How to send a PUT request with data as an XML element from JavaScript?

    - by Sarang
    Hi everyone, my data is an XML element and I want to send a PUT request with JavaScript. How do I do this? For reference: Update Cell. As fredrik suggested, I did this:

        function submit() {
            var xml = "<entry>" +
                "<id>https://spreadsheets.google.com/feeds/cells/0Aq69FHX3TV4ndDBDVFFETUFhamc5S25rdkNoRkd4WXc/od6/private/full/R2C1</id>" +
                "<link rel=\"edit\" type=\"application/atom+xml\"" +
                " href=\"https://spreadsheets.google.com/feeds/cells/0Aq69FHX3TV4ndDBDVFFETUFhamc5S25rdkNoRkd4WXc/worksheetId/private/full/R2C1\"/>" +
                "<gs:cell row=\"2\" col=\"1\" inputValue=\"300\"/>" +
                "</entry>";
            document.getElementById('submitForm').submit(xml);
        }
        </script>
        </head>
        <body>
        <form id="submitForm" method="put" action="https://spreadsheets.google.com/feeds/cells/0Aq69FHX3TV4ndDBDVFFETUFhamc5S25rdkNoRkd4WXc/od6/private/full/R2C1">
            <input type="submit" value="submit" onclick="submit()"/>
        </form>

    However, it doesn't write back; instead it returns an XML file like:

        <?xml version='1.0' encoding='UTF-8'?>
        <entry xmlns='http://www.w3.org/2005/Atom' xmlns:gs='http://schemas.google.com/spreadsheets/2006' xmlns:batch='http://schemas.google.com/gdata/batch'>
          <id>https://spreadsheets.google.com/feeds/cells/0Aq69FHX3TV4ndDBDVFFETUFhamc5S25rdkNoRkd4WXc/od6/private/full/R2C1</id>
          <updated>2011-01-11T07:35:09.767Z</updated>
          <category scheme='http://schemas.google.com/spreadsheets/2006' term='http://schemas.google.com/spreadsheets/2006#cell'/>
          <title type='text'>A2</title>
          <content type='text'></content>
          <link rel='self' type='application/atom+xml' href='https://spreadsheets.google.com/feeds/cells/0Aq69FHX3TV4ndDBDVFFETUFhamc5S25rdkNoRkd4WXc/od6/private/full/R2C1'/>
          <link rel='edit' type='application/atom+xml' href='https://spreadsheets.google.com/feeds/cells/0Aq69FHX3TV4ndDBDVFFETUFhamc5S25rdkNoRkd4WXc/od6/private/full/R2C1/1ekg'/>
          <gs:cell row='2' col='1' inputValue=''></gs:cell>
        </entry>

    Any further solution for the same?
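    HTML forms only support GET and POST, so method="put" silently degrades to GET (and form.submit() ignores its argument) - which is why the submit above reads the cell entry back instead of updating it. One way to issue a real PUT from the browser is XMLHttpRequest; a minimal sketch, with the spreadsheets API's same-origin and authentication constraints still applying and the URL handling only illustrative:

        // Sketch: send the Atom entry with an XMLHttpRequest PUT instead of a form submit.
        function putCell(url, xml) {
          var xhr = new XMLHttpRequest();
          xhr.open('PUT', url, true);
          xhr.setRequestHeader('Content-Type', 'application/atom+xml');
          xhr.onreadystatechange = function () {
            if (xhr.readyState === 4) {
              // 200/201 means the cell was updated; anything else, inspect the body.
              console.log('status ' + xhr.status, xhr.responseText);
            }
          };
          xhr.send(xml);
        }

    Note that GData generally expects the PUT to target the entry's edit link (the .../R2C1/1ekg URL with the version token returned above), so the update may need to fetch the entry first to pick up the current edit URL.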

    Read the article

  • Would this data requirement suit a Document -Oriented database?

    - by codecowboy
    I have a requirement to allow users to fill in journal/diary entries per day. I want to provide a handful of known journal templates with x columns to fill in. An example might be a thought diary; a user has to record a thought in one column, describe the situation, rate how they felt etc. The other requirement is that a user should be able to create their own diary templates. They might have a need for a 10 column diary entry per day and might need to rate some aspect out of 50 instead of 10. In an RDBMS, I can see this getting quite complicated. I could have individual tables for my known templates as the fields will be fixed. But for custom diary templates I imagine I would would need a table storing custom_field_types (the diary columns), a table storing entries referencing their field types (custom_entries) and then a third custom_diary table which would store rows matching custom_entries to diaries. Leaving performance / scaling aside, would it be any simpler or make more sense to use a document oriented database like MongoDB to store this data? This is for a web application which might later need an API for mobile devices.
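    To make the comparison concrete, here is roughly what the two collections could look like in mongo-shell JavaScript - the collection names, field names and values are all invented for the example:

        // A user-defined template: the columns live inside the template document itself.
        db.diary_templates.insert({
          name: "Thought diary (extended)",
          owner: "user123",
          columns: [
            { label: "Situation",        type: "text" },
            { label: "Thought",          type: "text" },
            { label: "Feeling strength", type: "rating", max: 50 }
          ]
        });

        // One entry per day references its template and stores values keyed by column label.
        db.diary_entries.insert({
          owner: "user123",
          templateName: "Thought diary (extended)",
          date: new Date("2012-10-05"),
          values: {
            "Situation":        "Missed the bus",
            "Thought":          "The whole day is ruined",
            "Feeling strength": 35
          }
        });

    The custom_field_types / custom_entries / custom_diary join disappears because each template carries its own column definitions; the trade-off is that validating an entry against its template becomes application code rather than a schema constraint.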

    Read the article
