Search Results

Search found 1752 results on 71 pages for 'scala macros'.

Page 68/71 | < Previous Page | 64 65 66 67 68 69 70 71  | Next Page >

  • jrunscript as a cross platform scripting environment

    - by user12798506
    Shell scripts written for sh work well on UNIX-like systems, but not on Windows, where sh and the usual tools (find, grep, sed, awk) are unavailable unless you install something like Cygwin. The JDK ships with jrunscript, a JavaScript command-line tool, which can serve as a single cross-platform scripting environment: anywhere a JDK (6 or later) is installed, the same script runs on both Windows and UNIX. A few ways to use it:
    1) Cross-platform launcher scripts. Put the real logic in a JavaScript file (mytool.js) and keep only a thin platform-specific wrapper that invokes jrunscript.
    mytool.sh (UNIX):
        #!/bin/sh
        bindir=$(cd $(dirname $0) && pwd)
        case "`uname`" in
            CYGWIN*) bindir=`cygpath -w "$bindir"` ;;
        esac
        jrunscript "${bindir}/mytool.js" $*
    mytool.bat (Windows):
        @echo off
        set bindir=%~dp0
        jrunscript "%bindir%mytool.js" %*
    The sh version also handles Cygwin; apart from these two wrappers, the .js file itself is shared unchanged between UNIX and Windows.
    2) jrunscript built-in functions such as cat, cp, find and grep. jrunscript provides JavaScript built-in functions modeled on common UNIX commands. For example, to grep for "enum" in every .java file under src:
        find('src', '.*.java', function(f) { grep('enum', f); });
    The built-in cp(from, to) copies a single file only, so a recursive copy like
        $ cp -r src/* tmp/
    has to be written yourself, for example on top of find():
        function cpr(fromdir, todir, pattern) {
            if (pattern == undefined) { pattern = ".*"; }
            var frdir = pathToFile(fromdir).getCanonicalPath();
            find(fromdir, pattern, function(f) {
                // relative dir of file f from 'fromdir'.
                var relative = f.getParentFile().getCanonicalPath().substring(frdir.length() + 1);
                var dstdir = pathToFile(todir + "/" + relative);
                if (!dstdir.exists()) {
                    // Create the destination dir for file f.
                    mkdirs(dstdir);
                }
                // Copy file f to 'dstdir'.
                cp(f, dstdir + "/" + f.getName());
            });
        }
    Because Java's file I/O accepts "/" as a path separator even on Windows, the same code works on both UNIX and Windows.
    External commands can be run with exec(cmd), e.g. to extract a jar:
        $ jrunscript
        js> exec("jar xvf example.jar")
        META-INF/
        META-INF/MANIFEST.MF
        com/
        com/example/
        com/example/Bar.class
        com/example/dummy/
        com/example/dummy/dummy.txt
        com/example/dummy.properties
        com/example/Foo.class
    There is a pitfall, though: exec() only reads the child process's standard output, so a command that writes a lot to standard error can block once the error pipe's buffer fills. A small batch file shows the problem.
    errmsg.bat:
        for /L %%i in (1,1,100) do echo "Error Message count = %%i" 1>&2
    Run through exec(), the output stops after about 18 lines and the command hangs:
        C:\tmp>jrunscript -e "exec('errmsg.bat')"
        C:\tmp>for /L %i in (1 1 100) do echo "Error Message count = %i" 1>&2
        C:\tmp>echo "Error Message count = 1" 1>&2
        :
        C:\tmp>echo "Error Message count = 18" 1>&2
    The default exec() never reads the error stream, as you can see by printing its source:
        js> this["exec"].toString()
        function exec(cmd) {
            var process = java.lang.Runtime.getRuntime().exec(cmd);
            var inp = new DataInputStream(process.getInputStream());
            var line = null;
            while ((line = inp.readLine()) != null) {
                println(line);
            }
            process.waitFor();
            $exit = process.exitValue();
        }
    A replacement exec() that drains standard output and standard error on separate threads avoids the hang:
        function exec(cmd) {
            var process = java.lang.Runtime.getRuntime().exec(cmd);
            var stdworker = new java.lang.Runnable(
                {run: function() { cat(process.getInputStream()); }});
            var errworker = new java.lang.Runnable(
                {run: function() { cat(process.getErrorStream()); }});
            new java.lang.Thread(stdworker).start();
            new java.lang.Thread(errworker).start();
            return process.waitFor();
        }
    (This version uses the built-in cat() to copy each stream to the console.)
    3) JavaScript as a lightweight scripting language. Languages such as Ruby, Groovy or Scala can also script Java, but each needs its own runtime of roughly 10 MB installed on every machine. The JavaScript engine is already bundled with the JRE/JDK, so scripts written for jrunscript need nothing beyond the JDK that is installed anyway.
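    Returning to the exec() pitfall in section 2: the same fix can be written directly against the Java API if you are scripting in Java rather than JavaScript. This is only an illustrative sketch (the class and method names are made up here, not part of the original post), but it shows the essential point - both the output and the error stream of the child process must be drained, on separate threads, before waitFor() can be relied on:
        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.InputStreamReader;
        import java.io.PrintStream;

        public class Exec {

            // Run a command, copying stdout and stderr to the console so that
            // neither OS pipe buffer can fill up and block the child process.
            static int run(String... cmd) throws IOException, InterruptedException {
                Process p = new ProcessBuilder(cmd).start();
                Thread out = pipe(p.getInputStream(), System.out);
                Thread err = pipe(p.getErrorStream(), System.err);
                out.start();
                err.start();
                int exit = p.waitFor();
                out.join();
                err.join();
                return exit;
            }

            private static Thread pipe(final InputStream in, final PrintStream to) {
                return new Thread(new Runnable() {
                    public void run() {
                        try {
                            BufferedReader r = new BufferedReader(new InputStreamReader(in));
                            String line;
                            while ((line = r.readLine()) != null) {
                                to.println(line);
                            }
                        } catch (IOException ignored) {
                            // the stream is closed when the process ends
                        }
                    }
                });
            }

            public static void main(String[] args) throws Exception {
                System.exit(run("jar", "xvf", "example.jar"));
            }
        }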

    Read the article

  • Play in NetBeans IDE (Part 2)

    - by Geertjan
    Peter Hilton was one of many nice people I met for the first time during the last few days constituting JAX London. He did a session today on the Play framework which, if I understand it correctly, is an HTML5 framework. It doesn't use web.xml, Java EE, etc. It uses Scala internally, as well as in its templating language. Support for Play would, I guess, based on the little I know about it right now, consist of extending the HTML5 application project, which is new in NetBeans IDE 7.3. The workflow I imagine goes as follows. You'd create a new HTML5 application project, at which point you can choose a variety of frameworks and templates (CoffeeScript, Angular, etc.), which come out of the box with the HTML5 support (i.e., Project Easel) in NetBeans IDE 7.3. Then, once the project is created, you'll right-click it and go to the Project Properties dialog, where you'll be able to enable Play support. At this stage, i.e., when you've checked the checkbox above and then clicked OK, all the necessary Play files will be added to your project, e.g., the routes file and the application.conf. And then you have a Play application. Creating support in this way entails nothing more than creating a module that looks like this, i.e., with one Java class, where even the layer.xml file below is superfluous. All the code in the PlayEnablerPanel.java that you see above is as follows:
        import java.awt.BorderLayout;
        import javax.swing.JCheckBox;
        import javax.swing.JComponent;
        import javax.swing.JPanel;
        import org.netbeans.spi.project.ui.support.ProjectCustomizer;
        import org.netbeans.spi.project.ui.support.ProjectCustomizer.Category;
        import org.openide.util.Lookup;

        public class PlayEnablerPanel implements ProjectCustomizer.CompositeCategoryProvider {

            @ProjectCustomizer.CompositeCategoryProvider.Registration(
                    projectType = "org.netbeans.modules.web.clientproject",
                    position = 1000)
            public static PlayEnablerPanel enablePlay() {
                return new PlayEnablerPanel();
            }

            @Override
            public Category createCategory(Lookup lkp) {
                return ProjectCustomizer.Category.create("Play Framework", "Configure Play", null);
            }

            @Override
            public JComponent createComponent(Category ctgr, Lookup lkp) {
                JPanel playPanel = new JPanel(new BorderLayout());
                playPanel.add(new JCheckBox("Enable Play"), BorderLayout.NORTH);
                return playPanel;
            }
        }
    Looking forward to having a beer with Peter soon (he lives not far away, in Rotterdam) to discuss this! Also read Part 1 of this series, which I wrote some time ago, and which has other ideas and considerations.
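    As a rough sketch of what that "enable Play support" step might do once OK is clicked: the file names conf/routes and conf/application.conf come from Play's standard layout, but the helper class below, its sample file contents, and the wiring to the customizer's OK button (not shown) are all assumptions, not anything from the post:
        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Path;

        // Hypothetical helper: drops a minimal Play skeleton into the project root.
        public class PlayScaffolder {

            public static void addPlayFiles(Path projectRoot) throws IOException {
                Path conf = Files.createDirectories(projectRoot.resolve("conf"));
                writeIfAbsent(conf.resolve("application.conf"), "application.name=myapp\n");
                writeIfAbsent(conf.resolve("routes"),
                        "GET     /     controllers.Application.index()\n");
            }

            private static void writeIfAbsent(Path file, String content) throws IOException {
                // Never overwrite files the user may already have edited.
                if (Files.notExists(file)) {
                    Files.write(file, content.getBytes(StandardCharsets.UTF_8));
                }
            }
        }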

    Read the article

  • Entity System with C++ templates

    - by tommaisey
    I've been getting interested in the Entity/Component style of game programming, and I've come up with a design in C++ which I'd like a critique of. I decided to go with a fairly pure Entity system, where entities are simply an ID number. Components are stored in a series of vectors - one for each Component type. However, I didn't want to have to add boilerplate code for every new Component type I added to the game. Nor did I want to use macros to do this, which frankly scare me. So I've come up with a system based on templates and type hinting. But there are some potential issues I'd like to check before I spend ages writing this (I'm a slow coder!). All Components derive from a Component base class. This base class has a protected constructor that takes a string parameter. When you write a new derived Component class, you must initialise the base with the name of your new class in a string. When you first instantiate a new DerivedComponent, it adds the string to a static hashmap inside Component, mapped to a unique integer id. When you subsequently instantiate more Components of the same type, no action is taken. The result (I think) should be a static hashmap with the name of each class derived from Component that you instantiate at least once, mapped to a unique id, which can be obtained with the static method Component::getTypeId("DerivedComponent"). Phew. The next important part is TypedComponentList<typename PropertyType>. This is basically just a wrapper to an std::vector<typename PropertyType> with some useful methods. It also contains a hashmap of entity ID numbers to slots in the array so we can find Components by their entity owner. Crucially, TypedComponentList<> is derived from the non-template class ComponentList. This allows me to maintain a list of pointers to ComponentList in my main ComponentManager, which actually point to TypedComponentLists with different template parameters (sneaky). The ComponentManager has template functions such as: template <typename ComponentType> void addProperty (ComponentType& component, int componentTypeId, int entityId) and: template <typename ComponentType> TypedComponentList<ComponentType>* getComponentList (int componentTypeId) which deal with casting from ComponentList to the correct TypedComponentList for you. So to get a list of a particular type of Component you call: TypedComponentList<MyComponent>* list = componentManager.getComponentList<MyComponent> (Component::getTypeId("MyComponent")); Which I'll admit looks pretty ugly.
    Bad points of the design:
    - If a user of the code writes a new Component class but supplies the wrong string to the base constructor, the whole system will fail.
    - Each time a new Component is instantiated, we must check a hashed string to see if that component type has been instantiated before.
    - Will probably generate a lot of assembly because of the extensive use of templates. I don't know how well the compiler will be able to minimise this.
    - You could consider the whole system a bit complex - perhaps premature optimisation? But I want to use this code again and again, so I want it to be performant.
    Good points of the design:
    - Components are stored in typed vectors but they can also be found by using their entity owner id as a hash. This means we can iterate them fast, and minimise cache misses, but also skip straight to the component we need if necessary.
    - We can freely add Components of different types to the system without having to add and manage new Component vectors by hand.
    What do you think? Do the good points outweigh the bad?

    Read the article

  • Getting macro keys from a Razer BlackWidow to work on Linux

    - by Journeyman Geek
    I picked up a Razer BlackWidow Ultimate that has additional keys meant for macros, which are set using a tool that's installed on Windows. I'm assuming that these aren't some fancypants joojoo keys and should emit scancodes like any other keys. Firstly, is there a standard way to check these scancodes in Linux? Secondly, how do I set these keys to do things in command-line and X-based Linux setups? My current Linux install is Xubuntu 10.10, but I'll be switching to Kubuntu once I have a few things fixed up. Ideally the answer should be generic and system-wide.
    Things I have tried so far: showkeys from the built-in kbd package (in a separate VT) - macro keys not detected; xev - macro keys not detected; lsusb and evdev output; this AHK script's output suggests the M keys are not outputting standard scancodes.
    Things I need to try: Snoopy Pro + reverse engineering (oh dear); Wireshark - preliminary futzing around seems to indicate no scancodes are emitted when what I think is the keyboard is monitored and keys are pressed, which might indicate the additional keys are a separate device or need to be initialised somehow. I need to cross-reference that with lsusb output from Linux in 3 scenarios - standalone, passed through to a Windows VM without the drivers installed, and the same with them installed. lsusb only detects one device on a standalone Linux install. It might be useful to check whether the mice use the same Razer Synapse driver, since that would mean some variation of razercfg might work (not detected; it only seems to work for mice).
    Things I have worked out: In a Windows system with the driver, the keyboard is seen as a keyboard and a pointing device, and said pointing device uses, in addition to your bog-standard mouse drivers, a driver for something called a Razer Synapse. The mouse driver is seen in Linux under evdev and lsusb as well. It appears as a single device under OS X, though I have yet to try the lsusb equivalent there. The keyboard goes into pulsing-backlight mode in OS X upon initialisation with the driver, which probably indicates that some initialisation sequence is sent to the keyboard on activation. They are, in fact, fancypants joojoo keys.
    Extending this question a little: I have access to a Windows system, so if I need to use any tools on that to help answer the question, that's fine. I can also try it on systems with and without the config utility. The expected end result is still to make those keys usable on Linux, however. I also realise this is a very specific family of hardware. I would be willing to test anything that makes sense on a Linux system if I have detailed instructions - this should open up the question to people who have Linux skills but no access to this keyboard.
    The minimum end result I require: I need these keys detected and usable in any fashion on any of the current mainstream graphical Ubuntu variants.

    Read the article

  • How to set 2 conditions / criteria for VLOOKUP / LOOKUP / etc. in OpenOffice Calc (or Excel)

    - by MestreLion
    I have this spreadsheet that started as a silly aid for a game (Mafia Wars 2), but grew into a tricky spreadsheet question. In the game your character has 9 "slots" for weapons and armors, 1 for each "type": Light Weapon, Heavy Weapon, Body Armor, Head Armor, etc. So I made a list of all weapons and armors available in the game, 1 item per row. Example:
        SHOP         ITEM TYPE       ITEM NAME       ATK  DEF  PRICE   EQUIPPED?
        Marketplace  Weapon Light    Konrad Knife     16    5   5.500
        Marketplace  Weapon Light    Ice Queen        19    6   8.200
        Marketplace  Armor Body Up   Layered Polym     0   31   8.600
        Marketplace  Armor Body Up   Full Shield       7   42  17.650
        Marketplace  Weapon Heavy    Konrad Bullpup   53   25  24.500
        Marketplace  Weapon Heavy    Full Moon Blow   73   12  24.500   x
        Marketplace  Armor Body Low  Knee Pads        17   26  14.200   x
        Marketplace  Armor Body Low  Army Boots       15   55  24.500
        Bone Yard    Weapon Light    Bone Launcher    41    2   9.400   x
        Neon Strip   Vehicle Ground  Supercharged     41   34  24.500
        Dead End     Weapon Heavy    Sharp Sickle     21    5  24.500
        Dead End     Armor Body Low  Unholy Boots      5   36  15.000
        Dead End     Armor Head      Hockey Mask       5   18  15.900   x
    The last column is an indication of the items I have already bought and equipped (marked with "x"). What I need is a formula that, for each "slot" (item type), returns info related to the item of that kind that I am using. That would be:
        ITEM TYPE       SHOP         ITEM NAME       ATK  DEF  PRICE
        Weapon Light    Bone Yard    Bone Launcher    41    2   9.400
        Weapon Heavy    Marketplace  Full Moon Blow   73   12  24.500
        Weapon Special  --           --               --   --  --
        Armor Body Up   --           --               --   --  --
        Armor Body Low  Marketplace  Knee Pads        17   26  14.200
        Armor Head      Dead End     Hockey Mask       5   18  15.900
        Vehicle Ground  --           --               --   --  --
        Vehicle Water   --           --               --   --  --
        Vehicle Air     --           --               --   --  --
    The item types are fixed, so they can be hard-coded, one row per item type. So, for the 1st result line, it would return data from the row where both the 2nd column is "Weapon Light" and the last column is "x". Basically I need a LOOKUP (or VLOOKUP, or anything else) that uses 2 criteria to find a given row: the item type and the "x" marker. Question is: HOW? I am using OpenOffice Calc 3.2.1, but since it shares so many functions with MS Excel, answers for Excel are also fine (as long as they only use regular formulas - no VBScript, macros, VBA, etc.). Last but not least, suggestions / solutions for rearranging the data so it makes this problem easier to solve are also welcome. Thanks!
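    One common approach (a sketch, not taken from the question itself): combine INDEX and MATCH in an array formula so that MATCH can test both criteria at once. Assuming the source list sits on a sheet named Items with ITEM TYPE in column B, ITEM NAME in column C and EQUIPPED? in column G (adjust the ranges and sheet name to the real layout), and the slot name for the current result row is in A2, then in OpenOffice Calc this returns the equipped item's name when entered with Ctrl+Shift+Enter:
        =INDEX(Items.C$2:C$100; MATCH(1; (Items.B$2:B$100=$A2)*(Items.G$2:G$100="x"); 0))
    The same formula with the INDEX range pointed at the SHOP, ATK, DEF and PRICE columns fills the other result columns; in Excel the only changes are the sheet separator (Items!C$2) and commas instead of semicolons. Rows with no equipped item return #N/A, which can be wrapped in an error test if "--" is preferred.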

    Read the article

  • Microsoft Office 2003 documents (Excel and Word) intermittently take 30 seconds to load

    - by Julio Nobre
    I am trying to figure out why a simple .XLS Excel workbook is taking, randomly, 30 seconds to open. Before answering, please bear in mind the following.
    Problem symptoms: Hanging is intermittent and it takes exactly 30 seconds; during the hang there is no CPU or disk activity; it only happens during document load, and everything runs smoothly after that; Windows Explorer.exe hangs on the folder, but all other folders, the system and applications are still responsive; there are no consecutive hangs - I have to wait for a while to reproduce this behaviour; all sample documents are located on a local drive (C:\BPI); none of the documents has macros or uses any add-ins; the problem also occurs with other file extensions, like .PDF, for example; Office 2003 has been in use for several years; the computer is running Windows XP; the computer has several mapped network drives, all addressed to the main file server; recently, the main file server was replaced by Windows SBS 2011 Standard Edition.
    What I have done so far: I have traced the machine's Explorer.exe using Process Monitor, added the Duration column, and filtered by Duration > 1. That is how I found that the hang was taking exactly 30 seconds (for further information, please refer to Oliver Salzburg's tutorial). Using Process Monitor, I have also figured out that five operations were taking most of the collected sample's duration; looking at the sample image below, in the Operation column you will notice that one single operation was taking 29 seconds. I have tried different documents (.xls and .doc), all of them smaller than 30 KB. I have temporarily removed all shortcuts in the user's Documents folder that were pointing to network drives or shares. I have run CCleaner to fix registry issues. I made sure that there were no external links in the tested workbook or Word documents. I have reproduced this behaviour for hours and have researched extensively on the web.
    Process Monitor's collected and filtered data

    Read the article

  • Synchronize the same set of files to 2 different locations with 2 different programs for 2 different purposes

    - by Hedgetrimmer
    Because of stupid questionable IT policies at my not-to-be-named place of occupation, I have been (and will be, for the forseeable future) carrying on an external hard drive a unison-synchronized copy of all of my documents and code, including code which resides in some of my "dotfiles" and other code which resides in ~/bin (things I've made are there because ~/bin is in my $PATH) along with some cruft generated (and to be generated) by conscript and its related "giter8" templating system for Scala project boilerplates. Despite this, I do use a symlinking program to store all of my important dotfiles in a subdirectory. Thanks to that somewhat complicated setup, I have resorted to making a directory full of symlinks to every directory (or file, as is the case with stuff under ~/bin) that I want synchronized, and then follow = True is in my unison profile. It happens to be that this collection of odds and ends—plus an automatically-generated text file containing every package installed on my system—is everything under ~ that needs to be backed up to a remote (rsync-over-ssh) host with client-side encryption and signing from GPG. I already believe that duplicity is the most appropriate program to do that. What isn't as clear-cut is how to make duplicity use the exact same set of files when it runs a backup; it would be simple if duplicity would follow symlinks, but it does not and the manpage lists no option for enabling any such behavior. Comparing unison's file selection algorithm to duplicity's, I don't think I can write a program that could compute a ruleset for one program given one for the other. For the record, I would rather not keep the symlinks manually synchronized with duplicity file-selection rules, as they can change thanks to the above-mentioned complications regarding ~/bin. I don't think running duplicity on the external hard disk is such a good idea either; I usually keep that hard disk unmounted and unplugged in case of a power failure or other physical problem with the computer, plus I'm not sure about duplicity's performance given that: the hard disk is NTFS-formatted in order to be useable at my Windows-imprisoned place of occupation. despite being a USB 3.0 disk, my computer has no USB 3.0 ports so it acts as a USB 2.0 disk. How can I have duplicity (or is there a better program that I have overlooked?) back up the exact same set of files that is bidirectionally synchronized with my external hard disk?

    Read the article

  • IBM Keynote: (hardware,software)–>{IBM.java.patterns}

    - by Janice J. Heiss
    On Sunday evening, September 30, 2012, Jason McGee, IBM Distinguished Engineer and Chief Architect Cloud Computing, along with John Duimovich IBM Distinguished Engineer and Java CTO, gave an information- and idea-rich keynote that left Java developers with much to ponder.Their focus was on the challenges to make Java more efficient and productive given the hardware and software environments of 2012. “One idea that is very interesting is the idea of multi-tenancy,” said McGee, “and how we can move up the spectrum. In traditional systems, we ran applications on dedicated middleware, operating systems and hardware. A lot of customers still run that way. Now people introduce hardware virtualization and share the hardware. That is good but there is a lot more we can do. We can share middleware and the application itself.” McGee challenged developers to better enable the Java language to function in these higher density models. He spoke about the need to describe patterns that help us grasp the full environment that an application needs, whether it’s a web or full enterprise application. Developers need to understand the resources that an application interacts with in a way that is simple and straightforward. The task is to then automate that deployment so that the complexity of infrastructure can be by-passed and developers can live in a simpler world where the cloud can automatically configure the needed environment. McGee argued that the key, something IBM has been working on, is to use a simpler pattern that allows a cloud-based architecture to embrace the entire infrastructure required for an application and make it highly available, scalable and able to recover from failure. The cloud-based architecture would automate the complexity of setting up and managing the infrastructure. IBM has been trying to realize this vision for customers so they can describe their Java application environment simply and allow the cloud to automate the deployment and management of applications. “The point,” explained McGee, “is to package the executable used to describe applications, to drop it into a shared system and let that system provide some intelligence about how to deploy and manage those applications.”John Duimovich on Improvements in JavaMcGee then brought onstage IBM’s Distinguished Engineer and CTO for Java, John Duimovich, who showed the audience ways to deploy Java applications more efficiently.Duimovich explained that, “When you run lots of copies of Java in the cloud or any hypervisor virtualized system, there are a lot of duplications of code and jar files. IBM has a facility called ‘shared classes’ where we put shared code, read only artefacts in a cache that is sharable across hypervisors.” By putting JIT code in ahead of time, he explained that the application server will use 20% less memory and operate 30% faster.  He described another example of how the JVM allows for the maximum amount of sharing that manages the tenants and file sockets and memory use through throttling and control. Duimovich touched on the “thin is in” model and IBM’s Liberty Profile and lightweight runtime for the cloud, which allows for greater efficiency in interacting with the cloud.Duimovich discussed the confusion Java developers experience when, for example, the hypervisor tells them that that they have 8 and then 4 and then 16 cores. “Because hypervisors are virtualized, they can change based on resource needs across the hypervisor layer. 
You may have 10 instances of an operating system and you may need to reallocate memory,” explained Duimovich. He showed how to resize LPARs, reallocate CPUs and migrate applications as needed. He explained how application servers can resize thread pools and make better use of resources based on information from the hypervisors. Java Challenges in Hardware and Software: McGee ended the keynote with a summary of upcoming hardware and software challenges for the Java platform. He noted that one reason developers love Java is that it allows them to ignore differences in hardware. He stated that the most important things happening in hardware were in network and storage – developments such as the speed of SSDs, the exploitation of high-speed, low-latency networking, and recent advances such as storage-class memory and non-volatile main memory. “So we are challenged to maintain the benefits of Java and the abstraction it provides from hardware while still exploiting the new innovations in hardware,” said McGee. McGee discussed transactional messaging applications, where developers send messages and persist them transactionally to storage - something traditionally done by backing messages on spinning disks, an approach that is now mostly outdated. “Now,” he pointed out, “we would use SSD and store it in Flash and get 70,000 messages a second. If we stored it using a PCI express-based flash memory device, it is still Flash but put on a PCI express bus on a card closer to the CPU. This way I get 300,000 messages a second and 25% improvement in latency.” McGee’s central point was that hardware has a huge impact on the performance and scalability of applications. New technologies are enabling developers to build classes of Java applications previously unheard of. “We need to be able to balance these things in Java – we need to maintain the abstraction but also be able to exploit the evolution of hardware technology,” said McGee. According to McGee, IBM's current focus is on systems wherein hardware and software are shipped together in what are called Expert Integrated Systems – systems that are pre-optimized and pre-integrated. McGee closed IBM’s engaging and thought-provoking keynote by pointing out that the use of Java in complex applications is increasingly being augmented by a host of other languages with strong communities around them – JavaScript, JRuby, Scala, Python and so forth. Java developers now must understand the strengths and weaknesses of such newcomers as applications increasingly involve a complex interconnection of languages.
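    For the "shared classes" facility Duimovich described, enabling it on IBM's J9-based JDKs is a single launcher option; the cache name and directory below are illustrative values only, not something from the keynote:
        java -Xshareclasses:name=appCache,cacheDir=/tmp/shareclasses -jar server.jar
    JVMs started with the same cache name then map the same read-only class data and ahead-of-time-compiled code instead of each loading its own copy, which is where the memory and startup savings he quoted come from.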

    Read the article

  • Blog Buzz - Devoxx 2011

    - by Janice J. Heiss
    Some day I will make it to Devoxx – for now, I'm content to vicariously follow the blogs of attendees and pick up on what's happening. I've been doing more blog "fishing," looking for the best commentary on 2011 Devoxx. There's plenty of food for thought – and the ideas are not half-baked. The bloggers are out in full, offering useful summaries and commentary on Devoxx goings-on. Constantin Partac, a Java developer and a member of Transylvania JUG, a community from Cluj-Napoca/Romania, offers an excellent summary of the Devoxx keynotes. Here's a sample:
    “Oracle Opening Keynote and JDK 7, 8, and 9 Presentation
    •    Oracle is committed to Java and wants to provide support for it on any device.
    •    JSE 7 for Mac will be released next week.
    •    Oracle would like Java developers to be involved in the JCP, to adopt a JSR and to attend local JUG meetings.
    •    JEE 7 will be released next year.
    •    JEE 7 is focused on cloud integration; some of the features are already implemented in the GlassFish 4 development branch.
    •    JSE 8 will be released in the summer of 2013 due to “enterprise community request”, as they cannot keep pace with an 18-month release cycle.
    •    The main features included in JSE 8 are lambda support, project Jigsaw, a new Date/Time API, project Coin++ and added support for sensors. JSE 9 will probably focus on some of these features:
    1.    self-tuning JVM
    2.    improved native language integration
    3.    processing enhancements for big data
    4.    reification (adding runtime class type info for generic types)
    5.    unification of primitives and the corresponding object classes
    6.    meta-object protocol in order to use types and methods defined in other JVM languages
    7.    multi-tenancy
    8.    JVM resource management”
    Thanks Constantin! Ivan St. Ivanov, of SAP Labs Bulgaria, also commented on the keynotes with a different focus. He summarizes Henrik Stahl's look ahead to Java SE 8 and JavaFX 3.0; Cameron Purdy on Java EE and the cloud; celebrated Java Champion Josh Bloch on what's good and bad about Java; Mark Reinhold's quick look ahead to Java SE 9; and Brian Goetz on lambdas and default methods in Java SE 8. Here's St. Ivanov's account of Josh Bloch's comments on the pluses of Java:
    “He started with the virtues of the platform. To name a few:
    •    Tightly specified language primitives and evaluation order – int is always 32 bits and operations are always executed from left to right, without compilers messing around
    •    Dynamic linking – when you change a class, you need to recompile and rebuild just the jar that has it and not the whole application
    •    Syntax similarity with C/C++ – most existing developers at that time felt at home
    •    Object orientation – it was cool at that time, much as functional programming is today
    •    It is a statically typed language – this helps with faster runtime, better IDE support, etc.
    •    No operator overloading – well, I'm not sure why it is good. Scala has it, for example, and that's why it is far better for defining DSLs. But I will not argue with Josh.”
    It's worth checking out St. Ivanov's summary of Bloch's views on what's not so great about Java as well. What's Coming in JAX-RS 2.0: Marek Potociar, Principal Software Engineer at Oracle and currently specification lead of the Java EE RESTful web services API (JAX-RS), blogged on his talk about what's coming in JAX-RS 2.0, scheduled for final release in mid-2012. Here's a taste:
    “Perhaps the most wanted addition to JAX-RS is the Client API, which would complete the JAX-RS story that is currently server-side only.
In JAX-RS 2.0 we are adding a completely interface-based and fluent client API that blends in nicely with the existing fluent response builder pattern on the server side. When we started with the client API, the first proposal contained around 30 classes. Thanks to the feedback from our Expert Group we managed to reduce the number of API classes to 14 (2 of them being exceptions)! The resulting API is compact while at the same time we still managed to create an API that reflects the method invocation context flow (e.g. once you decide on the target URI and start setting headers on the request, your IDE will not try to offer you a URI setter in the code completion). This is a subtle but very important usability aspect of an API…” Obviously, Devoxx is a great Java conference, one that takes place this year at a time when much is brewing in the platform and beginning to be anticipated.
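    To make the quoted design concrete, this is roughly what the fluent, interface-based client API looks like in JAX-RS 2.0 (the target URL and path segments here are placeholders):
        import javax.ws.rs.client.Client;
        import javax.ws.rs.client.ClientBuilder;
        import javax.ws.rs.core.MediaType;

        public class OrdersClient {
            public static void main(String[] args) {
                Client client = ClientBuilder.newClient();
                // Build the request fluently: target URI, then path, then media type, then the call.
                String json = client
                        .target("http://example.com/api")   // placeholder URL
                        .path("orders").path("42")
                        .request(MediaType.APPLICATION_JSON)
                        .get(String.class);
                System.out.println(json);
                client.close();
            }
        }
    Each step in the chain narrows what the IDE offers next, which is the "method invocation context flow" Potociar mentions.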

    Read the article

  • Access Services in SharePoint Server 2010

    - by Wayne
    Another SharePoint Server 2010 feature which cannot go unnoticed is the Access Services. Access Services is a service in SharePoint Server 2010 that allows administrators to view, edit, and configure a Microsoft access application within a Web Browser. Access Services settings support backup and recovery, regardless of whether there is a UI setting in Central Administration. However, backup and recovery only apply to service-level and administrative-level settings; end-user content from the Access application is not backed up as part of this process. Access Services has Windows PowerShell functionality that can be used to provide the service that uses settings from a previous backup; configure and manage macro and query setting; manage and configure session management; and configure all the global settings of the service. Key Benefits of SharePoint Server Access Services Easier Access to right tools: The enhanced, customizable Ribbon in Access 2010 makes it easy to uncover more commands so you can focus on the end product. The new Microsoft Office BackstageTM view is yet another feature that can help you easily analyze and document your database, share, publish, and customize your Access 2010 experience, all from one convenient location. Helps build database effortlessly and quickly: Out-of-the box templates and reusable components make Access Services the fastest, simplest database solution available. It helps find new pre-built templates which you can start using without customization or select templates created by your peers in the Access online community and customize them to meet your needs. It builds your databases with new modular components. New Application Parts enable you to add a set of common Access components, such as a table and form for task management, to your database in a few simple clicks. Database navigation is now simplified. It creates Navigation Forms and makes your frequently used forms and reports more accessible without writing any code or logic. Create Impactful forms and reports: Whether it's an inventory of your assets or customer sales database, Access 2010 brings the innovative tools you'd expect from Microsoft Office. Access Services easily spot trends and add emphasis to your data. It quickly create coordinating database forms and reports and bring the Web into your database. Obtain a centralized landing pad for your data: Access 2010 offers easy ways to bring your data together and help increase work quality. New technologies help break down barriers so you can share and work together on your databases, making you or your team more efficient and productive. Add automation and complex expressions: If you need a more robust database design, such as preventing record deletion if a specific condition is met or if you need to create calculations to forecast your budget, Access 2010 empowers you to be your own developer. The enhanced Expression Builder greatly simplifies your expression building experience with IntelliSense®. With the revamped Macro Designer, it's now even easier for you to add basic logic to your database. New Data Macros allow you to attach logic to your data, centralizing the logic on the table, not the objects that update your data. Key features of Access Services 2010 - Access database content through a Web browser: Newly added Access Services on Microsoft SharePoint Server 2010 enables you to make your databases available on the Web with new Web databases. 
Users without an Access client can open Web forms and reports via a browser and changes are automatically synchronized. - Simplify how you access the features you need: The Ribbon, improved in Access 2010, helps you access commands even more quickly by enabling you to customize or create your own tabs. The new Microsoft Office Backstage view replaces the traditional File menu to provide one central, organized location for all of your document management tasks. - Codeless navigation: Use professional looking web-like navigation forms to make frequently used forms and reports more accessible without writing any code or logic. - Easily reuse Access items in other databases: Use Application Parts to add pre-built Access components for common tasks to your database in a few simple clicks. You can also package common database components, such as data entry forms and reports for task management, and reuse them across your organization or other databases. - Simplified formatting: By using Office themes you can create coordinating professional forms and reports across your database. Simply select a familiar and great looking Office theme, or design your own, and apply it to your database. Newly created Access objects will automatically match your chosen theme.

    Read the article

  • Session Report - Modern Software Development Anti-Patterns

    - by Janice J. Heiss
    In this standing-room-only session, building upon his 2011 JavaOne Rock Star “Diabolical Developer” session, Martijn Verburg, this time along with Ben Evans, identified and explored common “anti-patterns” – ways of doing things that keep developers from doing their best work. They emphasized the importance of social interaction and team communication, along with identifying certain psychological pitfalls that lead developers astray. Their emphasis was less on technical coding errors and more how to function well and to keep one’s focus on what really matters. They are the authors of the highly regarded The Well-Grounded Java Developer and are both movers and shakers in the London JUG community and on the Java Community Process. The large room was packed as they gave a fast-moving, witty presentation with lots of laughs and personal anecdotes. Below are a few of the anti-patterns they discussed.Anti-Pattern One: Conference-Driven DeliveryThe theme here is the belief that “Real pros hack code and write their slides minutes before their talks.” Their response to this anti-pattern is an expression popular in the military – PPPPPP, which stands for, “Proper preparation prevents piss-poor performance.”“Communication is very important – probably more important than the code you write,” claimed Verburg. “The more you speak in front of large groups of people the easier it gets, but it’s always important to do dry runs, to present to smaller groups. And important to be members of user groups where you can give presentations. It’s a great place to practice speaking skills; to gain new skills; get new contacts, to network.”They encouraged attendees to record themselves and listen to themselves giving a presentation. They advised them to start with a spouse or friends if need be. Learning to communicate to a group, they argued, is essential to being a successful developer. The emphasis here is that software development is a team activity and good, clear, accessible communication is essential to the functioning of software teams. Anti-Pattern Two: Mortgage-Driven Development The main theme here was that, in a period of worldwide recession and economic stagnation, people are concerned about keeping their jobs. So there is a tendency for developers to treat knowledge as power and not share what they know about their systems with their colleagues, so when it comes time to fix a problem in production, they will be the only one who knows how to fix it – and will have made themselves an indispensable cog in a machine so you cannot be fired. So developers avoid documentation at all costs, or if documentation is required, put it on a USB chip and lock it in a lock box. As in the first anti-pattern, the idea here is that communicating well with your colleagues is essential and documentation is a key part of this. Social interactions are essential. Both Verburg and Evans insisted that increasingly, year by year, successful software development is more about communication than the technical aspects of the craft. Developers who understand this are the ones who will have the most success. Anti-Pattern Three: Distracted by Shiny – Always Use the Latest Technology to Stay AheadThe temptation here is to pick out some obscure framework, try a bit of Scala, HTML5, and Clojure, and always use the latest technology and upgrade to the latest point release of everything. Don’t worry if something works poorly because you are ahead of the curve. 
    Verburg and Evans insisted that there need to be sound reasons for everything a developer does. Developers should not bring in something simply because for some reason they just feel like it or because it's new. They recommended a site run by a developer named Matt Raible with excellent comparison spreadsheets regarding Web frameworks and other apps. They praised it as a useful tool to help developers in their decision-making processes. They pointed out that good developers sometimes make bad choices out of boredom, to add shiny things to their CV, out of frustration with existing processes, or just from a lack of understanding. They pointed out that some code may stay in a business system for 15 or 20 years, but not all code is created equal and some may change after 3 or 6 months. Developers need to know where the code they are contributing fits in. What is its likely lifespan? Anti-Pattern Four: Design-Driven Design The anti-pattern: If you want to impress your colleagues and bosses, use design patterns left, right, and center – MVC, Session Facades, SOA, etc. Or the UML modeling suite from IBM, back in the day… Generate super fast code. And the more jargon you can talk when in the vicinity of the manager the better. Verburg shared a true story about a time when he was interviewing a guy for a job and asked him what his previous work was. The interviewee said that he essentially took patterns from an approved book of Enterprise Architecture Patterns and applied them. Verburg was dumbstruck that someone could have a job in which they took patterns from a book and applied them. He pointed out that the idea that design is a separate activity is simply wrong. He repeated a saying that he uses, “You should pay your junior developers for the lines of code they write and the things they add; you should pay your senior developers for what they take away.” He explained that by encouraging people to take things away, the code base gets simpler and reflects the actual business use cases developers are trying to solve, as opposed to the framework that is being imposed. He told another true story about a project to decommission a very long-lived system. 98% of the code was decommissioned and people got a nice bonus. But the 2% remained on the mainframe, so the 98% reduction in code resulted in zero reduction in costs, because the entire mainframe was needed to run the 2% that was left. There is an incentive to get rid of source code and subsystems when they are no longer needed. The session continued with several more anti-patterns that were equally insightful.

    Read the article

  • When -exactly- does the Rails3 application get initialized?

    - by bergyman
    I've been fighting left and right with rails 3 and bundler. There are a few gems out there that don't work properly if the rails application hasn't been loaded yet. factory_girl and shoulda are both examples, even on the rails3 branch. Taking shoulda as an example, when trying to run rake test:units I get the following error: DEPRECATION WARNING: RAILS_ROOT is deprecated! Use Rails.root instead. (called from autoload_macros at c:/code/test_harness/vendor/windows_gems/gems/shoulda-2.10.3/lib/shoulda/autoload_macros.rb:40) c:/code/test_harness/vendor/windows_gems/gems/shoulda-2.10.3/lib/shoulda/autoload_macros.rb:44:in 'join': can't convert #<Class:0x232b7c0> into String (TypeError) from c:/code/test_harness/vendor/windows_gems/gems/shoulda-2.10.3/lib/shoulda/autoload_macros.rb:44:in 'block in autoload_macros' from c:/code/test_harness/vendor/windows_gems/gems/shoulda-2.10.3/lib/shoulda/autoload_macros.rb:44:in 'map' from c:/code/test_harness/vendor/windows_gems/gems/shoulda-2.10.3/lib/shoulda/autoload_macros.rb:44:in 'autoload_macros' from c:/code/test_harness/vendor/windows_gems/gems/shoulda-2.10.3/lib/shoulda/rails.rb:17:in '<top (required)>' Digging a bit deeper into lib/shoulda/rails, I see this: root = if defined?(Rails.root) && Rails.root Rails.root else RAILS_ROOT end # load in the 3rd party macros from vendorized plugins and gems Shoulda.autoload_macros root, File.join("vendor", "{plugins,gems}", "*") So...what's happening here is while Rails.root is defined, Rails.root == nil, so RAILS_ROOT is used, and RAILS_ROOT==nil, which is then being passed on to Shoulda.autoload_macros. Obviously the rails app has yet to be initialized. With Rails3 using Bundler now, there's been some hubub over on the Bundler side about being able to specify an order in which the gems are required, but I'm not sure whether or not this would solve the problem at hand. Ultimately my questions is this: When exactly does the environment.rb file (which actually initializes the application) get pulled in? Is there any harm to bumping up when the app is initialized and have it happen before the Bundler.require line in config/application.rb? I've tried to hack bundler to specify the order myself, and have the rails gem pulled in first, but it doesn't appear to me that requiring the rails gem actually initializes the application. As this line (in config/application.rb) is being called before the app is initialized, any gem in the bundler Gemfile that requires rails to be initialized is going to tank. # Auto-require default libraries and those for the current Rails environment. Bundler.require :default, Rails.env

    Read the article

  • Efficiency of Java "Double Brace Initialization"?

    - by Jim Ferrans
    In Hidden Features of Java the top answer mentions Double Brace Initialization, with a very enticing syntax: Set<String> flavors = new HashSet<String>() {{ add("vanilla"); add("strawberry"); add("chocolate"); add("butter pecan"); }}; This idiom creates an anonymous inner class with just an instance initializer in it, which "can use any [...] methods in the containing scope". Main question: Is this as inefficient as it sounds? Should its use be limited to one-off initializations? (And of course showing off!) Second question: The new HashSet must be the "this" used in the instance initializer ... can anyone shed light on the mechanism? Third question: Is this idiom too obscure to use in production code? Summary: Very, very nice answers, thanks everyone. On question (3), people felt the syntax should be clear (though I'd recommend an occasional comment, especially if your code will pass on to developers who may not be familiar with it). On question (1), The generated code should run quickly. The extra .class files do cause jar file clutter, and slow program startup slightly (thanks to coobird for measuring that). Thilo pointed out that garbage collection can be affected, and the memory cost for the extra loaded classes may be a factor in some cases. Question (2) turned out to be most interesting to me. If I understand the answers, what's happening in DBI is that the anonymous inner class extends the class of the object being constructed by the new operator, and hence has a "this" value referencing the instance being constructed. Very neat. Overall, DBI strikes me as something of an intellectual curiousity. Coobird and others point out you can achieve the same effect with Arrays.asList, varargs methods, Google Collections, and the proposed Java 7 Collection literals. Newer JVM languages like Scala, JRuby, and Groovy also offer concise notations for list construction, and interoperate well with Java. Given that DBI clutters up the classpath, slows down class loading a bit, and makes the code a tad more obscure, I'd probably shy away from it. However, I plan to spring this on a friend who's just gotten his SCJP and loves good natured jousts about Java semantics! ;-) Thanks everyone!
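    For comparison, the plain-JDK alternative mentioned in the summary avoids the anonymous subclass entirely (and with it the extra .class file and the hidden reference to the enclosing instance):
        import java.util.Arrays;
        import java.util.Collections;
        import java.util.HashSet;
        import java.util.Set;

        public class Flavors {
            // Same contents as the double-brace version, but no anonymous
            // subclass of HashSet is generated.
            private static final Set<String> FLAVORS = Collections.unmodifiableSet(
                    new HashSet<String>(Arrays.asList(
                            "vanilla", "strawberry", "chocolate", "butter pecan")));

            public static void main(String[] args) {
                System.out.println(FLAVORS.contains("vanilla"));  // prints: true
            }
        }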

    Read the article

  • jquerymobile conflict with autocomplete : $this.attr("href") is undefined

    - by sweets-BlingBling
    When I use jquery ui autocomplete version 1.8.5 with jquery mobile alpha 2. I get an error when I click an item from the autocomplete list: $this.attr("href") is undefined. Does anyone know any fix for it? EDITED: <html> <head> <link rel="stylesheet" type="text/css" href="css/ui-lightness/jquery-ui-1.8.custom.css"> <link rel="stylesheet" type="text/css" href="css/autocomplete.css"> </head> <body> <div id="formWrap"> <form id="messageForm" action="#"> <fieldset> <label id="toLabel">select:</label> <div id="friends" class="ui-helper-clearfix"> <input id="to" type="text"> </div> </fieldset> </form> </div> <script type="text/javascript" src="js/jquery-1.4.2.min.js"></script> <script type="text/javascript" src="js/jquery.mobile-1.0a2.js"></script> <script type="text/javascript" src="js/jquery-ui-1.8.custom.min.js"></script> <script type="text/javascript"> $(function(){ var availableTags = [ "ActionScript", "AppleScript", "Asp", "BASIC", "C", "C++", "Clojure", "COBOL", "ColdFusion", "Erlang", "Fortran", "Groovy", "Haskell", "Java", "JavaScript", "Lisp", "Perl", "PHP", "Python", "Ruby", "Scala", "Scheme" ]; //attach autocomplete $("#to").autocomplete({ source:availableTags , //define select handler select: function(e, ui) { var contact = ui.item.value; createSpan(contact); $("#to").val("").css("top", 2); return false; } }); }); function createSpan(contact){ //create formatted friend span = $("<span>").text(contact) //add contact to contact div span.insertBefore("#to"); } </script> </body> </html>

    Read the article

  • error C2065: 'CComQIPtr' : undeclared identifier

    - by Ken Smith
    I'm still feeling my way around C++, and am a complete ATL newbie, so I apologize if this is a basic question. I'm starting with an existing VC++ executable project that has functionality I'd like to expose as an ActiveX object (while sharing as much of the source as possible between the two projects). I've approached this by adding an ATL project to the solution in question, and in that project have referenced all the .h and .cpp files from the executable project, added all the appropriate references, and defined all the preprocessor macros. So far so good. But I'm getting a compiler error in one file (HideDesktop.cpp). The relevant parts look like this: #include "stdafx.h" #define WIN32_LEAN_AND_MEAN #include <Windows.h> #include <WinInet.h> // Shell object uses INTERNET_MAX_URL_LENGTH (go figure) #if _MSC_VER < 1400 #define _WIN32_IE 0x0400 #endif #include <atlbase.h> // ATL smart pointers #include <shlguid.h> // shell GUIDs #include <shlobj.h> // IActiveDesktop #include "stdhdrs.h" struct __declspec(uuid("F490EB00-1240-11D1-9888-006097DEACF9")) IActiveDesktop; #define PACKVERSION(major,minor) MAKELONG(minor,major) static HRESULT EnableActiveDesktop(bool enable) { CoInitialize(NULL); HRESULT hr; CComQIPtr<IActiveDesktop, &IID_IActiveDesktop> pIActiveDesktop; // <- Problematic line (throws errors 2065 and 2275) hr = pIActiveDesktop.CoCreateInstance(CLSID_ActiveDesktop, NULL, CLSCTX_INPROC_SERVER); if (!SUCCEEDED(hr)) { return hr; } COMPONENTSOPT opt; opt.dwSize = sizeof(opt); opt.fActiveDesktop = opt.fEnableComponents = enable; hr = pIActiveDesktop->SetDesktopItemOptions(&opt, 0); if (!SUCCEEDED(hr)) { CoUninitialize(); // pIActiveDesktop->Release(); return hr; } hr = pIActiveDesktop->ApplyChanges(AD_APPLY_REFRESH); CoUninitialize(); // pIActiveDesktop->Release(); return hr; } This code is throwing the following compiler errors: error C2065: 'CComQIPtr' : undeclared identifier error C2275: 'IActiveDesktop' : illegal use of this type as an expression error C2065: 'pIActiveDesktop' : undeclared identifier The two weird bits: (1) CComQIPtr is defined in atlcomcli.h, which is included in atlbase.h, which is included in HideDesktop.cpp; and (2) this file is only throwing these errors when it's referenced in my new ATL/AX project: it's not throwing them in the original executable project, even though they have basically the same preprocessor definitions. (The ATL AX project, naturally enough, defines _ATL_DLL, but I can't see where that would make a difference.) My current workaround is to use a normal "dumb" pointer, like so: IActiveDesktop *pIActiveDesktop; HRESULT hr = ::CoCreateInstance(CLSID_ActiveDesktop, NULL, // no outer unknown CLSCTX_INPROC_SERVER, IID_IActiveDesktop, (void**)&pIActiveDesktop); And that works, provided I remember to release it. But I'd rather be using the ATL smart stuff. Any thoughts?

    Read the article

  • Using WSH (VBS) with iMacros - how do they do it?

    - by Carl
    (iMacros For Firefox 6.6.5.0; Firefox 3.6.3; Windows XP Pro SP3 w/all updates) I made an iMacro to select "load next 25" (comments) on a web page (CNN.COM). Unfortunately, iMacros doesn't appear to do looping (do the above until that string doesn't appear on the page anymore - i.e. all the comments are loaded). I tried putting {!iloop} in the TAG command, and it didn't work - then I read it wouldn't. So I tried the example at http://wiki.imacros.net/Loop_after_Query_or_Login I can't find any information on how to actually run the script in the above example. I searched Google and found VBS scripting is handled with .wsh files with Windows XP Pro. (The examples and other references there say Windows does VBS natively, so I looked up how with Google.) So I made the following .wsh file (modified the above example): Option Explicit Dim iim1, iret 'initialize iMacros instance set iim1 = CreateObject ("imacros") iret = iim1.iimInit() do while not iret < 0 iret = iim1.iimPlay("Load All CNN Comments") loop ' tell user we're done msgbox "End." ' exit iMacros instance and quit script iret = iim1.iimExit() Wscript.Quit() Here's the iMacro: (Load All CNN Comments.iim) VERSION BUILD=6650406 RECORDER=FX TAG POS=1 TYPE=A ATTR=TXT:Load<SP>next<SP>25 WAIT SECONDS=#DOWNLOADCOMPLETE# The iMacro works by itself - I press Play (left iMacro panel) and the next 25 comments load on the CNN.com page in the current tab. I put the .wsh file in the ...\iMacros\Macros directory - with the iMacro "Load All CNN Comments.iim" When I run the .wsh file (by just double clicking on it's icon - I created it with Notepad, and Windows gave it an icon for that file type - it's executable) I get the message from "Windows Script Host" - "There is no script file specified." I wasn't actually expecting it to work, as I don't see how Windows would know to call iMacros to run the iim macro. It would be nice if there was a simple, COMPLETE, example of how to use a VBS script with iMacros, that isn't bogged down with unnecessary complication like filling in a form, loading multiple pages, etc. I can't find ANY example. So what do I need to do to get this to work? I just installed iMacros yesterday, because I am constantly having the problem that there are hundred of comments after a CNN.com article, and loading 25 more at a time until they are all on the page makes it impractical to read any replies to my comments. It would also be nice if I could run the Macro from Firefox, rather than by double clicking on some file somewhere. Thanks for any help.

    Read the article

  • Parsing language for both binary and character files

    - by Thorsten S.
    The problem: You have some data and your program needs specified input. For example strings which are numbers. You are searching for a way to transform the original data in a format you need. And the problem is: The source can be anything. It can be XML, property lists, binary which contains the needed data deeply embedded in binary junk. And your output format may vary also: It can be number strings, float, doubles.... You don't want to program. You want routines which gives you commands capable to transform the data in a form you wish. Surely it contains regular expressions, but it is very good designed and it offers capabilities which are sometimes much more easier and more powerful. Something like a super-grep which you can access (!) as program routines, not only as tool. It allows: joining/grouping/merging of results inserting/deleting/finding/replacing write macros which allows to execute a command chain repeatedly meta-grouping (lists-tables-hypertables) Example (No, I am not looking for a solution to this, it is just an example): You want to read xml strings embedded in a binary file with variable length records. Your tool reads the record length and deletes the junk surrounding your text. Now it splits open the xml and extracts the strings. Being Indian number glyphs and containing decimal commas instead of decimal points, your tool transforms it into ASCII and replaces commas with points. Now the results must be stored into matrices of variable length....etc. etc. I am searching for a good language / language-design and if possible, an implementation. Which design do you like or even, if it does not fulfill the conditions, wouldn't you want to miss ? EDIT: The question is if a solution for the problem exists and if yes, which implementations are available. You DO NOT implement your own sorting algorithm if Quicksort, Mergesort and Heapsort is available. You DO NOT invent your own text parsing method if you have regular expressions. You DO NOT invent your own 3D language for graphics if OpenGL/Direct3D is available. There are existing solutions or at least papers describing the problem and giving suggestions. And there are people who may have worked and experienced such problems and who can give ideas and suggestions. The idea that this problem is totally new and I should work out and implement it myself without background knowledge seems for me, I must admit, totally off the mark.

    Read the article

  • c++ class member functions selected by traits

    - by Jive Dadson
    I am reluctant to say I can't figure this out, but I can't figure this out. I've googled and searched stackoverflow, and come up empty. The abstract, and possibly overly vague form of the question is, how can I use the traits-pattern to instantiate non-virtual member functions? The question came up while modernizing a set of multivariate function optimizers that I wrote more than 10 years ago. The optimizers all operate by selecting a straight-line path through the parameter space away from the current best point (the "update"), then finding a better point on that line (the "line search"), then testing for the "done" condition, and if not done, iterating. There are different methods for doing the update, the line-search, and conceivably for the done test, and other things. Mix and match. Different update formulae require different state-variable data. For example, the LMQN update requires a vector, and the BFGS update requires a matrix. If evaluating gradients is cheap, the line-search should do so. If not, it should use function evaluations only. Some methods require more accurate line-searches than others. Those are just some examples. The original version instantiates several of the combinations by means of virtual functions. Some traits are selected by setting mode bits. Yuck. It would be trivial to define the traits with #define's and the member functions with #ifdef's and macros. But that's so twenty years ago. It bugs me that I cannot figure out a whiz-bang modern way. If there were only one trait that varied, I could use the curiously recurring template pattern. But I see no way to extend that to arbitrary combinations of traits. I tried doing it using boost::enable_if, etc. The specialized state info was easy. I managed to get the functions done, but only by resorting to non-friend external functions that have the this-pointer as a parameter. I never even figured out how to make the functions friends, much less member functions. Perhaps tag-dispatch is the key. I haven't gotten very deeply into that. Surely it's possible, right? If so, what is best practice?
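
    One way to read the "mix and match" requirement is as ordinary policy-based design: each trait becomes a template parameter, and its member functions are compiled in statically with no virtual dispatch. The sketch below is only an illustration of that technique, not code from the question; the policy names (LmqnUpdate, BfgsUpdate, ValueLineSearch) and the Optimizer interface are invented for the example.

        #include <iostream>
        #include <vector>

        // Hypothetical update policy: carries the state an LMQN-style update needs (a vector).
        struct LmqnUpdate {
            std::vector<double> direction;
            void update(std::vector<double>& x) { /* choose a straight-line path away from x */ }
        };

        // Alternative update policy with different state (a matrix-like structure for BFGS).
        struct BfgsUpdate {
            std::vector<std::vector<double>> approx_hessian;
            void update(std::vector<double>& x) { /* choose a path using the matrix */ }
        };

        // Line-search policy that uses function evaluations only.
        struct ValueLineSearch {
            template <class F>
            void search(F f, std::vector<double>& x) { /* walk along the line using f(x) values */ }
        };

        // The optimizer inherits its traits; every call below is a non-virtual member call
        // resolved at compile time, and each combination of policies is a distinct type.
        template <class Update, class LineSearch>
        class Optimizer : private Update, private LineSearch {
        public:
            template <class F>
            bool step(F f, std::vector<double>& x) {
                this->update(x);     // supplied by Update
                this->search(f, x);  // supplied by LineSearch
                return false;        // the "done" test would go here
            }
        };

        int main() {
            Optimizer<LmqnUpdate, ValueLineSearch> opt;
            std::vector<double> x(2, 1.0);
            opt.step([](const std::vector<double>& v) { return v[0] * v[0] + v[1] * v[1]; }, x);
            std::cout << "one optimizer step completed\n";
        }

    Arbitrary combinations then come for free (Optimizer<BfgsUpdate, ValueLineSearch> and so on), and each policy keeps whatever state it needs; the price is that a policy sees the optimizer only through the arguments passed to it, which is the same limitation the question ran into with external functions taking a this-pointer.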

    Read the article

  • C++ Virtual Constructor, without clone()

    - by Julien L.
    I want to perform "deep copies" of an STL container of pointers to polymorphic classes. I know about the Prototype design pattern, implemented by means of the Virtual Ctor Idiom, as explained in the C++ FAQ Lite, Item 20.8. It is simple and straightforward: struct ABC // Abstract Base Class { virtual ~ABC() {} virtual ABC * clone() = 0; }; struct D1 : public ABC { virtual D1 * clone() { return new D1( *this ); } // Covariant Return Type }; A deep copy is then: for( i = 0; i < oldVector.size(); ++i ) newVector.push_back( oldVector[i]->clone() ); Drawbacks As Andrei Alexandrescu puts it: The clone() implementation must follow the same pattern in all derived classes; in spite of its repetitive structure, there is no reasonable way to automate defining the clone() member function (beyond macros, that is). Moreover, clients of ABC can possibly do something bad. (I mean, nothing prevents clients from doing something bad, so, it will happen.) Better design? My question is: is there another way to make an abstract base class clonable without requiring derived classes to write clone-related code? (Helper class? Templates?) Following is my context. Hopefully, it will help in understanding my question. I am designing a class hierarchy to perform operations on a class Image: struct ImgOp { virtual ~ImgOp() {} virtual bool run( Image & ) = 0; }; Image operations are user-defined: clients of the class hierarchy will implement their own classes derived from ImgOp: struct CheckImageSize : public ImgOp { std::size_t w, h; bool run( Image &i ) { return w==i.width() && h==i.height(); } }; struct CheckImageResolution; struct RotateImage; ... Multiple operations can be performed sequentially on an image: bool do_operations( std::vector< ImgOp* > v, Image &i ) { std::for_each( v.begin(), v.end(), /* bind2nd(mem_fun(&ImgOp::run), i ...) don't remember syntax */ ); } int main( ... ) { std::vector< ImgOp* > v; v.push_back( new CheckImageSize ); v.push_back( new CheckImageResolution ); v.push_back( new RotateImage ); Image i; do_operations( v, i ); } If there are multiple images, the set can be split and shared over several threads. To ensure "thread-safety", each thread must have its own copy of all operation objects contained in v -- v becomes a prototype to be deep copied in each thread.
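
    For the "Helper class? Templates?" direction, one widely used answer is a small CRTP layer that implements clone() once for every concrete class. The following is a hedged sketch from general knowledge, not code from the question; the name Clonable and the exact layering are invented for the illustration.

        #include <vector>

        struct ABC {                           // abstract base class, as in the question
            virtual ~ABC() {}
            virtual ABC* clone() const = 0;
        };

        // CRTP helper: sits between ABC and each concrete class and writes clone() once.
        template <class Derived, class Base = ABC>
        struct Clonable : Base {
            Base* clone() const override {
                return new Derived(static_cast<const Derived&>(*this));
            }
        };

        // Concrete classes contain no clone-related code at all.
        struct D1 : Clonable<D1> { int value = 0; };
        struct D2 : Clonable<D2> { double scale = 1.0; };

        int main() {
            std::vector<ABC*> oldVector{ new D1, new D2 };
            std::vector<ABC*> newVector;
            for (ABC* p : oldVector)           // the deep copy from the question
                newVector.push_back(p->clone());
            for (ABC* p : oldVector) delete p;
            for (ABC* p : newVector) delete p;
        }

    The trade-offs: the covariant return type is lost (clone() returns ABC*), an intermediate base has to be threaded through the Base parameter, and nothing stops a further-derived class from forgetting to reapply the helper, so the boilerplate risk is reduced rather than eliminated.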

    Read the article

  • Setting up Netbeans/Eclipse for Linux Kernel Development

    - by red.october
    Hi: I'm doing some Linux kernel development, and I'm trying to use Netbeans. Despite declared support for Make-based C projects, I cannot create a fully functional Netbeans project. This is despite having Netbeans analyze a kernel binary that was compiled with full debugging information. Problems include: files are wrongly excluded: Some files are incorrectly greyed out in the project, which means Netbeans does not believe they should be included in the project, when in fact they are compiled into the kernel. The main problem is that Netbeans will miss any definitions that exist in these files, such as data structures and functions, as well as macro definitions. cannot find definitions: Pretty self-explanatory - often, Netbeans cannot find the definition of something. This is partly a result of the above problem. can't find header files: self-explanatory. I'm wondering if anyone has had success with setting up Netbeans for Linux kernel development, and if so, what settings they used. Ultimately, I'm looking for Netbeans to be able to either parse the Makefile (preferred) or extract the debug information from the binary (less desirable, since this can significantly slow down compilation), and automatically determine which files are actually compiled and which macros are actually defined. Then, based on this, I would like to be able to find the definitions of any data structure, variable, function, etc. and have complete auto-completion. Let me preface this question with some points: I'm not interested in solutions involving Vim/Emacs. I know some people like them, but I'm not one of them. As the title suggests, I would also be happy to know how to set up Eclipse to do what I need. While I would prefer perfect coverage, something that only misses one in a million definitions is obviously fine. SO's useful "Related Questions" feature has informed me that the following question is related: http://stackoverflow.com/questions/149321/what-ide-would-be-good-for-linux-kernel-driver-development. Upon reading it, the question is more of a comparison between IDEs, whereas I'm looking for how to set up a particular IDE. Even so, the user Wade Mealing seems to have some expertise in working with Eclipse on this kind of development, so I would certainly appreciate his (and of course all of your) answers. Cheers

    Read the article

  • Opening Macro definitions: tdfx_span.c: lvalue required as left operand of assignment

    - by anttir
    Hi, I'm trying to compile X11R6-7.0 under Ubuntu maverick and got some weird compilation errors I'm unable to resolve myself. I needed X11R6-7.0 as the ATI Catalyst drivers don't support newer Xorg releases and the OSS drivers don't support 3D acceleration on my hardware. Anyone know what this error message means? I know some C but I got a bit confused. Does it mean the GET_FB_DATA macro returned NULL, or that some method/property is not set? Any further insight into how to "debug" preprocessor definitions at this point would be great. I don't think I can print anything useful with #error. The error I get: tdfx_span.c: In function ‘tdfxDDWriteDepthPixels’: tdfx_span.c:976: error: lvalue required as left operand of assignment tdfx_span.c:1008: error: lvalue required as left operand of assignment tdfx_span.c: In function ‘write_stencil_pixels’: tdfx_span.c:1242: error: lvalue required as left operand of assignment The code: 958- switch (depth_size) { 959- case 16: 960- GetBackBufferInfo(fxMesa, &backBufferInfo); 961- /* 962- * Note that the _LOCK macro adds a curly brace, 963- * and the UNLOCK macro removes it. 964- */ 965- WRITE_FB_SPAN_LOCK(fxMesa, info, 966- GR_BUFFER_AUXBUFFER, GR_LFBWRITEMODE_ANY); 967- { 968- LFBParameters ReadParams; 969- GetFbParams(fxMesa, &info, &backBufferInfo, 970- &ReadParams, sizeof(GLushort)); 971- for (i = 0; i < n; i++) { 972- if (mask[i] && visible_pixel(fxMesa, x[i], y[i])) { 973- xpos = x[i] + fxMesa->x_offset; 974- ypos = bottom - y[i]; 975- d16 = depth[i]; 976: PUT_FB_DATA(&ReadParams, GLushort, xpos, ypos, d16); 977- } 978- } 979- } 980- WRITE_FB_SPAN_UNLOCK(fxMesa, GR_BUFFER_AUXBUFFER); 981- break; 982- case 24: And the relevant macros: #define GET_FB_DATA(ReadParamsp, type, x, y) \ (((x) < (ReadParamsp)->firstWrappedX) \ ? (((type *)((ReadParamsp)->lfbPtr)) \ [(y) * ((ReadParamsp)->LFBStrideInElts) \ + (x)]) \ : (((type *)((ReadParamsp)->lfbWrapPtr)) \ [((y)) * ((ReadParamsp)->LFBStrideInElts) \ + ((x) - (ReadParamsp)->firstWrappedX)])) #define GET_ORDINARY_FB_DATA(ReadParamsp, type, x, y) \ (((type *)((ReadParamsp)->lfbPtr)) \ [(y) * ((ReadParamsp)->LFBStrideInElts) \ + (x)]) #define GET_WRAPPED_FB_DATA(ReadParamsp, type, x, y) \ (((type *)((ReadParamsp)->lfbWrapPtr)) \ [((y)) * ((ReadParamsp)->LFBStrideInElts) \ + ((x) - (ReadParamsp)->firstWrappedX)]) #define PUT_FB_DATA(ReadParamsp, type, x, y, value) \ (GET_FB_DATA(ReadParamsp, type, x, y) = (type)(value)) #define PUT_ORDINARY_FB_DATA(ReadParamsp, type, x, y, value) \ (GET_ORDINARY_FB_DATA(ReadParamsp, type, x, y) = (type)(value)) #define PUT_WRAPPED_FB_DATA(ReadParamsp, type, x, y, value) \ (GET_WRAPPED_FB_DATA(ReadParamsp, type, x, y) = (type)(value)) The LFBParameters struct: 483-typedef struct 484-{ 485- void *lfbPtr; 486- void *lfbWrapPtr; 487- FxU32 LFBStrideInElts; 488- GLint firstWrappedX; 489-} 490:LFBParameters; Thanks for looking.

    Read the article

  • c++ class member functions instantiated by traits

    - by Jive Dadson
    I am reluctant to say I can't figure this out, but I can't figure this out. I've googled and searched stackoverflow, and come up empty. The abstract, and possibly overly vague form of the question is, how can I use the traits-pattern to instantiate non-virtual member functions? The question came up while modernizing a set of multivariate function optimizers that I wrote more than 10 years ago. The optimizers all operate by selecting a straight-line path through the parameter space away from the current best point (the "update"), then finding a better point on that line (the "line search"), then testing for the "done" condition, and if not done, iterating. There are different methods for doing the update, the line-search, and conceivably for the done test, and other things. Mix and match. Different update formulae require different state-variable data. For example, the LMQN update requires a vector, and the BFGS update requires a matrix. If evaluating gradients is cheap, the line-search should do so. If not, it should use function evaluations only. Some methods require more accurate line-searches than others. Those are just some examples. The original version instantiates several of the combinations by means of virtual functions. Some traits are selected by setting mode bits that are tested at runtime. Yuck. It would be trivial to define the traits with #define's and the member functions with #ifdef's and macros. But that's so twenty years ago. It bugs me that I cannot figure out a whiz-bang modern way. If there were only one trait that varied, I could use the curiously recurring template pattern. But I see no way to extend that to arbitrary combinations of traits. I tried doing it using boost::enable_if, etc.. The specialized state info was easy. I managed to get the functions done, but only by resorting to non-friend external functions that have the this-pointer as a parameter. I never even figured out how to make the functions friends, much less member functions. The compiler (vc++ 2008) always complained that things didn't match. I would yell, "SFINAE, you moron!" but the moron is probably me. Perhaps tag-dispatch is the key. I haven't gotten very deeply into that. Surely it's possible, right? If so, what is best practice?
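
    On the tag-dispatch idea mentioned at the end, a minimal sketch of how it can select between non-virtual member function implementations is shown below. It is an invented illustration, not code from the question; the tag types and the line_search example stand in for whatever traits the real optimizers use.

        #include <iostream>

        struct uses_gradient_tag {};
        struct values_only_tag {};

        // Hypothetical trait bundles: each one names the line-search category it wants.
        struct CheapGradientTraits { using line_search_category = uses_gradient_tag; };
        struct NoGradientTraits    { using line_search_category = values_only_tag; };

        template <class Traits>
        class Optimizer {
        public:
            void line_search() {
                // The unused tag argument steers overload resolution at compile time;
                // both candidates are plain non-virtual member functions.
                line_search_impl(typename Traits::line_search_category{});
            }
        private:
            void line_search_impl(uses_gradient_tag) {
                std::cout << "line search using gradients\n";
            }
            void line_search_impl(values_only_tag) {
                std::cout << "line search using function values only\n";
            }
        };

        int main() {
            Optimizer<CheapGradientTraits> a;
            Optimizer<NoGradientTraits> b;
            a.line_search();   // picks the gradient version
            b.line_search();   // picks the values-only version
        }

    Several independent traits can each carry their own tag, so combinations multiply without writing a specialization per combination, and the implementations stay member functions, which is exactly the part that the enable_if attempt made awkward.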

    Read the article

  • What's the C strategy to "imitate" a C++ template?

    - by Andrei Ciobanu
    After reading some examples on stackoverflow, and following some of the answers for my previous questions (1), I've eventually come up with a "strategy" for this. I've come to this: 1) Have a declare section in the .h file. Here I will define the data structure and the accessing interface. E.g.: /** * LIST DECLARATION. (DOUBLE LINKED LIST) */ #define NM_TEMPLATE_DECLARE_LIST(type) \ typedef struct nm_list_elem_##type##_s { \ type data; \ struct nm_list_elem_##type##_s *next; \ struct nm_list_elem_##type##_s *prev; \ } nm_list_elem_##type ; \ typedef struct nm_list_##type##_s { \ unsigned int size; \ nm_list_elem_##type *head; \ nm_list_elem_##type *tail; \ int (*cmp)(const type e1, const type e2); \ } nm_list_##type ; \ \ nm_list_##type *nm_list_new_##type##_(int (*cmp)(const type e1, \ const type e2)); \ \ (...other functions ...) 2) Wrap the functions in the interface inside MACROS: /** * LIST INTERFACE */ #define nm_list(type) \ nm_list_##type #define nm_list_elem(type) \ nm_list_elem_##type #define nm_list_new(type,cmp) \ nm_list_new_##type##_(cmp) #define nm_list_delete(type, list, dst) \ nm_list_delete_##type##_(list, dst) #define nm_list_ins_next(type,list, elem, data) \ nm_list_ins_next_##type##_(list, elem, data) (...others...) 3) Implement the functions: /** * LIST FUNCTION DEFINITIONS */ #define NM_TEMPLATE_DEFINE_LIST(type) \ nm_list_##type *nm_list_new_##type##_(int (*cmp)(const type e1, \ const type e2)) \ {\ nm_list_##type *list = NULL; \ list = nm_alloc(sizeof(*list)); \ list->size = 0; \ list->head = NULL; \ list->tail = NULL; \ list->cmp = cmp; \ return list; \ }\ void nm_list_delete_##type##_(nm_list_##type *list, \ void (*destructor)(nm_list_elem_##type elem)) \ { \ type data; \ while(nm_list_size(list)){ \ data = nm_list_rem_##type(list, tail); \ if(destructor){ \ destructor(data); \ } \ } \ nm_free(list); \ } \ (...others...) In order to use those constructs, I have to create two files (let's call them templates.c and templates.h). In templates.h I will have to NM_TEMPLATE_DECLARE_LIST(int), NM_TEMPLATE_DECLARE_LIST(double), while in templates.c I will need NM_TEMPLATE_DEFINE_LIST(int), NM_TEMPLATE_DEFINE_LIST(double), in order to have the code behind a list of ints, doubles and so on generated. By following this strategy I will have to keep all my "template" declarations in two files, and at the same time, I will need to include templates.h whenever I need the data structures. It's a very "centralized" solution. Do you know another strategy to "imitate" (to some extent) C++ templates? Do you know a way to improve this strategy, in order to keep things in a more decentralized manner, so that I won't need the two files templates.c and templates.h?

    Read the article

  • Php plugin to replace '->' with '.' as the member access operator? Or even better: alternative syntax?

    - by Gigi
    Present day usable solution: Note that if you use an IDE or an advanced editor, you could make a code template, or record a macro that inserts '->' when you press Ctrl and '.' or something. Netbeans has macros, and I have recorded a macro for this, and I like it a lot :) (just click the red circle toolbar button (start record macro), then type -> into the editor (that's all the macro will do, insert the arrow into the editor), then click the gray square (stop record macro) and assign the 'Ctrl dot' shortcut to it, or whatever shortcut you like). The php plugin: The php plugin would also have to have a different string concatenation operator than the dot. Maybe a double dot? Yea... why not. All it has to do is set an activation tag so that it doesn't replace / interpret '.' as '->' for old scripts and scripts that don't intend to use this. Something like this: <?php+ $obj.i = 5 ?> (notice the '<?php' tag modified to '<?php+'). This way it wouldn't break old code. (And you can just add the '<?php+' code template to your editor and then type 'php tab' (for netbeans) and it would insert '<?php+'.) With the alternative syntax method you could even have old and new syntax cohabitating on the same page like this (I am illustrating this to show the great compatibility of this method, not because you would want to do this): <?php+ $obj.i = 5; ?> <?php $obj->str = 'a' . 'b'; ?> You could change the tag to something more explanatory, in case somebody who doesn't know about the plugin reads the script and thinks it's a syntax error: <?php-dot.com $obj.i = 5; ?> This is easy because most editors have code templates, so it's easy to assign a shortcut to it. And whoever doesn't want the dot replacement doesn't have to use it. These are NOT ultimate solutions, they are ONLY examples to show that solutions exist, and that arguments against replacing '->' with '.' are only excuses. (Just admit you like the arrow, it's ok :) With this potential method, nobody who doesn't want to use it would have to use it, and it wouldn't break old code. And if other problems (ahem... excuses) arise, they could be fixed too. So who can, and who will, do such a thing?

    Read the article

  • Are there any platforms where using structure copy on an fd_set (for select() or pselect()) causes problems?

    - by Jonathan Leffler
    The select() and pselect() system calls modify their arguments (the 'struct fd_set *' arguments), so the input value tells the system which file descriptors to check and the return values tell the programmer which file descriptors are currently usable. If you are going to call them repeatedly for the same set of file descriptors, you need to ensure that you have a fresh copy of the descriptors for each call. The obvious way to do that is to use a structure copy: struct fd_set ref_set_rd; struct fd_set ref_set_wr; struct fd_set ref_set_er; ... ...code to set the reference fd_set_xx values... ... while (!done) { struct fd_set act_set_rd = ref_set_rd; struct fd_set act_set_wr = ref_set_wr; struct fd_set act_set_er = ref_set_er; int bits_set = select(max_fd, &act_set_rd, &act_set_wr, &act_set_er, &timeout); if (bits_set > 0) { ...process the output values of act_set_xx... } } My question: Are there any platforms where it is not safe to do a structure copy of the struct fd_set values as shown? I'm concerned lest there be hidden memory allocation or anything unexpected like that. (There are macros/functions FD_SET(), FD_CLR(), FD_ZERO() and FD_ISSET() to mask the internals from the application.) I can see that MacOS X (Darwin) is safe; other BSD-based systems are likely to be safe, therefore. You can help by documenting other systems that you know are safe in your answers. (I do have minor concerns about how well the struct fd_set would work with more than 8192 open file descriptors - the default maximum number of open files is only 256, but the maximum number is 'unlimited'. Also, since the structures are 1 KB, the copying code is not dreadfully efficient, but then running through a list of file descriptors to recreate the input mask on each cycle is not necessarily efficient either. Maybe you can't do select() when you have that many file descriptors open, though that is when you are most likely to need the functionality.) There's a related SO question - asking about 'poll() vs select()' which addresses a different set of issues from this question.
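
    For comparison with the structure copy, here is a small sketch of the rebuild-the-set alternative mentioned near the end; the descriptor list and timeout are invented for the illustration, and it is meant as the pattern that sidesteps the copy question rather than as a portability verdict.

        #include <sys/select.h>
        #include <unistd.h>
        #include <cstdio>
        #include <vector>

        int main() {
            std::vector<int> fds = { STDIN_FILENO };   // descriptors of interest
            for (int i = 0; i < 3; ++i) {
                fd_set read_set;
                FD_ZERO(&read_set);                    // rebuild instead of copying a reference set
                int max_fd = -1;
                for (int fd : fds) {
                    FD_SET(fd, &read_set);
                    if (fd > max_fd) max_fd = fd;
                }
                struct timeval timeout = { 1, 0 };     // one second
                int ready = select(max_fd + 1, &read_set, nullptr, nullptr, &timeout);
                if (ready < 0) { std::perror("select"); break; }
                if (ready == 0) { std::puts("timed out"); continue; }
                if (FD_ISSET(STDIN_FILENO, &read_set))
                    std::puts("stdin is readable");
            }
        }

    The rebuild costs a pass over the descriptor list per call instead of copying the roughly 1 KB structure, so which is cheaper depends on how sparse the set is; neither approach answers whether the plain structure copy is guaranteed safe everywhere, which is what the question asks.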

    Read the article

< Previous Page | 64 65 66 67 68 69 70 71  | Next Page >