Search Results

Search found 3984 results on 160 pages for 'modules'.


  • Why is Maven so slow compared to Automake?

    - by ???'Lenik
    I have a Maven project consisting of around 100 modules. I have reasons for decomposing the project into that many modules, and I don't think I should merge them just to speed up the build. I have looked at a lot of projects by other people (e.g., the Maven project itself, Apache Archiva, and the Hudson project), and they all consist of many modules, nearly 100, more or less. The problem is that building them all takes so much time: 3 hours for the first build (which is acceptable, since there are a lot of artifacts to download), and 15 minutes for every subsequent build (which is not acceptable). With Automake, things are similar: the first time, you have to configure the project and prepare the magical config.h file, which is far more complex than what Maven does, but it's still fast, maybe 10 seconds on my Debian box. After that, make install takes maybe 10 minutes for the first build. Once everything is prepared, though, the .o object files have been generated and don't have to be rebuilt at all for the second build. (In Maven, everything is rebuilt every time.) I really wonder how people working on Maven projects can bear such long times for every build; I just can't sit calmly through a Maven build. It really takes too long.
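
    The incremental behavior described above comes from make's timestamp-based dependency rules: a target is rebuilt only if one of its prerequisites is newer than it. A minimal hedged sketch (file names are illustrative, not from the question; recipe lines in a real makefile must be indented with tabs):

        # A target is rebuilt only when a prerequisite is newer than it.
        app: main.o util.o
        	cc -o app main.o util.o

        main.o: main.c util.h
        	cc -c main.c

        util.o: util.c util.h
        	cc -c util.c

        # A second `make` run with no changes prints: make: 'app' is up to date.

    Maven, by contrast, re-runs each module's lifecycle phases on every invocation unless you restrict the build yourself, e.g. with -o (offline mode) or by building only selected modules with -pl together with -am.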

    Read the article

  • Synchronous vs. asynchronous for publish subscribe communication between JavaScript objects

    - by natlee75
    I implemented the publish subscribe pattern in a JavaScript module to be used by entirely client-side oriented JavaScript objects. This module has nothing to do with client-server communication in any way, shape, or form. My question is whether it's better for the publish method in such a module to be synchronous or asynchronous, and why. As a very simplified example, let's say I'm building a custom UI for an HTML5 video player widget: One of my modules is the "video" module that contains the VIDEO element and handles the various features and events associated with that element. This would probably have a namespace something like "widgets.player.video." Another is the "controls" module that has the various buttons - play, pause, volume, scrub, fullscreen, etc. This might have a namespace along the lines of "widgets.player.controls." These two modules are children of a parent "player" module ("widgets.player" ??), and as such would have no inherent knowledge of each other when instantiated as children of the "player" object. The "controls" elements would obviously need to be able to effect some changes on the video (click "Play" and the video should play), and vice versa (the video's "timeupdate" event fires and the visual display of the current time in the controls should update). I could tightly couple these modules and pass references to each other, but I'd rather take a more loosely coupled approach by setting up a pubsub type module that both can subscribe to and publish from. So (thanks for bearing with me), in this kind of scenario, is there an advantage one way or another for synchronous publication versus asynchronous publication? I've seen some solutions posted online that allow for either/or with a boolean flag, whereas others automatically do it asynchronously. I haven't personally seen an implementation that just automatically goes with synchronous publication... is this because there's no advantage to it? I know that I can accomplish this with features provided by jQuery, but it seems that there may be too much overhead involved here. The publish subscribe pattern can be implemented with relatively lightweight code designed specifically for this particular purpose, so I'd rather go with that than a more general-purpose eventing system like jQuery's (which I'll use for more general eventing needs :-).
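
    A minimal sketch of the kind of pubsub module described above, using the boolean-flag approach the question mentions (module, topic, and method names here are illustrative, not from the actual post):

        // Minimal pub/sub sketch; all names are hypothetical.
        var pubsub = (function () {
          var topics = {};
          return {
            subscribe: function (topic, handler) {
              (topics[topic] = topics[topic] || []).push(handler);
            },
            // async === true defers each handler to its own timer tick, so a
            // slow or throwing subscriber cannot block or abort the publisher.
            publish: function (topic, data, async) {
              (topics[topic] || []).forEach(function (handler) {
                if (async) {
                  setTimeout(function () { handler(data); }, 0);
                } else {
                  handler(data); // synchronous: runs before publish() returns
                }
              });
            }
          };
        })();

        // The controls module publishes; the video module subscribes.
        pubsub.subscribe("player/play", function () { /* video.play() */ });
        pubsub.publish("player/play", null, true); // asynchronous delivery

    The trade-off in one line: synchronous publish gives deterministic ordering and lets the publisher observe effects immediately, while asynchronous publish isolates the publisher from subscriber errors and long-running handlers.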

    Read the article

  • Unable to start VMware Workstation after upgrade to 13.04

    - by pst007x
    After upgrading to 13.04, I am unable to start VMware Workstation. I get the following message: "Before you can run VMware, several modules must be compiled and loaded into the running kernel. Kernel Headers 3.8.0-19-generic. Kernel headers for version 3.8.0-19-generic were not found. If you have installed them in a non-default path you can specify the path below." Does anyone have any idea what to do next? Ubuntu 13.04 64-bit. If I point the path to /usr/src/linux-headers-3.8.0-19-generic, I get the following message: "C header files matching your running kernel were not found." Thanks. Additional: As suggested, I ran this in a terminal:

        cd /lib/modules/$(uname -r)/build/include/linux
        sudo ln -s ../generated/utsrelease.h
        sudo ln -s ../generated/autoconf.h
        sudo ln -s ../generated/uapi/linux/version.h

    However, now I get the following: "Before you can run VMware, several modules must be compiled and loaded into the kernel. CANCEL / INSTALL". I click INSTALL, the window closes, and nothing happens... Any ideas? ADDITIONAL: I installed this:

        sudo apt-get install open-vm-tools open-vm-tools-dev open-vm-dkms open-vm-toolbox open-vm-tools-dev

    And it all launched... Many thanks for the suggestions and help... This is what I love about Ubuntu... it has a great, helpful community! Note: Also found this, which may help others too: HERE. ADDITIONAL ERROR: "Could not open /dev/vmmon: Is a directory. Please make sure that the kernel module `vmmon' is loaded. Failed to initialize monitor device." Monitor settings all greyed out. RESOLUTION: Re-installation of the Nvidia drivers.

    Read the article

  • Why do I always get this error when using 'apt-get' commands?

    - by Venki
    I am using Ubuntu 14.04 (with Unity). Just today (as of the date of this post) I ran sudo apt-get update && sudo apt-get upgrade, and at the end of the upgrade process I got the following error:

        Setting up crossplatformui (1.0.38) ...
        * Stopping ACPI services... [ OK ]
        * Starting ACPI services... [ OK ]
        package libqtgui4 exist
        QT_VERSION = 4
        make -C /lib/modules/3.13.0-27-generic/build M=/usr/local/bin/ztemtApp/zteusbserial/below2.6.27 modules
        make[1]: Entering directory `/usr/src/linux-headers-3.13.0-27-generic'
        CC [M] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o
        /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:34:28: fatal error: linux/smp_lock.h: No such file or directory
        #include <linux/smp_lock.h>
        ^
        compilation terminated.
        make[2]: *** [/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o] Error 1
        make[1]: *** [_module_/usr/local/bin/ztemtApp/zteusbserial/below2.6.27] Error 2
        make[1]: Leaving directory `/usr/src/linux-headers-3.13.0-27-generic'
        make: *** [modules] Error 2
        dpkg: error processing package crossplatformui (--configure):
        subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
        crossplatformui
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    From then on, whatever apt-get command I use (as far as I know, except apt-get update), I keep getting the above error at the end of the process. Still, whichever apt-get command I use does what it has to without fail. (For example, I tried installing Blender with sudo apt-get install blender and it installed fine, though it showed the above error.) After this I even got a kernel update (from 3.13.0-27 to 3.13.0-29 via the Software Updater), but even now the issue persists. How do I solve this issue?

    Read the article

  • The best way to learn how to extend Orchard

    - by Bertrand Le Roy
    We do have tutorials on the Orchard site, but we can't cover all topics, and recently I've found myself more and more responding to forum questions by pointing people to an existing module that solves a problem similar to the one the question was about. I really like this way of learning by example and from the expertise of others. This is one of the reasons why we decided that modules would, by default, come in source code form that we compile dynamically: it makes them easy to understand and easier to modify for your own purposes. Hackability FTW! But how do you crack open a module and look at what's inside? You can do it in two different ways. First, you can just install the module from the gallery, directly from your Orchard instance's admin panel. Once you've done that, you can just look into your Modules directory under the web site. There is now a subfolder with the name of the new module that contains a .csproj file that you can open in Visual Studio or add to your Orchard solution. Second, you can simply download the package (it's NuGet) and rename it to a .zip extension. NuGet being based on Zip, this will open just fine in Windows Explorer. What you want to dig into is the Content/Modules/[NameOfTheModule] folder, which is where the actual code is. Thanks to Jason Gaylord for the idea for this post.

    Read the article

  • Can too much abstraction be bad?

    - by m3th0dman
    As programmers, I feel our goal is to provide good abstractions over the given domain model and business logic. But where should abstraction stop? How do you make the trade-off between abstraction and all its benefits (flexibility, ease of change, etc.) and ease of understanding the code and all its benefits? I believe I tend to write overly abstracted code, and I don't know how good that is; I often write it like some kind of micro-framework, which consists of two parts: (1) micro-modules which are hooked up in the micro-framework: these modules are easy to understand, develop, and maintain as single units, and this code basically represents the code that actually does the functional work described in the requirements; (2) connecting code, and here, I believe, lies the problem. This code tends to be complicated because it is sometimes very abstracted and is hard to understand at first; this arises from the fact that it is pure abstraction, with the real work and business logic performed in the code described in (1); for this reason, this code is not expected to change once tested. Is this a good approach to programming? That is, having the changing code very fragmented into many modules and very easy to understand, and the non-changing code very complex from the abstraction point of view? Should all the code be uniformly complex (that is, the code in (1) more complex and interlinked and the code in (2) simpler), so that anybody looking through it can understand it in a reasonable amount of time, but change is expensive? Or is the approach presented above good, where the "changing code" is very easy to understand, debug, and change, and the "linking code" is somewhat difficult? Note: this is not about code readability! Both the code in (1) and the code in (2) are readable, but the code in (2) comes with more complex abstractions while the code in (1) comes with simple abstractions.
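
    A hypothetical miniature of the two-part structure described above (all names invented for illustration): the functional micro-modules are trivial to read and expected to change, while the connecting code is a small generic abstraction that rarely changes once tested:

        # Part 1: functional micro-modules; easy to read, expected to change.
        def validate_order(order):
            if not order.get("items"):
                raise ValueError("empty order")
            return order

        def price_order(order):
            order["total"] = sum(item["price"] for item in order["items"])
            return order

        # Part 2: connecting code; pure abstraction, knows nothing about orders.
        def make_pipeline(*steps):
            def run(payload):
                for step in steps:
                    payload = step(payload)
                return payload
            return run

        process = make_pipeline(validate_order, price_order)
        print(process({"items": [{"price": 9.5}, {"price": 3.0}]})["total"])  # 12.5

    Even in this toy form, the trade-off shows: each step is obvious in isolation, but a reader must understand make_pipeline before they can see how anything fits together.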

    Read the article

  • Partner Webcast – Oracle Weblogic 12c for New Projects - 07 Nov 2013

    - by Thanos Terentes Printzios
    Fast-growing organizations need to stay agile in the face of changing customer, business, or market requirements. Oracle WebLogic Server 12c is the industry's best application server platform, allowing you to quickly develop and deploy reliable, secure, scalable, and manageable enterprise Java EE applications. WebLogic Server Java EE applications are based on standardized, modular components. WebLogic Server provides a complete set of services for those modules and handles many details of application behavior automatically, without requiring programming. New project applications are created by Java programmers, Web designers, and application assemblers. Programmers and designers create modules that implement the business and presentation logic for the application. Application assemblers assemble the modules into applications that are ready to deploy on WebLogic Server. Build and run high-performance enterprise applications and services with Oracle WebLogic Server 12c, available in three editions to meet the needs of traditional and cloud IT environments. Join us in this webcast, as we show you how WebLogic Server 12c helps you build and deploy enterprise Java EE applications, with support for new features for lowering the cost of operations, improving performance, and enhancing scalability. Agenda: Oracle WebLogic Server Introduction; Application Development on WebLogic Using Java EE; Overview of the Application Deployment Process; Monitoring Application Performance; Q&A. November 7th, 2013, 9am UTC / 11am EET. Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. REGISTER NOW. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com

    Read the article

  • wrong kernel running after install

    - by ticktockhouse
    I have installed Ubuntu 14.04 from UNetbootin. When it reboots after the install, uname -r says: 3.5.0-17-generic ... this means that no modules have been loaded for the kernel that is actually installed (3.13.0-32-generic). Does anyone know why this kernel would be installed by the install process? Is it an artifact of using UNetbootin? Booting into the UNetbootin image gives the correct kernel, and thus the modules load. Knowing why is one thing, but I'm not sure how to remedy it now. Because no modules are loaded, I can't connect to the network or connect a USB drive. I've tried update-grub, which seems to find the correct kernel, but it doesn't seem to tell the system to boot from it. I've also tried selecting the kernel at boot time using "Advanced Options for Ubuntu", and the 3.13.x kernel is the only one listed. Selecting it led to the 3.5.x kernel stubbornly loading. I'm a fairly accomplished sysadmin, but this one has me flummoxed :) Can anyone help?

    Read the article

  • How to Implement Project Type "Copy", "Move", "Rename", and "Delete"

    - by Geertjan
    You've followed the NetBeans Project Type Tutorial and now you'd like to let the user copy, move, rename, and delete the projects conforming to your project type. When they right-click a project, they should see the relevant menu items and those menu items should provide dialogs for user interaction, followed by event handling code to deal with the current operation. Right now, at the end of the tutorial, the "Copy" and "Delete" menu items are present but disabled, while the "Move" and "Rename" menu items are absent: The NetBeans Project API provides a built-in mechanism out of the box that you can leverage for project-level "Copy", "Move", "Rename", and "Delete" actions. All the functionality is there for you to use, while all that you need to do is a bit of enablement and configuration, which is described below. To get started, read the following from the NetBeans Project API: http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/ActionProvider.html http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/CopyOperationImplementation.html http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/MoveOrRenameOperationImplementation.html http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/DeleteOperationImplementation.html Now, let's do some work. For each of the menu items we're interested in, we need to do the following: Provide enablement and invocation handling in an ActionProvider implementation. Provide appropriate OperationImplementation classes. Add the new classes to the Project Lookup. Make the Actions visible on the Project Node. Run the application and verify the Actions work as you'd like. Here we go: Create an ActionProvider. Here you specify the Actions that should be supported, the conditions under which they should be enabled, and what should happen when they're invoked, using lots of default code that lets you reuse the functionality provided by the NetBeans Project API: class CustomerActionProvider implements ActionProvider { @Override public String[] getSupportedActions() { return new String[]{ ActionProvider.COMMAND_RENAME, ActionProvider.COMMAND_MOVE, ActionProvider.COMMAND_COPY, ActionProvider.COMMAND_DELETE }; } @Override public void invokeAction(String string, Lookup lkp) throws IllegalArgumentException { if (string.equalsIgnoreCase(ActionProvider.COMMAND_RENAME)) { DefaultProjectOperations.performDefaultRenameOperation( CustomerProject.this, ""); } if (string.equalsIgnoreCase(ActionProvider.COMMAND_MOVE)) { DefaultProjectOperations.performDefaultMoveOperation( CustomerProject.this); } if (string.equalsIgnoreCase(ActionProvider.COMMAND_COPY)) { DefaultProjectOperations.performDefaultCopyOperation( CustomerProject.this); } if (string.equalsIgnoreCase(ActionProvider.COMMAND_DELETE)) { DefaultProjectOperations.performDefaultDeleteOperation( CustomerProject.this); } } @Override public boolean isActionEnabled(String command, Lookup lookup) throws IllegalArgumentException { if ((command.equals(ActionProvider.COMMAND_RENAME))) { return true; } else if ((command.equals(ActionProvider.COMMAND_MOVE))) { return true; } else if ((command.equals(ActionProvider.COMMAND_COPY))) { return true; } else if ((command.equals(ActionProvider.COMMAND_DELETE))) { return true; } return false; } } Importantly, to round off this step, add "new CustomerActionProvider()" to the "getLookup" method of the project. 
If you were to run the application right now, all the Actions we're interested in would be enabled (if they are visible, as described in step 4 below) but when you invoke any of them you'd get an error message because each of the DefaultProjectOperations above looks in the Lookup of the Project for the presence of an implementation of a class for handling the operation. That's what we're going to do in the next step. Provide Implementations of Project Operations. For each of our operations, the NetBeans Project API lets you implement classes to handle the operation. The dialogs for interacting with the project are provided by the NetBeans project system, but what happens with the folders and files during the operation can be influenced via the operations. Below are the simplest possible implementations, i.e., here we assume we want nothing special to happen. Each of the below needs to be in the Lookup of the Project in order for the operation invocation to succeed. private final class CustomerProjectMoveOrRenameOperation implements MoveOrRenameOperationImplementation { @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); } @Override public List<FileObject> getDataFiles() { return new ArrayList<FileObject>(); } @Override public void notifyRenaming() throws IOException { } @Override public void notifyRenamed(String nueName) throws IOException { } @Override public void notifyMoving() throws IOException { } @Override public void notifyMoved(Project original, File originalPath, String nueName) throws IOException { } } private final class CustomerProjectCopyOperation implements CopyOperationImplementation { @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); } @Override public List<FileObject> getDataFiles() { return new ArrayList<FileObject>(); } @Override public void notifyCopying() throws IOException { } @Override public void notifyCopied(Project prjct, File file, String string) throws IOException { } } private final class CustomerProjectDeleteOperation implements DeleteOperationImplementation { @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); } @Override public List<FileObject> getDataFiles() { return new ArrayList<FileObject>(); } @Override public void notifyDeleting() throws IOException { } @Override public void notifyDeleted() throws IOException { } } Also make sure to put the above methods into the Project Lookup. Check the Lookup of the Project. The "getLookup()" method of the project should now include the classes you created above, as shown in bold below: @Override public Lookup getLookup() { if (lkp == null) { lkp = Lookups.fixed(new Object[]{ this, new Info(), new CustomerProjectLogicalView(this), new CustomerCustomizerProvider(this), new CustomerActionProvider(), new CustomerProjectMoveOrRenameOperation(), new CustomerProjectCopyOperation(), new CustomerProjectDeleteOperation(), new ReportsSubprojectProvider(this), }); } return lkp; } Make Actions Visible on the Project Node. The NetBeans Project API gives you a number of CommonProjectActions, including for the actions we're dealing with. 
Make sure the items in bold below are in the "getActions" method of the project node: @Override public Action[] getActions(boolean arg0) { return new Action[]{ CommonProjectActions.newFileAction(), CommonProjectActions.copyProjectAction(), CommonProjectActions.moveProjectAction(), CommonProjectActions.renameProjectAction(), CommonProjectActions.deleteProjectAction(), CommonProjectActions.customizeProjectAction(), CommonProjectActions.closeProjectAction() }; } Run the Application. When you run the application, you should see this: Let's now try out the various actions: Copy. When you invoke the Copy action, you'll see the dialog below. Provide a new project name and location and then the copy action is performed when the Copy button is clicked below: The message you see above, in red, might not be relevant to your project type. When you right-click the application and choose Branding, you can find the string in the Resource Bundles tab, as shown below: However, note that the message will be shown in red, no matter what the text is, hence you can really only put something like a warning message there. If you have no text at all, it will also look odd.If the project has subprojects, the copy operation will not automatically copy the subprojects. Take a look here and here for similar more complex scenarios. Move. When you invoke the Move action, the dialog below is shown: Rename. The Rename Project dialog below is shown when you invoke the Rename action: I tried it and both the display name and the folder on disk are changed. Delete. When you invoke the Delete action, you'll see this dialog: The checkbox is not checkable, in the default scenario, and when the dialog above is confirmed, the project is simply closed, i.e., the node hierarchy is removed from the application. However, if you truly want to let the user delete the project on disk, pass the Project to the DeleteOperationImplementation and then add the children of the Project you want to delete to the getDataFiles method: private final class CustomerProjectDeleteOperation implements DeleteOperationImplementation { private final CustomerProject project; private CustomerProjectDeleteOperation(CustomerProject project) { this.project = project; } @Override public List<FileObject> getDataFiles() { List<FileObject> files = new ArrayList<FileObject>(); FileObject[] projectChildren = project.getProjectDirectory().getChildren(); for (FileObject fileObject : projectChildren) { addFile(project.getProjectDirectory(), fileObject.getNameExt(), files); } return files; } private void addFile(FileObject projectDirectory, String fileName, List<FileObject> result) { FileObject file = projectDirectory.getFileObject(fileName); if (file != null) { result.add(file); } } @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); } @Override public void notifyDeleting() throws IOException { } @Override public void notifyDeleted() throws IOException { } } Now the user will be able to check the checkbox, causing the method above to be called in the DeleteOperationImplementation: Hope this answers some questions or at least gets the discussion started. Before asking questions about this topic, please take the steps above and only then attempt to apply them to your own scenario. 
Useful implementations to look at: http://kickjava.com/src/org/netbeans/modules/j2ee/clientproject/AppClientProjectOperations.java.htm https://kenai.com/projects/nbandroid/sources/mercurial/content/project/src/org/netbeans/modules/android/project/AndroidProjectOperations.java

    Read the article

  • dpkg reporting uninstalled kernels as installed

    - by Tony Martin
    I have run the following command to remove old kernels:

        dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' | xargs sudo apt-get -y purge

    and only the current kernel is now installed, which I have confirmed in Synaptic and by checking my boot partition. However, when I run:

        dpkg --list | grep linux-image

    I get the following response:

        rc  linux-image-3.13.0-30-generic        3.13.0-30.55   amd64  Linux kernel image for version 3.13.0 on 64 bit x86 SMP
        rc  linux-image-3.13.0-32-generic        3.13.0-32.57   amd64  Linux kernel image for version 3.13.0 on 64 bit x86 SMP
        ii  linux-image-3.13.0-34-generic        3.13.0-34.60   amd64  Linux kernel image for version 3.13.0 on 64 bit x86 SMP
        rc  linux-image-extra-3.13.0-30-generic  3.13.0-30.55   amd64  Linux kernel extra modules for version 3.13.0 on 64 bit x86 SMP
        rc  linux-image-extra-3.13.0-32-generic  3.13.0-32.57   amd64  Linux kernel extra modules for version 3.13.0 on 64 bit x86 SMP
        ii  linux-image-extra-3.13.0-34-generic  3.13.0-34.60   amd64  Linux kernel extra modules for version 3.13.0 on 64 bit x86 SMP
        ii  linux-image-generic                  3.13.0.34.40   amd64  Generic Linux kernel image

    Probably not a problem, but just wondering why versions -30 and -32 are reported as present. Can it be rectified? TIA

    Read the article

  • What's the right/standard way of achieving separation of concerns?

    - by Ghanima
    Some background: I want to start developing games, and taking some of the advice given on this site, I've started with something simple and familiar, such as Pong, Tetris, etc. I want to take as much time as needed to make sure that I have the basics right before moving on to something bigger. I have medium programming experience, but I realize making games is a different thing. I find myself wondering about many things, like: should this be in a separate class? Should this module handle this stuff, or is it better to let other modules have that kind of functionality? For example, the bouncing of a ball in Pong is right now handled in the ball module, but maybe it would be better if some other module did it. Right now I have different modules: one for the graphics, one for the game logic, and others for the objects (depending on the kind of movement required; not all the objects are alike). I know I am asking a lot; any tips you have will be very much appreciated. Short question: What's the right or standard way of separating the modules? What have you found most effective? Is it enough to just keep the drawing (graphics) and the logic separate? Is it necessary to have a lot of classes? (For example, for the objects in the game, to handle the movement, etc.)
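
    One common split, sketched below with invented names: objects hold plain state, a logic module owns the rules (such as the bounce), and the graphics module only renders what it is given. This is a hypothetical illustration of one option, not the one right answer:

        # Hypothetical sketch: state, logic, and drawing kept in separate modules.

        class Ball:
            """Plain state: no collision rules, no drawing."""
            def __init__(self, x, y, vx, vy):
                self.x, self.y, self.vx, self.vy = x, y, vx, vy

        def step_physics(ball, field_height):
            """Logic module: owns movement and the bounce rule."""
            ball.x += ball.vx
            ball.y += ball.vy
            if ball.y <= 0 or ball.y >= field_height:
                ball.vy = -ball.vy  # bounce off the top/bottom walls

        def draw(ball):
            """Graphics module: renders state, decides nothing."""
            print(f"ball at ({ball.x}, {ball.y})")

        ball = Ball(x=10, y=3, vx=1, vy=-2)
        for _ in range(3):
            step_physics(ball, field_height=40)
            draw(ball)

    Keeping the bounce rule in the logic module (rather than in Ball itself) means the ball stays reusable in games with different collision rules, at the cost of one more module to read.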

    Read the article

  • Ubuntu 10.04: boot error for custom compiled kernel - gave up waiting for root device

    - by atharva
    I have installed Lucid on my Lenovo laptop (Y410 series, x86 platform) and it is working fine. Now I have compiled kernel 2.6.37, downloaded from the kernel tree. I followed the usual procedure for compiling a kernel (make menuconfig, make, make modules, etc.). Then I created the initrd image using mkinitramfs and updated GRUB using the update-grub command. update-grub detects the initrd image of the compiled kernel. However, when I boot from this kernel it gives me the following error:

        Gave up waiting for root device. Common problems:
        - Boot args (cat /proc/cmdline)
          - Check rootdelay= (did the system wait long enough?)
          - Check root= (did the system wait for the right device?)
        - Missing modules (cat /proc/modules; ls /dev)
        ALERT! root=UUID=/... does not exist

    and then it falls to an initramfs prompt. I have tried the following solutions discussed in different Ubuntu forums: (1) disable UUID and point root=/dev/sda8 (sda8 is where my kernel image resides, both the default kernel and the compiled one) in /etc/default/grub; (2) compile the kernel with CONFIG_DEVTMPFS=y, as suggested here. Still I am unable to boot from the compiled kernel. Could someone please suggest a solution?

    Read the article

  • Oracle Weblogic 12c for New Projects–Webcast November 7th 2013

    - by JuergenKress
    Fast-growing organizations need to stay agile in the face of changing customer, business, or market requirements. Oracle WebLogic Server 12c is the industry's best application server platform, allowing you to quickly develop and deploy reliable, secure, scalable, and manageable enterprise Java EE applications. WebLogic Server Java EE applications are based on standardized, modular components. WebLogic Server provides a complete set of services for those modules and handles many details of application behavior automatically, without requiring programming. New project applications are created by Java programmers, Web designers, and application assemblers. Programmers and designers create modules that implement the business and presentation logic for the application. Application assemblers assemble the modules into applications that are ready to deploy on WebLogic Server. Build and run high-performance enterprise applications and services with Oracle WebLogic Server 12c, available in three editions to meet the needs of traditional and cloud IT environments. Join us in this webcast, as we show you how WebLogic Server 12c helps you build and deploy enterprise Java EE applications, with support for new features for lowering the cost of operations, improving performance, and enhancing scalability. Agenda: Oracle WebLogic Server Introduction; Application Development on WebLogic Using Java EE; Overview of the Application Deployment Process; Monitoring Application Performance; Q&A. November 7th, 2013, 9am UTC / 11am EET. REGISTER NOW. WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community; please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Technorati Tags: education, WebLogic, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • How to make the internal subwoofer work on an Asus G73JW?

    - by CodyLoco
    I have an Asus G73JW laptop which has an internal subwoofer built in. Currently, the system detects the internal speakers as a 2.0 system (or I can change to 4.0, which is the only other option). I found a bug report here: https://bugs.launchpad.net/ubuntu/+source/alsa-driver/+bug/673051 which discusses the bug; according to it, a fix was sent upstream back at the end of 2010. I would have thought this would have made it into 12.04, but I guess not? I tried following the link given at the very bottom to install the latest ALSA drivers, here: https://wiki.ubuntu.com/Audio/InstallingLinuxAlsaDriverModules. However, I keep running into an error when trying to install:

        sudo apt-get install linux-alsa-driver-modules-$(uname -r)
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package linux-alsa-driver-modules-3.2.0-24-generic
        E: Couldn't find any package by regex 'linux-alsa-driver-modules-3.2.0-24-generic'

    I believe I have added the repository correctly:

        sudo add-apt-repository ppa:ubuntu-audio-dev/ppa
        [sudo] password for codyloco:
        You are about to add the following PPA to your system:
        This PPA will be used to provide testing versions of packages for supported Ubuntu releases.
        More info: https://launchpad.net/~ubuntu-audio-dev/+archive/ppa
        Press [ENTER] to continue or ctrl-c to cancel adding it
        Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.7apgZoNrqK --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80/ --recv 4E9F485BF943EF0EABA10B5BD225991A72B194E5
        gpg: requesting key 72B194E5 from hkp server keyserver.ubuntu.com
        gpg: key 72B194E5: public key "Launchpad Ubuntu Audio Dev team PPA" imported
        gpg: Total number processed: 1
        gpg: imported: 1 (RSA: 1)

    I also ran an update (following the instructions in the fix above). Any ideas?

    Read the article

  • Is there a better way to organize my module tests that avoids an explosion of new source files?

    - by luser droog
    I've got a neat (so I thought) way of having each of my modules produce a unit-test executable if compiled with the -DTESTMODULE flag. This flag guards a main() function that can access all static data and functions in the module, without #including a C file. From the README:

    -- Modules --
    The various modules were written and tested separately before being coupled together to achieve the necessary basic functionality. Each module retains its unit test, its main() function, guarded by #ifdef TESTMODULE. `make test` will compile and execute all the unit tests, producing copious output, but importantly exiting with an appropriate success or failure code, so the `make test` command will fail if any of the tests fail.

    Module TOC

        test  obj       src       header    structures                CONSTANTS
        ----  ---       ---       ---       --------------------
        m     m.o       m.c       m.h       mfile mtab                TABSZ
        s     s.o       s.c       s.h       stack                     STACKSEGSZ
        v     v.o       v.c       v.h       saverec_
              f.o       f.c       f.h       file
        ob    ob.o      ob.c      ob.h      object
        ar    ar.o      ar.c      ar.h      array
        st    st.o      st.c      st.h      string
        di    di.o      di.c      di.h      dichead dictionary
        nm    nm.o      nm.c      nm.h      name
        gc    gc.o      gc.c      gc.h      garbage collector
        itp             itp.c     itp.h     context
              osunix.o  osunix.c  osunix.h  unix-dependent functions

    Each test target is compiled by a tricky bit of makefile,

        m: m.c ob.h ob.o err.o $(CORE) itp.o $(OP)
        	cc $(CFLAGS) -DTESTMODULE $(LDLIBS) -o $@ $< err.o ob.o s.o ar.o st.o v.o di.o gc.o nm.o itp.o $(OP) f.o

    where the module is compiled with its own C file plus every other object file except itself. But it's creating difficulties for the kindly programmer who offered to write the Autotools files for me. So the obvious way to make it "less weird" would be to bust out all the main() functions into separate source files. But, but ... do I gotta?
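
    For readers who haven't seen the pattern, here is a minimal hedged sketch of one such module (the file name and test body are invented, loosely echoing the stack module in the TOC above). The test main() lives in the module's own source file, so it can reach the module's static internals, and it only exists when built with -DTESTMODULE:

        /* s.c - hypothetical stack module illustrating the TESTMODULE idiom. */
        #include <stdio.h>

        #define STACKSEGSZ 16

        static int seg[STACKSEGSZ];   /* static data: invisible outside s.c */
        static int top = 0;

        static int push(int v) {      /* static helper: also module-private */
            if (top == STACKSEGSZ) return -1;
            seg[top++] = v;
            return 0;
        }

        #ifdef TESTMODULE
        /* Unit test, compiled only by e.g. `cc -DTESTMODULE -o s s.c`.
           Because it lives in this file, it can inspect static state directly. */
        int main(void) {
            if (push(42) != 0 || seg[0] != 42 || top != 1) {
                fprintf(stderr, "s: push test failed\n");
                return 1;   /* nonzero exit makes `make test` fail */
            }
            puts("s: all tests passed");
            return 0;
        }
        #endif

    The alternative the question contemplates, busting each main() out into its own test source file, trades this direct access to static internals for a more conventional layout that tools like Autotools handle without tricks.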

    Read the article

  • 2012 EC Election Results

    - by heathervc
    The 2012 Fall Executive Committee Election process is now complete. The ballot closed at midnight Pacific time on Monday, 29 October. Congratulations to Cinterion Wireless Modules GmbH, Credit Suisse, Fujitsu Limited, and Hewlett-Packard (all four candidates were ratified), and to CloudBees and London Java Community (the two elected candidates) as the new and re-elected merged EC Members. For more information regarding the merged EC, please refer to JSR 355 and the JCP 2.9 Process Document.

    Ratified Seats: Cinterion Wireless Modules GmbH, Credit Suisse, Fujitsu Limited and Hewlett-Packard
    Open Election Seats: CloudBees and London Java Community

    Newly elected EC Members take their seats on Tuesday, 13 November 2012.

    Detailed Election Results:
    Number of Eligible Voters: 1131
    Percent of Eligible Members Casting Votes: 23.70%

    Ratified Seats:
        Candidate                         Yes Votes (%)   No Votes (%)   Abstentions
        Cinterion Wireless Modules GmbH   156 (76)        48 (24)        64
        Credit Suisse                     205 (85)        35 (15)        28
        Fujitsu Limited                   195 (85)        35 (15)        38
        Hewlett-Packard                   182 (81)        44 (19)        42

    Open Election Seats: The top two candidates have been elected.
        Candidate                 Votes (%)
        Cisco Systems             37 (7)
        CloudBees                 101 (20)
        Giuseppe Dell'Abate       6 (1)
        Liferay, Inc.             35 (7)
        London Java Community     164 (33)
        MoroccoJUG                46 (9)
        North Sixty-One Ltd       26 (5)
        Software AG               19 (4)
        ZeroTurnaround            64 (13)
        None of the above         5 (1)

    For more information about the candidates who ran for the 2012 Executive Committee Election, please visit the 2012 Executive Committee Elections Nominees page.

    Read the article

  • CodeIgniter Error Log Info + Errors

    - by fatnjazzy
    Hi, is there a way to save Info + Errors in the log, without Debug? How come the debug level appears along with info? If I want to log the info message "Account id 4345 was deleted by Admin", why do I need to see all of these:

        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Config Class Initialized
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Hooks Class Initialized
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 URI Class Initialized
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Router Class Initialized
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Output Class Initialized
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Input Class Initialized
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Global POST and COOKIE data sanitized
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Language Class Initialized
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Loader Class Initialized
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Config file loaded: config/safe_charge.php
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Config file loaded: config/web_fx.php
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Helper loaded: loadutils_helper
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Helper loaded: objectsutils_helper
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Helper loaded: logutils_helper
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Helper loaded: password_helper
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Database Driver Class Initialized
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 cURL Class Initialized
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Language Class Initialized
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Config Class Initialized
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Account MX_Controller Initialized
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 File loaded: ./modules/accounts/models/pending_account_model.php
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Model Class Initialized
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 File loaded: ./modules/accounts/models/proccess_accounts_model.php
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Model Class Initialized
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 File loaded: ./modules/accounts/models/web_fx_model.php
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Model Class Initialized
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 File loaded: ./modules/accounts/models/trader_account_type_spreads.php
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Model Class Initialized
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 File loaded: ./modules/accounts/models/trader_accounts.php
        DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Model Class Initialized

    Thanks

    Read the article

  • How can I get sikuli-ide to work?

    - by ayckoster
    I installed Sikuli IDE with sudo apt-get install sikuli-ide. Everything was fine until I tried to start it from the terminal. I typed sikuli-ide, but the only response I got was:

        [info] locale: en_US

    The application was not started; furthermore, there is no desktop file, and sikuli-ide does not show up in Dash Home. I guess there is something wrong with the package. I run Ubuntu 12.10 64-bit. I then tried installing it (Sikuli-X-1.0rc3 (r905)-linux-x86_64.zip) from their page; now the IDE starts, but when I try to execute a simple script I get the following error:

        [error] Stopped
        [error] An error occurs at line 1
        [error] Error message:
        Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "/home/ayckoster/opt/Sikuli-IDE/sikuli-script.jar/Lib/sikuli/__init__.py", line 3, in <module>
          File "/home/ayckoster/opt/Sikuli-IDE/sikuli-script.jar/Lib/sikuli/Sikuli.py", line 22, in <module>
        java.lang.UnsatisfiedLinkError: /home/ayckoster/opt/Sikuli-IDE/libs/libVisionProxy.so: libml.so.2.1: cannot open shared object file: No such file or directory
            at java.lang.ClassLoader$NativeLibrary.load(Native Method)
            at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1935)
            at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1860)
            at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1821)
            at java.lang.Runtime.load0(Runtime.java:792)
            at java.lang.System.load(System.java:1059)
            at com.wapmx.nativeutils.jniloader.NativeLoader.loadLibrary(NativeLoader.java:44)
            at org.sikuli.script.Finder.<clinit>(Finder.java:33)
            at java.lang.Class.forName0(Native Method)
            at java.lang.Class.forName(Class.java:264)
            at org.python.core.Py.loadAndInitClass(Py.java:895)
            at org.python.core.Py.findClassInternal(Py.java:830)
            at org.python.core.Py.findClassEx(Py.java:881)
            at org.python.core.packagecache.SysPackageManager.findClass(SysPackageManager.java:133)
            at org.python.core.packagecache.PackageManager.findClass(PackageManager.java:28)
            at org.python.core.packagecache.SysPackageManager.findClass(SysPackageManager.java:122)
            at org.python.core.PyJavaPackage.__findattr_ex__(PyJavaPackage.java:137)
            at org.python.core.PyObject.__findattr__(PyObject.java:863)
            at org.python.core.imp.import_name(imp.java:849)
            at org.python.core.imp.importName(imp.java:884)
            at org.python.core.ImportFunction.__call__(__builtin__.java:1220)
            at org.python.core.PyObject.__call__(PyObject.java:357)
            at org.python.core.__builtin__.__import__(__builtin__.java:1173)
            at org.python.core.imp.importFromAs(imp.java:978)
            at org.python.core.imp.importFrom(imp.java:954)
            at sikuli.Sikuli$py.f$0(/home/ayckoster/opt/Sikuli-IDE/sikuli-script.jar/Lib/sikuli/Sikuli.py:211)
            at sikuli.Sikuli$py.call_function(/home/ayckoster/opt/Sikuli-IDE/sikuli-script.jar/Lib/sikuli/Sikuli.py)
            at org.python.core.PyTableCode.call(PyTableCode.java:165)
            at org.python.core.PyCode.call(PyCode.java:18)
            at org.python.core.imp.createFromCode(imp.java:386)
            at org.python.core.util.importer.importer_load_module(importer.java:109)
            at org.python.modules.zipimport.zipimporter.zipimporter_load_module(zipimporter.java:161)
            at org.python.modules.zipimport.zipimporter$zipimporter_load_module_exposer.__call__(Unknown Source)
            at org.python.core.PyBuiltinMethodNarrow.__call__(PyBuiltinMethodNarrow.java:47)
            at org.python.core.imp.loadFromLoader(imp.java:513)
            at org.python.core.imp.find_module(imp.java:467)
            at org.python.core.PyModule.impAttr(PyModule.java:100)
            at org.python.core.imp.import_next(imp.java:715)
            at org.python.core.imp.import_name(imp.java:824)
            at org.python.core.imp.importName(imp.java:884)
            at org.python.core.ImportFunction.__call__(__builtin__.java:1220)
            at org.python.core.PyObject.__call__(PyObject.java:357)
            at org.python.core.__builtin__.__import__(__builtin__.java:1173)
            at org.python.core.imp.importAll(imp.java:998)
            at sikuli$py.f$0(/home/ayckoster/opt/Sikuli-IDE/sikuli-script.jar/Lib/sikuli/__init__.py:3)
            at sikuli$py.call_function(/home/ayckoster/opt/Sikuli-IDE/sikuli-script.jar/Lib/sikuli/__init__.py)
            at org.python.core.PyTableCode.call(PyTableCode.java:165)
            at org.python.core.PyCode.call(PyCode.java:18)
            at org.python.core.imp.createFromCode(imp.java:386)
            at org.python.core.util.importer.importer_load_module(importer.java:109)
            at org.python.modules.zipimport.zipimporter.zipimporter_load_module(zipimporter.java:161)
            at org.python.modules.zipimport.zipimporter$zipimporter_load_module_exposer.__call__(Unknown Source)
            at org.python.core.PyBuiltinMethodNarrow.__call__(PyBuiltinMethodNarrow.java:47)
            at org.python.core.imp.loadFromLoader(imp.java:513)
            at org.python.core.imp.find_module(imp.java:467)
            at org.python.core.imp.import_next(imp.java:713)
            at org.python.core.imp.import_name(imp.java:824)
            at org.python.core.imp.importName(imp.java:884)
            at org.python.core.ImportFunction.__call__(__builtin__.java:1220)
            at org.python.core.PyObject.__call__(PyObject.java:357)
            at org.python.core.__builtin__.__import__(__builtin__.java:1173)
            at org.python.core.imp.importAll(imp.java:998)
            at org.python.pycode._pyx2.f$0(<string>:1)
            at org.python.pycode._pyx2.call_function()
            at org.python.core.PyTableCode.call(PyTableCode.java:165)
            at org.python.core.PyCode.call(PyCode.java:18)
            at org.python.core.Py.runCode(Py.java:1261)
            at org.python.core.Py.exec(Py.java:1305)
            at org.python.util.PythonInterpreter.exec(PythonInterpreter.java:206)
            at org.sikuli.script.ScriptRunner.runPython(ScriptRunner.java:61)
            at org.sikuli.ide.SikuliIDE$ButtonRun.runPython(SikuliIDE.java:1572)
            at org.sikuli.ide.SikuliIDE$ButtonRun$1.run(SikuliIDE.java:1677)
        java.lang.UnsatisfiedLinkError: java.lang.UnsatisfiedLinkError: /home/ayckoster/opt/Sikuli-IDE/libs/libVisionProxy.so: libml.so.2.1: cannot open shared object file: No such file or directory

    If I try to use the click() method from the GUI, it fails. So I created my own click method, and it looks like this: (screenshot omitted). This cannot be executed and produces the error above.

    Read the article

  • Adding Blog to Your Orchard Website

    - by hajan
    One of the common features in today's content management systems is the ability to host your own blog on your website, and a blog is one of the most frequently needed features for various types of websites. Out of the box, Orchard gives you this, so you can create your own blog on your Orchard website in a pretty easy way. Besides the fact that you can very easily create your own blog, Orchard also gives you some extra blogging-related features, such as connecting third-party client applications (e.g. Windows Live Writer) to your blog so that you can publish blog posts remotely. You can already find all the information provided in this blog post on the http://orchardproject.net website, but I thought it would be nice to summarize it in one blog post. I assume you have already installed Orchard and you are already familiar with its environment and administration dashboard. If you haven't, please read this blog post first.

    CREATE YOUR BLOG

    First of all, go to the Orchard Administration Dashboard and click on Blog in the left menu. Once you are there, you will see the following screen. Fill the form with all the needed data, as in the following example, and click Save. Right after that, you should see the following screen. Click New Post and add your first post. After that, go to the homepage (click Your Site in the top-left corner) and you should see the Blog link in your menu. After clicking on Blog, you will be directed to the following page. Once you click on My First Post, you will see that your blog already supports commenting (you can enable/disable this from the Administration Dashboard in your blog settings): an added comment, adding a new comment, submitting a comment. So, by following these steps, you have already set up your blog in your Orchard website.

    CONNECT YOUR BLOG WITH WINDOWS LIVE WRITER

    Since many bloggers prepare their blog posts using third-party client applications, like Windows Live Writer, it's very useful if your blog engine can work with these applications and let them post and publish remotely. The client applications use the XmlRpc interface in order to manage and publish blogs remotely. What is great about Orchard is that it gives you the XmlRpc and Remote Blog Publishing modules out of the box. All you need to do is enable these features from the Modules section in your Orchard Administration Dashboard. So, let's go through the steps of enabling your previously created blog to work with third-party blogging clients.

    1. Go to the Administration Dashboard and click Modules. After clicking Modules, you will see the following page. As you can see, you already have the Remote Blog Publishing and XmlRpc features for Content Publishing, but both are disabled by default. If you click Enable only on Remote Blog Publishing, you will see both of them enabled at once, since they are dependent features. After you click Enable, if everything is OK, the following message should be displayed.

    So, now we have the features enabled and ready... The next thing you need to do is open Windows Live Writer. First, open Windows Live Writer, and in your Blog Accounts, click on Add blog account. In the next window, choose Other services. After that, click on the Blog link in your Orchard website and copy the URL; my URL (on the localhost development server) is: http://localhost:8191/blog. Then add the login credentials you use to log in to Orchard and click Next. After that, if you have set everything up successfully, Windows Live Writer will do the rest. Once it finishes, you will have a window where you can specify the name of the blog you have just connected Windows Live Writer to... Then you are done. You can see that Windows Live Writer has detected the Orchard theme I am using. After you finish with the blog post, click Publish and refresh the Blog page in your Orchard website. You see, we have the blog post posted directly from Windows Live Writer to my Orchard blog. I hope this was a useful blog post. Regards, Hajan

    Reference and other useful posts: Build incredible content-driven websites using Orchard CMS; Create blog on your site with Orchard CMS; Blogging using Windows Live Writer in your Orchard CMS Blog; Orchard Website

    Read the article

  • ASP.NET Routing not working on IIS 7.0

    - by Rick Strahl
    I ran into a nasty little problem today when deploying an application using ASP.NET 4.0 Routing to my live server. The application and its routing were working just fine on my dev machine (Windows 7 and IIS 7.5), but when I deployed (Windows 2008 R1 and IIS 7.0), routing would just not work. Every time I hit a routed URL, IIS would just throw up a 404 error. This is an IIS error, not an ASP.NET error, so this doesn't actually come from ASP.NET's routing engine but from IIS's handling of extensionless URLs. Note that it's clearly falling through all the way to the StaticFile handler, which is the last handler to fire in the typical IIS handler list. In other words, IIS is trying to parse the extensionless URL itself, not firing it into ASP.NET, and failing. As I mentioned, on my local machine this all worked fine, and to make sure local and live setups matched, I re-copied my web.config, double-checked handler mappings in IIS, and re-copied the actual application assemblies to the server. It all looked exactly matched. However, no workey on the server with IIS 7.0!!! Finally, totally by chance, I remembered the runAllManagedModulesForAllRequests attribute flag on the modules key in web.config and set it to true:

        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true">
            <add name="ScriptCompressionModule" type="Westwind.Web.ScriptCompressionModule,Westwind.Web" />
          </modules>
        </system.webServer>

    And lo and behold, routing started working on the live server and IIS 7.0! This seems really obvious now, of course, but the really tricky thing is that on IIS 7.5 this key is not necessary. So on my Windows 7 machine, ASP.NET Routing was working just fine without the key set; on IIS 7.0 on my live server, the same missing setting was not working. On IIS 7.0 this key must be present or routing will not work. Oddly, on IIS 7.5 it appears that you can't even turn off the behavior: setting runAllManagedModulesForAllRequests="false" had no effect at all, and routing continued to work just fine even with the flag set to false, which is NOT what I would have expected. Kind of disappointing, too, that Windows Server 2008 (R1) can't be upgraded to IIS 7.5. It sure seems like that should have been possible, since the OS server core changes in R2 are pretty minor. For the future, I really hope Microsoft will allow updating IIS versions without tying them explicitly to the OS. It looks like, with the release of IIS Express, Microsoft has taken some steps to untie some of those tight OS links from IIS. Let's hope that's the case for the future; it sure is nice to run the same IIS version on dev and live boxes, but upgrading live servers is too big a deal to do just because an updated OS release came out. Moral of the story: never assume that your dev setup will work as is on the live setup. It took me forever to figure this out, because I assumed that since my web.config on the local machine was fine and working, and I had copied all relevant web.config data to the server, it couldn't be the configuration settings. I was looking everywhere but in the .config file before getting desperate and remembering the flag when I accidentally checked the IntelliSense settings in the modules key. Never assume anything. The other moral is: try to keep your dev machine and server OSs in sync whenever possible. Maybe it's time to upgrade to Windows Server 2008 R2 after all.

    More info on extensionless URLs in IIS: Want to find out more about exactly how extensionless URLs work on IIS 7? Then check out How ASP.NET MVC Routing Works and its Impact on the Performance of Static Requests, which goes into great detail on the complexities of the process. Thanks to Jeff Graves for pointing me at this article; a great linked reference for this topic!

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in IIS7, Windows.

    Read the article

  • Help trying to get two-finger scrolling to work on Asus UL80VT

    - by Dan2k3k4
    Multi-touch works fine on Windows 7 with: two-fingers scroll vertical and horizontally, two-finger tap for middle click, and three-finger tap for right click. However with Ubuntu, I've never been able to get multi-touch to "save" and work, I was able to get it to work a few times but after restarting - it would just reset back. I have the settings for two-finger scrolling on: Mouse and Touchpad Touchpad Two-finger scrolling (selected) Enable horizontal scrolling (ticked) The cursor stops moving when I try to scroll with two fingers, but it doesn't actually scroll the page. When I perform xinput list, I get: Virtual core pointer id=2 [master pointer (3)] ? Virtual core XTEST pointer id=4 [slave pointer (2)] ? ETPS/2 Elantech ETF0401 id=13 [slave pointer (2)] I've tried to install some 'synaptics-dkms' bug-fix (from a few years back) but that didn't work, so I removed that. I've tried installing 'uTouch' but that didn't seem to do anything so removed it. Here's what I have installed now: dpkg --get-selections installed-software grep 'touch\|mouse\|track\|synapt' installed-software libsoundtouch0 --- install libutouch-evemu1 --- install libutouch-frame1 --- install libutouch-geis1 --- install libutouch-grail1 --- install printer-driver-ptouch --- install ptouch-driver --- install xserver-xorg-input-multitouch --- install xserver-xorg-input-mouse --- install xserver-xorg-input-vmmouse --- install libnetfilter-conntrack3 --- install libxatracker1 --- install xserver-xorg-input-synaptics --- install So, I'll start again, what should I do now to get two-finger scrolling to work and ensure it works after restarting? Also doing: synclient TapButton1=1 TapButton2=2 TapButton3=3 ...works but doesn't save after restarting. However doing: synclient VertTwoFingerScroll=1 HorizTwoFingerScroll=1 Does NOT work to fix the two-finger scrolling. Output of: cat /var/log/Xorg.0.log | grep -i synaptics [ 4.576] (II) LoadModule: "synaptics" [ 4.577] (II) Loading /usr/lib/xorg/modules/input/synaptics_drv.so [ 4.577] (II) Module synaptics: vendor="X.Org Foundation" [ 4.577] (II) Using input driver 'synaptics' for 'ETPS/2 Elantech ETF0401' [ 4.577] (II) Loading /usr/lib/xorg/modules/input/synaptics_drv.so [ 4.584] (--) synaptics: ETPS/2 Elantech ETF0401: x-axis range 0 - 1088 [ 4.584] (--) synaptics: ETPS/2 Elantech ETF0401: y-axis range 0 - 704 [ 4.584] (--) synaptics: ETPS/2 Elantech ETF0401: pressure range 0 - 255 [ 4.584] (--) synaptics: ETPS/2 Elantech ETF0401: finger width range 0 - 16 [ 4.584] (--) synaptics: ETPS/2 Elantech ETF0401: buttons: left right middle double triple scroll-buttons [ 4.584] (--) synaptics: ETPS/2 Elantech ETF0401: Vendor 0x2 Product 0xe [ 4.584] (--) synaptics: ETPS/2 Elantech ETF0401: touchpad found [ 4.588] (**) synaptics: ETPS/2 Elantech ETF0401: (accel) MinSpeed is now constant deceleration 2.5 [ 4.588] (**) synaptics: ETPS/2 Elantech ETF0401: MaxSpeed is now 1.75 [ 4.588] (**) synaptics: ETPS/2 Elantech ETF0401: AccelFactor is now 0.154 [ 4.589] (--) synaptics: ETPS/2 Elantech ETF0401: touchpad found Tried installing synaptiks but that didn't seem to work either, so removed it. Temporary Fix (works until I restart) Doing the following commands: modprobe -r psmouse modprobe psmouse proto=imps Works but now xinput list shows up as: Virtual core pointer id=2 [master pointer (3)] ? Virtual core XTEST pointer id=4 [slave pointer (2)] ? ImPS/2 Generic Wheel Mouse id=13 [slave pointer (2)] Instead of Elantech, and it gets reset when I reboot. 
    Solution (not ideal for most people): I ended up reinstalling a fresh 12.04 after indirectly playing around with burg and plymouth and then removing plymouth, which removed 50+ packages. (I saw the warnings, but I was way too tired and assumed I could just reinstall them all afterwards; that didn't work.) Right now xinput list shows:

        ⎡ Virtual core pointer                    id=2   [master pointer  (3)]
        ⎜   ↳ Virtual core XTEST pointer          id=4   [slave  pointer  (2)]
        ⎜   ↳ ETPS/2 Elantech Touchpad            id=13  [slave  pointer  (2)]

    grep 'touch\|mouse\|track\|synapt' installed-software now gives:

        libnetfilter-conntrack3                 install
        libsoundtouch0                          install
        libutouch-evemu1                        install
        libutouch-frame1                        install
        libutouch-geis1                         install
        libutouch-grail1                        install
        libxatracker1                           install
        mousetweaks                             install
        printer-driver-ptouch                   install
        xserver-xorg-input-mouse                install
        xserver-xorg-input-synaptics            install
        xserver-xorg-input-vmmouse              install

    and cat /var/log/Xorg.0.log | grep -i synaptics:

        [ 4.890] (II) LoadModule: "synaptics"
        [ 4.891] (II) Loading /usr/lib/xorg/modules/input/synaptics_drv.so
        [ 4.892] (II) Module synaptics: vendor="X.Org Foundation"
        [ 4.892] (II) Using input driver 'synaptics' for 'ETPS/2 Elantech Touchpad'
        [ 4.892] (II) Loading /usr/lib/xorg/modules/input/synaptics_drv.so
        [ 4.956] (II) synaptics: ETPS/2 Elantech Touchpad: ignoring touch events for semi-multitouch device
        [ 4.956] (--) synaptics: ETPS/2 Elantech Touchpad: x-axis range 0 - 1088
        [ 4.956] (--) synaptics: ETPS/2 Elantech Touchpad: y-axis range 0 - 704
        [ 4.956] (--) synaptics: ETPS/2 Elantech Touchpad: pressure range 0 - 255
        [ 4.956] (--) synaptics: ETPS/2 Elantech Touchpad: finger width range 0 - 15
        [ 4.956] (--) synaptics: ETPS/2 Elantech Touchpad: buttons: left right double triple
        [ 4.956] (--) synaptics: ETPS/2 Elantech Touchpad: Vendor 0x2 Product 0xe
        [ 4.956] (--) synaptics: ETPS/2 Elantech Touchpad: touchpad found
        [ 4.980] (**) synaptics: ETPS/2 Elantech Touchpad: (accel) MinSpeed is now constant deceleration 2.5
        [ 4.980] (**) synaptics: ETPS/2 Elantech Touchpad: MaxSpeed is now 1.75
        [ 4.980] (**) synaptics: ETPS/2 Elantech Touchpad: AccelFactor is now 0.154
        [ 4.980] (--) synaptics: ETPS/2 Elantech Touchpad: touchpad found

    So, if all else fails, reinstall Linux :/
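
    For persisting the synclient settings above without reinstalling, a commonly used approach is an X.Org InputClass snippet; this is a sketch assuming an X server with InputClass support (as in 12.04), and the file name and Identifier below are illustrative:

        # /etc/X11/xorg.conf.d/50-elantech-touchpad.conf  (hypothetical name)
        Section "InputClass"
            Identifier "Elantech two-finger scrolling"
            MatchIsTouchpad "on"
            Driver "synaptics"
            Option "VertTwoFingerScroll" "1"
            Option "HorizTwoFingerScroll" "1"
            Option "TapButton1" "1"
            Option "TapButton2" "2"
            Option "TapButton3" "3"
        EndSection

    Options set here are applied by the synaptics driver every time X starts, so they don't reset the way runtime synclient calls do. Note the "ignoring touch events for semi-multitouch device" line in the log above: on such pads the driver may still limit which two-finger gestures can actually be reported.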

    Read the article

  • I got my Z-5 Logitech speakers to work, but whenever I restart, I have to reconfigure them

    - by The Bill
    This is the content of my alsa-base.conf file (for some reason, the entries preceded by # show up bolded; anyway):

        # autoloader aliases
        install sound-slot-0 /sbin/modprobe snd-card-0
        install sound-slot-1 /sbin/modprobe snd-card-1
        install sound-slot-2 /sbin/modprobe snd-card-2
        install sound-slot-3 /sbin/modprobe snd-card-3
        install sound-slot-4 /sbin/modprobe snd-card-4
        install sound-slot-5 /sbin/modprobe snd-card-5
        install sound-slot-6 /sbin/modprobe snd-card-6
        install sound-slot-7 /sbin/modprobe snd-card-7

        # Cause optional modules to be loaded above generic modules
        install snd /sbin/modprobe --ignore-install snd $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-ioctl32 ; /sbin/modprobe --quiet --use-blacklist snd-seq ; }
        # Workaround at bug #499695 (reverted in Ubuntu see LP #319505)
        install snd-pcm /sbin/modprobe --ignore-install snd-pcm $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-pcm-oss ; : ; }
        install snd-mixer /sbin/modprobe --ignore-install snd-mixer $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-mixer-oss ; : ; }
        install snd-seq /sbin/modprobe --ignore-install snd-seq $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-seq-midi ; /sbin/modprobe --quiet --use-blacklist snd-seq-oss ; : ; }
        install snd-rawmidi /sbin/modprobe --ignore-install snd-rawmidi $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-seq-midi ; : ; }

        # Cause optional modules to be loaded above sound card driver modules
        install snd-emu10k1 /sbin/modprobe --ignore-install snd-emu10k1 $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-emu10k1-synth ; }
        install snd-via82xx /sbin/modprobe --ignore-install snd-via82xx $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-seq ; }

        # Load saa7134-alsa instead of saa7134 (which gets dragged in by it anyway)
        install saa7134 /sbin/modprobe --ignore-install saa7134 $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist saa7134-alsa ; : ; }

        # Prevent abnormal drivers from grabbing index 0
        options bt87x index=-2
        options cx88_alsa index=-2
        options saa7134-alsa index=-2
        options snd-atiixp-modem index=-2
        options snd-intel8x0m index=-2
        options snd-via82xx-modem index=-2
        options snd-usb-audio index=-2
        options snd-usb-caiaq index=-2
        options snd-usb-ua101 index=-2
        options snd-usb-us122l index=-2
        options snd-usb-usx2y index=-2
        alias snd-card-0 snd-usb-audio
        alias snd-card-1 snd-hda-intel
        options snd-usb-audio index=0
        options snd-hda-intel index=1

        # Ubuntu #62691, enable MPU for snd-cmipci
        options snd-cmipci mpu_port=0x330 fm_port=0x388

        # Keep snd-pcsp from being loaded as first soundcard
        options snd-pcsp index=-2
        options snd-usb-audio index=-2
        options snd-usb-audio index=0
        alias snd-card-0 snd-usb-audio
        alias snd-card-1 snd-hda-intel
        options snd-hda-intel index=1

    I deleted a line that said something like "# Keep usb-audio from being loaded as first soundcard", and that made the speakers work for the first time (before this, they never showed up). I also added the last four lines. Anyway, what can I add to this so that I don't have to reconfigure them each time I restart? Currently, I have to open Sound Settings, select Analog Stereo Output under the Hardware tab, and then unplug my USB speakers and plug them back in. This makes them pop up so that I can see them; otherwise, my Z-5 speakers will not show up as a device that can be configured.
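
    One thing that stands out is that snd-usb-audio is assigned an index several times with conflicting values (-2 in the "prevent abnormal drivers" block, then 0, then -2 and 0 again at the end), which leaves the effective value dependent on how the duplicate lines are parsed. A minimal, conflict-free ordering block, sketched here as an assumption rather than a tested fix, would keep exactly one index line per card and drop the duplicates:

        # sketch: force the USB speakers to card 0 and onboard HDA to card 1;
        # remove all other index= lines for these two modules elsewhere in the file
        alias snd-card-0 snd-usb-audio
        alias snd-card-1 snd-hda-intel
        options snd-usb-audio index=0
        options snd-hda-intel index=1

    This only pins the ALSA card order at module load time; if the desktop still switches outputs after login, the default device also has to be set on the PulseAudio side (the Sound Settings selection being redone manually here).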

    Read the article

  • Version control a content management system?

    - by Mike
    I have the following directory structure in the CMS application we have written:

        /application
        /modules
            /cms
            /filemanager
            /block
            /pages
            /sitemap
            /youtube
            /rss
        /skin
            /backend
                /default
                    /css
                    /js
                    /images
            /frontend
                /default
                    /css
                    /js
                    /images

    application contains code specific to the current CMS implementation, i.e. code for this specific CMS. modules contains reusable portions of code that we share across projects, such as libraries to work with YouTube or RSS feeds. We include these as git submodules, so that we can update a module in any website and push the changes back to all other projects. It makes it really easy to apply a change to our code and distribute it.

    We wanted to turn the CMS itself into a module so we get the same benefit: we can keep the entire project under source control, then update the CMS as required through a git submodule. We have run into a problem, however: the CMS requires JavaScript, images, and CSS in order to work correctly. Things we have thought about:

    - We could create two submodules, one for cms-skin and one for cms, but this means you cannot "git pull" one without having some idea of which versions of skin work with which versions of cms, i.e. version 1.2.2 of CMS might have issues with 1.0.3 of CMS-Skin.
    - We could add the skin to the cms module, but this has the following problems: skin should be available under the document root while module code shouldn't be (and if it is, it should probably be secured via .htaccess), and it doesn't seem to make any sense to bundle assets with PHP code.
    - We could create a symlink from /skin/backend/ to /modules/cms/skin, but does this cause any security problems, and do we want to require a symlink for the application to work?
    - We could create a git hook or a shell script that copies files from modules/cms/skin to skin/backend when an update occurs (see the sketch below), but this means we lose the ability to edit CMS core files in a project and then push them back.

    How is this typically done in large-scale CMSs? How can you keep the source code for a CMS under version control, work on the application for a client, then update the source code as releases are issued by the vendor? How do applications like Magento or Drupal do this?
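
    Regarding the copy-on-update idea in the list above, a minimal sketch of such a hook, assuming the paths from the question (the hook itself and the use of rsync are illustrative, not from the question):

        #!/bin/sh
        # .git/hooks/post-merge -- runs after every successful merge (e.g. git pull).
        # Mirror the CMS submodule's backend skin into the public skin directory.
        # Note this is a one-way copy: local edits under skin/backend/ are overwritten.
        rsync -a --delete modules/cms/skin/backend/ skin/backend/

    Made executable with chmod +x .git/hooks/post-merge, this keeps the public assets in sync after each pull, at exactly the cost the question identifies: changes made under skin/backend/ no longer flow back to the submodule.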

    Read the article

  • Removing padding from structure in kernel module

    - by dexkid
    I am compiling a kernel module that contains a structure of size 34, using the standard command:

        make -C /lib/modules/$(KVERSION)/build M=$(PWD) modules

    sizeof(some_structure) comes out as 36 instead of 34, i.e. the compiler is padding the structure. How do I remove this padding? Running make V=1 shows the gcc compiler options passed:

        make -I../inc -C /lib/modules/2.6.29.4-167.fc11.i686.PAE/build M=/home/vishal/20100426_eth_vishal/organised_eth/src modules
        make[1]: Entering directory `/usr/src/kernels/2.6.29.4-167.fc11.i686.PAE'
        test -e include/linux/autoconf.h -a -e include/config/auto.conf || ( \
        echo; \
        echo "  ERROR: Kernel configuration is invalid."; \
        echo "         include/linux/autoconf.h or include/config/auto.conf are missing."; \
        echo "         Run 'make oldconfig && make prepare' on kernel src to fix it."; \
        echo; \
        /bin/false)
        mkdir -p /home/vishal/20100426_eth_vishal/organised_eth/src/.tmp_versions ; rm -f /home/vishal/20100426_eth_vishal/organised_eth/src/.tmp_versions/*
        make -f scripts/Makefile.build obj=/home/vishal/20100426_eth_vishal/organised_eth/src
        gcc -Wp,-MD,/home/vishal/20100426_eth_vishal/organised_eth/src/.eth_main.o.d -nostdinc -isystem /usr/lib/gcc/i586-redhat-linux/4.4.0/include -Iinclude -I/usr/src/kernels/2.6.29.4-167.fc11.i686.PAE/arch/x86/include -include include/linux/autoconf.h -D__KERNEL__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -Os -m32 -msoft-float -mregparm=3 -freg-struct-return -mpreferred-stack-boundary=2 -march=i686 -mtune=generic -Wa,-mtune=generic32 -ffreestanding -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -pipe -Wno-sign-compare -fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -Iarch/x86/include/asm/mach-generic -Iarch/x86/include/asm/mach-default -Wframe-larger-than=1024 -fno-stack-protector -fno-omit-frame-pointer -fno-optimize-sibling-calls -g -pg -Wdeclaration-after-statement -Wno-pointer-sign -fwrapv -fno-dwarf2-cfi-asm -DTX_DESCRIPTOR_IN_SYSTEM_MEMORY -DRX_DESCRIPTOR_IN_SYSTEM_MEMORY -DTX_BUFFER_IN_SYSTEM_MEMORY -DRX_BUFFER_IN_SYSTEM_MEMORY -DALTERNATE_DESCRIPTORS -DEXT_8_BYTE_DESCRIPTOR -O0 -Wall -DT_ETH_1588_051 -DALTERNATE_DESCRIPTORS -DEXT_8_BYTE_DESCRIPTOR -DNETHERNET_INTERRUPTS -DETH_IEEE1588_TESTS -DSNAPTYPSEL_TMSTRENA_TEVENTENA_TESTS -DT_ETH_1588_140_147 -DLOW_DEBUG_PRINTS -DMEDIUM_DEBUG_PRINTS -DHIGH_DEBUG_PRINTS -DMODULE -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(eth_main)" -D"KBUILD_MODNAME=KBUILD_STR(conxt_eth)" -c -o /home/vishal/20100426_eth_vishal/organised_eth/src/eth_main.o /home/vishal/20100426_eth_vishal/organised_eth/src/eth_main.c
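
    The usual fix is GCC's packed attribute on the structure definition. A minimal sketch follows; the structure below is hypothetical, standing in for the question's 34-byte structure:

        #include <stdint.h>

        /* 34 bytes of payload: with default alignment the compiler rounds the
         * struct size up to a multiple of its strictest member alignment (4
         * here), so sizeof() reports 36. */
        struct sample {
                uint32_t words[8];   /* 32 bytes */
                uint16_t tail;       /*  2 bytes -> 34, then 2 trailing pad bytes */
        };

        /* Same members with the attribute: no padding, sizeof() is exactly 34. */
        struct sample_packed {
                uint32_t words[8];
                uint16_t tail;
        } __attribute__((packed));

    Packing removes the trailing (and any internal) padding at the cost of potentially unaligned member accesses, which matters on some architectures; in kernel sources the __packed shorthand expands to the same attribute. #pragma pack(1) around the definition has a similar effect, and gcc's -fpack-struct flag packs every structure in the translation unit, which is usually too blunt an instrument for kernel code.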

    Read the article

< Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >