Search Results

Search found 9417 results on 377 pages for 'auth module'.

Page 60/377

  • How to tell X.org to reload input device module?

    - by Vi
    When X.org boots up, the Synaptics touchpad works well. But when I remove the module it falls back to /dev/input/mice and doesn't use the normal driver even when the touchpad is available again. Xorg.0.log:
      ... (II) XINPUT: Adding extended input device "Synaptics Touchpad" (type: TOUCHPAD) (--) Synaptics Touchpad: touchpad found
      # { rmmod psmouse && echo mem > /sys/power/state && modprobe psmouse; }
      (WW) : No Device specified, looking for one... (II) : Setting Device option to "/dev/input/mice" ...
    How do I tell X.org to try its InputDevice again (without restarting the X server)? P.S. rmmod psmouse is needed to prevent the Acer Extensa 5220 from crashing when resuming from suspend-to-RAM. Update: found the answer myself: running xinput set-int-prop "Synaptics Touchpad" "Device Enabled" 8 1 after reloading the kernel module re-enables the touchpad. Now suspend-to-RAM works OK.
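    For reference, the full suspend/resume and re-enable sequence implied above would look roughly like this as a small shell sketch (the device name is taken from the Xorg log; adjust it if yours differs):

      # reload the kernel driver around suspend-to-RAM, then re-enable the X input device
      rmmod psmouse
      echo mem > /sys/power/state        # suspend; continues after wake-up
      modprobe psmouse
      xinput set-int-prop "Synaptics Touchpad" "Device Enabled" 8 1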

    Read the article

  • Sudo asks for password twice with LDAP authentication

    - by Gnudiff
    I have an Ubuntu 8.04 LTS machine and a Windows 2003 AD domain. I have successfully set it up so that I can log in with a domain username and password, using a domain prefix like "domain+username". Upon login to the machine it all works on the first try; however, for some reason when I sudo with my logged-in user, it asks for the password twice every time. It accepts the password the second time, but never the first. If it happened once or twice I might think I just kept entering the wrong password, but this happens every time. Any ideas what's wrong? pam.conf is empty, pam.d/sudo only includes common-auth & common-account, and common-auth is:
      auth sufficient pam_unix.so nullok_secure
      auth sufficient pam_winbind.so
      auth requisite pam_deny.so
      auth required pam_permit.so
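    A common cause of the double prompt with a stack like this is that pam_unix prompts first, fails for the domain account, and pam_winbind then prompts again with its own question. A frequently suggested fix (assuming the pam_winbind shipped with 8.04 supports the option) is to let pam_winbind reuse the password already entered:

      auth sufficient pam_unix.so nullok_secure
      auth sufficient pam_winbind.so use_first_pass
      auth requisite pam_deny.so
      auth required pam_permit.so

    This is a sketch of the usual approach, not a verified fix for this exact setup; try_first_pass is a slightly more forgiving alternative to use_first_pass.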

    Read the article

  • How to extend a file definition from an existing module in the node?

    - by c33s
    I use an older version of the example42 mysql module, which defines the mysql.conf file but not its content. My goal is to just include the mysql module and add a content definition in the node.
      class mysql {
        ...
        file { "mysql.conf":
          path    => "${mysql::params::configfile}",
          mode    => "${mysql::params::configfile_mode}",
          owner   => "${mysql::params::configfile_owner}",
          group   => "${mysql::params::configfile_group}",
          ensure  => present,
          require => Package["mysql"],
          notify  => Service["mysql"],
        }
        ...
      }

      node xyz {
        include mysql
        File["mysql.conf"] { content => template("mymodule/mysql.conf.erb") }
      }
    The above code produces an "Only subclasses can override parameters" error. What is the correct way to just add a content definition to an existing file definition?
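    As the error message hints, Puppet only allows a resource declared in a class to be overridden from a subclass of that class, not from a node. A minimal sketch of that pattern (the wrapper class name is illustrative):

      class mymodule::mysql inherits mysql {
        File["mysql.conf"] {
          content => template("mymodule/mysql.conf.erb"),
        }
      }

      node xyz {
        include mymodule::mysql
      }

    On the Puppet versions this module targets, a collector override in the node (File <| title == "mysql.conf" |> { content => ... }) is sometimes used as an alternative, but the inheriting subclass is the form the error message points to.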

    Read the article

  • Connecting Linux to WatchGuard Firebox SSL (OpenVPN client)

    Recently, I got a new project assignment that requires me to connect permanently to the customer's network through VPN. They are using a so-called SSL VPN. As I have been using OpenVPN for more than 5 years within my company's network, I was quite curious about their solution and how it would actually be different from OpenVPN. Well, short version: it is a disguised version of OpenVPN. Unfortunately, the company only offers a client for Windows and Mac OS, which shouldn't bother any Linux user after all. OpenVPN is part of every recent distribution and can be activated in a couple of minutes - both client as well as server (if necessary).
    [Image: WatchGuard Firebox SSL - About dialog]
    Borrowing some files from a Windows client installation
    Initially, I didn't know about the product, so I went through the installation on Windows 8. No obstacles (and no restart despite installation of TAP device drivers!) here, and the secured VPN channel was up and running in less than 2 minutes or so. Much appreciated by both parties - the customer and me. Of course, this whole client package and my own stable installation, proven over many years, ignited my interest to have a closer look at the WatchGuard client. Compared to the original OpenVPN client (okay, I have to admit this was years ago) this commercial product is smarter in terms of file locations during installation. You'll be able to access the configuration and key files below your roaming application data folder. To get there, simply enter '%AppData%\WatchGuard\Mobile VPN' in your Windows/File Explorer and confirm with Enter/Return. This will display the following files:
    [Image: Application folder below user profile with configuration and certificate files]
    From there we are going to borrow four files, namely ca.crt, client.crt, client.ovpn and client.pem, and transfer them to the Linux system. You might also be able to isolate those four files from a Mac OS client. Frankly, I'm just too lazy to run the WatchGuard client installation on a Mac mini only to find the folder location, and I'm going to describe why a little bit further down this article. I know that you can do that! Feedback in the comment section is appreciated.
    Configuration of OpenVPN (console)
    Depending on your distribution the following steps might be a little different, but in general you should be able to get the important information from them. I'm going to describe the steps on Ubuntu 13.04 (Raring Ringtail). As usual, there are two possibilities to achieve your goal: console and UI. Let's see what needs to be done. First of all, you should ensure that you have OpenVPN installed on your system. Open your favourite terminal application and run the following statement, just to be on the safe side:
      $ sudo apt-get install openvpn network-manager-openvpn network-manager-openvpn-gnome
    The four above-mentioned files from your Windows machine can be copied anywhere, but you either place them below your own user directory or you put them (as root) below the default directory /etc/openvpn. At this stage you would already be able to do a test run. Just in case, run the following command and check the output (it's similar to the information you would get from the 'View Logs...' context menu entry on Windows):
      $ sudo openvpn --config client.ovpn
    Pay attention to the correct path to your configuration and certificate files. OpenVPN will ask you to enter your Auth Username and Auth Password in order to establish the VPN connection, same as the Windows client.
    [Image: Remote server and user authentication to establish the VPN]
    Please complete the test run and see whether all went well. You can disconnect by pressing Ctrl+C.
    Simplifying your life - authentication file
    In my case, I actually set up the OpenVPN client on my gateway/router. This establishes a VPN channel between my network and my client's network and allows me to switch machines easily without having to install the WatchGuard client on each and every machine. That's also very handy for my various virtualised Windows machines. Anyway, as the client configuration, key and certificate files are located on a headless system somewhere under the roof, it is mandatory to have an automatic connection to the remote site. For that you should first change the file extension '.ovpn' to '.conf', which is the default extension for OpenVPN on Linux systems, and then open the client configuration file in order to extend an existing line.
      $ sudo mv client.ovpn client.conf
      $ sudo nano client.conf
    You should have content similar to this:
      dev tun
      client
      proto tcp-client
      ca ca.crt
      cert client.crt
      key client.pem
      tls-remote "/O=WatchGuard_Technologies/OU=Fireware/CN=Fireware_SSLVPN_Server"
      remote-cert-eku "TLS Web Server Authentication"
      remote 1.2.3.4 443
      persist-key
      persist-tun
      verb 3
      mute 20
      keepalive 10 60
      cipher AES-256-CBC
      auth SHA1
      float 1
      reneg-sec 3660
      nobind
      mute-replay-warnings
      auth-user-pass auth.txt
    Note: I changed the IP address of the remote directive above (which should be obvious, right?). Anyway, the required change is the 'auth-user-pass auth.txt' line, and we have to create a new authentication file 'auth.txt'. You can give the 'auth-user-pass' directive any file name you'd like. Due to my existing OpenVPN infrastructure my setup differs completely from the content above, but for the sake of simplicity I just keep it 'as-is'. Okay, let's create this file 'auth.txt'
      $ sudo nano auth.txt
    and just put two lines of information in it - username on the first, and password on the second line, like so:
      myvpnusername
      verysecretpassword
    Store the file, change permissions, and call openvpn with your configuration file again:
      $ sudo chmod 0600 auth.txt
      $ sudo openvpn --config client.conf
    This should now work without being prompted to enter username and password. In case you placed your files below the system-wide location /etc/openvpn you can also operate your VPNs via the service command, like so:
      $ sudo service openvpn start client
      $ sudo service openvpn stop client
    Using Network Manager
    For newer Linux users, or the ones with 'console-phobia', I'm now going to describe how to use Network Manager to set up the OpenVPN client. For this, move your mouse to the systray area and click on Network Connections => VPN Connections => Configure VPNs..., which opens your Network Connections dialog. Alternatively, use the HUD and enter 'Network Connections'.
    [Image: Network connections overview in Ubuntu]
    Click on the 'Add' button. On the next dialog select 'Import a saved VPN configuration...' from the dropdown list and click on 'Create...'
    [Image: Choose connection type to import VPN configuration]
    Now navigate to the folder where you put the client files from the Windows system and open the 'client.ovpn' file.
    Next, on the 'VPN' tab proceed with the following steps (the related directives from the configuration file are given in parentheses):
    General
      - Check the IP address of Gateway ('remote' - we used 1.2.3.4 in this setup)
    Authentication
      - Change Type to 'Password with Certificates (TLS)' ('auth-user-pass')
      - Enter the User name used to access your client keys (Auth Name: myvpnusername)
      - Enter the Password (Auth Password: verysecretpassword) and choose your password handling
      - Browse for your User Certificate ('cert' - should be pre-selected with client.crt)
      - Browse for your CA Certificate ('ca' - should be filled as ca.crt)
      - Specify your Private Key ('key' - here: client.pem)
    Then click on the 'Advanced...' button and check the following values:
      - Use custom gateway port: 443 (second value of the 'remote' directive)
      - Check the selected value of Cipher ('cipher')
      - Check HMAC Authentication ('auth')
      - Enter the Subject Match: /O=WatchGuard_Technologies/OU=Fireware/CN=Fireware_SSLVPN_Server ('tls-remote')
    Finally, you have to confirm and close all dialogs. You should be able to establish your OpenVPN-WatchGuard connection via Network Manager. For that, click on the 'VPN Connections => client' entry of your Network Manager in the systray. It is advised that you keep an eye on the syslog to see whether there are any problematic issues that would require some additional attention.
    Advanced topic: routing
    As stated above, I'm running the 'WatchGuard client for Linux' on my headless server, and since then I'm actually establishing a secure communication channel between two networks. In order to enable your network clients to access machines on the remote side there are two possibilities:
      - proper routing on both sides of the connection, which enables access in both directions, or
      - network masquerading on the 'client side' of the connection.
    In the following, I'm going to describe the second option in a little more detail. The Linux system that I'm using is already configured as a gateway to the internet. I won't explain the necessary steps to do that, and will only focus on the additional tweaks I had to do. You can find tons of very good instructions and tutorials on 'How to set up a Linux gateway/router' - just use Google. OK, back to the actual modifications. First, we need some information about the network topology and IP address range used on the 'other' side. We can get this very easily from /var/log/syslog after we have established the OpenVPN channel, like so:
      $ sudo tail -n20 /var/log/syslog
    Or, if your system is quite busy with logging, like so:
      $ sudo less /var/log/syslog | grep ovpn
    The output should contain a PUSH received message similar to the following one:
      Jul 23 23:13:28 ios1 ovpn-client[789]: PUSH: Received control message: 'PUSH_REPLY,topology subnet,route 192.168.1.0 255.255.255.0,dhcp-option DOMAIN ,route-gateway 192.168.6.1,topology subnet,ping 10,ping-restart 60,ifconfig 192.168.6.2 255.255.255.0'
    The interesting part for us is the route entry in the sample PUSH_REPLY. Depending on your remote server there might be multiple networks defined (172.16.x.x and/or 10.x.x.x). Important: the IP address range on both sides of the connection has to be different, otherwise you will have to shuffle IPs or increase the netmask. After the VPN connection is established, we have to extend the rules for iptables in order to route and masquerade IP packets properly.
    I created a shell script to take care of those steps:
      #!/bin/sh -e
      IPTABLES=/sbin/iptables
      DEV_LAN=eth0
      DEV_VPNS=tun+
      VPN=192.168.1.0/24

      $IPTABLES -A FORWARD -i $DEV_LAN -o $DEV_VPNS -d $VPN -j ACCEPT
      $IPTABLES -A FORWARD -i $DEV_VPNS -o $DEV_LAN -s $VPN -j ACCEPT
      $IPTABLES -t nat -A POSTROUTING -o $DEV_VPNS -d $VPN -j MASQUERADE
    I'm using the wildcard interface 'tun+' because I have multiple client configurations for OpenVPN on my server. In your case, it might be sufficient to specify device 'tun0' only.
    Simplifying your life - automatic connect on boot
    Now that the client connection works flawlessly and the configuration of routing and iptables is okay, we might consider adding another 'laziness' factor to our setup. Due to kernel updates or other circumstances it might be necessary to reboot your system. Wouldn't it be nice if the VPN connections were established during the boot procedure? Yes, of course it would be. To achieve this, we have to configure OpenVPN to automatically start our VPNs via the init script. Let's have a look at the responsible 'default' file and adjust the settings accordingly.
      $ sudo nano /etc/default/openvpn
    It should have content similar to this:
      # This is the configuration file for /etc/init.d/openvpn
      #
      # Start only these VPNs automatically via init script.
      # Allowed values are "all", "none" or space separated list of
      # names of the VPNs. If empty, "all" is assumed.
      # The VPN name refers to the VPN configutation file name.
      # i.e. "home" would be /etc/openvpn/home.conf
      #AUTOSTART="all"
      #AUTOSTART="none"
      #AUTOSTART="home office"
      #
      # ... more information which remains unmodified ...
    With the OpenVPN client configuration as described above you would set AUTOSTART either to "all" or to "client" to enable automatic start of your VPN(s) during boot. You should also take care that your iptables commands are executed after the link has been established, too. You can easily test this configuration without a reboot, like so:
      $ sudo service openvpn restart
    Enjoy stable VPN connections between your Linux system(s) and a WatchGuard Firebox SSL remote server. Cheers, JoKi

    Read the article

  • Unable to uninstall maas completely

    - by user210844
    I'm not able to uninstall MAAS sudo apt-get purge maas ; sudo apt-get autoremove Reading package lists... Done Building dependency tree Reading state information... Done Package 'maas' is not installed, so not removed 0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded. 2 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Setting up maas-region-controller (1.2+bzr1373+dfsg-0ubuntu1) ... Considering dependency proxy for proxy_http: Module proxy already enabled Module proxy_http already enabled Module expires already enabled Module wsgi already enabled sed: -e expression #1, char 91: unterminated `s' command dpkg: error processing maas-region-controller (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of maas-dns: maas-dns depends on maas-region-controller (= 1.2+bzr1373+dfsg-0ubuntu1); however: Package maas-region-controller is not configured yet. dpkg: error processing maas-dns (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already Errors were encountered while processing: maas-region-controller maas-dns E: Sub-process /usr/bin/dpkg returned an error code (1) Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded. 2 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Setting up maas-region-controller (1.2+bzr1373+dfsg-0ubuntu1) ... Considering dependency proxy for proxy_http: Module proxy already enabled Module proxy_http already enabled Module expires already enabled Module wsgi already enabled sed: -e expression #1, char 91: unterminated `s' command dpkg: error processing maas-region-controller (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of maas-dns: maas-dns depends on maas-region-controller (= 1.2+bzr1373+dfsg-0ubuntu1); however: Package maas-region-controller is not configured yet. dpkg: error processing maas-dns (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already Errors were encountered while processing: maas-region-controller maas-dns E: Sub-process /usr/bin/dpkg returned an error code (1)
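    One possible way out of this kind of loop - a hedged workaround, not a verified fix for this exact setup - is to neutralise the failing maintainer script so dpkg can finish configuring, and then purge the half-installed packages (package names and paths below are taken from the error output above):

      # make the broken post-installation script a no-op, then let dpkg finish
      sudo sh -c 'printf "#!/bin/sh\nexit 0\n" > /var/lib/dpkg/info/maas-region-controller.postinst'
      sudo dpkg --configure -a
      sudo apt-get purge maas-region-controller maas-dns
      sudo apt-get autoremove

    Editing files under /var/lib/dpkg/info is a last resort; back them up first.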

    Read the article

  • What's the difference between General Ledger Transfer Program, Create Accounting and Submit Accounting?

    - by Oracle_EBS
    In Release 12, the General Ledger Transfer Program is no longer used. Use Create Accounting or Submit Accounting instead. Submit Accounting spawns the Revenue Recognition Process; the Create Accounting program does not. So if you create transactions with rules, then you would want to run the Submit Accounting process to spawn Revenue Recognition to create the distribution rows, which Create Accounting is then spawned to process to the GL. The two programs compare as follows:

                                                             Create Accounting   Submit Accounting
      Short name for concurrent program                      XLAACCPB            ARACCPB
      Specific to Receivables                                No                  Yes
      Runs Revenue Recognition automatically                 No                  Yes
      Can be run real-time for one Transaction/Receipt       Yes                 No

    Create Accounting spawns the following programs:
      1) XLAACCPB module: Create Accounting
      2) XLAACCUP module: Accounting Program
      3) GLLEZL module: Journal Import
    Submit Accounting spawns the following programs:
      1) ARTERRPM module: Revenue Recognition Master Program
      2) ARTERRPW module: Revenue Recognition with parallel workers - could be numerous
      3) ARREVSWP module: Revenue Contingency Analyzer
      4) XLAACCPB module: Create Accounting
      5) XLAACCUP module: Accounting Program
      6) GLLEZL module: Journal Import
    Keep in mind that reports owned by the application 'Subledger Accounting' cannot be seen when running the report from a Receivables responsibility. You may want to ask your sysadmin to attach the following SLA reports/programs to your AR responsibility, as you will need these for your AR closing process:
      XLAPEXRPT: Subledger Period Close Exception Report - shows transactions in status final, incomplete and unprocessed.
      XLAGLTRN: Transfer Journal Entries to GL - transfers transactions in final status and manually created transactions to GL.
    To add reports/programs owned by application 'Subledger Accounting' (Subledger Period Close Exception Report and Transfer Journal Entries to GL), add them to the request group as follows. Let's use the Subledger Accounting report XLATBRPT: Open Account Balances Listing as an example.
      Responsibility: System Administrator
      Navigation: Security > Responsibility > Define
      Query the name of your Receivables responsibility and note the Request Group (i.e. Receivables All)
      Navigation: Security > Responsibility > Request
      Query the Request Group
      Go to the Request zone and click on Add Record
      Enter the following - Type: Program, Name: Open Account Balances Listing
      Save
      Responsibility: Receivables Manager
      Navigation: Control > Requests > Run
      In the list of values you should now see the 'Open Account Balances Listing' report.
    References:
      Note 748999.1: How to add reports for application Subledger Accounting to a Receivables responsibility
      Note 759534.1: R12 ARGLTP General Ledger Transfer Program Errors Out
      Note 1121944.1: Understanding and Troubleshooting Revenue Recognition in Oracle Receivables

    Read the article

  • How to turn on/off code modules?

    - by Safran Ali
    I am trying to run multiple sites using a single code base, and the code base consists of the following modules (i.e. classes):
      - User module
      - Q & A module
      - FAQ module
    Each class follows the MVC pattern, i.e. it consists of an Entity class, a Helper class (i.e. a static class) and a View (i.e. pages and controls). Let's say I have 2 sites, site1.com and site2.com, and I am trying to achieve the following functionality:
      - site1.com can have the User, Q & A and FAQ modules up and running
      - site2.com can have the User and Q & A modules live while the FAQ module is switched off, but it can be turned on if needed
    So my question here is: what is the best way to achieve such functionality? Do I introduce a flag bit that I check on every page and control belonging to that module? It's more like a CMS where you can turn different features on and off. I am trying to get my head around it; please provide me with an example or point out if I am taking the wrong approach.

    Read the article

  • ASMLib

    - by wcoekaer
    Oracle ASMLib on Linux has been a topic of discussion a number of times since it was released way back in 2004. There is a lot of confusion around it and certainly a lot of misinformation out there for no good reason. Let me try to give a bit of history around Oracle ASMLib. Oracle ASMLib was introduced at the time Oracle released Oracle Database 10g R1. 10gR1 introduced a very cool and important new feature called Oracle ASM (Automatic Storage Management). A very simplistic description would be that this is a very sophisticated volume manager for Oracle data. Give your devices directly to the ASM instance and we manage the storage for you: clustered, highly available, redundant, performant, etc, etc... We recommend using Oracle ASM for all database deployments, single instance or clustered (RAC). The ASM instance manages the storage, and every Oracle server process opens and operates on the storage devices like it would open and operate on regular datafiles or raw devices. So by default, from 10gR1 up to today, we do not interact differently with ASM-managed block devices than we did before with a datafile being mapped to a raw device. All of this is without ASMLib, so ignore that one for now. Standard Oracle on any platform that we support (Linux, Windows, Solaris, AIX, ...) does it the exact same way. You start an ASM instance, it handles storage management, and all the database instances use and open that storage and read/write from/to it. There are no extra pieces of software needed, including on Linux. ASM is fully functional and self-contained without any other components.
    In order for the admin to provide a raw device to ASM or to the database, it has to have persistent device naming. If you booted up a server where a raw disk was named /dev/sdf and you gave it to ASM (or even just created a tablespace without ASM on that device with datafile '/dev/sdf'), and the next time you boot up that device is now /dev/sdg, you end up with an error. Just like you can't simply change datafile names, you can't change device filenames without telling the database, or ASM. Persistent device naming on Linux, especially back in those days, was, to say it bluntly, a nightmare. In fact there were a number of issues (dating back to 2004):
      - Linux async IO wasn't pretty
      - persistent device naming, including permissions (devices had to be owned by oracle and the dba group), was very, very difficult to manage
      - system resource usage in terms of open file descriptors
    So given the above, we tried to find a way to make this easier on the admins, in many ways similar to why we started working on OCFS a few years earlier - how can we make life easier for the admins on Linux. A feature of Oracle ASM is the ability for third parties to write an extension using what's called ASMLib. It is possible for any third party OS or storage vendor to write a library using a specific Oracle-defined interface that gets used by the ASM instance and by the database instance when available. This interface offered 2 components:
      - define an IO interface - allow any IO to the devices to go through ASMLib
      - define device discovery - implement an external way of discovering and labeling devices to provide to ASM and the Oracle database instance
    This is similar to a library that a number of companies have implemented over many years called libODM (Oracle Disk Manager).
    ODM was specified many years before we introduced ASM and allowed third party vendors to implement their own IO routines, so that the database would use this library, if installed, and make use of the library's open/read/write/close routines instead of the standard OS interfaces. PolyServe back in the day used this to optimize their storage solution, and Veritas used (and I believe still uses) this for their filesystem. It basically allowed, in particular, filesystem vendors to write libraries that could optimize access to their storage or filesystem. So ASMLib was not something new; it was basically based on the same model. You have libodm for just database access, and you have libasm for ASM/database access. Since this library interface existed, we decided to do a reference implementation on Linux. We wrote an ASMLib for Linux that could be used on any Linux platform, and other vendors could see how this worked and potentially implement their own solution. As I mentioned earlier, ASMLib and ODMLib are libraries for third party extensions. ASMLib for Linux, since it was a reference implementation, implemented both interfaces: the storage discovery part and the IO part. There are 2 components:
      - Oracle ASMLib - the userspace library with config tools (a shared object and some scripts)
      - oracleasm.ko - a kernel module that implements the asm device for /dev/oracleasm/*
    The userspace library is a binary-only module since it links with and contains Oracle header files, but it is generic; we only have one asm library for the various Linux platforms. This library is opened by Oracle ASM and by Oracle database processes, and it interacts with the OS through the asm device (/dev/asm). It can be installed on Oracle Linux, on SuSE SLES, on Red Hat RHEL, ... The library itself doesn't actually care much about the OS version; the kernel module and device do. The support tools are simple scripts that allow the admin to label devices and scan for disks and devices. This way you can say: create an ASM disk labeled foo on, currently, /dev/sdf... So if /dev/sdf disappears and next time it is /dev/sdg, we just scan for the label foo, we discover it as /dev/sdg, and life goes on without any worry. Also, when the database needs access to the device, we don't have to worry about file permissions or anything; it will be taken care of. So it's a convenience thing. The kernel module oracleasm.ko is a Linux kernel module/device driver. It implements a device /dev/oracleasm/* and any and all IO goes through ASMLib - /dev/oracleasm. This kernel module is obviously a very specific Oracle-related device driver, but it was released under the GPL v2 so anyone could easily build it for their Linux distribution kernels. Advantages of using ASMLib:
      - a good async IO interface for the database; the entire IO interface is based on an optimal async model for performance
      - a single file descriptor per Oracle process, not one per device or datafile per process, reducing the open-filehandle overhead
      - device scanning and labeling built in, so you do not have to worry about messing with udev or devlabel, permissions or the like, which can be very complex and error prone
    Just like with OCFS and OCFS2, each kernel version (major or minor) has to get a new version of the device drivers. We started out building the oracleasm kernel module rpms for many distributions: SLES (in fact in the early days still even for this thing called United Linux) and RHEL.
    It didn't make sense to push the driver into upstream Linux because it's unique and specific to the Oracle database. As it takes a huge effort in terms of build infrastructure, QA and release management to build kernel modules for every architecture, every Linux distribution and every major and minor version, we worked with the vendors to get them to add this tiny kernel module (a 60k source code file) to their infrastructure. The folks at SuSE understood this was good for them, their customers and us, and added it to SLES. So every build coming from SuSE for SLES contains the oracleasm.ko module. We weren't as successful with other vendors, so for quite some time we continued to build it for RHEL and, of course, as we introduced Oracle Linux at the end of 2006, also for Oracle Linux. With Oracle Linux it became easy for us because we just added the code to our build system, and as we churned out Oracle Linux kernels - whether it was for a public release or for customers that needed a one-off fix and also used ASMLib - we didn't have to do any extra work; it was all nicely integrated. With the introduction of Oracle Linux's Unbreakable Enterprise Kernel and our interest in being able to exploit ASMLib more, we started working on a very exciting project called Data Integrity. Oracle (Martin Petersen in particular) worked for many years with the T10 standards committee and storage vendors and implemented Linux kernel support for DIF/DIX, data protection in the Linux kernel - and note to those that wonder: yes, it's all in mainline Linux and under the GPL. This basically gave us all the features in the Linux kernel to checksum a data block and send it to the storage adapter, which can then validate that block and checksum in firmware before it sends it over the wire to the storage array, which can then do another checksum, and on to the actual disk, which does a final validation before writing the block to the physical media. So what was missing was the ability for a userspace application (read: Oracle RDBMS) to write a block which then has a checksum and validation all the way down to the disk - application to disk. Because we have ASMLib we had an entry into the Linux kernel, and Martin added support in ASMLib (kernel driver + userspace) for this functionality. Now, this is all based on relatively current Linux kernels; the oracleasm kernel module depends on the main kernel having support for it so we can make use of it. Thanks to UEK and our ability to ship a more modern, current version of the Linux kernel, we were able to introduce this feature into ASMLib for Linux from Oracle. This, combined with the fact that we build the asm kernel module when we build every single UEK kernel, allowed us to continue improving ASMLib and provide it to our customers. So today, we (Oracle) provide Oracle ASMLib for Oracle Linux, and in particular on the Unbreakable Enterprise Kernel. We did the build/testing/delivery of ASMLib for RHEL until RHEL5, but as of RHEL6 we decided that it was too much effort for us to also maintain all the build and test environments for RHEL; we did not have the ability to use the latest kernel features to introduce the Data Integrity functionality, and we didn't want to end up with multiple versions of ASMLib as maintained by us. SuSE SLES still builds and ships the oracleasm module and they do all the work, and Red Hat is certainly welcome to do the same. They don't have to rebuild the userspace library; it's really about the kernel module.
    And finally, to re-iterate a few important things:
      - Oracle ASM does not in any way require ASMLib to function completely. ASMLib is a small set of extensions, in particular to make device management easier, but there are no extra features exposed through Oracle ASM with ASMLib enabled or disabled. Often customers confuse ASMLib with ASM. Again: ASM exists on every Oracle-supported OS and on every supported Linux OS (SLES, RHEL, OL) without ASMLib.
      - The Oracle ASMLib userspace library is available from OTN, and the kernel module is shipped along with OL/UEK for every build and by SuSE for SLES for every one of their builds.
      - The ASMLib kernel module was built by us for RHEL4 and RHEL5, but we do not build it for RHEL6, nor for the OL6 RHCK kernel - only for UEK.
      - ASMLib for Linux is/was a reference implementation for any third party vendor to be able to offer, if they want to, their own version for their own OS or storage.
      - ASMLib as provided by Oracle for Linux continues to be enhanced and evolve, and for the kernel module we use UEK as the base OS kernel.
    Hope this helps.

    Read the article

  • How to configure ubuntu ldap client to get password policies from server?

    - by Rafaeldv
    I have an LDAP server (389-ds) on CentOS. I configured the client, Ubuntu 12.04, to authenticate against that base and it works very well. But it doesn't get the password policies from the server. For example, if I set the policy to force the user to change the password on first login, Ubuntu ignores it and always logs him in. How can I set up the client to get the policies? Here are the client files:
    /etc/nsswitch.conf
      passwd:         files ldap
      group:          files ldap
      shadow:         files ldap
      hosts:          files mdns4_minimal [NOTFOUND=return] dns mdns4
      networks:       files
      protocols:      db files
      services:       db files
      ethers:         db files
      rpc:            db files
      netgroup:       nis
      sudoers:        ldap files
    common-auth
      auth [success=2 default=ignore] pam_unix.so nullok_secure
      auth [success=1 default=ignore] pam_ldap.so use_first_pass
      auth requisite pam_deny.so
      auth required pam_permit.so
      auth optional pam_cap.so
    common-account
      account [success=2 new_authtok_reqd=done default=ignore] pam_unix.so
      account [success=1 default=ignore] pam_ldap.so
      account requisite pam_deny.so
      account required pam_permit.so
    common-password
      password requisite pam_cracklib.so retry=3 minlen=8 difok=3
      password [success=2 default=ignore] pam_unix.so obscure use_authtok try_first_pass sha512
      password [success=1 user_unknown=ignore default=die] pam_ldap.so use_authtok try_first_pass
      password requisite pam_deny.so
      password required pam_permit.so
      password optional pam_gnome_keyring.so
    common-session
      session [default=1] pam_permit.so
      session requisite pam_deny.so
      session required pam_permit.so
      session optional pam_umask.so
      session required pam_unix.so
      session optional pam_ldap.so
      session optional pam_ck_connector.so nox11
      session optional pam_mkhomedir.so skel=/etc/skel umask=0022
    /etc/ldap.conf
      base dc=a,dc=b,dc=c
      uri ldaps://a.b.c/
      ldap_version 3
      rootbinddn cn=directory manager
      pam_password md5
      sudoers_base ou=SUDOers,dc=a,dc=b,dc=c
      pam_lookup_policy yes
      pam_check_host_attr yes
      nss_initgroups_ignoreusers avahi,avahi-autoipd,backup,bin,colord,daemon,games,gnats,hplip,irc,kernoops,libuuid,lightdm,list,lp,mail,man,messagebus,news,proxy,pulse,root,rtkit,saned,speech-dispatcher,sshd,sync,sys,syslog,usbmux,uucp,whoopsie,www-data
    /etc/ldap/ldap.conf
      BASE dc=a,dc=b,dc=c
      URI ldaps://a.b.c/
      ssl on
      use_sasl no
      tls_checkpeer no
      sudoers_base ou=SUDOers,dc=a,dc=b,dc=c
      sudoers_debug 2
      pam_lookup_policy yes
      pam_check_host_attr yes
      pam_lookup_policy yes
      pam_check_host_attr yes
      TLS_CACERT /etc/ssl/certs/ca-certificates.crt
      TLS_REQCERT never

    Read the article

  • DRUPAL: Spamspan module... how does it work?

    - by Patrick
    I've installed the Spamspan module (http://drupal.org/project/spamspan) for Drupal, in order to obfuscate the e-mail addresses on my website. However, I'm not sure it is working. I can see the e-mail URLs in the HTML source, and I think the module is not filtering anything. I've added some e-mail addresses in the text editor (CKEditor). Does it automatically detect these e-mails written in the text editor? I haven't found any settings menu after installing the module, so I guess there isn't one. Thanks

    Read the article

  • Zend_Navigation failing to load

    - by Grant Collins
    Following on from my earlier question, I am still having issues with loading the XML file into Zend_Navigation. I am now getting the following error message:
      Fatal error: Uncaught exception 'Zend_Navigation_Exception' with message 'Invalid argument: Unable to determine class to instantiate' in C:\www\mysite\development\website\library\Zend\Navigation\Page.php:223
    I've tried to make my navigation.xml file look similar to the example in the Zend documentation; however, I just can't seem to get it to work. My XML file looks like this:
      <?xml version="1.0" encoding="UTF-8"?>
      <configdata>
        <navigation>
          <default>
            <label>Home</label>
            <controller>index</controller>
            <action>index</action>
            <module>default</module>
            <pages>
              <tour>
                <label>Tour</label>
                <controller>tour</controller>
                <action>index</action>
                <module>default</module>
              </tour>
              <blog>
                <label></label>
                <uri>http://blog.mysite.com</uri>
              </blog>
              <support>
                <label>Support</label>
                <controller>support</controller>
                <action>index</action>
                <module>default</module>
              </support>
            </pages>
          </default>
          <users>
            <label>Home</label>
            <controller>index</controller>
            <action>index</action>
            <module>users</module>
            <role>guser</role>
            <resource>owner</resource>
            <pages>
              <jobmanger>
                <label>Job Manager</label>
                <controller>jobmanager</controller>
                <action>index</action>
                <module>users</module>
                <role>guser</role>
                <resource>owner</resource>
              </jobmanger>
              <myaccount>
                <label>My Account</label>
                <controller>profile</controller>
                <action>index</action>
                <role>guser</role>
                <resource>owner</resource>
                <module>users</module>
                <pages>
                  <detail>
                    <label>Account Details</label>
                    <controller>profile</controller>
                    <action>detail</action>
                    <module>users</module>
                    <role>guser</role>
                    <resource>owner</resource>
                    <pages>
                      <history>
                        <label>Account History</label>
                        <controller>profile</controller>
                        <action>history</action>
                        <module>users</module>
                        <role>guser</role>
                        <resource>owner</resource>
                      </history>
                      <password>
                        <label>Change Password</label>
                        <controller>profile</controller>
                        <action>changepwd</action>
                        <module>users</module>
                        <role>employer</role>
                        <resource>employers</resource>
                      </password>
                    </pages>
                  </detail>
                  ...
        </navigation>
      </configdata>
    Now I confess that I may have totally got the wrong end of the stick with this, but I'm rapidly running out of ideas, and it's been a long week. Thanks, Grant

    Read the article

  • How do I subtract a binding using a Guice module override?

    - by Jimmy Yuen Ho Wong
    So according to my testing, if you have something like:
      Module modA = new AbstractModule() {
        public void configure() {
          bind(A.class).to(AImpl.class);
          bind(C.class).to(ACImpl.class);
          bind(E.class).to(EImpl.class);
        }
      };
      Module modB = new AbstractModule() {
        public void configure() {
          bind(A.class).to(C.class);
          bind(D.class).to(DImpl.class);
        }
      };
      Guice.createInjector(Modules.override(modA).with(modB));
      // gives me bindings for A, C, E AND D, with A overridden to A->C.
    But what if you want to remove the binding for E in modB? I can't seem to find a way to do this without having to break the bind for E out into a separate module. Is there a way?
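    Module overrides can replace bindings but not subtract them. One way to strip a binding before combining the modules is to filter the module's elements with the Guice SPI; this is a sketch that continues from the modA/modB/E definitions in the question and assumes a Guice version that ships com.google.inject.spi.Elements:

      import com.google.inject.Binding;
      import com.google.inject.Guice;
      import com.google.inject.Injector;
      import com.google.inject.Key;
      import com.google.inject.Module;
      import com.google.inject.spi.Element;
      import com.google.inject.spi.Elements;
      import com.google.inject.util.Modules;
      import java.util.ArrayList;
      import java.util.List;

      // Rebuild modA without its binding for E, then apply modB's overrides as before.
      List<Element> kept = new ArrayList<Element>();
      for (Element element : Elements.getElements(modA)) {
          // Drop the binding whose key is E; keep every other element untouched.
          if (element instanceof Binding
                  && ((Binding<?>) element).getKey().equals(Key.get(E.class))) {
              continue;
          }
          kept.add(element);
      }
      Module modAWithoutE = Elements.getModule(kept);
      Injector injector = Guice.createInjector(Modules.override(modAWithoutE).with(modB));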

    Read the article

  • Linux: Ways to communicate with kernel module from user space.

    - by Inso Reiges
    Hello, what are the ways to communicate with a kernel module from user space? By communication I mean sending information and commands between the module and a user space process. I currently know of two ways:
      - open/close/read/write/ioctl on a published device node
      - read/write on an exported and hooked /proc file
    More specifically, can someone advise on the best way to communicate with a kernel module that does not actually drive any hardware and therefore should not be littering /dev with stub nodes that exist solely for ioctl calls? I mostly need to check its various status variables and send it a block of data with a request type tag and see if the request succeeded. Inso.
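    Beyond device nodes and /proc, two other commonly used channels are sysfs attributes (well suited to exposing status variables) and netlink sockets (well suited to structured request/response messages). As a rough, generic illustration only - not tied to any module in this thread, and with details varying by kernel version - a sysfs attribute can be exported like this:

      #include <linux/init.h>
      #include <linux/kernel.h>
      #include <linux/kobject.h>
      #include <linux/module.h>
      #include <linux/string.h>
      #include <linux/sysfs.h>

      static int status;                       /* the "status variable" to expose */
      static struct kobject *example_kobj;

      /* cat /sys/kernel/example_module/status */
      static ssize_t status_show(struct kobject *kobj,
                                 struct kobj_attribute *attr, char *buf)
      {
          return sprintf(buf, "%d\n", status);
      }

      /* echo 1 > /sys/kernel/example_module/status */
      static ssize_t status_store(struct kobject *kobj,
                                  struct kobj_attribute *attr,
                                  const char *buf, size_t count)
      {
          sscanf(buf, "%d", &status);
          return count;
      }

      static struct kobj_attribute status_attr =
          __ATTR(status, 0644, status_show, status_store);

      static int __init example_init(void)
      {
          /* creates /sys/kernel/example_module/ */
          example_kobj = kobject_create_and_add("example_module", kernel_kobj);
          if (!example_kobj)
              return -ENOMEM;
          return sysfs_create_file(example_kobj, &status_attr.attr);
      }

      static void __exit example_exit(void)
      {
          kobject_put(example_kobj);
      }

      module_init(example_init);
      module_exit(example_exit);
      MODULE_LICENSE("GPL");

    For the "block of data with a request tag" use case, a netlink socket (or a single ioctl on one control node) is usually the better fit; sysfs attributes are meant for single, small values.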

    Read the article

  • What does a linux device need to be seen by Hal?

    - by Jaime Soriano
    I'm trying to learn about device drivers in the Linux kernel. For that I've created three modules:
      - a bus type
      - a device driver
      - a fake device that does nothing for now and is only registered
    Everything works fine: I can load the bus, the driver and the module that creates the device. Everything appears in sysfs, including the link between the device and the device driver that indicates they are bound. And when the driver and device are loaded, I can see with udevadm monitor that some events are also generated:
      KERNEL[1275564332.144997] add /module/bustest_driver (module)
      KERNEL[1275564332.145289] add /bus/bustest/drivers/bustest_example (drivers)
      UDEV [1275564332.157428] add /module/bustest_driver (module)
      UDEV [1275564332.157483] add /bus/bustest/drivers/bustest_example (drivers)
      KERNEL[1275564337.656650] add /module/bustest_device (module)
      KERNEL[1275564337.656817] add /devices/bustest_device (bustest)
      UDEV [1275564337.658294] add /module/bustest_device (module)
      UDEV [1275564337.664707] add /devices/bustest_device (bustest)
    But after all that, the device doesn't appear in HAL. What else does a device need in order to be seen by HAL?

    Read the article

  • How to specify a web service URL within a Drupal module's simpletest?

    - by Matt V.
    I have a Drupal module that talks to a REST API on a separate server for user registration and authentication. The module runs on multiple sites which point to different servers which may run different versions of the REST API. Ideally, I'd like to be able to run each site against its own end-point, in case changes on the back end break things. Is there a way to dynamically specify a different end-point URL when running a test? Or do I have to edit the .test file for each site? I'm trying to keep the module's files as generic and flexible as possible. I guess I could have the .test file look for a .inc file that could override the URL, if needed for a particular site. Is there a better way though?
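    One approach that keeps the .test file generic is to read the end-point from a Drupal variable with a sensible default and let each site override it in its own settings.php; the variable name and URLs below are illustrative only:

      // In the test's setUp(), or wherever the REST client is created:
      $endpoint = variable_get('mymodule_rest_endpoint', 'https://api.example.com/v1');

      // Per-site override in that site's settings.php:
      $conf['mymodule_rest_endpoint'] = 'https://staging-api.example.com/v1';

    This keeps the module files identical across sites while each installation points its tests at its own back end.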

    Read the article

  • How can I create The Oatmeal like quizzes (http://theoatmeal.com/quizzes) using Drupal module quiz ?

    - by vr3690
    I am trying to create quizzes on my Drupal site which are kind of like the ones found here: http://theoatmeal.com/quizzes. I am trying to use Drupal's Quiz module (http://drupal.org/project/quiz). Basically, every answer to every question in a quiz will have a particular weight. Say answer 1 is worth 2 marks, answer 2 is worth 3 marks, answer 3 is worth 4 marks, and so on. Eventually these all get added up and the result is shown according to the final tally of marks. Can anyone show me the steps for making such quizzes using the Quiz module, or some other module/method?

    Read the article

  • How can I call python module inside versioned package folder?

    - by Yanhua
    I need to write Python code that runs inside a host application. The code has to be deployed under a specific folder of the host application, and I must put my entry Python module directly under the root of that folder. I want to put all my other Python code and C/C++ DLLs under a sub-folder, which I'd prefer to name something like XXX-1.0, where the number is the version of my code. The entry module would then simply call a module under that sub-folder. This way, different versions of the code can be deployed together without collisions. Is this possible? Thanks.
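    A folder named like XXX-1.0 can't be imported as a package directly (the dash and dot make it an invalid identifier), but the entry module can add it to sys.path and import from it. A minimal sketch, where the folder and module names are only placeholders:

      import os
      import sys

      # Directory of this entry module inside the host application's folder.
      _here = os.path.dirname(os.path.abspath(__file__))

      # Versioned sub-folder holding the real code and the C/C++ DLLs.
      _versioned = os.path.join(_here, 'XXX-1.0')
      if _versioned not in sys.path:
          sys.path.insert(0, _versioned)

      import real_impl            # hypothetical module living inside XXX-1.0

      def main():
          return real_impl.run()  # hypothetical entry point of the real code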

    Read the article

  • How can I create a class diagram with NetBeans' 6.8 UML module?

    - by Karussell
    It seems to me the UML module of NetBeans is a bit too well hidden. In NetBeans 6.5 it was very easy to create a UML diagram - no plugin installation necessary or anything like that. Read my post where I found a zip file to install the UML module. Now, after this procedure, I have the UML module back, but it seems that I cannot create a class diagram with it. Do you know how I can do this with NetBeans 6.8? Update 1: There seems to be no support. Update 2: Nevertheless, somebody seems to have got it working.

    Read the article

  • How can I import the sqlite3 module into Python 2.4?

    - by Tony
    The sqlite3 module is included in Python version 2.5+. However, I am stuck with version 2.4. I uploaded the sqlite3 module files, added the directory to sys.path, but I get the following error when I try to import it:
      Traceback (most recent call last):
        File "<stdin>", line 1, in ?
        File "sqlite3/__init__.py", line 23, in ?
          from dbapi2 import *
        File "sqlite3/dbapi2.py", line 26, in ?
          from _sqlite3 import *
      ImportError: No module named _sqlite3
    The file '_sqlite3' is in lib-dynload, but if I include this in the sqlite3 directory, I get additional errors. Any suggestions? I am working in a limited environment; I don't have access to GCC, among other things.
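    The _sqlite3 extension is compiled against a specific interpreter version, so copying it from a 2.5 install into 2.4 generally won't work. The usual route on Python 2.4 is the third-party pysqlite2 package (a pre-built egg or RPM avoids needing GCC); it imports under a different name, which can be smoothed over with a small compatibility shim like this:

      # assumes the pysqlite2 package is installed for the Python 2.4 interpreter
      try:
          import sqlite3                             # Python 2.5+ standard library
      except ImportError:
          from pysqlite2 import dbapi2 as sqlite3    # pysqlite2 on Python 2.4

      conn = sqlite3.connect(':memory:')
      conn.execute('create table t (x integer)')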

    Read the article

  • Using the same modules in multiple projects

    - by Andreas Vinther
    I'm using Visual Studio 2010 and coding in VB.NET. My problem is that I've collected all the modules I've written and intend to reuse, and placed them in a separate folder. When I want to add a module from that folder to any given project, Visual Studio takes a copy of the module and places it in the project's source code folder, instead of referencing the module in the folder containing all the other modules. Is it possible to include a module in my project and leave it in the folder with all the other modules, so that when I improve a module it affects all the projects that use/reference it, instead of me having to manually copy the new module to all of those projects? Right now I have multiple instances of the exact same module that I need to update manually whenever I improve the code or add functionality.

    Read the article

  • Creating a search module displaying results in an iframe?

    - by ivayloc
    I recently signed up for a travel affiliate program and I'm creating a module with search fields to search the affiliate program. What I need is a component with an <iframe>...</iframe>, so that when I run a search, the results show inside the <iframe> in the component. That way I can choose which other modules to show (or not) alongside the search results. Could someone tell me how to do this, or point me to a similar module/component? I would be grateful for an answer or a solution of any kind! Thank you in advance!

    Read the article

  • How do you like to define your module-wide variables in drupal 6?

    - by sprugman
    I'm in my module file. I want to define some complex variables for use throughout the module. For simple things, I'm doing this: function mymodule_init() { define('SOME_CONSTANT', 'foo bar'); } But that won't work for more complex structures. Here are some ideas that I've thought of: global: function mymodule_init() { $GLOBALS['mymodule_var'] = array('foo' => 'bar'); } variable_set: function mymodule_init() { variable_set('mymodule_var', array('foo' => 'bar')); } property of a module class: class MyModule { static $var = array('foo' => 'bar'); } Variable_set/_get seems like the most "drupal" way, but I'm drawn toward the class setup. Are there any drawbacks to that? Any other approaches out there?
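    Another idiom worth weighing against the options above, for Drupal 6, is a small accessor function with a static cache, so the structure is built once per request and nothing leaks into globals; the function name and keys here are purely illustrative:

      <?php
      /**
       * Returns this module's shared settings, built once per page request.
       */
      function mymodule_settings() {
        static $settings;
        if (!isset($settings)) {
          $settings = array(
            'foo' => 'bar',
            'complex' => array('nested' => array(1, 2, 3)),
          );
        }
        return $settings;
      }

      // Usage elsewhere in the module:
      // $settings = mymodule_settings();
      // $bar = $settings['foo'];

    Unlike variable_set()/variable_get(), this keeps constants in code with no database round-trip, which suits values that never change at runtime.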

    Read the article

  • FormsAuthentication.SetAuthCookie in OnAuthorization of custom attribute

    - by Prasad
    I am trying to set an auth cookie in OnAuthorization of my custom attribute in an ASP.NET MVC (C#) application. When the session expires (new session), I set an auth cookie again to make it available until the user logs out. I have used the following to set the auth cookie:
      //set forms auth cookie
      FormsAuthentication.SetAuthCookie(strUserName, true);
    But when I check HttpContext.User.Identity.IsAuthenticated, it returns false. How do I set an auth cookie in OnAuthorization of a custom attribute?
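    SetAuthCookie only writes the cookie to the response; HttpContext.User is built from the incoming request at the start of the pipeline, so it won't reflect the new ticket until the next request. A common workaround is to also set the principal for the current request inside OnAuthorization - a sketch only, where strUserName comes from the question and the attribute is assumed to derive from AuthorizeAttribute:

      using System.Security.Principal;
      using System.Web.Mvc;
      using System.Web.Security;

      public override void OnAuthorization(AuthorizationContext filterContext)
      {
          // Issue the cookie so subsequent requests are authenticated as usual.
          FormsAuthentication.SetAuthCookie(strUserName, true);

          // Also swap in a principal for the *current* request, so that
          // HttpContext.User.Identity.IsAuthenticated is true right away.
          var identity = new GenericIdentity(strUserName);
          filterContext.HttpContext.User = new GenericPrincipal(identity, new string[0]);

          base.OnAuthorization(filterContext);
      }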

    Read the article
