Search Results

Search found 3564 results on 143 pages for 'the unix janitor'.


  • How do you determine the OWNER of an Oracle Database

    - by Kwang Mark Eleven
    When you install an Oracle database on a Unix server, the Unix user id you use for the installation becomes the OWNER of the database. What is the most reliable and general way of determining, in a shell script, which Unix user owns an Oracle installation? Can you grep a file created by the installation to find this information, or should I resort to running the ls command on a specific file in a specific directory? If the name of the file to be checked also varies, I would need a way of determining the name and path of that file. Thanks in advance for your time.
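    One possible approach, sketched below in Python rather than shell, is to stat a file that the installer always creates and report its owner. This is only a sketch: it assumes ORACLE_HOME is set in the environment, and the path $ORACLE_HOME/bin/oracle is an assumption that may vary between releases and platforms.

        import os
        import pwd

        # Report the Unix owner of an Oracle installation by stat()-ing a file
        # the installer creates (the binary path below is an assumption).
        oracle_home = os.environ["ORACLE_HOME"]
        binary = os.path.join(oracle_home, "bin", "oracle")
        owner = pwd.getpwuid(os.stat(binary).st_uid).pw_name
        print("Oracle installation under %s is owned by %s" % (oracle_home, owner))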

    Read the article

  • Automatically Organizing Windows Files and Folders

    - by Kiquenet
    For Windows only: organizing the eleventy-billion files you've got stuffed into folders on your hard drive is very hard. For example, I have one folder on my computer that I save all web downloads to, regardless of file type, size or purpose. Many of the files are only temporary downloads, for instance setup files of applications that I test, demonstration videos that I watch once, or documents that I want to read. Some files, on the other hand, are there to stay, and in the past I used to move them out of the download folder manually. Other folders on my computer hold lots of source code, tests, programs, tools, and so on. I need a way to organize billions of files. What are the best tools for organizing, sorting, etc. your files and folders automatically?
    Digital Janitor http://davidevitelaru.com/software/digital-janitor/
    Belvedere http://lifehacker.com/5510961/how-to-automatically-clean-and-organize-your-desktop-downloads-and-other-folders
    Download Mover http://www.neoteo.com/download-mover-reorganiza-tus-descargas-14188
    File/Folder Date Organizer http://seedling.dcmembers.com/other/ffdorg.zip
    DropIt http://www.lupopensuite.com/db/dropit.htm
    Other questions about organizing files, the desktop, etc.:
    How to automate the process of organizing audio files on Windows
    Organizing My Windows Desktop
    What's a good way for organizing PDF documents on Windows?
    Folksonomy tagging for files
    What is your method of “folksonomy” tagging for files on your local machine?
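    For the simple download-folder case, rule-based sorting is only a short script; below is a minimal Python sketch (the folder path and the extension-to-subfolder rules are hypothetical, adjust to taste).

        import shutil
        from pathlib import Path

        # Hypothetical download folder and routing rules (extension -> subfolder).
        DOWNLOADS = Path.home() / "Downloads"
        RULES = {
            ".exe": "installers", ".msi": "installers",
            ".pdf": "documents", ".docx": "documents",
            ".mp4": "videos", ".avi": "videos",
        }

        for item in DOWNLOADS.iterdir():
            if item.is_file():
                target = RULES.get(item.suffix.lower())
                if target:
                    dest = DOWNLOADS / target
                    dest.mkdir(exist_ok=True)            # create the bucket on first use
                    shutil.move(str(item), str(dest / item.name))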

    Read the article

  • Oracle Identity Management Connector Overview

    - by Darin Pendergraft
    Oracle Identity Manager (OIM) is a complete Identity Governance system that automates access rights management and provisions IT resources. One important aspect of this system is the Identity Connectors that are used to integrate OIM with external, identity-aware applications. New in OIM 11gR2 PS1 is the Identity Connector Framework (ICF), which is the foundation for both OIM and Oracle Waveset. Identity Connectors perform several very important functions:
    - Onboarding accounts from trusted sources like SAP, Oracle E-Business Suite, and PeopleSoft HCM
    - Managing the user lifecycle in various target systems through provisioning and reconciliation operations
    - Synchronizing entitlements from target systems so that they are available in the OIM request catalog
    - Fulfilling access grant and access revocation requests
    - Some connectors may support role lifecycle management
    - Some connectors may support password sync from the target to OIM
    The Identity Connectors are broken down into several families:
    - The BMC Remedy Family: BMC Remedy Ticket Management, BMC Remedy User Management
    - The Microsoft Family: Microsoft Active Directory, Microsoft Active Directory Password Sync, Microsoft Exchange
    - The Novell Family: Novell eDirectory, Novell GroupWise
    - The Oracle E-Business Suite Family: Oracle E-Business Employee Reconciliation, Oracle E-Business User Management
    - The PeopleSoft Family: PeopleSoft Employee Reconciliation, PeopleSoft User Management
    - The SAP Family: SAP CUA, SAP Employee Reconciliation, SAP User Management
    - The UNIX Family: UNIX SSH, UNIX Telnet
    As you can see, there is a large number of connectors supporting applications from a variety of vendors, enabling OIM to manage your business applications and resources. If you are interested in finding out more, you can get documentation on these connectors on our OTN page at: http://www.oracle.com/technetwork/middleware/id-mgmt/downloads/connectors-101674.html

    Read the article

  • Quest releases NetVault Backup, Spotlight, Foglight, JClass, JProbe, Shareplex, Management Console and Authentication Services on Solaris 11

    - by user13333379
    Quest released the following products on Solaris 11 (SPARC, x64):
    - Quest NetVault Backup Server: v8.6.3, v8.6.1, v8.6 - Solaris 11, 10, 9; SPARC/x86/64
    - Quest NetVault Backup Client: v8.6.3, v8.6.1, v8.6 - Solaris 11, 10, 9; SPARC/x86/64
    - Quest Spotlight on Unix: v8.0 - Solaris 11, 10, 9; SPARC/x86/64
    - Quest Spotlight on Oracle: v9.0 - Solaris 11, 10, 9; SPARC/x86/64
    - Quest Authentication Services (formerly Vintela Authentication Services): v4.0.3 - Solaris 11, 10, 9; SPARC/x86/64
    - Quest One Management Console for Unix (formerly Quest Identity Manager for Unix) - Solaris 11, 10, 9; SPARC/x86/64
    - Quest Foglight for Operating System: v5.6.5 - Solaris 11, 10, 9; SPARC/x86/64, including zones
    - Quest Foglight Agent Manager: v5.6.x - Solaris 11, 10, 9; SPARC/x86/64, including zones
    - Quest Foglight Cartridge for Infrastructure: v5.6.5 - Solaris 11, 10, 9; SPARC/x86/64, including zones
    - Quest JClass: v6.5 - Solaris 11, 10, 9; SPARC/x86/64
    - Quest JProbe: v9.5 - Solaris 11; x86
    - Quest Shareplex for Oracle: v7.6.3 - Solaris 11, 10, 9; SPARC/x86/64

    Read the article

  • Executes a function until it returns a nil, collecting its values into a list

    - by Baldur
    I got this idea from XKCD's Hofstadter comic; what's the best way to create a conditional loop in (any) Lisp dialect that executes a function until it returns NIL, at which point it collects the returned values into a list? For those who haven't seen the joke, it goes that Douglas Hofstadter's “eight-word” autobiography consists of only six words: “I'm So Meta, Even This Acronym”, containing the continuation of the joke (some odd meta-paraprosdokian?): “Is Meta” — the joke being that the autobiography is actually “I'm So Meta, Even This Acronym Is Meta”. But why not go deeper? Assume an acronymizing function META that creates an acronym from a string and splits it into words, and returns NIL if the string contains but one word:
        (meta "I'm So Meta, Even This Acronym") → "Is Meta"
        (meta (meta "I'm So Meta, Even This Acronym")) → "Im"
        (meta (meta (meta "I'm So Meta, Even This Acronym"))) → NIL
        (meta "GNU is Not UNIX") → "GNU"
        (meta (meta "GNU is Not UNIX")) → NIL
    Now I'm looking for how to implement a function so that:
        (so-function #'meta "I'm So Meta, Even This Acronym") → ("I'm So Meta, Even This Acronym" "Is Meta" "Im")
        (so-function #'meta "GNU is Not Unix") → ("GNU is Not Unix" "GNU")
    What's the best way of doing this?
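    For comparison, the loop being asked for takes only a few lines in Python; this is just a sketch of the control flow (it assumes a meta-like function that returns None when it runs out of words), not a Lisp answer.

        def collect_until_none(f, x):
            # Apply f repeatedly, collecting the seed and every non-None result.
            values = [x]
            while (x := f(x)) is not None:
                values.append(x)
            return values

        # e.g. collect_until_none(meta, "GNU is Not Unix") -> ["GNU is Not Unix", "GNU"]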

    Read the article

  • How to compile ocaml to native code

    - by Indra Ginanjar
    I'm really interested in learning OCaml; it's fast (they say it can be compiled to native code) and it's functional. So I tried to code something easy, like enabling the MySQL event scheduler:
        #load "unix.cma";;
        #directory "+mysql";;
        #load "mysql.cma";;
        let db = Mysql.quick_connect ~user:"username" ~password:"userpassword" ~database:"databasename" ();;
        let sql = Printf.sprintf "SET GLOBAL EVENT_SCHEDULER=1;" in (Mysql.exec db sql);;
    It works fine in the OCaml interpreter, but when I tried to compile it to native code (I'm using Ubuntu Karmic), neither of these commands worked:
        ocamlopt -o mysqleventon mysqleventon.ml unix.cmxa mysql.cmxa
        ocamlopt -o mysqleventon mysqleventon.ml unix.cma mysql.cma
    I also tried
        ocamlc -c mysqleventon.ml unix.cma mysql.cma
    All of them result in the same message:
        File "mysqleventon.ml", line 1, characters 0-1:
        Error: Syntax error
    Then I tried to remove the "#load" directives, so the code goes like this:
        let db = Mysql.quick_connect ~user:"username" ~password:"userpassword" ~database:"databasename" ();;
        let sql = Printf.sprintf "SET GLOBAL EVENT_SCHEDULER=1;" in (Mysql.exec db sql);;
    ocamlopt then gives the message:
        File "mysqleventon.ml", line 1, characters 9-28:
        Error: Unbound value Mysql.quick_connect
    I hope someone can tell me where I'm going wrong.

    Read the article

  • Django Querysets -- need a less expensive way to do this..

    - by rh0dium
    Hi all, I have a problem with some code and I believe it is because of the expense of the queryset. I am looking for a much less expensive (in terms of time) way to do this:
        log.info("Getting Users")
        employees = Employee.objects.filter(is_active = True)
        log.info("Have Users")
        if opt.supervisor:
            if opt.hierarchical:
                people = getSubs(employees, " ".join(args))
            else:
                people = employees.filter(supervisor__name__icontains = " ".join(args))
        else:
            log.info("Filtering Users")
            people = employees.filter(name__icontains = " ".join(args)) | \
                     employees.filter(unix_accounts__username__icontains = " ".join(args))
            log.info("Filtered Users")
        log.info("Processing data")
        np = []
        for person in people:
            unix, p4, bugz = "No", "No", "No"
            if len(person.unix_accounts.all()): unix = "Yes"
            if len(person.perforce_accounts.all()): p4 = "Yes"
            if len(person.bugzilla_accounts.all()): bugz = "Yes"
            if person.cell_phone != "":
                exphone = fixphone(person.cell_phone)
            elif person.other_phone != "":
                exphone = fixphone(person.other_phone)
            else:
                exphone = ""
            np.append({ 'name': person.name, 'office_phone': fixphone(person.office_phone),
                        'position': person.position, 'location': person.location.description,
                        'email': person.email, 'functional_area': person.functional_area.name,
                        'department': person.department.name, 'supervisor': person.supervisor.name,
                        'unix': unix, 'perforce': p4, 'bugzilla': bugz,
                        'cell_phone': fixphone(exphone), 'fax': fixphone(person.fax),
                        'last_update': person.last_update.ctime() })
        log.info("Have data")
    Now this results in a log which looks like this:
        19:00:55 INFO phone phone Getting Users
        19:00:57 INFO phone phone Have Users
        19:00:57 INFO phone phone Processing data
        19:01:30 INFO phone phone Have data
    As you can see, it's taking over 30 seconds to simply iterate over the data. That is way too expensive. Can someone clue me in to a more efficient way to do this? I thought that doing the first filter would make things easier, but it seems to have no effect. I'm at a loss on this one. Thanks. To be clear, this is about 1500 employees -- not too many!
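    Most of the 30 seconds comes from the per-person .all() calls, each of which issues extra queries; below is a hedged sketch of pushing that work into the database instead (model and field names are taken from the question; it assumes a Django version with select_related/annotate and uses Count(..., distinct=True) to avoid join inflation).

        from django.db.models import Count

        people = (
            Employee.objects.filter(is_active=True)
            .select_related("location", "functional_area", "department", "supervisor")
            .annotate(
                unix_count=Count("unix_accounts", distinct=True),
                p4_count=Count("perforce_accounts", distinct=True),
                bugz_count=Count("bugzilla_accounts", distinct=True),
            )
        )
        for person in people:
            unix = "Yes" if person.unix_count else "No"   # no extra query per person
            p4 = "Yes" if person.p4_count else "No"
            bugz = "Yes" if person.bugz_count else "No"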

    Read the article

  • "date_part('epoch', now() at time zone 'UTC')" not the same time as "now() at time zone 'UTC'" in po

    - by sirlark
    I'm writing a web-based front end to a database (PHP/PostgreSQL) in which I need to store various dates/times. The times are meant to always be entered on the client side in local time, and displayed in local time too. For storage purposes, I store all dates/times as integers (UNIX timestamps) normalised to UTC. One particular field has the restriction that the timestamp filled in is not allowed to be in the future, so I tried to enforce this with a database constraint:
        CONSTRAINT not_future CHECK (timestamp-300 <= date_part('epoch', now() at time zone 'UTC'))
    The -300 gives 5 minutes of leeway in case of slightly desynchronised clocks between browser and server. The problem is, this constraint always fails when submitting the current time. I've done some testing and found the following in the PostgreSQL client:
        SELECT now()  -- returns the correct local time
        SELECT date_part('epoch', now())  -- returns a UNIX timestamp at UTC (tested by feeding the value into the date function in PHP, correcting for its compensation to my time zone)
        SELECT date_part('epoch', now() at time zone 'UTC')  -- returns a UNIX timestamp two time zone offsets west, e.g. I am at GMT+2 and I get a GMT-2 timestamp
    I've figured out that dropping the "at time zone 'UTC'" will solve my problem, but my question is: if 'epoch' is meant to return a UNIX timestamp, which AFAIK is always meant to be in UTC, why would the 'epoch' of a time already in UTC be corrected? Is this a bug, or am I missing something about the defined/normal behaviour here?
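    This isn't the Postgres internals, but the same double shift is easy to reproduce in Python, which may help illustrate what happens when a UTC value loses its time zone tag and is then re-interpreted as local time (a sketch; it assumes the machine's local zone is not UTC, so the two printed values differ).

        from datetime import datetime, timezone

        aware = datetime.now(timezone.utc)    # carries its zone, like a timestamptz
        naive = aware.replace(tzinfo=None)    # like `now() at time zone 'UTC'`: zone stripped

        print(aware.timestamp())   # the correct UNIX epoch
        print(naive.timestamp())   # naive value re-interpreted as *local* time: shifted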

    Read the article

  • LibPCL issues on Ubuntu 13.10

    - by user254885
    I wanted to install the Point Cloud Library, but it does not work. I use an ODROID board (ARM processor).
        Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:
        The following packages have unmet dependencies:
         libpcl-all : Depends: libpcl-1.7-all but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.
    By compiling v1.7, I get these errors:
        /usr/lib/gcc/arm-linux-gnueabihf/4.8/../../../arm-linux-gnueabihf/libpthread.a(ptw-fcntl.o): In function `__fcntl_nocancel':
        /build/buildd/eglibc-2.17/nptl/../sysdeps/unix/sysv/linux/i386/fcntl.c:37: undefined reference to `__libc_do_syscall'
        /usr/lib/gcc/arm-linux-gnueabihf/4.8/../../../arm-linux-gnueabihf/libpthread.a(ptw-fcntl.o): In function `__libc_fcntl':
        /build/buildd/eglibc-2.17/nptl/../sysdeps/unix/sysv/linux/i386/fcntl.c:53: undefined reference to `__libc_do_syscall'
        /build/buildd/eglibc-2.17/nptl/../sysdeps/unix/sysv/linux/i386/fcntl.c:57: undefined reference to `__libc_do_syscall'
        /usr/lib/gcc/arm-linux-gnueabihf/4.8/../../../arm-linux-gnueabihf/libpthread.a(ptw-open64.o): In function `__libc_open64':
        /build/buildd/eglibc-2.17/nptl/../sysdeps/unix/sysv/linux/open64.c:41: undefined reference to `__libc_do_syscall'
        /build/buildd/eglibc-2.17/nptl/../sysdeps/unix/sysv/linux/open64.c:45: undefined reference to `__libc_do_syscall'
        /usr/lib/gcc/arm-linux-gnueabihf/4.8/../../../arm-linux-gnueabihf/libpthread.a(cancellation.o):/build/buildd/eglibc-2.17/nptl/cancellation.c:96: more undefined references to `__libc_do_syscall' follow
        collect2: error: ld returned 1 exit status
        make[2]: *** [bin/pcl_convert_pcd_ascii_binary] Error 1
        make[1]: *** [io/tools/CMakeFiles/pcl_convert_pcd_ascii_binary.dir/all] Error 2
        make: *** [all] Error 2
    I could not find anything on Google to solve these errors; I believe some packages were not ported for ARM processors. Any help would be appreciated.
        $ dpkg --list | grep headers
        ii linux-headers-3.0.63-odroidx2 20130215
        ii linux-headers-3.0.71-odroidx2 20130415
        ii linux-headers-3.0.74-odroidx2 20130417
        ii linux-headers-3.0.75-odroidx2 20130426
        ii linux-headers-3.1.10-6 3.1.10-6.10
        ii linux-headers-3.1.10-6-ac100 3.1.10-6.10
        ii linux-headers-ac100 3.1.10.6.2
    Installing packages didn't go well either: sudo apt-get install linux-generic [sudo] password for odroid: Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: debugedit libasound2-dev libestools2.1-dev librpmbuild3 librpmsign1 thunderbird-locale-en thunderbird-locale-en-gb thunderbird-locale-en-us thunderbird-locale-ko Use 'apt-get autoremove' to remove them. The following extra packages will be installed: linux-headers-3.11.0-17 linux-headers-3.11.0-17-generic linux-headers-generic linux-image-3.11.0-17-generic linux-image-generic Suggested packages: fdutils linux-doc-3.11.0 linux-source-3.11.0 linux-tools The following NEW packages will be installed: linux-generic linux-headers-3.11.0-17 linux-headers-3.11.0-17-generic linux-headers-generic linux-image-3.11.0-17-generic linux-image-generic 0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded. Need to get 58.2 MB of archives. After this operation, 203 MB of additional disk space will be used. Do you want to continue [Y/n]? 
y Get:1 http://ports.ubuntu.com/ubuntu-ports/ saucy-updates/main linux-image-3.11.0-17-generic armhf 3.11.0-17.31 [44.5 MB] Get:2 http://ports.ubuntu.com/ubuntu-ports/ saucy-updates/main linux-image-generic armhf 3.11.0.17.18 [2,356 B] Get:3 http://ports.ubuntu.com/ubuntu-ports/ saucy-updates/main linux-headers-3.11.0-17 all 3.11.0-17.31 [12.6 MB] Get:4 http://ports.ubuntu.com/ubuntu-ports/ saucy-updates/main linux-headers-3.11.0-17-generic armhf 3.11.0-17.31 [1,128 kB] Get:5 http://ports.ubuntu.com/ubuntu-ports/ saucy-updates/main linux-headers-generic armhf 3.11.0.17.18 [2,350 B] Get:6 http://ports.ubuntu.com/ubuntu-ports/ saucy-updates/main linux-generic armhf 3.11.0.17.18 [1,726 B] Fetched 58.2 MB in 13s (4,379 kB/s) Selecting previously unselected package linux-image-3.11.0-17-generic. (Reading database ... 258618 files and directories currently installed.) Unpacking linux-image-3.11.0-17-generic (from .../linux-image-3.11.0-17-generic_3.11.0-17.31_armhf.deb) ... Examining /etc/kernel/preinst.d/ Done. Selecting previously unselected package linux-image-generic. Unpacking linux-image-generic (from .../linux-image-generic_3.11.0.17.18_armhf.deb) ... Selecting previously unselected package linux-headers-3.11.0-17. Unpacking linux-headers-3.11.0-17 (from .../linux-headers-3.11.0-17_3.11.0-17.31_all.deb) ... Selecting previously unselected package linux-headers-3.11.0-17-generic. Unpacking linux-headers-3.11.0-17-generic (from .../linux-headers-3.11.0-17-generic_3.11.0-17.31_armhf.deb) ... Selecting previously unselected package linux-headers-generic. Unpacking linux-headers-generic (from .../linux-headers-generic_3.11.0.17.18_armhf.deb) ... Selecting previously unselected package linux-generic. Unpacking linux-generic (from .../linux-generic_3.11.0.17.18_armhf.deb) ... Setting up linux-image-3.11.0-17-generic (3.11.0-17.31) ... Running depmod. update-initramfs: deferring update (hook will be called later) cp: cannot stat ‘/boot/initrd.img-3.11.0-17-generic’: No such file or directory Failed to copy /boot/initrd.img-3.11.0-17-generic to /boot/initrd.img at /var/lib/dpkg/info/linux-image-3.11.0-17-generic.postinst line 730. dpkg: error processing linux-image-3.11.0-17-generic (--configure): subprocess installed post-installation script returned error exit status 2 dpkg: dependency problems prevent configuration of linux-image-generic: linux-image-generic depends on linux-image-3.11.0-17-generic; however: Package linux-image-3.11.0-17-generic is not configured yet. dpkg: error processing linux-image-generic (--configure): dependency problems - leaving unconfigured Setting up linux-headers-3.11.0-17 (3.11.0-17.31) ... No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already Setting up linux-headers-3.11.0-17-generic (3.11.0-17.31) ... Examining /etc/kernel/header_postinst.d. Setting up linux-headers-generic (3.11.0.17.18) ... dpkg: dependency problems prevent configuration of linux-generic: linux-generic depends on linux-image-generic (= 3.11.0.17.18); however: Package linux-image-generic is not configured yet. 
dpkg: error processing linux-generic (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already Errors were encountered while processing: linux-image-3.11.0-17-generic linux-image-generic linux-generic E: Sub-process /usr/bin/dpkg returned an error code (1) I had to uninstall these because they were messing up the installation of other packages (build-essential was already installed).

    Read the article

  • Oracle Enterprise Manager Ops Center : Using Operational Profiles to Install Packages and other Content

    - by LeonShaner
    Oracle Enterprise Manager Ops Center provides numerous ways to deploy content, such as through OS Update Profiles, or as part of an OS Provisioning plan or combinations of those and other "Install Software" capabilities of Deployment Plans.  This short "how-to" blog will highlight an alternative way to deploy content using Operational Profiles. Usually we think of Operational Profiles as a way to execute a simple "one-time" script to perform a basic system administration function, which can optionally be based on user input; however, Operational Profiles can be much more powerful than that.  There is often more to performing an action than merely running a script -- sometimes configuration files, packages, binaries, and other scripts, etc. are needed to perform the action, and sometimes the user would like to leave such content on the system for later use. For shell scripts and other content written to be generic enough to work on any flavor of UNIX, converting the same scripts and configuration files into Solaris 10 SVR4 package, Solaris 11 IPS package, and/or a Linux RPM's might be seen as three times the work, for little appreciable gain.   That is where using an Operational Profile to deploy simple scripts and other generic content can be very helpful.  The approach is so powerful, that pretty much any kind of content can be deployed using an Operational Profile, provided the files involved are not overly large, and it is not necessary to convert the content into UNIX variant-specific formats. The basic formula for deploying content with an Operational Profile is as follows: Begin with a traditional script header, which is a UNIX shell script that will be responsible for decoding and extracting content, copying files into the right places, and executing any other scripts and commands needed to install and configure that content. Include steps to make the script platform-aware, to do the right thing for a given UNIX variant, or a "sorry" message if the operator has somehow tried to run the Operational Profile on a system where the script is not designed to run.  Ops Center can constrain execution by target type, so such checks at this level are an added safeguard, but also useful with the generic target type of "Operating System" where the admin wants the script to "do the right thing," whatever the UNIX variant. Include helpful output to show script progress, and any other informational messages that can help the admin determine what has gone wrong in the case of a problem in script execution.  Such messages will be shown in the job execution log. Include necessary "clean up" steps for normal and error exit conditions Set non-zero exit codes when appropriate -- a non-zero exit code will cause an Operational Profile job to be marked failed, which is the admin's cue to look into the job details for diagnostic messages in the output from the script. That first bullet deserves some explanation.  If Operational Profiles are usually simple "one-time" scripts and binary content is not allowed, then how does the actual content, packages, binaries, and other scripts get delivered along with the script?  More specifically, how does one include such content without needing to first create some kind of traditional package?   All that is required is to simply encode the content and append it to the end of the Operational Profile.  The header portion of the Operational Profile will need to contain the commands to decode the embedded content that has been appended to the bottom of the script.  
The header code can do whatever else is needed, and finally clean up any intermediate files that were created during the decoding and extraction of the content. One way to encode binary and other content for inclusion in a script is to use the "uuencode" utility to convert the content into simple base64 ASCII text -- a form that is suitable to be appended to an Operational Profile.   The behavior of the "uudecode" utility is such that it will skip over any parts of the input that do not fit the uuencoded "begin" and "end" clauses.  For that reason, your header script will be skipped over, and uudecode will find your embedded content, that you will uuencode and paste at the end of the Operational Profile.  You can have as many "begin" / "end" clauses as you need -- just separate each embedded file by an empty line between "begin" and "end" clauses. Example:  Install SUNWsneep and set the system serial number Script:  deploySUNWsneep.sh ( <- right-click / save to download) Highlights: #!/bin/sh # Required variables: OC_SERIAL="$OC_SERIAL" # The user-supplied serial number for the asset ... Above is a good practice, showing right up front what kind of input the Operational Profile will require.   The right-hand side where $OC_SERIAL appears in this example will be filled in by Ops Center based on the user input at deployment time. The script goes on to restrict the use of the program to the intended OS type (Solaris 10 or older, in this example, but other content might be suitable for Solaris 11, or Linux -- it depends on the content and the script that will handle it). A temporary working directory is created, and then we have the command that decodes the embedded content from "self" which in scripting terms is $0 (a variable that expands to the name of the currently executing script): # Pass myself through uudecode, which will extract content to the current dir uudecode $0 At that point, whatever content was appended in uuencoded form at the end of the script has been written out to the current directory.  In this example that yields a file, SUNWsneep.7.0.zip, which the rest of the script proceeds to unzip, and pkgadd, followed by running "/opt/SUNWsneep/bin/sneep -s $OC_SERIAL" which is the command that stores the system serial for future use by other programs such as Explorer.   Don't get hung up on the example having used a pkgadd command.  The content started as a zip file and it could have been a tar.gz, or any other file.  This approach simply decodes the file.  The header portion of the script has to make sense of the file and do the right thing (e.g. it's up to you). The script goes on to clean up after itself, whether or not the above was successful.  Errors are echo'd by the script and a non-zero exit code is set where appropriate. Second to last, we have: # just in case, exit explicitly, so that uuencoded content will not cause error OPCleanUP exit # The rest of the script is ignored, except by uudecode # # UUencoded content follows # # e.g. for each file needed, #  $ uuencode -m {source} {source} > {target}.uu5 # then paste the {target}.uu5 files below # they will be extracted into the workding dir at $TDIR # The commentary above also describes how to encode the content. Finally we have the uuencoded content: begin-base64 444 SUNWsneep.7.0.zip UEsDBBQAAAAIAPsRy0Di3vnukAAAAMcAAAAKABUAcmVhZG1lLnR4dFVUCQADOqnVT7up ... 
VXgAAFBLBQYAAAAAAgACAJEAAADTNwEAAAA= ==== That last line of "====" is the base64 uuencode equivalent of a blank line, followed by "end", and as mentioned you can have as many begin/end clauses as you need. Just separate each embedded file by a blank line after each ==== and before each begin-base64. Deploying the example Operational Profile looks like this (where I have pasted the system serial number into the required field): The job succeeded, but here is an example of the kind of diagnostic messages that the example script produces, and how Ops Center displays them in the job details: This same general approach could be used to deploy Explorer and other useful utilities and scripts. Please let us know what you think! Until next time... \Leon -- Leon Shaner | Senior IT/Product Architect, Systems Management | Ops Center Engineering @ Oracle The views expressed on this [blog; Web site] are my own and do not necessarily reflect the views of Oracle. For more information, please go to the Oracle Enterprise Manager web page or follow us at: Twitter | Facebook | YouTube | Linkedin | Newsletter

    Read the article

  • creating a new user Ubuntu

    - by Matt
    I am trying to create a new user that can sftp to a server. I did this:
        ubuntu@ip-10-112-46-15:~$ sudo useradd jesse -p testPass
        ubuntu@ip-10-112-46-15:~$ sudo passwd jesse
        Enter new UNIX password:
        Retype new UNIX password:
        passwd: password updated successfully
    But when I try to log in via sftp I can't get in. Am I missing something, like adding a group? (The answer turned out to be setting PasswordAuthentication yes in sshd_config.)

    Read the article

  • pam auth via winbind, howto map primary group for users?

    - by dr gonzo
    I have Unix users authenticating against a PDC (via winbind) and want the primary group of those users to be a local Unix group (e.g. www-data). The users currently get the group "domain users" with gid 10006 (from the winbind gid mapping):
        idmap uid = 10000-20000
        idmap gid = 10000-20000
        winbind enum groups = yes
        winbind enum users = yes
        winbind use default domain = yes
        winbind nested groups = yes
    But I want the primary group to be 33 (www-data) for all users. How can I achieve that?

    Read the article

  • linux text editor for windows

    - by lego 69
    Hello, can somebody recommend a good Linux-style text editor for Windows (if one exists)? I wrote scripts for C shell using the Windows text editor, but I have a problem: they don't run, because Windows is not UNIX. What can I do? I don't want to install Linux just for a few scripts; I test my scripts via a Unix server (the server is not mine). Thanks in advance.

    Read the article

  • How do you backup 40+ Centos5.5 servers?

    - by John Little
    We are embarrassed to ask this question. Apologies for our lack of UNIX expertise. We have inherited 40+ CentOS 5.5 servers, and don't know how to back them up. We need low-level, clone-type images so that we could restore the servers from scratch if we had to replace the HDs etc. We have used the "dd" command, but we assume this only works if you want to back up one local disk to another, not 40 servers to one server with an external USB HD attached. All 40 servers have a pair of mirrored disks (we don't know if it's HW or SW RAID). Most only have 100MB used. Servers are running Apache, Zend, Tomcat, MySQL etc. Ideally we don't want to have to shut them down to back up (but we could). We assume that standard Unix commands like tar, cpio, rsync, scp etc. are of no use, as they only copy files, not partitions, attributes, groups etc., i.e. they do not produce a result which can simply be re-imaged to a new HD to get the server back from the dead. We have a large SAN, a spare Windows box and spare Unix boxes, but these are only visible to one layer in the network. We have an unused Dell DL2000 monster tape unit, but no software or documentation for it. We have a copy of Symantec Backup Exec, but we have no budget for Unix client licenses. (The company has negative amounts of money.) We need to be able to initiate the backup remotely, as we can only access the servers in person in an emergency (i.e. to restore). Googling returns some applications to do this, e.g. Clonezilla - looks difficult to install and invasive; Mondo - only seems to support backup if you are local to the machine; Amanda - might be an option, but looks like days/weeks of work to learn and set up. Is there anything built into CentOS, or do we have to go the route of installing, learning and configuring a set of backup software? Any ideas? This must be a pretty standard problem for which Googling doesn't give an obvious answer.
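    One note: dd is not limited to local-to-local copies; its output can be streamed over SSH to the machine holding the USB disk. A rough sketch of that idea, driven from Python so it can loop over many servers (the hostnames and device names are hypothetical, it assumes key-based root SSH, and imaging a live, mounted filesystem gives at best a crash-consistent copy):

        import subprocess

        # Hypothetical inventory: (hostname, disk device to image).
        SERVERS = [("web01", "/dev/sda"), ("web02", "/dev/sda")]

        for host, device in SERVERS:
            # Stream a compressed raw image of the remote disk into a local file.
            with open(f"{host}.img.gz", "wb") as out:
                subprocess.run(
                    ["ssh", f"root@{host}", f"dd if={device} bs=1M | gzip -c"],
                    stdout=out, check=True,
                )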

    Read the article

  • Linux (redhat) how to change password to previous password

    - by ring bearer
    Is there a way to change a password to the same value as the previous password? I know this is a security flaw, but I would like to know anyway. When I try this:
        [mrbean@wwwserver ~]$ passwd
        Changing password for user mrbean.
        Changing password for mrbean
        (current) UNIX password:
        New UNIX password:    <-- here I typed the same password
        BAD PASSWORD: is too similar to the old one

    Read the article

  • Fedora internet access and managing

    - by Fractal
    This question concerns a Fedora (UNIX) install, a DWA-552 wireless adapter, and internet access. What packages are required, on a KDE GUI installation and on a basic text-mode (TUI) installation, to access the internet and manage wireless networks? On a larger scale, does anyone know of an all-encompassing list of basic functions (such as monitoring or hardware control) with their respective package dependencies?

    Read the article

  • Informix & .NET

    - by Giuliano
    I'm connecting to Informix 6.5 using the IBM OLE DB driver. Some clarification: Informix Dynamic Server Workgroup Edition. Please note that there are many versions of IDS/WE - it helps to be precise about which version you have. It might be something like 10.00.TC1; the version number could be as old as 7.xx.TCx, or as new as 11.50.TCx. If your server is not on Windows but on Unix - or is on 64-bit Windows - the letter 'T' would change to 'U' (32-bit Unix) or 'F' (64-bit - Unix or Windows), or even 'H' (a particular HP-UX configuration). When I try to create a dataset using the VS DataSet wizard, after clicking Test Connection it tells me 'Base Table not found'. The OLE DB driver is the official one provided by IBM. The problem with the 'Table not found' dialog is that Visual Studio doesn't tell me the table name, only the error! Has anyone else had this problem?

    Read the article

  • qt in windows7 environment

    - by sneha
    Hello everyone, I am having a problem running a Qt example that uses Win32 libraries. When I compile, I don't get any errors, but when I run it, it is not able to open the application (.exe) file on Windows 7. When I compile this example on Windows XP it works fine. Can anyone let me know whether I need to change my .pro file in order to get it working under Windows 7? Please help me out; thanks in advance. Here is my .pro file:
        # -------------------------------------------------
        # Project created by QtCreator 2010-04-16T11:45:43
        # -------------------------------------------------
        QT += network
        QT += xml
        QT += opengl
        TARGET = Application
        TEMPLATE = app
        SOURCES += main.cpp \
            mainwindow.cpp \
            Tools.cpp \
            Objects.cpp
        HEADERS += mainwindow.h \
            Tools.h \
            Objects.h
        unix {
            OBJECTS_DIR = .obj
            MOC_DIR = .moc
        }
        # UNIX installation
        isEmpty(PREFIX):PREFIX = /usr/local
        unix {
            headers.path = $$PREFIX/include/ZIP
            headers.files = $$HEADERS
            target.path = $$PREFIX/lib
            INSTALLS += headers \
                target
        }
        !mac:x11:LIBS += -ldns_sd
        win32:LIBS += -ldnssd
        LIBPATH = C:/Temp/mDNSResponder-107.6/mDNSWindows/DLL/Debug
        INCLUDEPATH += c:/Temp/mDNSResponder-107.6/mDNSShared

    Read the article

  • Help needed to resolve RMI RemoteException

    - by Gabriel Parenza
    Hello friends, any idea why I get a RemoteException while trying to invoke methods on a Unix machine from Windows? I am inside the network and don't think this is a firewall problem, as I can telnet from Windows to the Unix box after starting the RMI server on the Unix box. I also could not understand why it is going to the local loopback IP. Stack trace:
        RemoteException occurred, details:
        java.rmi.ConnectException: Connection refused to host: 127.0.0.1; nested exception is:
            java.net.ConnectException: Connection refused: connect
        java.rmi.ConnectException: Connection refused to host: 127.0.0.1; nested exception is:
            java.net.ConnectException: Connection refused: connect
    Many thanks in advance.

    Read the article

  • MySQL JDBC date issues with database server in different timezone

    - by Somatik
    I have a database server in the "Europe/London" time zone and my web server in "Europe/Brussels". Since it is summer time now, my application server has a 2-hour difference. I created a test to reproduce my issue:
        Query q = JPA.em().createNativeQuery("SELECT UNIX_TIMESTAMP(startDateTime) FROM `Event` WHERE `id` =574");
        BigInteger unix = (BigInteger) q.getSingleResult();
        System.out.println(unix + "000 UNIX_TIMESTAMP to BigInteger");
        Query q2 = JPA.em().createNativeQuery("SELECT startDateTime FROM `Event` WHERE `id` =574");
        Timestamp o = (Timestamp) q2.getSingleResult();
        System.out.println(o.getTime() + " Timestamp");
    The startDateTime column is defined as 'datetime' (but I see the same issue with 'timestamp'). The output I am getting is this:
        1340291591000 UNIX_TIMESTAMP to BigInteger
        1340284391000 Timestamp
    Reading Java date objects results in a shift in time zone; how do I fix this? I would expect the JDBC driver to just set the "unix time" value it gets from the server in the Date object. (A proper solution should work with any timezone combination, not only for a DB in GMT.)

    Read the article

  • AbsoluteTime with numeric argument behaves strangely.

    - by dreeves
    This is strange:
        DateList@AbsoluteTime[596523]
    returns {2078, 7, 2, 2, 42, 9.7849}, but
        DateList@AbsoluteTime[596524]
    returns {1942, 5, 26, 20, 28, 39.5596}. The question: what's going on? Note that AbsoluteTime with a numeric argument is undocumented. (I think I now know what it's doing, but figured this is useful to have as a Stack Overflow question for future reference; and I'm curious if there's some reason for that magic 596523 number.) PS: I encountered this when writing these utility functions for converting to and from Unix time in Mathematica:
        (* Using Unix time (an integer) instead of Mathematica's AbsoluteTime... *)
        tm[x___] := AbsoluteTime[x]            (* tm is an alias for AbsoluteTime. *)
        uepoch = tm[{1970}, TimeZone->0];      (* unixtm works analogously to tm. *)
        unixtm[x___] := Round[tm[x]-uepoch]    (* tm & unixtm convert between unix & *)
        unixtm[x_?NumericQ] := Round[x-uepoch] (* mma epoch time when given numeric *)
        tm[t_?NumericQ] := t+uepoch            (* args. Ie, they're inverses. *)
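    For reference, Mathematica's AbsoluteTime counts seconds from 1900-01-01, so the conversion in the PS is a fixed offset from the Unix epoch; here is the same arithmetic sketched in Python (taking both epochs at TimeZone->0, i.e. UTC; the 596523/596524 oddity itself is a separate matter).

        from datetime import datetime, timezone

        # Mathematica's AbsoluteTime epoch is 1900-01-01; Unix's is 1970-01-01.
        MMA_EPOCH = datetime(1900, 1, 1, tzinfo=timezone.utc)
        UNIX_EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
        OFFSET = int((UNIX_EPOCH - MMA_EPOCH).total_seconds())   # 2208988800 seconds

        def unix_to_mma(t):
            """Unix time (seconds) -> Mathematica AbsoluteTime (TimeZone->0)."""
            return t + OFFSET

        def mma_to_unix(t):
            """Mathematica AbsoluteTime (TimeZone->0) -> Unix time (seconds)."""
            return t - OFFSET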

    Read the article

  • Is PThread a good choice for a multi-platform C/C++ multi-threading program?

    - by RogerV
    I've been doing mostly Java and a smattering of .NET for the last five years and haven't written any significant C or C++ during that time, so I have been away from that scene for a while. If I want to write a C or C++ program today that does some multi-threading and is source-code portable across Windows, Mac OS X, and Linux/Unix, are Pthreads a good choice? The C or C++ code won't be doing any GUI, so I won't need to worry about any of that. For the Windows platform, though, I don't want to bring in a lot of Unix baggage in terms of Unix emulation runtime libraries. I would prefer a Pthreads API for Windows that is as thin a wrapper as possible over the existing Windows threading APIs. ADDENDUM EDIT: I am leaning toward going with boost::thread - I also want to be able to use C++ try/catch exception handling. And even though my program will be rather minimal and not particularly OOPish, I like to encapsulate using classes and namespaces, as opposed to disembodied C functions.

    Read the article

  • How to specify "PG-USERNAME" in pg_ident.conf so that it'll match any database user ?

    - by felace
    I need to restrict a specific Unix user so that it can log in with only a few select Postgres usernames (with a password prompt), while allowing every other user to use whatever pg username they want. Assuming restrUnixUser is the Unix user name, restrUser is one of the Postgres users it may use, and AllowedDB is the only database they should connect to, pg_hba.conf:
        local AllowedDB restrUser password
        local all       restrUser reject
        local all       all       ident map=exceptrestrUser
    And pg_ident.conf:
        exceptrestrUser /^(?!restrUnixUser).*$ user1
        exceptrestrUser /^(?!restrUnixUser).*$ user2
        exceptrestrUser /^(?!restrUnixUser).*$ postgres
    This does exactly what I want right now; however, I'll probably add a lot more users, so I wonder if there is something like "mapname unixuserpattern allpgusers" that will match whatever username is used to log in by any Unix user matching the pattern.

    Read the article

  • git on HTTP with gitolite and nginx

    - by Arnaud
    I am trying to set up a server where my git repo would be accessible over HTTP(S). I am using gitolite and nginx (and GitLab for the web interface, but I doubt it makes any difference). I have searched the whole afternoon and I think I'm stuck. I think I have understood that nginx needs fcgiwrap to work with gitolite, so I tried several configurations, but none of them work. My repositories are at /home/git/repositories. Here are the three nginx configurations I have tried.
    1:
        location ~ /git(/.*) {
            gzip off;
            root /usr/lib/git-core;
            fastcgi_pass unix:/var/run/fcgiwrap.socket;
            include /etc/nginx/fcgiwrap.conf;
            fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend;
            fastcgi_param DOCUMENT_ROOT /usr/lib/git-core/;
            fastcgi_param SCRIPT_NAME git-http-backend;
            fastcgi_param GIT_HTTP_EXPORT_ALL "";
            fastcgi_param GIT_PROJECT_ROOT /home/git/repositories;
            fastcgi_param PATH_INFO $1;
            #fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
        }
    Result:
        > git clone http://myservername/projectname.git test/
        Cloning into test...
        fatal: http://myservername/projectname.git/info/refs not found: did you run git update-server-info on the server?
    and
        > git clone http://myservername/git/projectname.git test/
        Cloning into test...
        error: The requested URL returned error: 502 while accessing http://myservername/git/projectname.git/info/refs
        fatal: HTTP request failed
    2:
        location ~ /git(/.*) {
            fastcgi_pass localhost:9001;
            include /etc/nginx/fcgiwrap.conf;
            fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend;
            fastcgi_param GIT_HTTP_EXPORT_ALL "";
            fastcgi_param GIT_PROJECT_ROOT /home/git/repositories;
            fastcgi_param PATH_INFO $1;
        }
    Result:
        > git clone http://myservername/projectname.git test/
        Cloning into test...
        fatal: http://myservername/projectname.git/info/refs not found: did you run git update-server-info on the server?
    and
        > git clone http://myservername/git/projectname.git test/
        Cloning into test...
        error: The requested URL returned error: 502 while accessing http://myservername/git/projectname.git/info/refs
        fatal: HTTP request failed
    3:
        location ~ ^.*\.git/objects/([0-9a-f]+/[0-9a-f]+|pack/pack-[0-9a-f]+.(pack|idx))$ {
            root /home/git/repositories/;
        }
        location ~ ^.*\.git/(HEAD|info/refs|objects/info/.*|git-(upload|receive)-pack)$ {
            root /home/git/repositories;
            fastcgi_pass unix:/var/run/fcgiwrap.socket;
            fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend;
            fastcgi_param PATH_INFO $uri;
            fastcgi_param GIT_PROJECT_ROOT /home/git/repositories;
            include /etc/nginx/fcgiwrap.conf;
        }
    Result:
        > git clone http://myservername/projectname.git test/
        Cloning into test...
        error: The requested URL returned error: 502 while accessing http://myservername/projectname.git/info/refs
        fatal: HTTP request failed
    and
        > git clone http://myservername/git/projectname.git test/
        Cloning into test...
        error: The requested URL returned error: 502 while accessing http://myservername/git/projectname.git/info/refs
        fatal: HTTP request failed
    Also note that with any of those configurations, when I try to clone a project name that doesn't actually exist, I get a 502 error. Has anyone already succeeded in doing this? What am I doing wrong? Thanks.
    UPDATE: the nginx error log file said:
        2012/04/05 17:34:50 [crit] 21335#0: *50 connect() to unix:/var/run/fcgiwrap.socket failed (13: Permission denied) while connecting to upstream, client: 192.168.12.201, server: myservername, request: "GET /git/oct_editor.git/info/refs HTTP/1.1", upstream: "fastcgi://unix:/var/run/fcgiwrap.socket:", host: "myservername"
    So I changed the permissions on /var/run/fcgiwrap.socket, and now I have:
        > git clone http://myservername/git/projectname.git test/
        Cloning into test...
        error: The requested URL returned error: 403 while accessing http://myservername/git/projectname.git/info/refs
        fatal: HTTP request failed
    Here is the error.log entry I have now:
        2012/04/05 17:36:52 [error] 21335#0: *78 FastCGI sent in stderr: "Cannot chdir to script directory (/usr/lib/git-core/git/projectname.git/info)" while reading response header from upstream, client: 192.168.12.201, server: myservername, request: "GET /git/projectname.git/info/refs HTTP/1.1", upstream: "fastcgi://unix:/var/run/fcgiwrap.socket:", host: "myservername"
    I keep investigating.

    Read the article

  • Oracle on Windows / .NET (December 2010)

    - by Yusuke.Yamamoto
    [Japanese-language article; most of the original text was lost to character-encoding corruption. The recoverable fragments cover: Oracle Database on Windows Server and with .NET; Oracle Database 11g Release 2 on Windows Server 2008; Oracle Data Provider for .NET and Oracle Data Access Components (ODAC); Visual Studio integration; SQL Server comparison material; links to the OTN Windows Server System Center and OTN .NET Developer Center pages; and Oracle Direct Seminar information.]

    Read the article
