Search Results

Search found 69138 results on 2766 pages for 'oracle data mining'.


  • PC won't boot from IDE HDD when SATA data drive connected

    - by Kevin
    I have an old Pentium 4 system running XP. The machine is set up as an HTPC. It was set up and running well with 1 SATA drive as a boot drive, another SATA drive to store TV recordings, and an IDE drive to store more recordings. Last week the original boot drive (a SATA drive) failed. The BIOS would no longer recognize it. I had a disused IDE drive hanging around that was large enough for the OS, so I reformatted it and installed XP on it. Now the system will only boot if I do not connect the remaining healthy SATA data drive. All three drives are recognized by the BIOS, and I have set the boot order so that the IDE drive with XP on it has top priority, but after the BIOS recognizes the drives, etc. I just get a black screen. I know the SATA drive is functional, because if I hot plug the drive AFTER the system is booted (I know I'm not supposed to do this), I can go into the control panels and mount the drive, and see all the files and folders on it in Windows Explorer. Any suggestions on what is going on and how to fix it? Many thanks.

    Read the article

  • Use old raid drive as boot device without data loss

    - by Gabriel
    There were two disks in a software RAID: /dev/md1 as swap, /dev/md2 as boot and /dev/md3 with ext4. The sw-raid was disabled by stopping and removing mdadm and then zeroing the superblock on each /dev/mdX partition with: sudo mdadm --zero-superblock /dev/sda1 sudo mdadm --zero-superblock /dev/sda2 sudo mdadm --zero-superblock /dev/sda3 On the disk that is the first boot device (I don't know if it's relevant), the system type of each partition was set back from fd to 82 or 83 with fdisk, /etc/fstab was updated, changing /dev/mdX to /dev/sdaX, and grub was reinstalled on the boot partition (/dev/sda2) with grub-install. But the system won't boot. What else should I do to use this disk as the boot device without reinstalling or losing data? Current output of fdisk: Device Boot Start End Blocks Id System /dev/sda1 2048 33556480 16777216+ 82 Linux swap / Solaris /dev/sda2 * 33558528 34607104 524288+ 83 Linux /dev/sda3 34609152 3907027120 1936208984+ 83 Linux By "it doesn't boot" I mean that it stops in the GRUB console (with the grub> prompt). An ls command there says: (hd0) (hd0,msdos3) (hd0,msdos2) (hd0,msdos1) (hd1) (hd1,msdos1) It's weird because hd1 was formatted with ext4...
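
    As a point of reference (this is not part of the original question), here is a minimal sketch of what booting by hand from that grub> console and re-running grub-install might look like. The partition numbers and kernel/initrd file names below are assumptions and should be checked against the actual contents of the boot partition first:

      # At the grub> prompt, point GRUB at the partition that holds /boot and boot manually.
      # (hd0,msdos2) corresponds to /dev/sda2 here; check the exact kernel/initrd names with: ls (hd0,msdos2)/
      grub> set root=(hd0,msdos2)
      grub> linux /vmlinuz-<version> root=/dev/sda3 ro
      grub> initrd /initrd.img-<version>
      grub> boot

      # Once the system is up, reinstall GRUB to the disk's MBR (not just the partition) and rebuild its config:
      sudo grub-install /dev/sda
      sudo update-grub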

    Read the article

  • Facter - custom fact, returns empty data set when invoked by Puppet agent

    - by user3684494
    According to this puppet labs article, I can create custom facts from shell scripts. I have created a bash script that returns a single fact; it is packaged in a module's facts.d directory. The module is included on the target system via an ENC class. When invoked by the puppet agent on the target it returns an empty data set, but when run by hand on the agent it correctly returns the fact. The script has execute permission on the master, but does not have it on the agent. I saw a bug report related to permissions and file types, but that was Windows-specific and supposedly fixed in Puppet version 3. What am I doing wrong? ENC definition: --- classes: facttest: Shell script: #!/bin/bash echo "test_fact1=$(hostname)" Permissions: master: -rwxr-xr-x 1 root root ... modules/facttest/facts.d/testfact.sh agent: -rw-r--r-- 1 root root ... /var/lib/puppet/facts.d/testfact.sh Agent message: Fact file /var/lib/puppet/facts.d/testfact.sh was parsed but returned an empty data set Version information: Puppet master: 3.5.1 (Debian) Facter master: 2.0.1 Puppet agent: 3.6.1 (OpenSUSE) Facter agent: 2.0.1
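
    For context (not from the original question): external facts in facts.d generally need the execute bit to be run as scripts; without it the agent appears to treat the file as a static fact file and finds no key=value pairs in it, which matches the "parsed but returned an empty data set" message. A quick check one might run on the agent, assuming the paths shown above:

      # Make the external fact executable and test it outside of Puppet.
      chmod 0755 /var/lib/puppet/facts.d/testfact.sh
      /var/lib/puppet/facts.d/testfact.sh          # should print: test_fact1=<hostname>

      # Ask Facter 2.x directly, pointing it at the external fact directory:
      facter --external-dir /var/lib/puppet/facts.d test_fact1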

    Read the article

  • No partition on USB Flash Drive?

    - by Skytunnel
    A friend gave me a corrupted USB memory stick to try to recover data from. But I've had some unusual results, so I thought I'd share to see if anyone is familiar with this problem... First off I just tried opening it from my own PC. Windows prompted to Format the drive, which I of course declined. Downloaded TestDisk to analyse the drive. And right away I noticed something strange: in the listed drives it comes up as Disk /dev/sdc - 6144 B - USB Flash Drive That's right, the first USB flash drive smaller than a floppy disk!? Moving on anyway... the first analysis comes up with: Partition sector doesn't have the endmark 0xAA55 TestDisk's Quick Search gave no results, so I moved on to Deeper Search: No partition found or selected for recovery This left me stumped. I tried a couple of other programs with no success. I did manage to get a backup image, but it was just as small as TestDisk indicated, so nothing of use on it. After a few hours trying various suggestions from other sources, I gave in and just tried formatting the drive. But it returned the message: Windows was unable to complete the format. From googling that, the suggestion was to delete the partition. But there is no partition to delete in this case. Most recently I've tried formatting from cmd, and got this result: Format D: /FS:FAT32 The type of the file system is RAW The new file system is FAT32 Verifying 0M 11 bad sectors were encountered during the format. These sectors cannot be guaranteed to have been cleaned The volume is too small for FAT32 Anyone got any suggestions? UPDATE: As per a suggestion from @Karen, I tried running a CLEAN from DISKPART; results as follows: DiskPart has encountered an error: The request could not be performed because of an I/O device error.

    Read the article

  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A', that has a directory NFS mounted on server 'B'. A process on A writes to two files F1 and F2 in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks to the head of the files, writes data, and flushes. Process B seeks to the head of the files and does reads. Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file, and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2? I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system. The reason I'm even putting the question up here is that it seems like most of the time, the setup described above does detect the changes at B in the same order they are made at A, but occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much to expect from NFS?
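
    For illustration only (these options are not from the original post): the usual knobs for tightening ordering and visibility on an NFS client are synchronous writes and reduced attribute caching, both at a real performance cost. A sketch, with the export path and mount point as placeholders:

      # On host A (the writer): mount the share with synchronous writes and no attribute caching.
      # 'sync' makes write() wait until the server has the data; 'noac' disables attribute caching,
      # so the server and other clients see changes sooner. Both options slow the writer down.
      mount -t nfs -o sync,noac serverB:/export/shared /mnt/shared

      # In the writing process, an explicit fsync() after each logical update also helps pin down ordering.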

    Read the article

  • D-Link wireless router losing outbound data

    - by gsteinert
    I have a Linux box running the Apache web server behind a D-Link wireless router (nothing fancy, just the standard kit that comes with Virgin Media broadband). My issue is that when requesting web pages (from within the network or via the web), the tail end of the page seems to be getting dropped. For example, I tried to display a text-only file, and all I could get was the first 40-70% of the file (it changed slightly with each refresh). The Apache access logs show that only part of the data was sent (~6000 bytes instead of the 12000+ bytes of the file). Removing my router from the equation fixes the issue and I can download files of any size with no problems. My theory is that the outbound packets are either being dropped or held up by the config of the router. Is there anything I can do to alleviate the problem? (Perhaps a way of reconfiguring the router to upload packets harder/better/faster/stronger, or an option in Apache that provides a workaround.) As a last resort I will get a second NIC for my Linux box and turn it into a router, but that would mean the box would have to be on 24/7... not the most ideal of circumstances. Gary
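
    A diagnostic sketch, offered as an assumption rather than part of the original post: truncated responses passing through a consumer router are often an MTU/fragmentation problem, which can be checked and worked around from the Linux box itself (the interface name and sizes below are placeholders):

      # Find the largest payload that passes without fragmentation (1472 + 28 bytes of headers = 1500):
      ping -c 3 -M do -s 1472 8.8.8.8

      # If that fails but a smaller size succeeds, try clamping the MTU on the server's interface:
      sudo ip link set dev eth0 mtu 1400

      # Watch the traffic while reproducing a truncated download to see where the transfer stalls:
      sudo tcpdump -ni eth0 'tcp port 80'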

    Read the article

  • How to protect folder privacy against unethical network administrators? [closed]

    - by Trevor Trovalds
    I just need a technical solution to keep my group's shared passwords, projects, work, etc. safe. Our network has Active Directory with public/groups/users and NTFS permissions, under a Windows Server 2003 which will soon migrate to Windows Server 2008 R2. Our IT crowd is small, consisting of 2 DBAs, 4 designers, 6 developers (including me), 2 netadmins and (a lot of) tech support staff; everyone has local admin rights. Those 2 network admins weren't the ones who set the network up; they just took over recently when the previous ones quit. We regularly find them laughing at users' private content stored in the group shares, sabotaging documents that don't match their personal tastes and, finally, this week we found out they stole a project we (developers and DBAs) were finishing and, long before, had presented it to the CEO as theirs without us knowing. I'm a systems analyst, and initially my group decided to store critical content, like shared passwords, inside encrypted .zip files. Unfortunately we couldn't do the same for the other hundreds of folders and files, which included the stolen project, because the zipping process would take too long for every update. We also tried an encrypted Subversion repository under SSL, but there are many novices (~38 atm) involved in the projects who have trouble using TortoiseSVN when contributing, and very often we had to fix messed-up updates. Well, I think these two give an idea of what we've been trying to achieve. So, is there a practical "individual" protection for our extensive data, or can my hope already be euthanized? P.S.: Seriously, at the place where I live/work, political corruption has gone wild, so whistle-blowing options are likely impracticable. Yet both netadmins have a strong "political bond" with the CEO and the President, hence their lousy behavior and our failed attempts to report them.

    Read the article

  • Using GPO to collect data about VMware view activity

    - by MoSiAc
    Our security group wants us to begin logging data for external access to our View environment. At first we thought that View security would be logging all source IPs that are external in nature, so that if for some reason there were an intrusion we would have a record of it there. Of course our firewall logs all that information, but correlating it to View is sketchy at best with our current implementation. We know that on View desktops there is a set of keys under Volatile Environment that contains things such as the source IP and username, etc. We have a script in place that, when run as a logon script attached to a user account in AD, collects the information as we need it. If we have a GPO run the same script, the information does not get collected. We feel like there is a piece of the puzzle we're missing but we don't know what. If anyone knows what we're forgetting or misconfiguring that would be great, or if you have a better way of us collecting external source IPs for View specifically we'd be interested in that as well. Thanks. EDIT - CODE Batch script to dump to a text file: @echo off timeout 20 echo %computername%/%username% %time% %date% >> c:\vdi\vmware.txt echo ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> c:\vdi\vmware.txt reg query "HKEY_CURRENT_USER\Volatile Environment" /v "ViewClient_LoggedOn_Username" >> c:\vdi\vmware.txt reg query "HKEY_CURRENT_USER\Volatile Environment" /v "ViewClient_IP_Address" >> c:\vdi\vmware.txt echo. >> c:\vdi\vmware.txt VB Script to display values: Const HKEY_CURRENT_USER = &H80000001 Set wmiLocator=CreateObject("WbemScripting.SWbemLocator") Set wmiNameSpace = wmiLocator.ConnectServer(".", "root\default") Set objRegistry = wmiNameSpace.Get("StdRegProv") sPath = "Volatile Environment" lRC = objRegistry.GetStringValue(HKEY_CURRENT_USER, sPath, "ViewClient_Machine_Name", vMachine) lRC = objRegistry.GetStringValue(HKEY_CURRENT_USER, sPath, "ViewClient_IP_Address", vIP) lRC = objRegistry.GetStringValue(HKEY_CURRENT_USER, sPath, "ViewClient_MAC_Address", vMAC) msgbox "The Remote Device Name is " & vMachine & " @ " & vIP & " (" & vMAC & ") " He wanted me to mention that the batch file actually runs and I can see it counting down when I reconnect, but it does not grab the registry values.

    Read the article

  • Hosting options for data-enabled web application

    - by Hertfordian
    I am independently developing an ASP.NET business application with a MySQL database. I currently have a Windows web hosting account which includes MySQL and MS SQL as installed, supported options. I am not yet finally committed to using MySQL, and I want to keep my options open to evaluate MS SQL and possibly other options such as PostgreSQL later when more of the business logic is in place - my data access layer will handle the database connectivity. The web hosting setup I have now is fine for development purposes, but if in future I want to use, say, PostgreSQL, with a level of usage of, say, 10,000 hits per day concentrated in business hours, I'm assuming I'll need a dedicated server. But in that case, should I just install PostgreSQL on the dedicated server, or is it best practice to have a separate database server - perhaps locked down so that it can only be accessed through the web server? And supposing it was only 2,000 hits a day - how would that change things? I'd appreciate it if anyone could point me in the direction of a useful guide to these sorts of issues. Naturally, if I start paying for separate servers, I would like to know exactly why I'm doing it and what the performance issues and thresholds are.

    Read the article

  • Linux Has Become Very Slow Dealing With Large Data

    - by Kohjah Breese
    Last year I bought a computer for around $1,800, so it is relatively high-end. When I first got it I was particularly pleased at how quickly it dealt with large MySQL queries, imports and exports. But somewhere along the way something has gone wrong and I am not sure how to diagnose the problem. Any job that involves processing large amounts of data, e.g. gzipping files of c. 1 GB+, UPDATEs on large MySQL tables, etc., has become very slow. I just performed an intensive ALTER statement on a 240,000,000-row table on a remote server, which has a lower spec. That took about 10 minutes. However, performing the same query on a 167,000,000-row table on my computer went fine until it hit 860 MB. Now it is only writing about 1 MB every 15 seconds. Does anyone have any advice on debugging what the issue is? I am using Linux Mint (based on Ubuntu 12.04). The home partition is encrypted, which really slows down gzip. I have noticed the swap is barely used, but I am not sure if that is because there is more than enough RAM. The filesystem is ext4. The MySQL server is on a separate hard drive, but it was fine when I first installed it. Other than the above issues, there are no other problems with it. I am going to install a fresh Ubuntu on the 4th hard drive to see if that is any different.
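
    Not part of the original question, but a first-pass set of checks for a machine that has become slow on large sequential I/O; the device name is a placeholder and the tools assume the smartmontools and sysstat packages are installed:

      # Is the disk itself failing? Look for reallocated or pending sectors.
      sudo smartctl -a /dev/sda | grep -iE 'reallocated|pending|uncorrect'

      # Is the disk saturated? Watch %util and await while a slow job runs.
      iostat -xm 5

      # Any kernel-level disk errors, link resets, or downgraded transfer modes?
      dmesg | grep -iE 'ata|error|reset' | tail -n 30

      # Memory pressure or swapping despite the "barely used" swap reading?
      free -m && vmstat 5 3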

    Read the article

  • Can I recover a non-system disk deleted during 2008 R2 setup?

    - by serialhobbyist
    I've done a truly stupid thing and 'deleted' the data disk on a Server 2008 R2 box. Can I recover it? If so, how? I was rebuilding the box because a motherboard change had broken things. I've built loads of boxes and was going through the standard stuff without much concentration. I got to the disk screen which normally displays the two partitions on the drive: the recovery one and the system one. As normal, I deleted the two things I saw. It was only when two lots of unallocated space didn't merge into one that the full horror of what I'd done hit me. Yes, I've got backups... of the stuff I have space to back up. The real irony is that, earlier in the day, I'd ordered two 1 TB disks to deal with the problem. So, anyway, I'd really like to get this partition back because it'll save me a lot of time. How can I do it?

    Read the article

  • NullPointerException, Collections not storing data?

    - by Elliott
    Hi there, I posted this question earlier but not with the code in its entirety. The coe below also calls to other classes Background and Hydro which I have included at the bottom. I have a Nullpointerexception at the line indicate by asterisks. Which would suggest to me that the Collections are not storing data properly. Although when I check their size they seem correct. Thanks in advance. PS: If anyone would like to give me advice on how best to format my code to make it readable, it would be appreciated. Elliott package exam0607; import java.io.BufferedReader; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import java.net.URL; import java.util.Collection; import java.util.Scanner; import java.util.Vector; import exam0607.Hydro; import exam0607.Background;// this may not be necessary???? FIND OUT public class HydroAnalysis { public static void main(String[] args) { Collection hydroList = null; Collection backList = null; try{hydroList = readHydro("http://www.hep.ucl.ac.uk/undergrad/3459/exam_data/2006-07/final/hd_data.dat");} catch (IOException e){ e.getMessage();} try{backList = readBackground("http://www.hep.ucl.ac.uk/undergrad/3459/exam_data/2006-07/final/hd_bgd.dat"); //System.out.println(backList.size()); } catch (IOException e){ e.getMessage();} for(int i =0; i <=14; i++ ){ String nameroot = "HJK"; String middle = Integer.toString(i); String hydroName = nameroot + middle + "X"; System.out.println(hydroName); ALGO_1(hydroName, backList, hydroList); } } public static Collection readHydro(String url) throws IOException { URL u = new URL(url); InputStream is = u.openStream(); InputStreamReader isr = new InputStreamReader(is); BufferedReader b = new BufferedReader(isr); String line =""; Collection data = new Vector(); while((line = b.readLine())!= null){ Scanner s = new Scanner(line); String name = s.next(); System.out.println(name); double starttime = Double.parseDouble(s.next()); System.out.println(+starttime); double increment = Double.parseDouble(s.next()); System.out.println(+increment); double p = 0; double nterms = 0; while(s.hasNextDouble()){ p = Double.parseDouble(s.next()); System.out.println(+p); nterms++; System.out.println(+nterms); } Hydro SAMP = new Hydro(name, starttime, increment, p); data.add(SAMP); } return data; } public static Collection readBackground(String url) throws IOException { URL u = new URL(url); InputStream is = u.openStream(); InputStreamReader isr = new InputStreamReader(is); BufferedReader b = new BufferedReader(isr); String line =""; Vector data = new Vector(); while((line = b.readLine())!= null){ Scanner s = new Scanner(line); String name = s.next(); //System.out.println(name); double starttime = Double.parseDouble(s.next()); //System.out.println(starttime); double increment = Double.parseDouble(s.next()); //System.out.println(increment); double sum = 0; double p = 0; double nterms = 0; while((s.hasNextDouble())){ p = Double.parseDouble(s.next()); //System.out.println(p); nterms++; sum += p; } double pbmean = sum/nterms; Background SAMP = new Background(name, starttime, increment, pbmean); //System.out.println(SAMP); data.add(SAMP); } return data; } public static void ALGO_1(String hydroName, Collection backgs, Collection hydros){ //double aMin = Double.POSITIVE_INFINITY; //double sum = 0; double intensity = 0; double numberPN_SIG = 0; double POSITIVE_PN_SIG =0; //int numberOfRays = 0; for(Hydro hd: hydros){ System.out.println(hd.H_NAME); for(Background back : backgs){ System.out.println(back.H_NAME); 
if(back.H_NAME.equals(hydroName)){//ERROR HERE double PN_SIG = Math.max(0.0, hd.PN - back.PBMEAN); numberPN_SIG ++; if(PN_SIG > 0){ intensity += PN_SIG; POSITIVE_PN_SIG ++; } } } double positive_fraction = POSITIVE_PN_SIG/numberPN_SIG; if(positive_fraction < 0.5){ System.out.println( hydroName + "is faulty" ); } else{System.out.println(hydroName + "is not faulty");} System.out.println(hydroName + "has instensity" + intensity); } } } THE BACKGROUND CLASS package exam0607; public class Background { String H_NAME; double T_START; double DT; double PBMEAN; public Background(String name, double starttime, double increment, double pbmean) { name = H_NAME; starttime = T_START; increment = DT; pbmean = PBMEAN; }} AND THE HYDRO CLASS public class Hydro { String H_NAME; double T_START; double DT; double PN; public double n; public Hydro(String name, double starttime, double increment, double p) { name = H_NAME; starttime = T_START; increment = DT; p = PN; } }

    Read the article

  • Using PHP OCI8 with 32-bit PHP on Windows 64-bit

    - by christopher.jones
    The world migration from 32-bit to 64-bit operating systems is gaining pace. However I've seen a couple of customers having difficulty with the PHP OCI8 extension and Oracle DB on Windows 64-bit platforms. The errors vary depending on how PHP is run. They may appear in the Apache or PHP log: Unable to load dynamic library 'C:\Program Files (x86)\PHP\ext\php_oci8_11g.dll' - %1 is not a valid Win32 application. or Warning oci_connect(): OCIEnvNlsCreate() failed. There is something wrong with your system - please check that PATH includes the directory with Oracle Instant Client libraries Other than IIS permission issues, a common cause seems to be trying to use PHP with libraries from a 64-bit Oracle database on the same machine. There is currently no 64-bit version of PHP on http://php.net/ so there is a library mismatch. A solution is to install Oracle Instant Client 32-bit and make sure that PHP uses these libraries, while not interfering with the 64-bit database on the same machine. Warning: The following hacky steps come untested from a Linux user: Unzip Oracle Instant Client 32-bit and move it to C:\WINDOWS\SYSWOW64\INSTANTCLIENT_11_2. You may need to do this in a console with elevated permissions. Edit your PATH environment variable and insert C:\WINDOWS\SYSTEM32\INSTANTCLIENT_11_2 in the directory list before the entry for the Oracle Home library. Windows makes it so all 32-bit applications that reference C:\WINDOWS\SYSTEM32 actually see the contents of the C:\WINDOWS\SYSWOW64 directory. Your 64-bit database won't find an Instant Client in the real, physical C:\WINDOWS\SYSTEM32 directory and will continue to use the database libraries. Some of our Windows team are concerned about this hack and prefer a more "correct" solution that (i) doesn't require changing the Windows system directory (ii) doesn't add to the "memory" burden about what was configured on the system (iii) works when there are multiple database versions installed. The solution is to write a script which will set the 64-bit (or 32-bit) Oracle libraries in the path as needed before invoking the relevant bit-ness application. This does have a weakness when the application is started as a service. As a footnote: If you don't have a local database and simply need to have 32-bit and 64-bit Instant Client accessible at the same time, try the "symbolic link" approach covered in the hack in this OTN forum thread. Reminder warning: This blog post came untested from a Linux user.

    Read the article

  • SQL SERVER – Introduction to Adaptive ETL Tool – How adaptive is your ETL?

    - by pinaldave
    I am often reminded of the fact that BI/data warehousing infrastructure is very brittle and not very adaptive to change. There are lots of basic use cases where data needs to be frequently loaded into SQL Server or another database. What I have found is that as long as the sources and targets stay the same, SSIS or any other ETL tool for that matter does a pretty good job handling these types of scenarios. But what happens when you are faced with more challenging scenarios, where the data formats and possibly the data types of the source data are changing from customer to customer?  Let’s examine a real life situation where a health management company receives claims data from their customers in various source formats. Even though this company supplied all their customers with the same claims forms, they ended up building one-off ETL applications to process the claims for each customer. Why, you ask? Well, it turned out that the claims data from the various regional hospitals they needed to process had slightly different data formats, e.g. “integer” versus “string” data field definitions.  Moreover the data itself was represented with slight nuances, e.g. “0001124” or “1124” or “0000001124” to represent a particular account number, which forced them, as I alluded to above, to build new ETL processes for each customer in order to overcome the inconsistencies in the various claims forms.  As a result, they experienced a lot of redundancy in these ETL processes and recognized quickly that their system would become more difficult to maintain over time. So imagine for a moment that you could use an ETL tool that helps you abstract the data formats so that your ETL transformation process becomes more reusable. Imagine that one claims form represents a data item as a string – acc_no(varchar) – while a second claims form represents the same data item as an integer – account_no(integer). This would break your traditional ETL process as the data mappings are hard-wired.  But in a world of abstracted definitions, all you need to do is create parallel data mappings to a common data representation used within your ETL application; that is, map both external data fields to a common attribute whose name and type remain unchanged within the application. acc_no(varchar) is mapped to account_number(integer) (expressor Studio first claim form schema mapping). account_no(integer) is also mapped to account_number(integer) (expressor Studio second claim form schema mapping). All the data processing logic that follows manipulates the data as an integer value named account_number. Well, these are the kind of problems that the expressor data integration solution automates for you.  I’ve been following them since last year and encourage you to check them out by downloading their free expressor Studio ETL software. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: ETL, SSIS

    Read the article

  • Ancillary Objects: Separate Debug ELF Files For Solaris

    - by Ali Bahrami
    We introduced a new object ELF object type in Solaris 11 Update 1 called the Ancillary Object. This posting describes them, using material originally written during their development, the PSARC arc case, and the Solaris Linker and Libraries Manual. ELF objects contain allocable sections, which are mapped into memory at runtime, and non-allocable sections, which are present in the file for use by debuggers and observability tools, but which are not mapped or used at runtime. Typically, all of these sections exist within a single object file. Ancillary objects allow them to instead go into a separate file. There are different reasons given for wanting such a feature. One can debate whether the added complexity is worth the benefit, and in most cases it is not. However, one important case stands out — customers with very large 32-bit objects who are not ready or able to make the transition to 64-bits. We have customers who build extremely large 32-bit objects. Historically, the debug sections in these objects have used the stabs format, which is limited, but relatively compact. In recent years, the industry has transitioned to the powerful but verbose DWARF standard. In some cases, the size of these debug sections is large enough to push the total object file size past the fundamental 4GB limit for 32-bit ELF object files. The best, and ultimately only, solution to overly large objects is to transition to 64-bits. However, consider environments where: Hundreds of users may be executing the code on large shared systems. (32-bits use less memory and bus bandwidth, and on sparc runs just as fast as 64-bit code otherwise). Complex finely tuned code, where the original authors may no longer be available. Critical production code, that was expensive to qualify and bring online, and which is otherwise serving its intended purpose without issue. Users in these risk adverse and/or high scale categories have good reasons to push 32-bits objects to the limit before moving on. Ancillary objects offer these users a longer runway. Design The design of ancillary objects is intended to be simple, both to help human understanding when examining elfdump output, and to lower the bar for debuggers such as dbx to support them. The primary and ancillary objects have the same set of section headers, with the same names, in the same order (i.e. each section has the same index in both files). A single added section of type SHT_SUNW_ANCILLARY is added to both objects, containing information that allows a debugger to identify and validate both files relative to each other. Given one of these files, the ancillary section allows you to identify the other. Allocable sections go in the primary object, and non-allocable ones go into the ancillary object. A small set of non-allocable objects, notably the symbol table, are copied into both objects. As noted above, most sections are only written to one of the two objects, but both objects have the same section header array. The section header in the file that does not contain the section data is tagged with the SHF_SUNW_ABSENT section header flag to indicate its placeholder status. Compiler writers and others who produce objects can set the SUNW_SHF_PRIMARY section header flag to mark non-allocable sections that should go to the primary object rather than the ancillary. If you don't request an ancillary object, the Solaris ELF format is unchanged. Users who don't use ancillary objects do not pay for the feature. 
This is important, because they exist to serve a small subset of our users, and must not complicate the common case. If you do request an ancillary object, the runtime behavior of the primary object will be the same as that of a normal object. There is no added runtime cost. The primary and ancillary object together represent a logical single object. This is facilitated by the use of a single set of section headers. One can easily imagine a tool that can merge a primary and ancillary object into a single file, or the reverse. (Note that although this is an interesting intellectual exercise, we don't actually supply such a tool because there's little practical benefit above and beyond using ld to create the files). Among the benefits of this approach are: There is no need for per-file symbol tables to reflect the contents of each file. The same symbol table that would be produced for a standard object can be used. The section contents are identical in either case — there is no need to alter data to accommodate multiple files. It is very easy for a debugger to adapt to these new files, and the processing involved can be encapsulated in input/output routines. Most of the existing debugger implementation applies without modification. The limit of a 4GB 32-bit output object is now raised to 4GB of code, and 4GB of debug data. There is also the future possibility (not currently supported) to support multiple ancillary objects, each of which could contain up to 4GB of additional debug data. It must be noted however that the 32-bit DWARF debug format is itself inherently 32-bit limited, as it uses 32-bit offsets between debug sections, so the ability to employ multiple ancillary object files may not turn out to be useful. Using Ancillary Objects (From the Solaris Linker and Libraries Guide) By default, objects contain both allocable and non-allocable sections. Allocable sections are the sections that contain executable code and the data needed by that code at runtime. Non-allocable sections contain supplemental information that is not required to execute an object at runtime. These sections support the operation of debuggers and other observability tools. The non-allocable sections in an object are not loaded into memory at runtime by the operating system, and so, they have no impact on memory use or other aspects of runtime performance no matter their size. For convenience, both allocable and non-allocable sections are normally maintained in the same file. However, there are situations in which it can be useful to separate these sections. To reduce the size of objects in order to improve the speed at which they can be copied across wide area networks. To support fine grained debugging of highly optimized code requires considerable debug data. In modern systems, the debugging data can easily be larger than the code it describes. The size of a 32-bit object is limited to 4 Gbytes. In very large 32-bit objects, the debug data can cause this limit to be exceeded and prevent the creation of the object. To limit the exposure of internal implementation details. Traditionally, objects have been stripped of non-allocable sections in order to address these issues. Stripping is effective, but destroys data that might be needed later. The Solaris link-editor can instead write non-allocable sections to an ancillary object. This feature is enabled with the -z ancillary command line option. $ ld ... 
-z ancillary[=outfile] ...By default, the ancillary file is given the same name as the primary output object, with a .anc file extension. However, a different name can be provided by providing an outfile value to the -z ancillary option. When -z ancillary is specified, the link-editor performs the following actions. All allocable sections are written to the primary object. In addition, all non-allocable sections containing one or more input sections that have the SHF_SUNW_PRIMARY section header flag set are written to the primary object. All remaining non-allocable sections are written to the ancillary object. The following non-allocable sections are written to both the primary object and ancillary object. .shstrtab The section name string table. .symtab The full non-dynamic symbol table. .symtab_shndx The symbol table extended index section associated with .symtab. .strtab The non-dynamic string table associated with .symtab. .SUNW_ancillary Contains the information required to identify the primary and ancillary objects, and to identify the object being examined. The primary object and all ancillary objects contain the same array of sections headers. Each section has the same section index in every file. Although the primary and ancillary objects all define the same section headers, the data for most sections will be written to a single file as described above. If the data for a section is not present in a given file, the SHF_SUNW_ABSENT section header flag is set, and the sh_size field is 0. This organization makes it possible to acquire a full list of section headers, a complete symbol table, and a complete list of the primary and ancillary objects from either of the primary or ancillary objects. The following example illustrates the underlying implementation of ancillary objects. An ancillary object is created by adding the -z ancillary command line option to an otherwise normal compilation. The file utility shows that the result is an executable named a.out, and an associated ancillary object named a.out.anc. $ cat hello.c #include <stdio.h> int main(int argc, char **argv) { (void) printf("hello, world\n"); return (0); } $ cc -g -zancillary hello.c $ file a.out a.out.anc a.out: ELF 32-bit LSB executable 80386 Version 1 [FPU], dynamically linked, not stripped, ancillary object a.out.anc a.out.anc: ELF 32-bit LSB ancillary 80386 Version 1, primary object a.out $ ./a.out hello worldThe resulting primary object is an ordinary executable that can be executed in the usual manner. It is no different at runtime than an executable built without the use of ancillary objects, and then stripped of non-allocable content using the strip or mcs commands. As previously described, the primary object and ancillary objects contain the same section headers. To see how this works, it is helpful to use the elfdump utility to display these section headers and compare them. The following table shows the section header information for a selection of headers from the previous link-edit example. 
Index  Section Name     Type            Primary Flags     Ancillary Flags               Primary Size  Ancillary Size
13     .text            PROGBITS        ALLOC EXECINSTR   ALLOC EXECINSTR SUNW_ABSENT   0x131         0
20     .data            PROGBITS        WRITE ALLOC       WRITE ALLOC SUNW_ABSENT       0x4c          0
21     .symtab          SYMTAB          0                 0                             0x450         0x450
22     .strtab          STRTAB          STRINGS           STRINGS                       0x1ad         0x1ad
24     .debug_info      PROGBITS        SUNW_ABSENT       0                             0             0x1a7
28     .shstrtab        STRTAB          STRINGS           STRINGS                       0x118         0x118
29     .SUNW_ancillary  SUNW_ancillary  0                 0                             0x30          0x30
The data for most sections is only present in one of the two files, and absent from the other file. The SHF_SUNW_ABSENT section header flag is set when the data is absent. The data for allocable sections needed at runtime are found in the primary object. The data for non-allocable sections used for debugging but not needed at runtime are placed in the ancillary file. A small set of non-allocable sections are fully present in both files. These are the .SUNW_ancillary section used to relate the primary and ancillary objects together, the section name string table .shstrtab, as well as the symbol table .symtab, and its associated string table .strtab. It is possible to strip the symbol table from the primary object. A debugger that encounters an object without a symbol table can use the .SUNW_ancillary section to locate the ancillary object, and access the symbol contained within. The primary object, and all associated ancillary objects, contain a .SUNW_ancillary section that allows all the objects to be identified and related together.
$ elfdump -T SUNW_ancillary a.out a.out.anc
a.out:
Ancillary Section:  .SUNW_ancillary
     index  tag                value
       [0]  ANC_SUNW_CHECKSUM  0x8724
       [1]  ANC_SUNW_MEMBER    0x1      a.out
       [2]  ANC_SUNW_CHECKSUM  0x8724
       [3]  ANC_SUNW_MEMBER    0x1a3    a.out.anc
       [4]  ANC_SUNW_CHECKSUM  0xfbe2
       [5]  ANC_SUNW_NULL      0
a.out.anc:
Ancillary Section:  .SUNW_ancillary
     index  tag                value
       [0]  ANC_SUNW_CHECKSUM  0xfbe2
       [1]  ANC_SUNW_MEMBER    0x1      a.out
       [2]  ANC_SUNW_CHECKSUM  0x8724
       [3]  ANC_SUNW_MEMBER    0x1a3    a.out.anc
       [4]  ANC_SUNW_CHECKSUM  0xfbe2
       [5]  ANC_SUNW_NULL      0
The ancillary sections for both objects contain the same number of elements, and are identical except for the first element. Each object, starting with the primary object, is introduced with a MEMBER element that gives the file name, followed by a CHECKSUM that identifies the object. In this example, the primary object is a.out, and has a checksum of 0x8724. The ancillary object is a.out.anc, and has a checksum of 0xfbe2. The first element in a .SUNW_ancillary section, preceding the MEMBER element for the primary object, is always a CHECKSUM element, containing the checksum for the file being examined. The presence of a .SUNW_ancillary section in an object indicates that the object has associated ancillary objects. The names of the primary and all associated ancillary objects can be obtained from the ancillary section from any one of the files. It is possible to determine which file is being examined from the larger set of files by comparing the first checksum value to the checksum of each member that follows.
Debugger Access and Use of Ancillary Objects
Debuggers and other observability tools must merge the information found in the primary and ancillary object files in order to build a complete view of the object. This is equivalent to processing the information from a single file. This merging is simplified by the primary object and ancillary objects containing the same section headers, and a single symbol table. The following steps can be used by a debugger to assemble the information contained in these files.
Starting with the primary object, or any of the ancillary objects, locate the .SUNW_ancillary section. The presence of this section identifies the object as part of an ancillary group, and it contains information that can be used to obtain a complete list of the files and determine which of those files is the one currently being examined.
Create a section header array in memory, using the section header array from the object being examined as an initial template.
Open and read each file identified by the .SUNW_ancillary section in turn. For each file, fill in the in-memory section header array with the information for each section that does not have the SHF_SUNW_ABSENT flag set.
The result will be a complete in-memory copy of the section headers with pointers to the data for all sections. Once this information has been acquired, the debugger can proceed as it would in the single file case, to access and control the running program.
Note - The ELF definition of ancillary objects provides for a single primary object, and an arbitrary number of ancillary objects. At this time, the Oracle Solaris link-editor only produces a single ancillary object containing all non-allocable sections. This may change in the future. Debuggers and other observability tools should be written to handle the general case of multiple ancillary objects.
ELF Implementation Details (From the Solaris Linker and Libraries Guide)
To implement ancillary objects, it was necessary to extend the ELF format to add a new object type (ET_SUNW_ANCILLARY), a new section type (SHT_SUNW_ANCILLARY), and 2 new section header flags (SHF_SUNW_ABSENT, SHF_SUNW_PRIMARY). In this section, I will detail these changes, in the form of diffs to the Solaris Linker and Libraries manual.
Part IV ELF Application Binary Interface, Chapter 13: Object File Format
Object File Format Edit Note: This existing section at the beginning of the chapter describes the ELF header. There's a table of object file types, which now includes the new ET_SUNW_ANCILLARY type.
e_type Identifies the object file type, as listed in the following table.
Name                 Value   Meaning
ET_NONE              0       No file type
ET_REL               1       Relocatable file
ET_EXEC              2       Executable file
ET_DYN               3       Shared object file
ET_CORE              4       Core file
ET_LOSUNW            0xfefe  Start operating system specific range
ET_SUNW_ANCILLARY    0xfefe  Ancillary object file
ET_HISUNW            0xfefd  End operating system specific range
ET_LOPROC            0xff00  Start processor-specific range
ET_HIPROC            0xffff  End processor-specific range
Sections Edit Note: This overview section defines the section header structure, and provides a high level description of known sections. It was updated to define the new SHF_SUNW_ABSENT and SHF_SUNW_PRIMARY flags and the new SHT_SUNW_ANCILLARY section.
... sh_type Categorizes the section's contents and semantics. Section types and their descriptions are listed in Table 13-5. sh_flags Sections support 1-bit flags that describe miscellaneous attributes. Flag definitions are listed in Table 13-8. ...
Table 13-5 ELF Section Types, sh_type
Name                 Value
. . .
SHT_LOSUNW           0x6fffffee
SHT_SUNW_ancillary   0x6fffffee
. . .
... SHT_LOSUNW - SHT_HISUNW Values in this inclusive range are reserved for Oracle Solaris OS semantics. SHT_SUNW_ANCILLARY Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section. ...
Table 13-8 ELF Section Attribute Flags
Name                 Value
. . .
SHF_MASKOS           0x0ff00000
SHF_SUNW_NODISCARD   0x00100000
SHF_SUNW_ABSENT      0x00200000
SHF_SUNW_PRIMARY     0x00400000
SHF_MASKPROC         0xf0000000
. . .
... SHF_SUNW_ABSENT Indicates that the data for this section is not present in this file. When ancillary objects are created, the primary object and any ancillary objects will all have the same section header array, to facilitate merging them to form a complete view of the object, and to allow them to use the same symbol tables. Each file contains a subset of the section data. The data for allocable sections is written to the primary object while the data for non-allocable sections is written to an ancillary file. The SHF_SUNW_ABSENT flag is used to indicate that the data for the section is not present in the object being examined. When the SHF_SUNW_ABSENT flag is set, the sh_size field of the section header must be 0. An application encountering an SHF_SUNW_ABSENT section can choose to ignore the section, or to search for the section data within one of the related ancillary files. SHF_SUNW_PRIMARY The default behavior when ancillary objects are created is to write all allocable sections to the primary object and all non-allocable sections to the ancillary objects. The SHF_SUNW_PRIMARY flag overrides this behavior. Any output section containing one or more input sections with the SHF_SUNW_PRIMARY flag set is written to the primary object without regard for its allocable status. ...
Two members in the section header, sh_link and sh_info, hold special information, depending on section type.
Table 13-9 ELF sh_link and sh_info Interpretation
sh_type              sh_link                                                    sh_info
. . .
SHT_SUNW_ANCILLARY   The section header index of the associated string table.  0
. . .
Special Sections Edit Note: This section describes the sections used in Solaris ELF objects, using the types defined in the previous description of section types. It was updated to define the new .SUNW_ancillary (SHT_SUNW_ANCILLARY) section.
Various sections hold program and control information. Sections in the following table are used by the system and have the indicated types and attributes.
Table 13-10 ELF Special Sections
Name                 Type                  Attribute
. . .
.SUNW_ancillary      SHT_SUNW_ancillary    None
. . .
... .SUNW_ancillary Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section for details. ...
Ancillary Section Edit Note: This new section provides the format reference describing the layout of a .SUNW_ancillary section and the meaning of the various tags. Note that these sections use the same tag/value concept used for dynamic and capabilities sections, and will be familiar to anyone used to working with ELF.
In addition to the primary output object, the Solaris link-editor can produce one or more ancillary objects. Ancillary objects contain non-allocable sections that would normally be written to the primary object. When ancillary objects are produced, the primary object and all of the associated ancillary objects contain a SHT_SUNW_ancillary section, containing information that identifies these related objects. Given any one object from such a group, the ancillary section provides the information needed to identify and interpret the others. This section contains an array of the following structures. See sys/elf.h.
typedef struct {
        Elf32_Word      a_tag;
        union {
                Elf32_Word      a_val;
                Elf32_Addr      a_ptr;
        } a_un;
} Elf32_Ancillary;

typedef struct {
        Elf64_Xword     a_tag;
        union {
                Elf64_Xword     a_val;
                Elf64_Addr      a_ptr;
        } a_un;
} Elf64_Ancillary;

For each object with this type, a_tag controls the interpretation of a_un. a_val These objects represent integer values with various interpretations. a_ptr These objects represent file offsets or addresses. The following ancillary tags exist.
Table 13-NEW1 ELF Ancillary Array Tags
Name                 Value   a_un
ANC_SUNW_NULL        0       Ignored
ANC_SUNW_CHECKSUM    1       a_val
ANC_SUNW_MEMBER      2       a_ptr
ANC_SUNW_NULL Marks the end of the ancillary section. ANC_SUNW_CHECKSUM Provides the checksum for a file in the a_val element. When ANC_SUNW_CHECKSUM precedes the first instance of ANC_SUNW_MEMBER, it provides the checksum for the object from which the ancillary section is being read. When it follows an ANC_SUNW_MEMBER tag, it provides the checksum for that member. ANC_SUNW_MEMBER Specifies an object name. The a_ptr element contains the string table offset of a null-terminated string that provides the file name.
An ancillary section must always contain an ANC_SUNW_CHECKSUM before the first instance of ANC_SUNW_MEMBER, identifying the current object. Following that, there should be an ANC_SUNW_MEMBER for each object that makes up the complete set of objects. Each ANC_SUNW_MEMBER should be followed by an ANC_SUNW_CHECKSUM for that object. A typical ancillary section will therefore be structured as:
Tag                  Meaning
ANC_SUNW_CHECKSUM    Checksum of this object
ANC_SUNW_MEMBER      Name of object #1
ANC_SUNW_CHECKSUM    Checksum for object #1
. . .
ANC_SUNW_MEMBER      Name of object N
ANC_SUNW_CHECKSUM    Checksum for object N
ANC_SUNW_NULL
An object can therefore identify itself by comparing the initial ANC_SUNW_CHECKSUM to each of the ones that follow, until it finds a match.
Related Other Work
The GNU developers have also encountered the need/desire to support separate debug information files, and use the solution detailed at http://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html. At the current time, the separate debug file is constructed by building the standard object first, and then copying the debug data out of it in a separate post-processing step. Hence, it is limited to a total of 4GB of code and debug data, just as a single object file would be. They are aware of this, and I have seen online comments indicating that they may add direct support for generating these separate files to their link-editor. It is worth noting that the GNU objcopy utility is available on Solaris, and that the Studio dbx debugger is able to use these GNU-style separate debug files even on Solaris. Although this is interesting in terms of giving Linux users a familiar environment on Solaris, the 4GB limit means it is not an answer to the problem of very large 32-bit objects. We have also encountered issues with objcopy not understanding Solaris-specific ELF sections when using this approach. The GNU community also has a current effort to adapt their DWARF debug sections in order to move them to separate files before passing the relocatable objects to the linker. The details of Project Fission can be found at http://gcc.gnu.org/wiki/DebugFission. The goal of this project appears to be to reduce the amount of data seen by the link-editor. The primary effort revolves around moving DWARF data to separate .dwo files so that the link-editor never encounters them.
The details of modifying the DWARF data to be usable in this form are involved — please see the above URL for details.

    Read the article

  • Oracle: "The Cloud takes the best of the Mainframes" and corrects their flaws, provided it relies on open standards

    Oracle: "The Cloud takes the best of the Mainframes" and corrects their flaws, provided it relies on open standards. About seven years ago, Oracle began a strategic shift. Its goal was to simplify deployments and IT architectures. Today, the vendor of many hats (BI, BPM, hardware, DBMS, Java, etc.) is making a second one: the Cloud. And again under the banner of simplification. "The best Cloud will be completely transparent to its users," predicts Andrew Sutherland, the cordial (and Scottish) Senior Vice-President Fusion Middleware Europe, who was passing through Paris this morning. The implication being...

    Read the article

  • The Hudson project officially forked after the open-source community's vote; Oracle will continue to develop "its" Hudson project

    The Hudson project officially forked after the open-source community's vote and becomes Jenkins; Oracle will continue to develop its own project. Update of 02/02/11: The community has voted. By an overwhelming majority (214 votes to 14), it approved moving the Hudson project to Google Groups (for the mailing lists and discussions) and to GitHub for the code (see above). In conflict with Oracle over ownership of the name, the project leaders had also proposed renaming it the Jenkins Project. That name was accepted as well, as shown by the new site of one of the continuous integration servers...

    Read the article

  • EPM 11.1.2 - Configure a data source to support Essbase failover in active-passive clustering mode

    - by Ahmed A
    To configure a data source to support Essbase failover in active-passive clustering mode, replace the Essbase Server name value with the APS URL followed by the Essbase cluster name; for example, if the APS URL is http://<hostname>:13090/aps and the Essbase cluster name is EssbaseCluster-1, then the value in the Essbase Server name field would be: http://<hostname>:13090/aps/Essbase?clusterName=EssbaseCluster-1 Note: Entering the Essbase cluster name without the APS URL in the Essbase Server name field is not supported in this release.

    Read the article

  • ArchBeat Link-o-Rama for August 1, 2013

    - by OTN ArchBeat
    Performance Tuning – Systems Running BPEL Processes | Ravi Saraswathi and Jaswant Sing Ravi Saraswathi and Jaswant Singh, the authors of "Oracle SOA BPEL Process Manager 11gR1 - A Hands-on Tutorial" explain performance tuning of SOA composite applications for optimal performance and scalability. Steps to configure SAML 2.0 with Weblogic Server | Puneeth The blogger known only as Punteeth shares an illustrated technical post that will be of interest to those working with Oracle WebLogic and the Security Assertion Markup Language (SAML). Video: Planning and Getting Started - Developer PCs | Chris Muir Tune in to the latest episode of ADF Architecture TV to see Chris Muir explain why you don't have to buy the most expensive PCs in order to run JDeveloper. Key User Experience Design Principles for working with Big Data | John Fuller User Experience Designer John Fuller shares 6 core design principles for working with big data that focus on "helping people bring together a variety of data types in a fast and flexible way." Event: OTN Developer Day: ADF Mobile - Burlington, MA - Aug 28 Through six sessions, including a hands-on workshop, you'll learn a simpler way to leverage your existing skills to develop enterprise mobile applications using Oracle ADF Mobile. Registration is free, but seating is limited. Optimizing WebCenter Portal Mobile Delivery | Jeevan Joseph FMW solution architect Jeevan Joseph "walks you through identifying and analyzing some common WebCenter Portal performance bottlenecks related to page weight and describes a generic approach that can streamline your portal while improving the performance and response times." Customizing specific instances of a WebCenter task flow | Jeevan Joseph Fusion Middleware A-Team solution architect Jeevan Joseph strikes again with this article that explains "how to set up parameters on MDS customization so that it is applied only under certain conditions...making it possible to customize individual instances of task flows." Exalogic Virtual Tea Break Snippets – Modifying Memory, CPU and Storage on a vServer | Andrew Hopkinson FMW solution architect Andrew Hopkinson walks you through "the simple process of resizing the resources associated with an already existing Exalogic vServer." Oracle ADF Mobile Virtual Developer Day - Next Week | Shay Shmeltzer JDeveloper product team lead Shay Schmeltzer shares agenda information for the OTN Virtual Developer Day event covering Mobile Application Development for iOS and Android, coming up one week from today, on August 7, 2013, 9am PT/Noon ET/1pm BRT. What's New In Oracle Enterprise Pack for Eclipse 12.1.2.1.0? New features and updates on the newly-released Oracle Enterprise Pack for Eclipse 12.1.2.1.0, now available for download from OTN. IOUG Cloud Builders Unite | Jeff Erickson Check out this great Oracle Magazine article by Jeff Erickson about IOUG members organizing around their common interest in building private clouds. Thought for the Day "Stuff that's hidden and murky and ambiguous is scary because you don't know what it does." — Jerry Garcia (August 1, 1942 – August 9, 1995) Source: brainyquote.com

    Read the article

  • A brief introduction to BRM and architecture

    - by Yani Miguel
    Oracle Communications Billing and Revenue Management (Oracle BRM) is the telecom industry's leading solution for communications service providers. This post encourages you to get to know BRM, starting with the basics. History: Portal was a billing and revenue management solution for the communications industry created by Portal Software. In 2006 Oracle acquired Portal Software and the solution was renamed BRM. Today Oracle BRM is the first end-to-end packaged enterprise software suite for the communications industry; however, BRM is just one more product in the catalog of OSS solutions that Oracle offers. BRM can bill and manage all communications services including wireline, wireless, broadband, cable, voice over IP, IPTV, music, and video. BRM Architecture: BRM's architecture consists of 4 layers, or tiers. These layers hold the data, the business logic, and the interfaces that connect graphical client tools. Application tier: This layer provides GUI client tools enabling communication with the other layers through open APIs. Some BRM client applications are: Customer Center, Pricing Center, Universal Event Loader, Web Server, BRM Billing Application, Collections Center, Permissioning Center. Furthermore, this layer is where real-time external events are provided. Business Process Tier: Although all layers are equally important, I think this one deserves more attention because it is where BRM functionality is implemented. All the functions that give life to BRM live in this layer, coded in C and called opcodes (System Processes in the image). Any changes or additional functionality should be made here, so when we customize the product we will be programming in this layer most of the time (Business Policies in the image). Business Process Tier features: it implements Portal system functionality; validates data from the application tier; modifies Portal behavior through business policies, which can be customized; and triggers external systems using event notification. Object Tier: This layer is responsible for translating BRM requests into database language and into external system requests. Without it, the business logic (data from the Business Process Tier) could not be understood by the relational database. Data tier: The data tier is responsible for the storage of the BRM database and other external systems' databases. External systems include credit card, tax, and directory servers. Finally, it's important to note that BRM is designed to integrate easily with the following solutions: AIA 2.4, Siebel CRM, E-Business Suite (G/L only), Communications Services Gatekeeper, Oracle BI Publisher. Personally, I think that BRM could improve by migrating its client-server architecture to a fully web-based platform that works with Oracle Middleware like any product of the Fusion Middleware family. Hopefully there are already initiatives in this area.

    Read the article

  • Restart mysql keeping the data

    - by sitonico
    I'm quite new to MySQL, so let me know if I'm missing something. I took some holidays, and when I got back to work and tried to log in to phpMyAdmin I got ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2). I never had this problem before, so I browsed around looking for a solution. I tried some things, and I'm afraid I touched too much, but I couldn't solve the problem. Then I realized I had some pending updates, and I thought they might help MySQL. I also remembered that when I first ran those updates they stopped because I ran out of disk space, so I restarted them. Then, while the system was configuring MySQL, it made no progress. I waited a long time, then stopped it and restarted the computer. After that, I tried to uninstall MySQL with sudo apt-get remove mysql-server-5.1 and install it again, but it didn't work. Now I have two questions: What do you think is happening, and should I remove MySQL completely? What should I do? I'm afraid of losing my databases. Is there any way to recover the data? Thank you very much in advance.

    -----------EDIT-------
    These are the messages:

    alfonso@alfonso-laptop:/$ tail -F /var/log/syslog | grep
    Feb 15 15:08:01 alfonso-laptop init: mysql post-start process (15192) terminated with status
    Feb 15 15:08:01 alfonso-laptop init: mysql main process (15263) terminated with status
    Feb 15 15:08:01 alfonso-laptop init: mysql main process ended,
    Feb 15 15:08:31 alfonso-laptop init: mysql post-start process (15264) terminated with status
    Feb 15 15:08:31 alfonso-laptop init: mysql main process (15358) terminated with status
    Feb 15 15:08:31 alfonso-laptop init: mysql main process ended,
    Feb 15 15:09:01 alfonso-laptop init: mysql post-start process (15359) terminated with status
    Feb 15 15:09:01 alfonso-laptop init: mysql main process (15447) terminated with status
    Feb 15 15:09:01 alfonso-laptop init: mysql main process ended,
    Feb 15 15:09:32 alfonso-laptop init: mysql post-start process (15448) terminated with status 1

    This is the content of error.log-old:

    110128 13:17:20 [Note] /usr/sbin/mysqld: Normal shutdown
    110128 13:17:20 [Note] Event Scheduler: Purging the queue. 0 events
    110128 13:17:20 InnoDB: Starting shutdown...
    110128 13:17:22 InnoDB: Shutdown completed; log sequence number 0 590872
    110128 13:17:22 [Note] /usr/sbin/mysqld: Shutdown complete
    110214 2:08:18 [Note] Plugin 'FEDERATED' is disabled.
    110214 2:08:19 InnoDB: Started; log sequence number 0 590872
    110214 2:08:19 [Note] Event Scheduler: Loaded 0 events
    110214 2:08:19 [Note] /usr/sbin/mysqld: ready for connections.
    Version: '5.1.41-3ubuntu12.8' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu)

    Some links to similar problems:
    https://bugs.launchpad.net/ubuntu/+source/mysql-dfsg-5.1/+bug/573318
    http://www.linuxquestions.org/questions/linux-newbie-8/lamp-install-on-lucid-mysqld-sock-missing-mysql-terminating-status%3D1-853152/

    It seems to be a permissions problem, but I don't know which permissions I should change...

    SOLVED -- mysql error 2002 "cannot connect to socket"
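    A minimal recovery sketch for this kind of interrupted-upgrade situation (not taken from the original thread): it assumes the default Ubuntu data directory /var/lib/mysql, the 5.1 package names mentioned in the question, and an upstart-managed mysql service; adjust those if your setup differs. The key idea is to copy the data directory aside before reinstalling anything, so the databases survive whatever the package scripts do.

    # Check free disk space first; the upgrade originally failed because the disk filled up
    df -h /var

    # Back up the data directory before touching any packages (assumption: default datadir)
    sudo cp -a /var/lib/mysql /var/lib/mysql.backup

    # Let apt finish the interrupted configuration, then reinstall the server package
    sudo dpkg --configure -a
    sudo apt-get install --reinstall mysql-server-5.1

    # Make sure the mysql user still owns the data files, then restart and watch the log
    sudo chown -R mysql:mysql /var/lib/mysql
    sudo service mysql restart
    tail -n 50 /var/log/mysql/error.log

    If the reinstall ends up recreating an empty data directory, copying the backed-up database folders (plus the ibdata and ib_logfile files, for InnoDB tables) back into /var/lib/mysql and fixing ownership is usually enough to recover the data.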

    Read the article

  • A Complete Customer Experience Solution (3 of 3 in 'No Customer Left Behind' Series)

    - by Kathryn Perry
    A guest post by David Vap, Group Vice President, Oracle Applications Product Development

    In my previous post, I talked about taking three concrete steps to improve your customers' overall experiences: 1) understand your customer, 2) empower your ecosystem, and 3) adapt your business. To do these effectively and efficiently, it's important to find the right technology that can bridge the gaps across your channels, interactions, departments, and repositories. Oracle has spent the past three years and more than six billion dollars acquiring and developing some of the world's best-of-breed applications. The result is the most comprehensive customer experience (CX) portfolio offering in the world, bar none:
    - ATG: best-in-class selling experiences
    - FatWire: best-in-class marketing experiences
    - InQuira: best-in-class support experiences
    - Endeca: best-in-class search experiences
    - RightNow: best-in-class service experiences
    - Vitrue & Involver: best-in-class social marketing
    - Collective Intellect: best-in-class social listening

    We don't expect organizations to eat the CX elephant in one bite, nor should they try to. There are key strategic initiatives within each of the four main pillars of our customer experience offering for which we deliver solutions:

    1. Customer Experience for Marketing: Social Listening and Engagement, Social Marketing, Marketing Websites, Demand Generation and Lead Management, Marketing and Loyalty Management
    2. Customer Experience for Commerce: Search, Navigation & Content Delivery; Cross-Channel Commerce; Targeting & Product Recommendations; Social Commerce; Order Management & Fulfillment; Retail Store Operations
    3. Customer Experience for Sales: Sales Force Automation, Social Selling, Territory & Quota Management, Revenue Forecasting, Partner Relationship Management, Quote to Cash, Incentive Compensation
    4. Customer Experience for Service: Cross-Channel Customer Service; Knowledge Management; Social Customer Service; Eligibility Management; Contracts, Assets, and Entitlements; Industry-Specific Solutions; eBilling

    Oracle's customer experience portfolio is socially infused at each layer of our pillars rather than simply bolted on as a side process. This combines with the power of the Cloud to run the parts of the solution that need the access, efficiency, and agility of a managed infrastructure. You get compliance control from the on-premise backbone infrastructure systems that run your business and don't change that often. Please take advantage of our teams of Oracle customer experience professionals and our key agency and technology partner ecosystem. They can help you develop strategic solution roadmaps that build and deliver customer experience and that are tailored to your business needs and objectives. No one has built a better customer service portfolio to manage the entire customer journey than Oracle. It is backed by CX thought leadership programs, a commitment from our executives, and a worldview that your technology decisions must be driven by your customer experiences to succeed. If you'd like to follow up on this conversation, please leave a comment or contact me at [email protected]. You can get more information on Oracle's complete customer experience solution here.

    Read the article

  • WebCenter Customer Spotlight: Ferrous Resources do Brasil S.A.

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Ferrous Resources do Brasil S.A. (Ferrous) is a startup company whose core business is the exploration, prospecting, exploitation, and commercialization of iron ore. The company wanted to create an effective, secure, and scalable document management system to support its new iron ore exploration operations in Brazil. Ferrous worked with the Oracle partner 2D Tecnologia to implement a centralized document management system using Oracle WebCenter Content. The single repository holds almost 220,000 files and is expected to grow to 8 million files over the next two years. The solution has reduced financial audit reporting from two weeks to only four days.

    Company Overview
    Founded in 2007, Ferrous Resources do Brasil S.A. (Ferrous) is a startup company whose core business is the exploration, prospecting, exploitation, and commercialization of iron ore. Ferrous intends to become one of the five largest iron ore mining companies in the world within the next few years.

    Business Challenges
    Ferrous wanted to create an effective, secure, and scalable document management system to support the company's new iron ore exploration operations in Brazil.

    Solution Deployed
    Ferrous worked with the Oracle partner 2D Tecnologia to implement a centralized document management system using Oracle WebCenter Content. They consolidated all company documents into a single repository holding almost 220,000 files, including iron-ore project layouts and pictures, and the repository is expected to grow to 8 million files in the next two years.

    Business Results
    - Gained access to reports on individual files of pictures, project layouts, text files, spreadsheets, and slides, enabling the company to find out who opened and altered each file and when, as well as to access previous versions
    - Enabled investors and the board of directors abroad to access all company documents via a Web portal, something that was previously achieved only through e-mails or CD file transfers
    - Enabled the company to consolidate all files, which were mostly scattered across pen drives and desktops, so that they are now available to more than 500 system users, including investors, lawyers, partners, and 320 in-company users
    - Reduced the time needed to find specific documents, saving several days in financial audit reporting, an activity that previously took two weeks and now requires only four days

    "With Oracle WebCenter Content, we managed to organize, control, and protect the company's files since the beginning of operations and, as a consequence, can offer rapid and transparent access to all company documents." - Frederico Samartini, Business Performance Manager, Ferrous Resources do Brasil S.A.

    Additional Information
    Ferrous Customer Snapshot
    Oracle WebCenter Content

    Read the article

  • WebCenter Customer Spotlight: Instituto Mexicano de la Propiedad Industrial

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Instituto Mexicano de la Propiedad Industrial (IMPI) is a decentralized federal agency with the goals of protecting and ensuring awareness of industrial property rights in Mexico. IMPI's business objectives were to increase efficiency, improve client service, accelerate services to the public, and reduce paper use by digitizing the management of the documentation needed for patent and trademark submissions and approvals. IMPI implemented Oracle WebCenter Content to develop an electronic inquiry service, digitizing and managing documents, and built a public Web site that makes patent-related information easily available online. With the implemented solution, IMPI increased the number of monthly inquiries from 200 in-person consultations to 80,000 electronic consultations, and the number of trademark record inquiries from 30,000 to 300,000.

    Company Overview
    Instituto Mexicano de la Propiedad Industrial (IMPI) is a decentralized federal agency with the goals of protecting and ensuring awareness of industrial property rights in Mexico. IMPI is responsible for registering and publicizing inventions, distinctive signs, trademarks, and patents. In addition to its Mexico City headquarters, IMPI has five regional offices.

    Business Challenges
    IMPI's business objectives were to increase efficiency by automating internal operations and patent and trademark-related procedures and services, improve client service by simplifying patent and trademark procedures, accelerate services to the public, and reduce paper use by digitizing the management of the documentation needed for patent and trademark submissions and approvals.

    Solution Deployed
    IMPI worked with Oracle Consulting to implement Oracle WebCenter Content and develop an electronic inquiry service - a service previously provided in person only - by digitizing and managing documents. They use Oracle Database 11g, Enterprise Edition to manage data for all mission-critical systems, automating patent and trademark transactions and providing consistent, readily available, and accurate data. IMPI developed a Web site to support the newly digitized information with simple and flexible interfaces, making patent-related information easily available to the public online.

    Business Results
    With the implemented solution, IMPI increased the number of monthly inquiries from 200 in-person consultations to 80,000 electronic consultations, and the number of trademark record inquiries from 30,000 to 300,000.

    "Oracle WebCenter Content structure is unique. It lets us separately manage communication with other applications and databases, and performs content management itself. It's a stable tool, at an appropriate cost, that lets us develop and provide reliable electronic services." - Eugenio Ponce de León, Divisional Director of Systems and Technology, Instituto Mexicano de la Propiedad Industrial

    Additional Information
    Instituto Mexicano de la Propiedad Customer Snapshot
    Oracle WebCenter Content

    Read the article

  • l2tp server always 'sent [CCP ResetReq id=0x3]' when got compressed data request

    - by wilbur
    I have built an xl2tpd/IPsec server on my Ubuntu 12.04.3, and I managed to make an L2TP VPN connection to the xl2tpd server from my Android phone. The xl2tpd log said:

    xl2tpd[10828]: Enabling IPsec SAref processing for L2TP transport mode SAs
    xl2tpd[10828]: IPsec SAref does not work with L2TP kernel mode yet, enabling forceuserspace=yes
    xl2tpd[10828]: setsockopt recvref[22]: Protocol not available
    xl2tpd[10828]: This binary does not support kernel L2TP.
    xl2tpd[10828]: xl2tpd version xl2tpd-1.2.8 started on atime.me PID:10828
    xl2tpd[10828]: Written by Mark Spencer, Copyright (C) 1998, Adtran, Inc.
    xl2tpd[10828]: Forked by Scott Balmos and David Stipp, (C) 2001
    xl2tpd[10828]: Inherited by Jeff McAdams, (C) 2002
    xl2tpd[10828]: Forked again by Xelerance (www.xelerance.com) (C) 2006
    xl2tpd[10828]: Listening on IP address 0.0.0.0, port 1701
    xl2tpd[10828]: control_finish: Peer requested tunnel 39154 twice, ignoring second one.
    xl2tpd[10828]: Connection established to 117.136.8.59, 43149. Local: 25339, Remote: 39154 (ref=0/0). LNS session is 'default'

    However, I cannot access the web in my browser. The pppd log said:

    rcvd [Compressed data] 00 1d 82 c4 7c 04 d8 09 ...
    sent [CCP ResetReq id=0x7]

    I have googled a lot and found that this is mostly caused by an MPPE decompression error. I disabled BSD-Compress compression with nobsdcomp in /etc/ppp/xl2tpd-options, but it did not help. I am using openswan-2.6.33 and xl2tpd-1.2.8, both built from source. These are my configurations:

    /etc/ipsec.conf

    version 2.0
    config setup
        nat_traversal=yes
        virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
        oe=off
        protostack=netkey

    conn L2TP-PSK-NAT
        rightsubnet=vhost:%priv
        also=L2TP-PSK-noNAT

    conn L2TP-PSK-noNAT
        authby=secret
        pfs=no
        auto=add
        keyingtries=3
        rekey=no
        ikelifetime=8h
        keylife=1h
        type=transport
        left=106.186.121.214
        leftprotoport=17/1701
        right=%any
        rightprotoport=17/%any

    /etc/xl2tpd/xl2tpd.conf

    [global]
    ipsec saref = yes

    [lns default]
    local ip = 10.10.11.1
    ip range = 10.10.11.2-10.10.11.245
    refuse chap = yes
    refuse pap = yes
    require authentication = yes
    ppp debug = yes
    pppoptfile = /etc/ppp/xl2tpd-options
    length bit = yes

    /etc/ppp/xl2tpd-options

    require-mschap-v2
    ms-dns 8.8.8.8
    ms-dns 8.8.4.4
    asyncmap 0
    auth
    crtscts
    lock
    hide-password
    modem
    name l2tpd
    proxyarp
    lcp-echo-interval 30
    lcp-echo-failure 4
    debug
    nobsdcomp

    Any suggestions? Thanks in advance.
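    A minimal troubleshooting sketch for this symptom (not taken from an accepted answer): repeated CCP ResetReq messages usually indicate that MPPE compression/encryption was negotiated over PPP but the server cannot decompress the incoming frames, so the sketch first checks for the ppp_mppe kernel module and then offers two alternative options-file changes. The file paths and options file come from the question; the kernel module name, pppd options, and service name are assumptions, and xl2tpd built from source may not have an init script, in which case restart the daemon however it was started.

    # Check whether the MPPE kernel module is available and loaded (assumption: stock Ubuntu kernel)
    lsmod | grep ppp_mppe || sudo modprobe ppp_mppe

    # Option A: explicitly require 128-bit MPPE so both ends agree on the same CCP setup
    echo "require-mppe-128" | sudo tee -a /etc/ppp/xl2tpd-options

    # Option B (instead of A): refuse CCP altogether so no compression/encryption is negotiated over PPP;
    # the tunnel is already protected by IPsec, so this mainly affects interoperability, not confidentiality
    echo "noccp" | sudo tee -a /etc/ppp/xl2tpd-options

    # Restart xl2tpd so the new pppd options take effect, then reconnect from the phone and watch the log
    sudo service xl2tpd restart
    tail -f /var/log/syslog | grep -E 'xl2tpd|pppd'

    If disabling CCP restores web access but some pages still hang, an MTU/MSS mismatch is the next usual suspect (for example, adding mtu 1410 and mru 1410 to the options file), but that is a separate issue from the CCP ResetReq loop.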

    Read the article
