Search Results

Search found 17952 results on 719 pages for 'oracle vm'.

Page 389/719 | < Previous Page | 385 386 387 388 389 390 391 392 393 394 395 396  | Next Page >

  • [Japanese-language post: title lost to character encoding]

    - by Kumiko Fujita
    [The body of this post did not survive encoding; only product references are recoverable. It appears to announce a Japanese-language series of Oracle session materials, each offered as PDF and video (WMV/MP4), covering: Oracle ASM Cluster File System (ACFS), Oracle Enterprise Manager 12c, SPARC servers with Solaris and OVM, and Oracle DB 11g R2.]

    Read the article

  • Ubuntu boot hangs after message "Running /scripts/init-bottom ... done"

    - by Douglas B. Staple
    I've been trying to copy a Proxmox container based on the Ubuntu Precise Standard template to a VirtualBox VM. I am now stuck at a point where my new Ubuntu/VirtualBox VM hangs after the message "Running /scripts/init-bottom ... done" during boot. I started by installing Ubuntu Server 12.04.4 LTS on a VirtualBox VM; Ubuntu Server 12.04.4 LTS was the closest "official" Ubuntu ISO to the Proxmox container OS I could find. I installed all updates on both the Proxmox container and on the VirtualBox VM, the idea being to get the same kernel version running on the Proxmox container and the VirtualBox VM: sudo apt-get update ; sudo apt-get upgrade ; sudo apt-get dist-upgrade sudo reboot Then I rsync'd the entire Proxmox container to a temporary directory in the VirtualBox VM: cd / mkdir /tmp/backup rsync -e ssh -av --exclude={/dev,/proc,/sys,/tmp,/run,/mnt,/media,/lost+found,/boot,/selinux} root@my_proxmox_container_hostname:/ /tmp/backup Next I shut down the virtual machine and booted the VM with a bootable Linux image; I used the Desktop image of Ubuntu 12.04 LTS, ubuntu-12.04.4-desktop-i386.iso. Drop to a root prompt and mount the VM root filesystem: sudo mount /dev/sda1 /mnt Remove files from most of /mnt: cd /mnt sudo rm -rf bin etc home lib opt sbin root usr var Move all of the files from /mnt/tmp/backup into /mnt: sudo mv /mnt/tmp/backup/* /mnt Reboot the system. For me, at this point the system freezes during boot, after the message: Running /scripts/init-bottom ... done I've tried reinstalling GRUB and all manner of other things. I am almost ready to give up.
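
    In case it helps anyone debugging the same hang: note that wiping /lib above also wipes the VM kernel's /lib/modules and replaces it with the container's, so the boot can stall right after init-bottom when the real root takes over. A sketch of repairing that from the live CD (device names and the kernel version are assumptions; check /mnt/boot for the version actually installed):

        # from the Ubuntu live CD, as root; /dev/sda1 is assumed to be the VM root
        mount /dev/sda1 /mnt
        for d in /dev /dev/pts /proc /sys; do mount --bind $d /mnt$d; done
        chroot /mnt /bin/bash
        # reinstall the kernel package to restore /lib/modules and rebuild the initramfs;
        # the version below is illustrative - match what is in /boot, not uname -r
        apt-get install --reinstall linux-image-3.2.0-60-generic
        update-grub
        exit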

    Read the article

  • Virtual Machine loses network connectivity on Hyper-V Cluster

    - by Chris W
    We're running a number of VMs on a 6-node failover cluster of blades using Hyper-V. We have an intermittent issue (every few days, at different times - not a fixed frequency) of VMs losing network connectivity. Console access to the VM suggests all is fine, and the underlying blade has normal connectivity. To resolve the problem we either have to restart the VM or, more usually, we do a live migration to another blade, which brings connectivity back up, and we then migrate it back to the original blade. I've had 3 instances of this happen with a specific VM running on a particular blade; however, it has happened once with a different VM running on a different blade. All VMs and blades have the same basic setup and are running Windows 2008 R2. Any ideas where I should be looking to diagnose the possible causes of this problem, as the event logs provide no help? Edit: I've checked that each blade is running the latest NIC drivers and all seem to be fine. Something that is confusing me: a failover or restart of the VM resolves the issue. Whilst I need to work out the underlying issue that is causing the NICs to hang, I'm also concerned that the VM didn't fail over to another node, which would have solved the outage for me. Is there a way to configure the cluster so that it can tell that the VM guest has lost connectivity and fail it over? As things stand, the cluster assumes that the VM is running happily, as I presume Hyper-V reports that everything is fine even though there is a problem.

    Read the article

  • 2 Server FC SAN Configuration

    - by BSte
    I have 2 identical servers:
    - 48GB RAM
    - 8 GigE NICs
    - 2 FC NICs
    - 2x72GB RAID1 hard drives
    - Server 2008 R2 host
    I also have a Fibre Channel SAN:
    - 16x146GB RAID10 hard drives
    - 2x dual-port FC controllers (controllers A and B both have ports 1 and 2)
    - Server 1 has fiber to ports A1 and B1
    - Server 2 has fiber to ports A2 and B2
    - I kept the default config with 1 virtual disk and 1 volume
    - The default mappings show ports A1, A2, B1, B2 on LUN 0 with read-write
    My goal is:
    - 2x VMs with IIS and guest-level failover
    - 2x VMs with SQL 2008 Enterprise using a single DB and guest-level failover
    - 1x VM that is an application server, preferably with host failover. From what I read, this will also need AD for clustering to work.
    - I need at least 1 VM always running for IIS and the SQL DB. This includes hardware failover and application failover (i.e. rebooting a VM for critical updates).
    I was told I could install the VMs and run them from the SAN, and this is what I've tried: Installed MPIO and Hyper-V on Server 1 and Server 2. Added the SAN as disk E: on both servers, made it GPT and formatted it NTFS. Configured Hyper-V on both servers to use E:\VD and E:\VHD. On Server 1, I was able to install 3 VMs on the SAN and all worked well. On Server 2, I would start installing the other 2 VMs, but always at some point the VMs would get a corrupt .VHD message (on either server). Everything I found about the message typically related to antivirus, so I removed all antivirus on both host servers (now only running 2008 R2). I reformatted drive E: (SAN), recreated the VHD and VD directories, installed 3 VMs on Server 1, and then had the same issue when installing VMs on Server 2. Obviously something is wrong, but I'm not certain what exactly. My questions: 1) Are my goals possible with this hardware setup? I've read that 2008 R2 supports FC SANs, but a lot of articles seem to only give examples with iSCSI setups. 2) What would be the suggested route for setting up the SAN (disks, volumes, LUNs)? I've worked with Hyper-V on a single machine before and never had issues. Actual experience working with SANs and clustering is new to me. Any suggestions or recommendations to get me in the right direction would be much appreciated.

    Read the article

  • vmware - ACE, Workstation - how to manage remote clients?

    - by tom smith
    Hi. Exploring VMware products/services and have a few questions. As I understand it, you can use VMware Workstation to create a VM of a target machine/box/OS. Let's call this VM "foo". If I have 100 client PCs in my dept, and I want to install the VM (foo) on each client and also manage the remote VM instances of (foo), how can I accomplish this? Let's assume that the client machines are running Windows 7 and have the vmplayer app installed on the box. I'm looking to do the following kinds of actions regarding the remote client machines:
    - Update the foo VM/image with new updated copies
    - Make sure that every VM "foo" has the same user, but a unique passwd
    - Monitor the traffic/status of each client VM "foo" on each client
    - Start/Stop each client VM "foo" from the master console
    - Etc...
    Can this be accomplished? How would I do it, and what services/products would I need? I've tried talking to a few of the pre-sales guys at VMware, and got nowhere, other than being told to email my questions!! Looking at Google shed more insight, but I still have questions. So, if you have detailed VMware understanding, pointers to consultants, or resellers who can help, all pointers are greatly appreciated. Thanks -tom

    Read the article

  • Five Key Strategies in Master Data Management

    - by david.butler(at)oracle.com
    Here is a very interesting Profit Magazine article on MDM: A recent customer survey reveals the deleterious effects of data fragmentation. By Trevor Naidoo, December 2010. Across industries and geographies, IT organizations have grown in complexity, whether due to mergers and acquisitions, or decentralized systems supporting functional or departmental requirements. With systems architected over time to support unique, one-off process needs, they are becoming costly to maintain, and the Internet has only further added to the complexity. Data fragmentation has become a key inhibitor in delivering flexible, user-friendly systems. The Oracle Insight team conducted a survey over the past two years assessing customers' master data management (MDM) capabilities to get a sense of where they stand. The responses, from 27 respondents in six different industries, reveal five key areas in which customers need to improve their data management in order to get better financial results. 1. Less than 15 percent of organizations surveyed understand the sources and quality of their master data, and have a roadmap to address missing data domains. Examples of the types of master data domains referred to are customer, supplier, product, financial and site. Many organizations have multiple sources of master data with varying degrees of data quality in each source -- customer data stored in the customer relationship management system is inconsistent with customer data stored in the order management system. Imagine not knowing how many places you store your customer information, or whether a customer's address is the most up to date in each source. In fact, more than 55 percent of the respondents in the survey manage their data quality on an ad hoc basis. It is important for organizations to document their inventory of data sources and then profile these data sources to ensure that there is a consistent definition of key data entities throughout the organization. Some questions to ask are: How do we define a customer? What is a product? How do we define a site? The goal is to strive for one common repository for master data that acts as a cross-reference for all other sources and ensures consistent, high-quality master data throughout the organization. 2. Only 18 percent of respondents have an enterprise data management strategy to ensure that data is treated as an asset to the organization. Most respondents handle data at the department or functional level and do not have an enterprise view of their master data. The sales department may track all their interactions with customers as they move through the sales cycle, the service department tracks their interactions with the same customers independently, and the finance department has yet another perspective on the same customer. The salesperson may not be aware that the customer she is trying to sell to is experiencing issues with existing products purchased, or that the customer is behind on previous invoices. The lack of a data strategy makes it difficult for business users to turn data into information via reports. Without the key building blocks in place, it is difficult to create key linkages between customer, product, site, supplier and financial data. These linkages make it possible to understand patterns. A well-defined data management strategy is aligned to the business strategy and helps create the governance needed to ensure that data stewardship is in place and data integrity is intact.
3. Almost 60 percent of respondents have no strategy to integrate data across operational applications. Many respondents have several disparate sources of data with no strategy to keep them in sync with each other. Even though there is no clear strategy to integrate the data (see #2 above), the data needs to be synced and cross-referenced to keep the business processes running. About 55 percent of respondents said they perform this integration on an ad hoc basis, and in many cases, it is done manually with the help of Microsoft Excel spreadsheets. For example, a salesperson needs a report on global sales for a specific product, but the product has different product numbers in different countries. Typically, an analyst will pull all the data into Excel, manually create a cross-reference for that product, and then aggregate the sales. The exact same procedure has to be followed if the same report is needed the following month. A well-defined consolidation strategy will ensure that a central cross-reference is maintained, with updates in any one application being propagated to all the other systems, so that data is synchronized and up to date. This can be done in real time or in batch mode using integration technology. 4. Approximately 50 percent of respondents expend manual effort cleansing and normalizing data. Information stored in various systems usually follows different standards and formats, making it difficult to match the data. A customer's address can be stored in different ways using a variety of abbreviations -- for example, "av" or "ave" for avenue. Similarly, a product's attributes can be stored in a number of different ways; for example, a size attribute can be stored in inches or entered using the double-quote symbol (") for inches. These types of variations make it difficult to match up data from different sources. Today, most customers rely on manual, heroic efforts to match, cleanse, and de-duplicate data -- clearly not a scalable, sustainable model. To solve this challenge, organizations need the ability to standardize data for customers, products, sites, suppliers and financial accounts; however, less than 10 percent of respondents have technology in place to automatically resolve duplicates. It is no wonder, therefore, that we get communications about products we don't own, at addresses where we don't reside, and through channels (like direct mail) we don't like. An all-too-common example of the challenge: customers end up receiving duplicate communications, which not only impacts customer satisfaction, but also incurs additional mailing costs. Cleansing, normalizing, and standardizing data will help address most of these issues. 5. Only 10 percent of respondents have the ability to share data that was mastered in a master data hub. Close to 60 percent of respondents have efforts in place that profile, standardize and cleanse data manually, and the output of these efforts is stored in spreadsheets in various parts of the organization. This valuable information is not easily shared with the rest of the organization and, more importantly, this enriched information cannot be sent back to the source systems so that the data is fixed at the source. A key benefit of a master data management strategy is not only to clean the data, but also to share the data back to the source systems as well as to other systems that need the information. Aside from the source systems, another key beneficiary of this data is the business intelligence system.
Having clean master data as input to business intelligence systems provides more accurate and enhanced reporting. Characteristics of Stellar MDM: When deciding on the right master data management technology, organizations should look for solutions that have four main characteristics: enterprise-grade MDM performance; complete technology that can be rapidly deployed and addresses multiple business issues; end-to-end MDM process management with data quality monitoring and assurance; and pre-built, business-relevant MDM applications with data stores and workflows. These master data management capabilities will aid in moving closer to a best-practice maturity level, delivering tremendous efficiencies and savings as well as revenue growth opportunities as a result of better understanding your customers. Trevor Naidoo is a senior director in Industry Strategy and Insight at Oracle.
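
    To make the cross-reference idea in point 3 concrete, here is a minimal SQL sketch (all table and column names are invented for illustration) of resolving local product numbers through a central cross-reference before aggregating sales, rather than rebuilding the mapping in Excel every month:

        -- hypothetical master cross-reference: one master_id per real-world product,
        -- mapped to each local application's product number
        CREATE TABLE product_xref (
          master_id      NUMBER       NOT NULL,
          source_system  VARCHAR2(30) NOT NULL,
          local_product  VARCHAR2(30) NOT NULL,
          CONSTRAINT product_xref_pk PRIMARY KEY (source_system, local_product)
        );

        -- global sales for each master product, regardless of local numbering
        SELECT x.master_id, SUM(s.amount) AS global_sales
        FROM   sales s
               JOIN product_xref x
                 ON x.source_system = s.source_system
                AND x.local_product = s.product_number
        GROUP BY x.master_id;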

    Read the article

  • OraOps10.dll loading problem

    - by Rodnower
    Hello, I have an ASP.NET web service built on Windows 7 in 32-bit. All dependencies of this service are compiled in Release mode for x64. Now I've installed it on Windows 8 64-bit, and when I access the service I get the error "Could not load OraOps10.dll". I haven't succeeded in finding anything on the internet about this problem with the Oracle client in the context of x86/x64 incompatibility. Do you have any ideas? Thank you very much.

    Read the article

  • What permissions do I need to run SQL*Loader?

    - by Jason Baker
    What permissions does a database user need to be able to run Oracle's SQL*Loader? For instance, since SQL*Loader will disable indexes and triggers, does it need ALTER permissions on those items? This seems like a simple question, but I can't find any documentation on this in the manual.
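
    For reference, a sketch of the grants a dedicated load account typically needs for a conventional-path load (the user and table names are illustrative; REPLACE/DELETE in the control file additionally needs DELETE, and direct-path loads have further requirements covered in the utilities documentation):

        -- run as a DBA
        GRANT CREATE SESSION TO loader_user;
        GRANT INSERT ON app.target_table TO loader_user;
        GRANT DELETE ON app.target_table TO loader_user;  -- only for REPLACE/DELETE loads
        -- TRUNCATE in the control file requires owning the table or DROP ANY TABLE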

    Read the article

  • Memory Usage for Databases on Linux

    - by Kyle Brandt
    So with the output of free, what we care about for application memory usage is generally the amount of free memory shown in the -/+ buffers/cache line. What about with database applications such as Oracle: is it important to have a good amount of buffer and cache memory available for a database to run well with all the I/O? If that makes any sense, how do you figure out just how much?
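
    For what it's worth, a sketch of reading that line (the numbers below are from a hypothetical host):

        $ free -m
                     total       used       free     shared    buffers     cached
        Mem:         32169      31624        545          0        186      28640
        -/+ buffers/cache:       2798      29371
        Swap:         2047          0       2047

    The second column of the -/+ buffers/cache line (29371 MB here, i.e. free + buffers + cached) is what the kernel could hand back to applications. One caveat for the database case: Oracle does most of its caching inside its own SGA, so the OS-level "cached" figure mainly helps file-system I/O outside the buffer cache, at least when the database is not configured for direct I/O.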

    Read the article

  • problem getting php_oci8 working on linux RHEL 5

    - by Jonathan
    Hi All, I'm installing Oracle OCI8 on a Linux server here and I am having an issue where php_oci8.so does not seem to be able to find libclntsh.so.11.1. I've got the Instant Client installed and it shows up fine in ldconfig -p, but when I run ldd on php_oci8.so the library shows up as not found. Does anyone have any ideas as to what I can check?
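
    One thing worth checking, sketched here (the Instant Client and extension paths are assumptions - substitute wherever yours are installed): the web server does not inherit your shell's LD_LIBRARY_PATH, so register the client library directory with the dynamic linker system-wide:

        # as root; both paths below are illustrative
        echo /usr/lib/oracle/11.2/client64/lib > /etc/ld.so.conf.d/oracle-instantclient.conf
        ldconfig
        ldd /usr/lib64/php/modules/php_oci8.so | grep libclntsh   # should now resolve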

    Read the article

  • OAS log files filling up hard drive

    - by Andrew Hampton
    We've had issues with log files for Oracle Application Server filling up the hard drive on our server. The files are in the /network/admin folder and are named server.log_XXXXX.trc and client.log_XXXXX.trc where XXXXX are 5 digits. The files are typically anywhere from 1-2MB in size but can be up to 100MB and thousands of them are created at a rate of about 5-10 per minute. Does anyone know how to disable these logs? Thanks!
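
    If these turn out to be Oracle Net trace files (server.log*/client.log* .trc files under network/admin usually mean sqlnet tracing has been switched on), a sketch of the sqlnet.ora change that disables them - the parameter names are standard Oracle Net ones, but confirm which component is actually writing the files before relying on this:

        # $ORACLE_HOME/network/admin/sqlnet.ora
        TRACE_LEVEL_SERVER = OFF
        TRACE_LEVEL_CLIENT = OFF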

    Read the article

  • Who are the GoldenGate Extract users?

    - by sharif
    I am setting up GoldenGate, and the installation guide is quite confusing, as it refers to steps which have not been done or were already done previously. I am on step 4.8.1 of the Oracle installation guide. It is asking for an "Extract" user name, and I do not recall creating one other than the GoldenGate user. Also, what are the other four users it refers to in section 4.6 (Extract, Replicat, Manager, DEFGEN)? What are the usernames for each of these in the DB?
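
    In case it helps: the "Extract" user the guide asks for is normally just a database account that the Extract process logs into the source database with, and many sites create one dedicated GoldenGate account and use it for Extract, Replicat, Manager and DEFGEN alike rather than four separate users. A sketch of creating such an account (the username and exact grant list are illustrative; follow the privileges section of the guide for your version):

        CREATE USER ggs_admin IDENTIFIED BY "some_password";
        GRANT CREATE SESSION, RESOURCE TO ggs_admin;
        GRANT SELECT ANY DICTIONARY TO ggs_admin;   -- Extract reads the data dictionary
        GRANT SELECT ANY TABLE TO ggs_admin;        -- and the source tables
        GRANT FLASHBACK ANY TABLE TO ggs_admin;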

    Read the article

  • Rails: constraint violation on create but not on update

    - by justinbach
    Note: This is a "railsier" (and more succinct) version of this question, which was getting a little long. I'm getting Rails behavior on a production server that I can't replicate on the development server. The codebases are identical save for credentials and caching settings, and both are powered by Oracle 10g databases with identical schema (but different data). My Rails application contains a user model, which has_one registration; registration in turn has_and_belongs_to_many company_ownerships through a registration_ownerships table. Upon registering, users fill out data pertinent to all three models, including a series of checkboxes indicating what registration_ownerships might apply to their account. On the dev server, the registration process is seamless, no matter what data is entered. On production, however, if users check off any of the company ownership fields before submitting their registration, Oracle complains about a constraint violation on the primary key of the company_ownerships table (which is a two-field key based on company_ownership_id and registration_id) and users get the standard Rails 500 error screen. In every case, I've verified that no conflicting record on these two fields exists in the production database, so I don't know why the constraint is getting violated. To further confuse things, if a user registers without listing any ownerships and later goes back and modifies their account to reflect ownership data (which is done through the same interface), the application happily complies with their request and Oracle is well-behaved (this is both on production and dev). I've spent the past couple days trying to figure out what might be causing this problem and am reaching the end of my wits. Any advice would be greatly appreciated!

    Read the article

  • ORA-01157 / Can't connect to database

    - by Tom
    Hi everyone, this is a follow-up to an earlier question. Let me start by saying that I am NOT a DBA, so I'm really, really lost with this. A few weeks ago, we lost contact with one of our SIDs. All the other services are working, but this one in particular is not. What we got when trying to connect was this message: ORA-01033: ORACLE initialization or shutdown in progress An attempt to alter database open ended up in: ORA-01157: cannot identify/lock data file 6 - see DBWR trace file ORA-01110: data file 6: '/u01/app/oracle/oradata/xxx/xxx_data.dbf' I tried to shut down / restart the database, but got this: Total System Global Area 566231040 bytes Fixed Size 1220604 bytes Variable Size 117440516 bytes Database Buffers 444596224 bytes Redo Buffers 2973696 bytes Database mounted. ORA-01157: cannot identify/lock data file 6 - see DBWR trace file ORA-01110: data file 6: '/u01/app/oracle/oradata/xxx/xxx_data.dbf' When everything continued the same, I erased the dbf files (rm xxx_data.dbf xxx_index.dbf) and recreated them using touch xxx_data.dbf. I also tried to recreate the tablespace using `CREATE TABLESPACE DATA DATAFILE XXX_DATA.DBF` and got: Database not open As I said, I don't know how bad this is, or how far I am from regaining access to my database (well, to this SID at least; the others are working). I imagine a last resort would be to throw everything away and recreate it, but I don't know how to, and I was hoping there's a less destructive solution. Any help will be greatly appreciated. Thanks in advance.
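
    For anyone hitting the same ORA-01157 after the underlying file is already gone at OS level: the usual (destructive!) way to get the rest of the database open is to offline-drop that datafile while mounted and then rebuild the tablespace. A sketch follows; note that OFFLINE DROP abandons whatever was stored in that file, so only do this if its contents are expendable or restorable from backup:

        -- as SYSDBA
        STARTUP MOUNT;
        ALTER DATABASE DATAFILE '/u01/app/oracle/oradata/xxx/xxx_data.dbf' OFFLINE DROP;
        ALTER DATABASE OPEN;
        -- the tablespace is now unusable; drop and recreate it
        DROP TABLESPACE data INCLUDING CONTENTS;
        CREATE TABLESPACE data
          DATAFILE '/u01/app/oracle/oradata/xxx/xxx_data.dbf' SIZE 500M;  -- size illustrative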

    Read the article

  • How do I convert an AMD module from a singleton to an instance?

    - by Jamie Ide
    I'm trying to convert a working Durandal view model module from a singleton to an instance. The original working version followed this pattern: define(['knockout'], function(ko) { var vm = { activate: activate, companyId: null, company: ko.observable({}) }; return vm; function activate(companyId) { vm.companyId = companyId; //get company data then vm.company(data); } }); The new version exports a function so that I get a new instance on every request: define(['knockout'], function(ko) { var vm = function() { activate = activate; companyId = null; company = ko.observable({}); }; return vm; function activate(companyId) { vm.companyId = companyId; //get company data then vm.company(data); } }); The error I'm getting is "object function () [...function signature...] has no method 'company'" on the line vm.company(data);. What am I doing wrong? Why can I set the property but not access the knockout observable? How should I refactor the original code so that I get a new instance on every request? (My efforts to simplify the code for this question hid the actual problem. My real code was using Q promises and calling two methods with Q.all. Since Q is in the global namespace, it couldn't resolve my view model after converting to a function. Passing the view model to the methods called by Q resolved the problem.)
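
    For the record, a minimal sketch of the instance pattern the question is reaching for (Durandal calls activate on the returned instance, so state lives on this and the module returns the constructor itself):

        define(['knockout'], function (ko) {
            var ViewModel = function () {
                this.companyId = null;
                this.company = ko.observable({});
            };
            ViewModel.prototype.activate = function (companyId) {
                var self = this;
                self.companyId = companyId;
                // fetch company data here, then: self.company(data);
            };
            return ViewModel;  // consumers call: new ViewModel()
        });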

    Read the article

  • Trying not to need two separate solutions for an x86 and x64 program

    - by Sean Anderson
    Hi all, I have a program which needs to function in both x86 and x64 environments. It is using Oracle's ODBC drivers, and I have a reference to Oracle.DataAccess.DLL. This DLL is different depending on whether the system is x64 or x86, though. Currently, I have two separate solutions and I am maintaining the code in both. This is atrocious. I was wondering what the proper solution is? I have my platform set to "Any CPU", and it is my understanding that VS should compile the DLL to an intermediate language such that it should not matter whether I use the x86 or x64 version. Yet, if I attempt to use the x64 DLL I receive the error "Could not load file or assembly 'Oracle.DataAccess, Version=2.102.3.2, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. An attempt was made to load a program with an incorrect format." I am running on a 32-bit machine, so the error message makes sense, but it leaves me wondering how I am supposed to efficiently develop this program when it needs to work on x64. Thanks.
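
    A common workaround, sketched below (paths and version are illustrative), is to keep a single solution, build explicit x86 and x64 configurations instead of Any CPU (Oracle.DataAccess wraps native code, so pure IL portability doesn't apply to it), and let MSBuild pick the matching DLL per platform with conditional hint paths in the .csproj:

        <!-- in the .csproj; the relative paths are assumptions -->
        <Reference Include="Oracle.DataAccess">
          <HintPath Condition=" '$(Platform)' == 'x86' ">..\lib\x86\Oracle.DataAccess.dll</HintPath>
          <HintPath Condition=" '$(Platform)' == 'x64' ">..\lib\x64\Oracle.DataAccess.dll</HintPath>
        </Reference>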

    Read the article

  • Issue with creating index organized table

    - by mtim
    I'm having a weird problem with an index-organized table. I'm running Oracle 11g Standard. I have a table src_table SQL> desc src_table; Name Null? Type --------------- -------- ---------------------------- ID NOT NULL NUMBER(16) HASH NOT NULL NUMBER(3) ........ SQL> select count(*) from src_table; COUNT(*) ---------- 21108244 Now let's create another table and copy 2 columns from src_table: set timing on SQL> create table dest_table(id number(16), hash number(20), type number(1)); Table created. Elapsed: 00:00:00.01 SQL> insert /*+ APPEND */ into dest_table (id,hash,type) select id, hash, 1 from src_table; 21108244 rows created. Elapsed: 00:00:15.25 SQL> ALTER TABLE dest_table ADD ( CONSTRAINT dest_table_pk PRIMARY KEY (HASH, id, TYPE)); Table altered. Elapsed: 00:01:17.35 It took Oracle < 2 min. Now the same exercise, but with an IOT: SQL> CREATE TABLE dest_table_iot ( id NUMBER(16) NOT NULL, hash NUMBER(20) NOT NULL, type NUMBER(1) NOT NULL, CONSTRAINT dest_table_iot_PK PRIMARY KEY (HASH, id, TYPE) ) ORGANIZATION INDEX; Table created. Elapsed: 00:00:00.03 SQL> INSERT /*+ APPEND */ INTO dest_table_iot (HASH,id,TYPE) SELECT HASH, id, 1 FROM src_table; The "insert" into the IOT takes 18 hours!!! I have tried it on 2 different instances of Oracle running on Windows and Linux and got the same results. What is going on here? Why is it taking so long?
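
    A sketch of the usual workaround, on the theory that the slowdown comes from inserting rows in random key order into the IOT's B-tree (presorting the source rows lets the load fill leaf blocks sequentially instead of splitting blocks all over the index):

        INSERT /*+ APPEND */ INTO dest_table_iot (hash, id, type)
        SELECT hash, id, 1
        FROM   src_table
        ORDER BY hash, id;   -- match the IOT primary key order (HASH, ID, TYPE)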

    Read the article

  • Inner or Left Outer Join

    - by user1557856
    I'm having difficulty modifying a script for this situation and wondering if someone may be able to help: I have an address table and a phone table, both sharing the same column called id_number, so id_number = 2 on both tables refers to the same entity. Address and phone information used to be stored in one table (the address table), but it is now split into address and phone tables since we moved to Oracle 11g. There is a 3rd table called both_ids. This table also has an id_number column, in addition to an other_ids column storing SSN and some other IDs. Before the table was split into address and phone tables, I had this script (written in Sybase): INSERT INTO sometable_3 ( SELECT a.id_number, a.other_id, NVL(a1.addr_type_code,0) home_addr_type_code, NVL(a1.addr_status_code,0) home_addr_status_code, NVL(a1.addr_pref_ind,0) home_addr_pref_ind, NVL(a1.street1,0) home_street1, NVL(a1.street2,0) home_street2, NVL(a1.street3,0) home_street3, NVL(a1.city,0) home_city, NVL(a1.state_code,0) home_state_code, NVL(a1.zipcode,0) home_zipcode, NVL(a1.zip_suffix,0) home_zip_suffix, NVL(a1.telephone_status_code,0) home_phone_status, NVL(a1.area_code,0) home_area_code, NVL(a1.telephone_number,0) home_phone_number, NVL(a1.extension,0) home_phone_extension, NVL(a1.date_modified,'') home_date_modified FROM both_ids a, address a1 WHERE a.id_number = a1.id_number(+) AND a1.addr_type_code = 'H'); Now that we have moved to Oracle 11g, the address and phone information are split. How can I modify the above script to generate the same result in Oracle 11g? Do I have to first do an INNER JOIN between the address and phone tables and then do a LEFT OUTER JOIN to both_ids? I tried the following and it did not work: Insert Into.. select ... FROM a1.address INNER JOIN t.Phone ON a1.id_number = t.id_number LEFT OUTER JOIN both_ids a ON a.id_number = a1.id_number WHERE a1.adrr_type_code = 'H'
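
    A sketch of the ANSI-join rewrite, with the column list abbreviated (note the addr_type_code filter belongs in the ON clause; putting it in WHERE silently turns the outer join back into an inner one, and the alias goes after the table name, not before it):

        INSERT INTO sometable_3
        SELECT a.id_number,
               a.other_id,
               NVL(a1.addr_type_code, 0)  home_addr_type_code,
               -- ... remaining address columns as in the original script ...
               NVL(t.telephone_number, 0) home_phone_number
               -- ... remaining phone columns, now taken from the phone table ...
        FROM   both_ids a
               LEFT OUTER JOIN address a1
                      ON a1.id_number = a.id_number
                     AND a1.addr_type_code = 'H'
               LEFT OUTER JOIN phone t
                      ON t.id_number = a.id_number;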

    Read the article

  • OpenStack: How to make Cloudify use the floating IP instead of the fixed one?

    - by polslinux
    I have a problem with Cloudify (both the 2.5 and 2.6-rc releases). I have an all-in-one OpenStack 2013.1.1 setup and I'm trying to use Cloudify to bootstrap a CirrOS 0.3.1 VM. My Quantum configuration is: a pool of fixed IPs (10.0.0.0/24) for VM management; a pool of floating IPs (192.168.1.170-190) taken from 192.168.1.1/24 (my LAN). When I deploy a VM, it is first given an IP from 10.0.0.0/24 (which I cannot reach from my PCs because it is only for VM management), and then I associate a floating IP, with which I can ping (and ssh to) the deployed machine. The problem is when I do: bootstrap-cloud openstack Cloudify stays forever at "attempting to access management vm 10.0.0.3", and this is because 10.0.0.3 is not reachable. What can I do to get Cloudify to take the floating IP instead of the fixed one?

    Read the article

  • no ocijdbc10 in java.library.path

    - by B.Z.B
    Hey all, I've been plagued by this issue: whenever I try to run my app in Eclipse, I get this error: 2011-02-23 09:55:08,388 ERROR (com.xxxxx.services.factory.ServiceInvokerLocal:21) - java.lang.UnsatisfiedLinkError: no ocijdbc10 in java.library.path I've tried following the steps I found here with no luck. I've tried this on an XP VM as well as Windows 7 (although in Win 7 I get a different error, below): java.lang.UnsatisfiedLinkError: no ocijdbc9 in java.library.path I've made sure my Oracle client is OK (by running TOAD), and I also re-added the classes12.jar / ojdbc14.jar to my WEB-INF/lib folder, taken directly from my %ORACLE_HOME% folder (also re-added them to the lib path). I've also tried just adding the ojdbc14.jar without classes12.jar. Any suggestions appreciated. In the XP VM I have my PATH variable set to C:\Program Files\Java\jdk1.6.0_24\bin;C:\ORACLE\product\10.2.0.1\BIN. I'm using Tomcat server 5.0.
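
    One sidestep that may be worth trying (assuming nothing in the app specifically requires the OCI driver): the UnsatisfiedLinkError means the JDBC OCI driver is trying to load the native Oracle client libraries, while the pure-Java thin driver needs no native libraries at all:

        // OCI URL (requires ocijdbc10.dll on java.library.path):
        //   jdbc:oracle:oci:@tnsname
        // Thin URL (pure Java; host, port and SID below are illustrative):
        String url = "jdbc:oracle:thin:@dbhost:1521:ORCL";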

    Read the article

  • JavaScript malware analysis

    - by begueradj
    I want to test websites for the presence of JavaScript malware. I plan to develop a Python program that sends the URL of a given website to a virtual machine, where the dynamic execution of any malicious JavaScript embedded in the website's page is monitored. My questions: Should my VM be Windows or Linux? What if the malware damages my VM: is there a hint how to avoid that, or should I launch a new VM automatically instead? If I use a telnet client library to communicate with the VM: must I implement a server within the VM to deal with my queries, or can I avoid this? I am just looking for hints and general ideas. Thank you for any help.

    Read the article

  • MySQL at Mobile World Congress (on Valentine's Day...)

    - by mat.keep(at)oracle.com
    It is that time of year again when the mobile communications industry converges on Barcelona for what many regard as the premier telecommunications show of the year. Starting on February 14th, what better way for a Brit like me to spend Valentine's Day than with 50,000 mobile industry leaders (my wife doesn't tend to read this blog, so I'm reasonably safe with that statement). As ever, Oracle has an extensive presence at the show, and part of that presence this year includes MySQL. We will be running a live demonstration of the MySQL Cluster database on Booth 7C18 in the App Planet. The demonstration will show how the MySQL Cluster Connector for Java is implemented to provide native connectivity to the carrier-grade MySQL Cluster database from Java ME clients via Java SE virtual machines and Java EE servers. The demonstration will show how end-to-end Java services remain continuously available during both catastrophic failures and scheduled maintenance activities. The MySQL Cluster Connector for Java provides both a native Java API and a JPA plug-in that directly maps Java objects to relational tables stored in the MySQL Cluster database, without the overhead and complexity of having to transform objects to JDBC and then SQL. The result is 10x higher throughput and a simpler development model for Java engineers. Stop by the stand for a demonstration, and an opportunity to speak with the MySQL telecoms team, who will share experiences on how MySQL is being used to bring the innovation of the web to the carrier network. Of course, if you can't make it to Barcelona, you can still learn more about the MySQL Cluster Connector for Java from this whitepaper, and you are free to download it as part of MySQL Cluster Community Edition. Let us know via the comments if you have Java applications that you think will benefit from the MySQL Cluster Connector for Java. I can't promise that Valentine's Day at MWC will be the time you fall in love with MySQL Cluster... but I'm confident you will at least develop a healthy respect for it.
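
    For a flavour of the object-to-table mapping described above, a minimal ClusterJ-style sketch (the table, class and property names are invented for illustration; see the whitepaper for the real interfaces):

        import java.util.Properties;
        import com.mysql.clusterj.ClusterJHelper;
        import com.mysql.clusterj.Session;
        import com.mysql.clusterj.SessionFactory;
        import com.mysql.clusterj.annotation.PersistenceCapable;
        import com.mysql.clusterj.annotation.PrimaryKey;

        @PersistenceCapable(table = "subscriber")   // maps straight onto a cluster table
        interface Subscriber {
            @PrimaryKey
            int getId();
            void setId(int id);
            String getName();
            void setName(String name);
        }

        public class ClusterJDemo {
            public static void main(String[] args) {
                Properties props = new Properties();
                props.put("com.mysql.clusterj.connectstring", "mgmhost:1186"); // illustrative
                props.put("com.mysql.clusterj.database", "test");
                SessionFactory factory = ClusterJHelper.getSessionFactory(props);
                Session session = factory.getSession();
                Subscriber s = session.newInstance(Subscriber.class); // no JDBC, no SQL
                s.setId(1);
                s.setName("Alice");
                session.persist(s);
            }
        }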

    Read the article

< Previous Page | 385 386 387 388 389 390 391 392 393 394 395 396  | Next Page >