Search Results

Search found 20265 results on 811 pages for 'oracle bi 11g scorecards dashboards strategy execution'.


  • OraOps10.dll loading problem

    - by Rodnower
    Hello, I have an ASP.NET web service built on 32-bit Windows 7. All dependencies of this service were compiled in Release mode for x64. Now I have installed it on 64-bit Windows 8, and when I access the service I get the error "Could not load OraOps10.dll". I haven't been able to find anything on the internet about this problem with the Oracle client in the context of x86/x64 incompatibility. Do you have any ideas? Thank you very much.

    Read the article

  • WebLogic Server 11g with Java-based node manager: stdout/stderr log path configuration

    - by Mircea Vutcovici
    Hi, how can I configure the path where the stdout logs are written by the WebLogic server? I've read about -Dweblogic.log.RedirectStdoutToServerLogEnabled=true, but this redirects only part of the output. For example, if I take a thread dump, the output remains in the original log file. I think there should be an option for it in the nodemanager/startup.properties file. The WebLogic version is 10.3.2.0 and I am using a Java-based node manager. The OS is RHEL 5. Thank you, Mircea

    Read the article

  • Apache Solr: What is a good strategy for creating a tag/attribute-based search for images?

    - by Development 4.0
    I recently read an article about YayMicro that describes how they used Solr to search their photos. I would like to do something similar (but on a smaller scale). I have figured out how to have Solr search text files, but I would like to learn the best way to associate images with semi-structured/unstructured text. Do I create an XML file with an image link in it? I basically want to input a search string and have it return a grid of images. Yay Micro article link

    Read the article

  • What permissions do I need to run SQL*Loader?

    - by Jason Baker
    What permissions does a database user need to be able to run Oracle's SQL*Loader? For instance, since SQL*Loader will disable indexes and triggers, does it need ALTER privileges on those objects? This seems like a simple question, but I can't find any documentation on it in the manual.
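
    As a rough sketch of the usual answer (the user and table names below are hypothetical, and this is not an authoritative list): a conventional-path load only needs the ability to connect and to insert into the target table, while a direct-path load that skips index maintenance or disables constraints and triggers can additionally need ALTER on the affected objects.

      -- Minimal grants for a conventional-path SQL*Loader run (illustrative names)
      GRANT CREATE SESSION TO loader_user;
      GRANT INSERT, SELECT ON app_schema.target_table TO loader_user;

      -- A direct-path load (DIRECT=TRUE) that disables indexes, constraints or
      -- triggers during the load may also need object-level ALTER:
      GRANT ALTER ON app_schema.target_table TO loader_user;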

    Read the article

  • Memory Usage for Databases on Linux

    - by Kyle Brandt
    With free output, what we care about for application memory usage is generally the amount of free memory on the -/+ buffers/cache line. What about database applications such as Oracle: is it important to have a good amount of cache and buffers available for the database to run well with all the I/O? If that makes any sense, how do you figure out just how much?

    Read the article

  • Replacing Buffalo LinkStations with FreeNAS, overall backup strategy: am I on the right path?

    - by Shreko
    We've been using two Buffalo LinkStations of 320 GB each for a shared directory and employees' server storage (around 20 employees), so only documents (Word, Excel, CAD drawings, etc.) and database backups of the main application server (ERP, accounting). One Buffalo box serves as the main one, located in the server room next to the main application server; the other Buffalo box sits on the opposite side of the building (for fire protection) in a secure storage room and backs up the first one. We also have several external HDs that back up everything from the Buffalo box for an offsite backup. After 3.5 years of using these, capacity is the main limitation. I'm planning a replacement and would like to use FreeNAS (we already use m0n0wall with great success). I would like to keep it simple and continue a similar setup, building two low-power boxes with one 2 TB HD each. Is a low-power Atom mobo OK? I'm not sure about the HDs; I've read somebody on this site mentioning the Seagate ES.2 as more reliable and better performing. How would the eco/green drives compare? We've been pretty happy with the speed of the Buffalo boxes and I don't want my users to notice any slowdown. Any suggestions?

    Read the article

  • OAS log files filling up hard drive

    - by Andrew Hampton
    We've had issues with log files for Oracle Application Server filling up the hard drive on our server. The files are in the /network/admin folder and are named server.log_XXXXX.trc and client.log_XXXXX.trc where XXXXX are 5 digits. The files are typically anywhere from 1-2MB in size but can be up to 100MB and thousands of them are created at a rate of about 5-10 per minute. Does anyone know how to disable these logs? Thanks!

    Read the article

  • Problem getting php_oci8 working on Linux RHEL 5

    - by Jonathan
    Hi all, I'm installing Oracle OCI8 on a Linux server here and I am having an issue where php_oci8.so does not seem to be able to find libclntsh.so.11.1. I've got the Instant Client installed and it shows up fine in ldconfig -p, but when I run ldd on php_oci8.so the library shows up as "not found". Does anyone have any ideas about what I can check?

    Read the article

  • Who are the GoldenGate Extract users?

    - by sharif
    I am setting up GoldenGate, and the installation guide is quite confusing, as it refers to steps which have not been done or were already done previously. I am on step 4.8.1 of the Oracle installation guide. It is asking for an "Extract" user name; I do not recall creating any such user other than the GoldenGate user. Also, what are the other four users it refers to in section 4.6 (Extract, Replicat, Manager, DEFGEN)? What are the usernames for each of these in the database?

    Read the article

  • What is the best IP/subnet setup strategy for a multi-server web hosting setup?

    - by Roy Andre
    Sorry for the mixed-up title, but let me try to explain better: we run a hosting solution, which until now has supported shared hosting and VPSes. Easy enough. We are now getting larger clients that require a more complex setup. We have more or less settled the server setup itself, which will consist of:
      1-2 frontend proxy/load-balancing servers
      2+ application servers
      1 database server
      1 optional memcached server
    The issue we are dealing with is agreeing on a flexible and easy-to-maintain IP setup. So far we've considered VLANing the internal servers into their own subnet, we've thought of assigning an official IP to each server, and so on. What would be the best approach here? Any best practices? Using one official IP on the frontend server, and then just setting up an internal subnet for the servers behind it? We could then just NAT in any sources that need to access, for instance, the DB server directly over 3306.

    Read the article

  • How broken is a routing strategy that causes a martian packet (so far only) during tracepath?

    - by lkraav
    I believe I've achieved a table that routes packets from and to eth1/192.168.3.x through 192.168.3.1, and packets from and to eth0/192.168.1.x through 192.168.1.1 (helpful source). Question: when doing tracepath from 192.168.3.20 (from within a vserver), I'm getting kernel: [318535.927489] martian source 192.168.3.20 from 212.47.223.33, on dev eth0 at or near the target IP, while intermediary hops go without it (log below). I don't understand why this packet is arriving on eth0 instead of eth1, even after reading this: "Note that you may see packets from non-routable IP addresses when running the traceroute or tracepath commands. While packets cannot be routed to these routers, packets sent between 2 routers only need to know the address of the next hop within the local networks, which could be a non-routable address." Can someone explain that paragraph in human language? Based on short initial trials so far, everything else seems to work without causing martians. Is this contained to the nature of tracepath operation, or do I have some other, bigger routing problem that will break real traffic? Side note: is it possible to inspect a martian packet with tcpdump or wireshark or anything of the sort? I have not been able to get it to show up on my own.
      vserver-20 / # tracepath -n 212.47.223.33
       1:  192.168.3.2       0.064ms pmtu 1500
       1:  192.168.3.1       1.076ms
       1:  192.168.3.1       1.259ms
       2:  90.191.8.2        1.908ms
       3:  90.190.134.194    2.595ms
       4:  194.126.123.94    2.136ms asymm  5
       5:  195.250.170.22    2.266ms asymm  6
       6:  212.47.201.86     2.390ms asymm  7
       7:  no reply
       8:  no reply
       9:  no reply
      ^C
    Host routing:
      $ sudo ip addr
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
      2: sit0: <NOARP> mtu 1480 qdisc noop state DOWN
          link/sit 0.0.0.0 brd 0.0.0.0
      3: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
          link/ether 00:24:1d:de:b3:5d brd ff:ff:ff:ff:ff:ff
          inet 192.168.1.2/24 scope global eth0
      4: eth1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
          link/ether 00:0c:46:46:a3:6a brd ff:ff:ff:ff:ff:ff
          inet 192.168.3.2/27 scope global eth1
          inet 192.168.3.20/27 brd 192.168.3.31 scope global secondary eth1   # linux-vserver instance

      $ sudo ip route
      default via 192.168.1.1 dev eth0 metric 3
      unreachable 127.0.0.0/8 scope host
      192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.2
      192.168.3.0/27 dev eth1 proto kernel scope link src 192.168.3.2

      $ sudo ip rule
      0:     from all lookup local
      32764: from all to 192.168.3.0/27 lookup dmz
      32765: from 192.168.3.0/27 lookup dmz
      32766: from all lookup main
      32767: from all lookup default

      $ sudo ip route show table dmz
      default via 192.168.3.1 dev eth1 metric 4
      192.168.3.0/27 dev eth1 scope link metric 4
    Gateway routing:
      # ip route
      10.24.0.2 dev tun0 proto kernel scope link src 10.24.0.1
      10.24.0.0/24 via 10.24.0.2 dev tun0
      192.168.3.0/24 dev br-dmz proto kernel scope link src 192.168.3.1
      192.168.1.0/24 dev br-lan proto kernel scope link src 192.168.1.1
      $ISP_NET/23 dev eth0.1 proto kernel scope link src $WAN_IP
      default via $ISP_GW dev eth0.1
    Additional background: Options for non-virtualized network interface isolation?

    Read the article

  • How can I set up an nginx cache strategy that first tries Amazon S3, then memcached, and falls back on a miss?

    - by Tim
    I have a large site with lots of pages that almost never change. Right now I am using two memcached servers (Amazon ElastiCache), but this is really expensive. That's why, for these files that hardly ever change, I want to upload them to Amazon S3 and shut down one memcached server. Here is my config:
      location ~ /longterm/(.*){
          proxy_pass http://amazonS3bucket;
          proxy_intercept_errors on;
          proxy_next_upstream http_404;
          error_page 404 503 = @fallback_memcached
      }
      location @fallback_memcache {
          set $memcached_key $uri;
          memcached_pass name:11211;
          error_page 404 @fallback;
      }
      location @fallback {
          try_files $uri $uri/index.html
      }
    I don't know why, but the config doesn't work on the final fallback: if I get an Amazon S3 hit it works, and if I get an Amazon S3 miss and a memcached hit it works, but if I get an Amazon S3 miss and then a memcached miss, it fails when it tries to resolve the last fallback. I am also thinking of using the Amazon S3 FUSE filesystem http://code.google.com/p/s3fs/ instead of the proxy_pass; I think it would be easier to implement, but would it also be less performant?

    Read the article

  • Rails: constraint violation on create but not on update

    - by justinbach
    Note: This is a "railsier" (and more succinct) version of this question, which was getting a little long. I'm getting Rails behavior on a production server that I can't replicate on the development server. The codebases are identical save for credentials and caching settings, and both are powered by Oracle 10g databases with identical schema (but different data). My Rails application contains a user model, which has_one registration; registration in turn has_and_belongs_to_many company_ownerships through a registration_ownerships table. Upon registering, users fill out data pertinent to all three models, including a series of checkboxes indicating what registration_ownerships might apply to their account. On the dev server, the registration process is seamless, no matter what data is entered. On production, however, if users check off any of the company ownership fields before submitting their registration, Oracle complains about a constraint violation on the primary key of the company_ownerships table (which is a two-field key based on company_ownership_id and registration_id) and users get the standard Rails 500 error screen. In every case, I've verified that no conflicting record on these two fields exists in the production database, so I don't know why the constraint is getting violated. To further confuse things, if a user registers without listing any ownerships and later goes back and modifies their account to reflect ownership data (which is done through the same interface), the application happily complies with their request and Oracle is well-behaved (this is both on production and dev). I've spent the past couple days trying to figure out what might be causing this problem and am reaching the end of my wits. Any advice would be greatly appreciated!

    Read the article

  • Strategy for WCF server with .Net clients and Android clients?

    - by D.H.
    I am using WCF to write a server that should be able to communicate with .Net clients, Android clients and possibly other types of clients. The main type of client is a desktop application that will be written in .Net. This client will usually be on the same intranet as the server. It will make an initial call to the server to get the current state of the system and will then receive updates from the server whenever a value changes. These updates are frequent, perhaps once a second. The Android clients will connect over the Internet. This client is also interested in updates, but it is not as critical as for the desktop client so a (less frequent) polling scenario might be acceptable. All clients will have to login to use the services, and when connecting over the Internet the connection should be secure. I am familiar with WCF but I am not sure what bindings are most appropriate for the scenario and what security solution to use. Also, I have not used Android, but I would like to make it as simple as possible for the person implementing the Android client to consume my services. So, what is my strategy?

    Read the article

  • ORA-01157 / Can't connect to database

    - by Tom
    Hi everyone, this is a follow-up to this question. Let me start by saying that I am NOT a DBA, so I'm really, really lost with this. A few weeks ago, we lost contact with one of our SIDs. All the other services are working, but this one in particular is not. What we got when trying to connect was this message:
      ORA-01033: ORACLE initialization or shutdown in progress
    An attempt to alter database open ended up in:
      ORA-01157: cannot identify/lock data file 6 - see DBWR trace file
      ORA-01110: data file 6: '/u01/app/oracle/oradata/xxx/xxx_data.dbf'
    I tried to shut down / restart the database, but got this message:
      Total System Global Area  566231040 bytes
      Fixed Size                  1220604 bytes
      Variable Size             117440516 bytes
      Database Buffers          444596224 bytes
      Redo Buffers                2973696 bytes
      Database mounted.
      ORA-01157: cannot identify/lock data file 6 - see DBWR trace file
      ORA-01110: data file 6: '/u01/app/oracle/oradata/xxx/xxx_data.dbf'
    When everything stayed the same, I erased the dbf files (rm xxx_data.dbf xxx_index.dbf) and recreated them using touch xxx_data.dbf. I also tried to recreate the tablespaces using `CREATE TABLESPACE DATA DATAFILE XXX_DATA.DBF` and got:
      Database not open
    As I said, I don't know how bad this is, or how far I am from regaining access to my database (well, to this SID at least; the others are working). I would imagine that a last resort would be to throw everything away and recreate it, but I don't know how to, and I was hoping there's a less destructive solution. Any help will be greatly appreciated. Thanks in advance.
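
    For context only, not a recommendation: when a datafile is genuinely lost and no backup exists, one commonly cited, destructive sequence is to offline-drop the file while the database is mounted and then drop the tablespace that owned it, sacrificing whatever data it held. A hedged sketch, assuming NOARCHIVELOG mode and that the data in that tablespace is expendable:

      -- Destructive sketch; assumes no backup of the file and that its contents can be lost
      STARTUP MOUNT;
      ALTER DATABASE DATAFILE '/u01/app/oracle/oradata/xxx/xxx_data.dbf' OFFLINE DROP;
      ALTER DATABASE OPEN;
      -- The tablespace that referenced the file must then be dropped and recreated
      -- (the tablespace name here is a guess based on the CREATE TABLESPACE attempt above)
      DROP TABLESPACE data INCLUDING CONTENTS AND DATAFILES;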

    Read the article

  • Good strategy for copying a "sliding window" of data from a table?

    - by chiborg
    I have a MySQL table from a third-party application that has millions of rows and only one index: the timestamp of each entry. Now I want to do some heavy self-joins and queries on the data using fields other than the timestamp. Doing the queries on the original table would bring the database to a crawl, and adding indexes to the table is not an option. Additionally, I only need entries that are newer than one week. My current strategy for doing the queries efficiently is to use a separate table (aux_table) that has the necessary indexes. My questions are: Is there another way to do the queries? And if not, how do I update the data in the indexed table efficiently? So far I have found two approaches for updating aux_table:
      1. Truncate aux_table and insert the desired data from the original table. Not very efficient, because all the indexes must be re-created.
      2. Check for the biggest timestamp in aux_table and insert all entries with a greater or equal timestamp from the original table. Occasionally drop older entries. Only copying entries with a greater timestamp leads to dropped entries (because of entries with the same timestamp that were inserted into the original table after the last update).
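
    For what it's worth, a minimal MySQL-flavored sketch of the second approach (the table and column names are assumptions, and it still has the duplicate-timestamp caveat noted above):

      -- Find the newest timestamp already copied (hypothetical column name created_at)
      SET @cutoff := (SELECT COALESCE(MAX(created_at), '1970-01-01') FROM aux_table);

      -- Copy rows newer than (or equal to) the cutoff from the original table
      INSERT INTO aux_table
      SELECT o.*
      FROM   original_table o
      WHERE  o.created_at >= @cutoff;
      -- Caveat: >= can re-copy rows sharing the boundary timestamp unless deduplicated

      -- Occasionally drop entries older than the one-week window
      DELETE FROM aux_table
      WHERE  created_at < NOW() - INTERVAL 7 DAY;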

    Read the article

  • Trying to avoid needing two separate solutions for an x86 and x64 program

    - by Sean Anderson
    Hi all, I have a program which needs to function in both an x86 and an x64 environment. It is using Oracle's ODBC drivers, and I have a reference to Oracle.DataAccess.DLL. This DLL is different depending on whether the system is x64 or x86, though. Currently I have two separate solutions and I am maintaining the code in both, which is atrocious. I was wondering what the proper solution is. I have my platform set to "Any CPU", and it is my understanding that VS should compile the DLL to an intermediate language such that it should not matter whether I use the x86 or x64 version. Yet if I attempt to use the x64 DLL I receive the error "Could not load file or assembly 'Oracle.DataAccess, Version=2.102.3.2, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. An attempt was made to load a program with an incorrect format." I am running on a 32-bit machine, so the error message makes sense, but it leaves me wondering how I am supposed to efficiently develop this program when it needs to work on x64. Thanks.

    Read the article

  • Issue with creating index organized table

    - by mtim
    I'm having a weird problem with an index-organized table. I'm running Oracle 11g Standard. I have a table src_table:
      SQL> desc src_table;
      Name            Null?    Type
      --------------- -------- ----------------------------
      ID              NOT NULL NUMBER(16)
      HASH            NOT NULL NUMBER(3)
      ........

      SQL> select count(*) from src_table;
        COUNT(*)
      ----------
        21108244
    Now let's create another table and copy two columns from src_table:
      set timing on
      SQL> create table dest_table(id number(16), hash number(20), type number(1));
      Table created.
      Elapsed: 00:00:00.01

      SQL> insert /*+ APPEND */ into dest_table (id,hash,type) select id, hash, 1 from src_table;
      21108244 rows created.
      Elapsed: 00:00:15.25

      SQL> ALTER TABLE dest_table ADD (CONSTRAINT dest_table_pk PRIMARY KEY (HASH, id, TYPE));
      Table altered.
      Elapsed: 00:01:17.35
    It took Oracle less than 2 minutes. Now the same exercise, but with an IOT:
      SQL> CREATE TABLE dest_table_iot (
             id   NUMBER(16) NOT NULL,
             hash NUMBER(20) NOT NULL,
             type NUMBER(1)  NOT NULL,
             CONSTRAINT dest_table_iot_PK PRIMARY KEY (HASH, id, TYPE)
           ) ORGANIZATION INDEX;
      Table created.
      Elapsed: 00:00:00.03

      SQL> INSERT /*+ APPEND */ INTO dest_table_iot (HASH,id,TYPE) SELECT HASH, id, 1 FROM src_table;
    The "insert" into the IOT takes 18 hours!!! I have tried it on two different instances of Oracle, running on Windows and Linux, and got the same results. What is going on here? Why is it taking so long?
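
    One hypothesis worth isolating (offered only as a sketch, not a diagnosis): a heap insert is order-agnostic, but an IOT has to slot every row into its place in the B-tree, so feeding it 21 million rows in random key order forces constant block splits and random writes. Loading in primary-key order keeps the writes sequential:

      -- Same load, but pre-sorted to match the IOT's primary key (HASH, ID, TYPE)
      -- (TYPE is the constant 1 here, so sorting on hash, id is enough)
      INSERT /*+ APPEND */ INTO dest_table_iot (hash, id, type)
      SELECT hash, id, 1
      FROM   src_table
      ORDER  BY hash, id;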

    Read the article

  • Is this a bad indexing strategy for a table?

    - by llamaoo7
    The table in question is part of a database that a vendor's software uses on our network. The table contains metadata about files. The schema of the table is as follows:
      Metadata
        ResultID         (PK, int, not null)
        MappedFieldname  (char(50), not null)
        Fieldname        (PK, char(50), not null)
        Fieldvalue       (text, null)
    There is a clustered index on ResultID and Fieldname. This table typically contains millions of rows (in one case, it contains 500 million). The table is populated by 24 workers running 4 threads each while data is being "processed". This results in many non-sequential inserts. Later, after processing, more data is inserted into this table by some of our in-house software. The fragmentation for a given table is at least 50%; in the case of the largest table, it is at 90%. We do not have a DBA. I am aware we desperately need a DB maintenance strategy. As far as my background, I'm a college student working part-time at this company. My question is this: is a clustered index the best way to go about this? Should another index be considered? Are there any good references for this and similar ad-hoc DBA tasks?
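
    On the maintenance side specifically, and assuming this is SQL Server (the clustered-index terminology suggests it), a minimal sketch of the usual fragmentation routine for an off-hours window follows; the thresholds are the commonly quoted rules of thumb, not values measured on this system:

      -- Reorganize at moderate fragmentation (roughly 5-30%)...
      ALTER INDEX ALL ON dbo.Metadata REORGANIZE;

      -- ...or rebuild when fragmentation is heavier (roughly 30%+);
      -- REBUILD blocks access unless ONLINE = ON is available in your edition
      ALTER INDEX ALL ON dbo.Metadata REBUILD;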

    Read the article

  • What would be a good database strategy to manage these two product options?

    - by bemused
    I have a site that allows users to purchase "items" (imagine it as an advertisement or a download). There are two ways to purchase: either a subscription, 70 items within one month (use them or lose them; at the end of the month your count is 0), or purchasing each item individually as you need it. So the user could subscribe and get 70 per month, or pay for 10 and use them whenever they want until the 10 are gone. Maybe it's the late hour, but I can't isolate a solution I like, and I thought some users here would surely have stumbled upon something similar. One analogy I can imagine is web hosts: they sell hosting for monthly fees and also sell counts of things, like "you get 5 free domains with our reseller account". Or something like a movie download site: you can subscribe and get 100 movies each month, or pay for a one-time package of 10 movies. So is this a web of tables, and where would be a good cross-reference between the product a user has purchased and how many they have left? Something like:
      products:       productID, productType (subscription, consumable, subscription&consumable)
      subscriptions:  subscriptionID, subscriptionStartDate, subscriptionEndDate
      consumables:    consumableID, consumableName
      UserProducts:   userID, productID, productType, consumptionLimit, consumedCount
    (If it is a subscription, check against the dates; otherwise just check that consumedCount is less than the limit.) Usually I can lay out my data in a way that I know will work the way I expect, but this one feels a little questionable to me, like there is a hidden detail that is going to creep up later. That's why I decided to ask for help, hoping someone can enlighten me with their wisdom and experience and clue me in to a satisfying strategy. Thank you.
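
    Rendering the UserProducts idea above as DDL may help pin down the hidden detail. This is an illustrative sketch only (generic SQL; the surrogate key and the period columns are additions so a subscription row can carry its own dates and a user can buy the same pack twice):

      -- Illustrative sketch of the UserProducts cross-reference described above
      CREATE TABLE UserProducts (
        purchaseID       INT          NOT NULL PRIMARY KEY,  -- assumed surrogate key
        userID           INT          NOT NULL,
        productID        INT          NOT NULL,
        productType      VARCHAR(20)  NOT NULL,              -- 'subscription' or 'consumable'
        consumptionLimit INT          NOT NULL,              -- 70 for the monthly plan, 10 for a pack
        consumedCount    INT          NOT NULL DEFAULT 0,
        periodStart      DATETIME     NULL,                  -- assumed addition: only set for subscriptions
        periodEnd        DATETIME     NULL
      );
      -- "May this user consume another item?" then reduces to:
      --   consumedCount < consumptionLimit
      --   AND (productType <> 'subscription'
      --        OR CURRENT_TIMESTAMP BETWEEN periodStart AND periodEnd)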

    Read the article

  • Creating a Synchronous BPEL composite using File Adapter

    - by [email protected]
    By default, the JDeveloper wizard generates asynchronous WSDLs when you use technology adapters. Typically, a user follows these steps when creating an adapter scenario in 11g:
    1) Create a SOA Application with either "Composite with BPEL" or an "Empty Composite". If the user chooses "Empty Composite", he or she must also drop the "BPEL Process" from the "Service Components" pane onto the SOA Composite Editor. Either way, the user arrives at the screen where the process details are filled in. Please note that the user is required to choose "Define Service Later" as the template.
    2) Create the inbound service and outbound references and wire them with the BPEL component.
    3) Finally, create the BPEL process with the initiating <receive> activity to retrieve the payload and an <invoke> activity to write the payload.
    This is how most BPEL processes that use adapters are modeled. If we scrutinize the generated WSDL, we can clearly see that it is one-way, and that makes the BPEL process asynchronous. In other words, the inbound FileAdapter would poll for files in the directory and, for every file it finds there, translate the content into XML and publish it to BPEL. But since the BPEL process is asynchronous, the adapter would return immediately after the publish and perform the required post-processing, e.g. deletion/archival and so on. The disadvantage of such asynchronous BPEL processes is that it becomes difficult to throttle the inbound adapter. In other words, the inbound adapter would keep sending messages to BPEL without waiting for the downstream business processes to complete. This might lead to several issues, including higher memory usage, CPU usage and so on. In order to alleviate these problems, we will manually tweak the WSDL and BPEL artifacts into synchronous processes. Once we have synchronous BPEL processes, the inbound adapter would automatically throttle itself, since the adapter would be forced to wait for the downstream process to complete with a <reply> before processing the next file or message. The tweaked WSDL converts the one-way operation into a two-way operation, thereby making the WSDL synchronous. Then add a <reply> activity to the inbound adapter partner link at the end of your BPEL process, and you are done. Please remember that such an exercise is NOT required for Mediator, since the Mediator routing rules are sequential by default. In other words, the Mediator uses the caller thread (the inbound file adapter thread) for processing the routing rules. This is the case even if the WSDL for the Mediator is one-way.

    Read the article

  • Delete from empty table taking forever

    - by Will
    Hello, I have an empty table that previously had a large number of rows. The table has about 10 columns and indexes on many of them, as well as indexes on multiple columns.
      DELETE FROM item WHERE 1=1
    This takes approximately 40 seconds to complete.
      SELECT * FROM item
    This takes 4 seconds. The execution plan of SELECT * FROM ITEM shows the following:
      SQL> select * from midas_item;
      no rows selected
      Elapsed: 00:00:04.29

      Execution Plan
      ----------------------------------------------------------
         0      SELECT STATEMENT Optimizer=CHOOSE (Cost=19 Card=123 Bytes=7380)
         1    0   TABLE ACCESS (FULL) OF 'MIDAS_ITEM' (Cost=19 Card=123 Bytes=7380)

      Statistics
      ----------------------------------------------------------
              0  recursive calls
              0  db block gets
           5263  consistent gets
           5252  physical reads
              0  redo size
           1030  bytes sent via SQL*Net to client
            372  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed
    Any idea why these would be taking so long, and how to fix it, would be greatly appreciated!
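
    One explanation consistent with those numbers (5,252 physical reads for zero rows) is that the full scan still visits every block below the table's high water mark, which a DELETE never lowers. A hedged sketch of the usual ways to reset it, each with caveats:

      -- Option 1: TRUNCATE instead of DELETE when emptying the whole table
      -- (DDL, cannot be rolled back, resets the high water mark)
      TRUNCATE TABLE item;

      -- Option 2: shrink the already-emptied segment in place
      -- (assumes 10g+ and an ASSM tablespace; indexes are maintained)
      ALTER TABLE item ENABLE ROW MOVEMENT;
      ALTER TABLE item SHRINK SPACE;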

    Read the article
