Search Results

Search found 24675 results on 987 pages for 'table'.

Page 697/987

  • Removing Eclipse completely

    - by Abhishek Bhandari
    I had an Eclipse Galileo installation working fine. Suddenly it started to hang until it crashed whenever I tried to open a DB2 table from the DbViewer plugin's DB tree view. I tried several things, such as replacing the DbViewer plugin and adjusting memory settings. This happens only with DbViewer. So I unzipped another Eclipse into a different directory, but it picks up the same settings, plugins and workspace as the previous Eclipse. I removed the previous Eclipse and the same problem still exists. In short: how do I remove Eclipse completely from Windows 7?

    Read the article

  • Optimizing MySQL -

    - by Josh
    I've been researching how to optimize MySQL a bit, but I still have a few questions. MySQL Primer Results: http://pastie.org/private/lzjukl8wacxfjbjhge6vw Based on this, the first problem seems to be that the max_connections limit is too low. I had a similar problem with Apache initially: the max connection limit was set to 100, and the web server would frequently lock up and take an excruciatingly long time to deliver pages. Raising the connection limit to 512 fixed this issue, and I read that raising the connection limit on MySQL to match is considered good practice. Given that MySQL has actually been "locking up" recently as well (connections are refused entirely for a few minutes at a time, at random intervals), I'm assuming this is the main cause of the issue. However, as far as the table cache goes, I'm not sure what I should set it to. I've read that setting it too high can hinder performance further, so should I raise it to right around 551, 560, or 600, or do something else? Lastly, as for raising the join_buffer_size value, it isn't even included in Debian's my.cnf file by default. Assuming there's not much I can do about adding indexes, should I look into raising this? Any suggested values? Any suggestions in general would be appreciated as well. Edit: here's the number of opened tables the MySQL server is reporting; I believe this value is related to my question: Opened_tables: 22574
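
    For reference, a minimal sketch of where these settings would live on Debian is the [mysqld] section of /etc/mysql/my.cnf; the values below are purely illustrative assumptions, not recommendations from this thread:

        [mysqld]
        max_connections  = 512     # match the Apache limit described above
        table_cache      = 600     # renamed table_open_cache in newer MySQL versions
        join_buffer_size = 256K    # only used by joins that cannot use an index

        # The counter quoted in the edit can be re-checked at runtime with:
        #   mysql> SHOW GLOBAL STATUS LIKE 'Opened_tables';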

    Read the article

  • Combo/Input LOV displaying non-reference key value

    - by [email protected]
    It's a very common LOV use case: we want to display a non-key value in the LOV but store the key value in the DB. I had to do the same in a sample application I was building. While implementing this, I realized that there are multiple ways to achieve it; I am going to describe each of them below. Example: let's take our classic HR schema. I have two tables, Employee and Department, where Dno is the foreign key attribute in Employee that references the Department table. I want to create a LOV for Department such that the list always displays Dname instead of Dno; however, when I update it, it should update the reference key Dno. To achieve this I had three alternatives. 1) Approach 1: Create a composite VO and add the attributes from Department into Employee using a join. Refer to the blog http://andrejusb.blogspot.com/2009/11/defining-lov-on-reference-attribute-in.html Positives: 1. Easy to implement and use. 2. We can use this attribute directly in queries defined on the new attribute, i.e. if I have to display it inside a query panel. Negative: we have to create an additional join on the VO. Ex:
        SELECT Employees.EMPLOYEE_ID,
               Employees.FIRST_NAME,
               Employees.LAST_NAME,
               Employees.EMAIL,
               Employees.PHONE_NUMBER,
               Department.Dno,
               Department.Dname
        FROM EMPLOYEES Employees, Department Department
        WHERE Employees.Dno = Department.Dno
    2) Approach 2:

    Read the article

  • Permanent Routes Centos Questions

    - by user65053
    So with a little help I figured out how to set up these routes, and I can set them in rc.local:
        route add -net 208.82.236.0 netmask 255.255.255.0 dev ppp0 metric 1
        route add -net 208.82.236.0 netmask 255.255.255.0 dev eth0 metric 10
    My question: since the first route is on ppp0, the route is dropped as soon as I disconnect the modem. How do I maintain the route, or make it permanent, so that the next time the modem connects it will follow the route? Currently, after ppp0 disconnects, the route is gone:
        netstat -r
        Kernel IP routing table
        Destination      Gateway    Genmask          Flags  MSS  Window  irtt  Iface
        laxapx03.o1.com  *          255.255.255.255  UH     0    0       0     ppp0
        208.82.236.0     *          255.255.255.0    U      0    0       0     eth0
        10.0.1.0         *          255.255.255.0    U      0    0       0     eth0
        169.254.0.0      *          255.255.0.0      U      0    0       0     eth0
        default          10.0.1.1   0.0.0.0          UG     0    0       0     eth0
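
    One common way to make routes like these survive reconnects on CentOS (a sketch assuming the stock initscripts and pppd layout; adapt paths and values to your setup) is a per-interface route file for eth0 plus a pppd hook script for ppp0:

        # /etc/sysconfig/network-scripts/route-eth0  -- re-applied whenever eth0 comes up
        208.82.236.0/24 dev eth0 metric 10

        # /etc/ppp/ip-up.local  -- run by pppd each time ppp0 (re)connects; make it executable
        #!/bin/sh
        /sbin/route add -net 208.82.236.0 netmask 255.255.255.0 dev ppp0 metric 1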

    Read the article

  • How to proxy to different named databases on the same server using MySQL Proxy?

    - by cclark
    I would like to have two databases on my MySQL server: DEV_DB_A and DEV_DB_B. However, in order to keep everyone's scripts, Query Browser settings and anything else from changing when we switch from using one DB to another, I'd like to have everyone connect to DEV_DB and then use something like MySQL Proxy running a Lua script which knows that the currently active DB is DEV_DB_A and routes queries there. If we restore a fresh version of the DB to DEV_DB_B or make some changes (e.g. partition a table), we can easily switch to DEV_DB_B by changing one Lua script instead of updating references everywhere. I had hoped I might be able to symlink inside of the MySQL data directory, but that didn't work, so it seems like MySQL Proxy is a reasonable approach. Being new to Lua and MySQL Proxy, I'm wondering if anyone else has approached the problem this way and how it worked.

    Read the article

  • Printer monitoring script (PowerShell)

    - by HannesFostie
    I am going to write a script of some sort to check the event viewer on a Windows Server 2003 box for all print jobs, and then write them to a comma-delimited text file like printername_floor_room.txt. I am wondering what the best way is to do this in real time, constantly checking the event viewer. Any caveats I need to be aware of? Thanks. EDIT: Okay, so I will most likely go for PowerShell and use Get-EventLog and then edit the "table" data. Problems I'm having: if I were to save all this data to a text file, how do I get the data out of it? A comma-separated file I could work with, but otherwise I'm not really sure. And once that is sorted out, I'm still not sure how to keep the file updated more or less in real time. Can I make this service-like, without hogging up all resources? Run it every x seconds, for example?
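
    A rough PowerShell sketch of the polling approach described in the edit (assumptions: print-job logging is enabled so jobs appear in the System log under the "Print" source, and the output path is just an example, not anything from this thread):

        $logFile  = 'C:\logs\printername_floor_room.txt'   # hypothetical output file
        $lastSeen = Get-Date

        while ($true) {
            # only fetch events newer than the previous pass
            $events = Get-EventLog -LogName System -Source Print -After $lastSeen -ErrorAction SilentlyContinue
            foreach ($e in $events) {
                # one comma-separated line per print job
                "$($e.TimeGenerated),$($e.UserName),$($e.Message -replace ',', ';')" | Add-Content $logFile
            }
            $lastSeen = Get-Date
            Start-Sleep -Seconds 30   # polling interval; tune to taste
        }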

    Read the article

  • How can I roll back xserver-xorg-core and xserver-common?

    - by Ville Sundberg
    A recent update to Xorg broke my desktop, which now looks like this: http://i.imgur.com/PbBxh.jpg In short, the desktop background is not updating on the secondary display. (And if there is no secondary display, the primary display background stops updating.) Looking into the history, I found that this happened right after upgrading two packages: xserver-xorg-core and xserver-common. These were upgraded to 1.9.0-0ubuntu7.3. I'd like to downgrade these packages. How do I do that? I've checked that both have another version in the maverick repo:
        xserver-xorg-core:
          Installed: 2:1.9.0-0ubuntu7.3
          Candidate: 2:1.9.0-0ubuntu7.3
          Version table:
         *** 2:1.9.0-0ubuntu7.3 0
                500 http://fi.archive.ubuntu.com/ubuntu/ maverick-updates/main amd64 Packages
                100 /var/lib/dpkg/status
             2:1.9.0-0ubuntu7 0
                500 http://fi.archive.ubuntu.com/ubuntu/ maverick/main amd64 Packages
    However, apt won't let me downgrade them:
        ville@fluxx ~ % sudo apt-get install xserver-common=2:1.9.0-0ubuntu7 xserver-xorg-core=2:1.9.0-0ubuntu7
        The following packages have unmet dependencies:
          xserver-xorg-core : Depends: xserver-xorg but it is not going to be installed
        E: Broken packages
    And this is the reason:
        ville@fluxx ~ % sudo apt-get install xserver-common=2:1.9.0-0ubuntu7 xserver-xorg-core=2:1.9.0-0ubuntu7 xserver-xorg-core
        The following packages have unmet dependencies:
          xserver-xorg-core : Depends: xserver-common (>= 2:1.9.0-0ubuntu7.3) but 2:1.9.0-0ubuntu7 is to be installed
        E: Broken packages
    Am I out of options here?

    Read the article

  • Best way to transfer files across unstable LAN?

    - by JamesTheAwesomeDude
    This is very similar to Question 326211, but in this case the LAN is an unstable Wi-Fi connection. I need to transfer about 11 GiB of files between two computers, both running Linux (although one may be rebooted into Windows). Their connection is both slow and unstable (due to Linux's awful Wi-Fi support), but removable media (such as a flash drive or external hard drive) is not an option at this time. Right now I'm slowly transferring the files, one by one, over SFTP, but I have to reconnect each computer approximately every 90 seconds, and the computers are not very close to each other, so this is not feasible. This is not a duplicate of Question 30186; that one specifically concerns Windows 7, and all the proposed solutions involve closed-source, Windows-only programs (which are all spyware IMHO, and are off the table even if I trusted them - one of the computers is Linux-only).
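
    One approach worth considering (a sketch, not something taken from the linked questions; the host, user and paths are made up) is rsync in a retry loop, since --partial lets interrupted files resume where they left off after each Wi-Fi drop:

        until rsync -av --partial --timeout=60 /data/to/send/ user@otherbox:/data/received/; do
            echo "link dropped, retrying in 10 seconds..." >&2
            sleep 10
        done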

    Read the article

  • Unknown module on my server that renders PHP errors in HTML tables

    - by Javier Novoa C.
    Sorry to ask this... I manage Apache and PHP on my computer, but having installed a lot of things, I've lost track of some of them (things I find really useful to have at my job, or to restore in case of emergency). The problem is that I have installed something which displays PHP errors in a nice, colored HTML table, but I can't remember what I installed or configured to get it to work like that. Can you give me a hint about it? I'm using Debian Lenny, Apache 2.2 and PHP 5.2. Here's a screenshot: Thank you very much for reading. Javier

    Read the article

  • DB2 on SPARC T3 Tuning Tips

    - by cherry.shu(at)oracle.com
    With the new self-tuning feature in DB2 V9.x, a lot of database parameters are set to AUTOMATIC by default in DB2 v9.7, so that DB2 can adjust the values as needed. Most work fine without manual tweaks, but for transaction workloads on SPARC T3 systems, two parameters need to be adjusted manually to achieve optimal performance. DATABASE_MEMORY: When this parameter is set to AUTOMATIC and SELF_TUNING_MEM is set to ON, DB2 allocates all memory with the small page size (64KB) and expands and shrinks the memory as needed. In order to take advantage of the large page sizes (up to 256MB) supported by the SPARC T3, we need to manually set the size of DATABASE_MEMORY so that DB2 can use the 256MB page size for its buffer pools, which are implemented as ISM segments. I know this sounds strange, as it seems that you turn one switch and it ends up controlling another function. pmap(1M) output can verify the page sizes used by the DB2 db2sysc process. NUM_IOCLEANERS: This parameter defines the number of page cleaners. The default value of this parameter is AUTOMATIC, which is calculated based on the number of available CPUs and the number of logical partitions. On a SPARC T3 system, where there are over a hundred virtual CPUs and a single DB2 partition, DB2 would set it to #CPUs - 1. That leads to too many page cleaners competing to flush to disk and causes aio mutex lock contention, so we need to decrease the value. Good practice is to set it to the number of physical devices used by the database table space containers.
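
    For illustration, both parameters are database configuration settings and can be changed from the DB2 command line processor; the database name MYDB and the values below are made-up examples only, not tuning recommendations:

        db2 connect to MYDB
        db2 update db cfg for MYDB using DATABASE_MEMORY 6250000   # fixed size in 4 KB pages (~24 GiB here) instead of AUTOMATIC
        db2 update db cfg for MYDB using NUM_IOCLEANERS 8           # e.g. one per physical device backing the table space containers
        db2 get db cfg for MYDB | grep -iE 'DATABASE_MEMORY|NUM_IOCLEANERS'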

    Read the article

  • links for 2010-06-01

    - by Bob Rhubart
    Venkatakrishnan J: Oracle BI EE 10.1.3.4.1 -- Do we need measures in a Fact Table? Troubleshooting from Rittman Mead's Venkatakrishnan J. (tags: oracle otn businessintelligence datawarehouse)
    Grid container support: JavaFX Composer -- An overview of how JavaFX Composer supports the grid container. (tags: oracle sun javafx)
    John Brunswick: Site Studio Mobile Example - WCM Reuse -- The example highlighted in John Brunswick's post takes advantage of dynamic conversion capabilities in Oracle UCM that allow site content to be created and updated via MS Office documents. (tags: oracle otn enterprise2.0)
    @glassfish: GlassFish 3 in the EC2 Cloud powering Dutch and Belgian community polls -- "The infrastructure is Amazon's Elastic Cloud Computing (EC2) environment because of the dynamic provisioning (elasticity) required by such an online service. Requests are handled directly by the grizzly layer of GlassFish with no extra front-end HTTP layer and shows great performance and scalability." -- The Aquarium (tags: oracle java sun glassfish cloud)
    James Morle: Flash Storage Will Be Cheap: The End of the World is Nigh -- "We now need technologies that look more like Oracle Exadata v2, with low-latency RDMA interfaces directly into the Operating System/Database. However, they need to easily and natively support other types of storage (unstructured data such as files, VMware datastores and so forth). The Exadata architecture lends itself well to changes in this area in both hardware trends and access protocols." -- James Morle (tags: oracle otn exadata database architecture virtualization)
    Java / Oracle SOA blog: HTTP binding in Soa Suite 11g PS2 (tags: ping.fm)
    Confessions of a Software Developer: Some Tips for Installing Oracle BPM 11g on Windows XP (tags: ping.fm)
    SOA and Java using Oracle technology: Book review: Oracle Coherence 3.5: Create internet scale applications using Oracle's high-performance data grid (tags: ping.fm)

    Read the article

  • Recover Time Machine partition that turned MBR only instead of GUID

    - by alex
    I have one drive that has an NTFS partition, a Time Machine partition (I guess HFS+) and empty space. The other day I created one more partition from Windows 8 (Boot Camp), and since then I can't see the Time Machine partition from OS X, though I can still see it from Windows. The problem is that Time Machine uses a file system that Windows cannot browse - it only shows some folders - and I need to recover this partition because I have to use it to back up my Mac. On OS X I can only see the NTFS partition; the other one appears unmounted and it's impossible to mount it. I've come to the conclusion that something has happened to the partition table. TestDisk shows that the disk is MBR only, when I think it should be GUID: and pressing p shows that it's FDisk_partition_scheme and the Time Machine partition appears as Windows_NTFS. I found this thread that is similar to what is happening to me: Adding NTFS partition to disk in Windows makes HFS+ partition on same disk invisible in Mac OS X

    Read the article

  • EF4 CPT5 Code First Remove Cascading Deletes

    - by Dane Morgridge
    I have been using EF4 CTP5 with code first and I really like the new code. One issue I was having, however, is that cascading deletes are on by default. This may come as a surprise, because when using Entity Framework with anything but code first, this is not the case. I ran into an exception with some one-to-many relationships I had: Introducing FOREIGN KEY constraint 'ProjectAuthorization_UserProfile' on table 'ProjectAuthorizations' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints. Could not create constraint. See previous errors. To get around this, you can use the fluent API and put some code in OnModelCreating:
        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            modelBuilder.Entity<UserProfile>()
                .HasMany(u => u.ProjectAuthorizations)
                .WithRequired(a => a.UserProfile)
                .WillCascadeOnDelete(false);
        }
    This works to remove the cascading delete, but I have to use the fluent API and it has to be done for every one-to-many relationship that causes the problem. I am personally not a fan of cascading deletes in general (for several reasons) and I'm not a huge fan of fluent APIs. However, there is a way to do this without using the fluent API: in OnModelCreating you can remove the convention that creates the cascading deletes altogether.
        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            modelBuilder.Conventions.Remove<OneToManyCascadeDeleteConvention>();
        }
    Thanks to Jeff Derstadt from Microsoft for the info on removing the convention altogether. There is a way to build a custom attribute to remove it on a case-by-case basis, and I'll have a post on how to do that in the near future.

    Read the article

  • How to know which partition is which?

    - by user206870
    Well, I was just wondering which partition belongs to which system. On my computer I have Windows 7 and two Ubuntu systems (the second was an accident, which is why I need to know which partition is which). So how do I know which one is which? PS: here's the output:
        jp@jp-Satellite-L555D:~$ sudo update-grub
        [sudo] password for jp:
        Generating grub.cfg ...
        Found linux image: /boot/vmlinuz-3.11.0-12-generic
        Found initrd image: /boot/initrd.img-3.11.0-12-generic
        Found memtest86+ image: /boot/memtest86+.bin
        Found Windows 7 (loader) on /dev/sda1
        Found Windows 7 (loader) on /dev/sda2
        Found Windows Recovery Environment (loader) on /dev/sda3
        Found Ubuntu 13.10 (13.10) on /dev/sda7
        done
        jp@jp-Satellite-L555D:~$ sudo fdisk -l
        Disk /dev/sda: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xf6f5148e
        Device Boot      Start        End     Blocks    Id  System
        /dev/sda1   *     2048     3074047    1536000   27  Hidden NTFS WinRE
        /dev/sda2      3074048   213421022  105173487+   7  HPFS/NTFS/exFAT
        /dev/sda3    469676032   488396799    9360384   17  Hidden HPFS/NTFS
        /dev/sda4    213422078   469676031  128126977    5  Extended
        /dev/sda5    300185600   463910911   81862656   83  Linux
        /dev/sda6    463912960   469676031    2881536   82  Linux swap / Solaris
        /dev/sda7    213422080   300185599   43381760   83  Linux
        Partition table entries are not in disk order
    Thanks to whoever can answer this. Another quick question: what is the extended partition?
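
    A few read-only commands that usually make the mapping obvious (nothing here modifies the disk; all three ship with Ubuntu 13.10):

        df -h /                                  # shows which /dev/sdaN the currently booted Ubuntu lives on
        sudo blkid                               # filesystem type, label and UUID for every partition
        lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT     # tree view of partitions and where they are mounted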

    Read the article

  • How to diagnose very slow pagefile

    - by svick
    Quite often, one of the applications I use freezes ("does not respond") for a while, in extreme cases for a few minutes. This happens especially when switching apps. During this time, the HDD light flashes constantly and perfmon shows that the HDD is in use 100% of the time (the CPU, on the other hand, isn't) and that the pagefile is being read (which is to be expected when switching apps), but at a very slow rate. When I sort the disk table in perfmon by reads or writes, the file read and written the most is the pagefile, but the rate is still quite low (I don't remember the numbers). How can I diagnose what's causing this? I use Windows Vista, and the computer is a quite ordinary two-year-old laptop.

    Read the article

  • CNet router - no field for private port

    - by Aadit M Shah
    I'm trying to configure port forwarding on my CNet router for a locally hosted HTTP server. The model number of my router is CQR-981 and the firmware version is 1.0.43. The problem is that there's no field to enter the private port of the HTTP server (the local port), although according to the manual there should be one. Here's a picture of the manual: Here's a screenshot of my router page for port forwarding (with no field for private port): Is there some way I can circumvent this problem? Perhaps manually make an HTTP request to the HTTP server on the router to update the table with the private port number, or perhaps update my firmware to solve this problem.

    Read the article

  • Internal Data Masking

    - by ACShorten
    By default, the data in the product is unmasked for authorized users. If particular data within an object is considered a candidate for data masking, then the masking capabilities of the product can be used to mask the data in an appropriate fashion. The built-in data masking capabilities of the Oracle Utilities Application Framework use a number of configuration elements: An algorithm, of type F1-MASK, is specified to configure the elements of the data masking, including the masking character, the number of suffix characters left unmasked, characters to ignore in the string, the application service, the security type and the authorization levels applicable to the mask. A Data Masking feature configuration is created to define where the algorithm applies. The specification of the feature allows you to define the fields to mask using the configured algorithm. The algorithm can be attached to a schema field, table field, characteristic, search field and even a child record (such as an identifier). The appropriate user groups are then connected to the application services with the appropriate service types and level to indicate whether the masking applies to the user group or not. For example, say there is a field called CCNBR in the product which holds credit card details. I would create an algorithm, say CMformatCC, to mask the credit card number with the last few digits left unmasked (as the standard in most systems dictates). On the field mask I would specify the following: field="CCNBR", alg="CMformatCC" On the algorithm CMformatCC, I would specify the mask, application service, security type and the authorization level at which users would see the credit card unmasked. To finish the configuration and implement it, I would connect the appropriate user groups to the application service I specified, with the security type and appropriate authorization level for each group. Whenever a user accesses the CCNBR field on any of the maintenance screens, searches and other screens that use the CCNBR metadata definition, it would then be masked according to the user group that the user is a member of. Refer to the documentation supplied with the F1-MASK algorithm type entry for more examples of what is possible.

    Read the article

  • Working with data and meta data that are separated on different servers

    - by afuzzyllama
    While developing a product, I've come across a situation where my group wants to store the metadata for data entry forms (questions, layout, etc.) in a different database than the database where the collected data is stored. This is mostly for security: we want our metadata to be public facing, while keeping the collected data as secure as possible. I was thinking about writing a web service that provides the meta information, which the data collection program could access. The only issue I see with this approach is that the front end is going to have to match the metadata with the collected data, which would be more efficient as a join on the back end. Currently, this system is slated to run on .NET and MSSQL. I haven't played around with .NET libraries running in SQL, but I'm considering trying to create logic that would pull from the web service, convert the metadata into a table that SQL can join on, and return the combined data and metadata that way. Is this solution the wrong way to approach the problem? Is there a pattern or "industry standard" way of bringing together two datasets that don't live in the same database?

    Read the article

  • Trace Mobile Service Serving 20,000+ Requests Per Month

    - by Gopinath
    We introduced the Trace Mobile Service in April 2010 and we are glad to announce that the service is now processing 20,000+ requests per month. After a long time, today I looked at the statistics and was overwhelmed to see the number of trace requests processed by the service: 24282, 23781 and 18475 in the months of January '11, December '10 and November '10 respectively. I'm also glad to announce that this service contributes close to 10% of our revenues. Here is a table that provides stats for the past 7 months. For those who don't know about this service: it is a tiny, yet very useful, service for tracing information about Indian mobile phones. Usage of this service is very simple: enter any Indian mobile phone number and it will instantaneously let you know the location and the service provider of the mobile phone. Visit Trace Mobile Service or read "Introducing 'Trace Mobile Information' Service" for more details. This article, titled "Trace Mobile Service Serving 20,000+ Requests Per Month", was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • Is it bad practice for services to share a database in SOA?

    - by Paul T Davies
    I have recently been reading Hohpe and Woolf's Enterprise Integration Patterns and some of Thomas Erl's books on SOA, and watching various videos and podcasts by Udi Dahan et al. on CQRS and event-driven systems. Systems in my place of work suffer from high coupling. Although each system theoretically has its own database, there is a lot of joining between them. In practice this means there is one huge database that all systems use. For example, there is one table of customer data. Much of what I've read seems to suggest denormalising data so that each system uses only its own database, and any updates to one system are propagated to all the others using messaging. I thought this was one of the ways of enforcing the boundaries in SOA - each service should have its own database - but then I read this: http://stackoverflow.com/questions/4019902/soa-joining-data-across-multiple-services and it suggests this is the wrong thing to do. Segregating the databases does seem like a good way of decoupling systems, but now I'm a bit confused. Is this a good route to take? Is it ever recommended that you segregate a database per, say, SOA service, DDD bounded context, application, etc.?

    Read the article

  • Weird routing problems with VPN

    - by Borek
    In our VPN setup I have to add a route to my routing table like this:
        route add 1.2.3.0 mask 255.255.255.0 172.16.1.1 -p
    Our internal addresses 1.2.3.x then use 172.16.1.1 as their gateway, and both my local internet and the work VPN can work at the same time. However, when I disconnect from the VPN and reconnect, I can't ping our servers even though the connection status is "Connected". When I do route print, my previously added route is listed, but it doesn't seem to work. So I try to execute that route add command again and, as expected, it tells me that "The route addition failed: The object already exists." But - and that's the point - when I now try to ping our servers again, everything works! So every time, I have to execute this route add command that fails but fixes the issue at the same time. Any ideas what I might be doing wrong? My PC runs Windows 7 x64, I am an Administrator, UAC is enabled and the command prompt is run with elevated privileges.

    Read the article

  • Register Now! Oracle 'In Touch' PartnerCast: Be prepared for a year of growth

    - by Julien Haye
    Dear Oracle partners, We would like to invite you to join David Callaghan, Senior Vice President Oracle EMEA Alliances and Channels, and his studio guests for the next broadcast of the 'In Touch' PartnerCast on Tuesday 1st July 2014 from 10:30am UK / 11:30am CET. In this cast, David's studio guests and his regional reporters will be looking at your priorities as EMEA partners and how best to grow with Oracle. We also look forward to the broadcast covering the following hot topics: Highlights of FY14; Strategic themes for FY15; SaaS - HCM, CRM, ERP; Oracle on Oracle. Exclusive for 'In Touch': David Callaghan questions Rich Geraffo, Senior Vice President, Global Alliances & Channels, on how the FY15 global partner kick-off relates to EMEA. David also gives you the chance to hear from some of the newly appointed Oracle Worldwide A&C leadership team as he discusses with Bruce Chumley, VP Oracle Channel Distribution Sales, and Troy Richardson, VP Oracle Strategic Alliances, their core focus, their strategy for growth, and what they intend to bring to the table in their new roles. You can now register for the cast here: With lots of studio guests joining David, why not get in touch on Twitter using the hashtag #OracleInTouch or by emailing [email protected] to get your questions featured in the cast! To find out more information and to watch previous episodes on demand, please visit our webpage here. Best regards, Oracle EMEA Alliances & Channels

    Read the article

  • Refresh devices - reconnect CF card drive by script (unplug-plug equivalent)

    - by Chris
    Hello, I plug a completely clean CF card into my USB card writer. Then I dd an MBR block of 512 bytes to the device; the block contains the partition table and the definition of one partition. Problem: while "fdisk -l /dev/sdx" correctly displays the partition, there is no device node like "/dev/sdx1" after these operations (as it was not present before). Unplugging and re-plugging the card writer solves the problem and makes the device(s) appear. Since I use this procedure in a script, manually unplugging and re-plugging is not an option whatsoever. Is there a way to "refresh" the devices, or to "unplug and re-plug" the drive from a script, such that /dev/sdx1 appears? Thanks for any help, Chris
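
    A sketch of how the script could ask the kernel to pick up the new partition table without re-plugging the reader (the device name sdx and the file mbr.img are placeholders; partprobe ships with the parted package):

        dd if=mbr.img of=/dev/sdx bs=512 count=1 conv=fsync   # write the prepared MBR block
        blockdev --rereadpt /dev/sdx                          # tell the kernel to re-read the partition table
        # or, alternatively:
        partprobe /dev/sdx
        udevadm settle                                        # wait for udev to create /dev/sdx1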

    Read the article

  • E-Business Suite Proactive Support - Workflow Analyzer

    - by Alejandro Sosa
    Overview: The Workflow Analyzer is a standalone, easy-to-run tool created to read, validate and troubleshoot Workflow component configuration as well as runtime data. It identifies areas where potential problems may arise and, based on a set of best practices, suggests to the Workflow System Administrator what to do when such potential problems are found. This tool represents a proactive way to verify Workflow configuration and runtime data, to prevent issues before they can have a more considerable impact on a production environment. Installation: Since it is standalone, there are no prerequisites, and it runs on Oracle E-Business applications from 11.5.10 onwards. It is installed on the back-end server and can be run directly from SQL*Plus. The output of this tool is an HTML file, formatted to be easy to read, containing the following on both Workflow component configuration and Workflow runtime data:
    - Workflow-related database initialization parameters
    - Relevant Oracle E-Business profile option values
    - Workflow-owned concurrent programs schedule and Workflow components status
    - Workflow notification mailer configuration and throughput via related queues and table
    - Workflow-relevant recommended and critical one-off patches as well as current code level
    - Workflow database footprint, obtained by reading Workflow runtime tables to identify aged processes not being purged; it also checks for large open and closed processes and unhealthy looping conditions in a workflow process, among other checks
    See a sample of the Workflow Analyzer's output here. Besides performing the validations listed above, the Workflow Analyzer provides clarification on the issues it finds and refers the reader to specific Oracle MOS documents to address the findings, or explains the condition so the reader can take proper action. How to get it? The Workflow Analyzer can be obtained from Oracle MOS: Workflow Analyzer script for E-Business Suite Workflow Monitoring and Maintenance (Doc ID 1369938.1), and the supplemental note How to run EBS Workflow Analyzer Tool as a Concurrent Request (Doc ID 1425053.1) explains how to register and run this tool as a concurrent program. This way the report from the Workflow Analyzer can be submitted from the application and its output viewed from the application as well.

    Read the article

  • Generating Deep Arrays: Shallow to Deep, Deep to Shallow or Bad idea?

    - by MobyD
    I'm working on an array structure that will be used as the data source for a report template in a web app. The data comes from relatively complex SQL queries that return one or many rows as one-dimensional associative arrays. In the case of many, they are turned into a two-dimensional indexed array. The data is complex and in some cases there is a lot of it. To save trips to the database (which are extremely expensive in this scenario), I'm attempting to get all of the basic arrays (one- and two-dimensional raw database data) and put them, conditionally, into a single, five-level-deep array. Organizing the data in PHP seems like a better idea than using WHERE statements in the SQL. Array structure:
        Array of years(
            year => array of types(
                types => array of information(
                    total => value,
                    table => array of data(
                        index => db array
                    )
                )
            )
        )
    My first question is: is this a bad idea? Are arrays like this appropriate for this situation? If this would work, how should I go about populating it? My initial thought was shallow to deep, but the more I work on this, the more I realize that it'd be very difficult to abstract out the conditionals that determine where each item goes in the array. So it seems that starting from the most deeply nested data may be the approach I should take. If this is array abuse, what alternatives exist?
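
    For what it's worth, grouping the flat database rows directly into the nested shape can be sketched in PHP; the row keys, $dbRows and the report layout below are invented for illustration and would really come from the app's own queries and conditionals:

        <?php
        $report = array();
        foreach ($dbRows as $row) {                       // $dbRows: associative arrays from the SQL layer
            $year = $row['year'];
            $type = $row['type'];
            if (!isset($report[$year][$type])) {          // create the intermediate levels on demand
                $report[$year][$type] = array('total' => 0, 'table' => array());
            }
            $report[$year][$type]['total'] += $row['value'];
            $report[$year][$type]['table'][] = $row;      // the raw db array sits at the deepest level
        }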

    Read the article
