Search Results

Search found 23949 results on 958 pages for 'test me'.

  • Too many heap subpools might break the upgrade

    - by Mike Dietrich
    Recently one of our new upcoming Oracle Database 11.2 reference customers upgraded their production database - a huge EBS system - from Oracle 9.2.0.8 to Oracle Database 11.2.0.2. They had tested very well, and we had optimized the upgrade process, the recompilation timings etc. But once the live upgrade was done it failed in the JAVA component with this error: begin if initjvmaux.startstep('CREATE_JAVA_SYSTEM') then * ORA-29553: class in use: SYS.javax/mail/folder ORA-06512: at "SYS.INITJVMAUX", line 23 ORA-06512: at line 5 Support's diagnosis was pretty quick and referred to: Bug 10165223 - ORA-29553: class in use: sys.javax/mail/folder during database upgrade. But how could this happen? Actually I don't know, as we used the same init.ora setup on test and production. The only difference: the prod system has more CPUs and RAM. Anyway, the bug lists as workarounds either decreasing the SGA to less than 1 GB or decreasing the number of heap subpools to 1. This query helped diagnose the number of heap subpools: select count(distinct kghluidx) num_subpools from x$kghlu where kghlushrpool = 1; The result was 2, so we ran the upgrade again with this parameter set: _kghdsidx_count=1 - and it worked fine. One sad thing: after the upgrade failed, Support recommended restoring the whole database, which took an additional 3-4 hours. As the ORACLE SERVER component had already been upgraded successfully at the stage where the error happened, it would have been fine to continue with a manual upgrade and start the catupgrd.sql script. It would have detected that ORACLE SERVER was already upgraded and picked up only the non-upgraded components. The good news: I now have one extra slide to add to our workshop presentation.
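
    For reference, a minimal sketch of the diagnostic query and workaround quoted above; the ALTER SYSTEM form is an assumption (the underscore parameter can equally be set in the init.ora), and hidden parameters should only be touched under Oracle Support's guidance:

        -- count the shared pool heap subpools (query from the post)
        SELECT COUNT(DISTINCT kghluidx) AS num_subpools
          FROM x$kghlu
         WHERE kghlushrpool = 1;

        -- workaround used for the upgrade run: force a single heap subpool
        ALTER SYSTEM SET "_kghdsidx_count" = 1 SCOPE = SPFILE;  -- takes effect after restart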

    Read the article

  • Is it possible to tunnel ICMP over TCP?

    - by Robert Atkins
    I don't want to tunnel TCP over ICMP (as ptunnel does), I want to go the other way around. I'm in the situation where I have TCP (HTTP) connectivity to a machine but an internal firewall over which I have no control is swallowing pings. The monitoring software I'm using appears to determine connectivity by attempting to send a ping before it tries to just connect to the web service on the target machine. It's failing this ping test and giving up. I believe if I could fool my monitoring software into thinking pings were getting through, it would then connect to the web service and be on its merry way. Anyone know how I can do this? I have SSH and root access on the destination machine.

    Read the article

  • Are these company terms good for a programmer or should I move?

    - by o_O
    Here are some of the terms and conditions set forward by my employer. Do these make sense for a job like programming? No freelancing in any way, even in your free time outside company work hours (maybe okay: perhaps they want their employees fully concentrating on their full-time job, and they don't want employees doing similar work for a competing client; completely rational in that sense). So I sort of agree with that one. Anything you develop (ideas, designs, code, etc.) while I'm employed there becomes their property. Seriously? Don't you think that's bad (for me)? If I develop something in my free time (by cutting down on sleep and working hard), outside company time and resources, is that claim rational? I heard that Steve Wozniak had such a contract while he was working at HP. But that was hardware design, and those companies pay well compared to the peanuts I get. No other kind of work allowed. That means no open source stuff. Fully dedicated to being a puppet for the employer, though the working environment is sort of okay. By my assessment this place would score 10/12 on the Joel Test. So are these terms okay, especially considering the fact that I'm underpaid with peanuts?

    Read the article

  • Rails FileStore with Linux Disk Caching or RAMdisk?

    - by Yo Ludke
    I have a Ruby on Rails application that stores its cached files on the filesystem (the Rails file-store cache). I was thinking about changing to the memcached store, but a short test showed no big difference in speed. From linuxatemyram.com I learned a bit about file caching. On the current machine there is around 40-45 GB of RAM left which isn't needed by the application and which could be used by the Linux disk cache for this Rails file cache store. The disk is a RAID10 system with almost 120 MB/s of disk performance. How can I tell Linux to use the free RAM more deliberately and not be shy about using it? Do you think it's necessary to adjust a sysctl value here, or would I get a performance advantage from putting the file store root directory on a ramdisk? (Losing the cache during a reboot wouldn't be a problem.)
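
    One way to try the ramdisk variant mentioned in the question is to mount the cache directory on tmpfs, which keeps the files in RAM and loses them on reboot. A sketch; the path and size are illustrative assumptions, not taken from the question:

        # mount a tmpfs over the Rails cache directory (path and size are examples)
        sudo mount -t tmpfs -o size=32g tmpfs /var/www/app/tmp/cache

        # or make it persistent across reboots via /etc/fstab:
        # tmpfs  /var/www/app/tmp/cache  tmpfs  size=32g,mode=0755  0  0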

    Read the article

  • openldap proxied authorization

    - by bemace
    I'm having some trouble doing updates with proxied authorization (searches seem to work fine). I'm using UnboundID's LDAP SDK to connect to OpenLDAP, and sending a ProxiedAuthorizationV2RequestControl for dn: uid=me,dc=People,dc=example,dc=com with the update. I've tested and verified that the target user has permission to perform the operation, but I get insufficient access rights when I try to do it via proxy auth. I've configured olcAuthzPolicy=both in cn=config and authzTo={0}ldap:///dc=people,dc=example,dc=com??subordinate?(objectClass=inetOrgPerson) on the original user. The authzTo seems to be working; when I change it I get not authorized to assume identity when I try the update (also for searches). Can anyone suggest what else I should look at or how I could get more detailed errors from OpenLDAP? Anything else I can test to narrow down the source of the problem?

    Read the article

  • What settings to use for fastest reloading of a MySQL backup?

    - by Alex R
    I have a MySQL database which dumps to a 3.5 GB backup (mysqldump) in about 10 minutes. But reloading this backup on a standby / test server takes upwards of 12 hours. What are some settings that would maximize reloading performance? The most promising appear to be innodb_buffer_pool_size, innodb_additional_mem_pool_size, and innodb_log_buffer_size... but I'm reaching the limits of my trial-and-error approach. Which of these settings "should" be the most important? Through trial-and-error I was not able to get more than 70% CPU utilization and 63% memory utilization. I'd like both at 100% during a reload. All tables are InnoDB.
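
    A hedged sketch of the restore-time settings often experimented with for a bulk InnoDB reload; the values below are illustrative assumptions for a dedicated standby/test box, not recommendations for this particular server, and the durability-related setting should be reverted after the import:

        # my.cnf, [mysqld] section - reload tuning sketch (values are assumptions)
        innodb_buffer_pool_size         = 8G     # as much free RAM as the box can spare
        innodb_log_buffer_size          = 64M
        innodb_additional_mem_pool_size = 32M
        innodb_flush_log_at_trx_commit  = 2      # relax log flushing during the reload only
        max_allowed_packet              = 256M   # room for mysqldump's extended inserts

    mysqldump output normally already wraps the data in SET foreign_key_checks=0 and SET unique_checks=0, which also matters a great deal for InnoDB reload speed.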

    Read the article

  • VLC RTP Streaming in FC12

    - by Matt D
    I'm trying to get VLC to work streaming RTP audio/video over my office network. The goal is multicast a/v streaming. In all test cases, we are streaming from VLC to VLC. I am able to stream from Windows to Windows, and from Fedora to Windows, but not from Windows to Fedora. Additionally, I am unable to receive a LOCAL stream from one instance of VLC to another, within Fedora. I don't see any reason why this would be. The buffer indicator (where the elapsed/total time is normally displayed) never shows any connectivity, so it would appear to be a network problem, but since I am able to stream from Fedora to Windows (same IP, same port) I thought it would be something else. Does anyone know of a solution to this issue?
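
    Since the symptom is that the Fedora receiver never sees the incoming stream while the same sender works towards Windows, one thing worth ruling out is Fedora's default firewall dropping the inbound UDP packets. A sketch for a quick test; the interface and port are placeholders for whatever the stream actually uses:

        # temporarily allow the RTP/UDP port used by the stream (5004 is only an example)
        sudo iptables -I INPUT -p udp --dport 5004 -j ACCEPT

        # check whether the packets are reaching the box at all
        sudo tcpdump -n -i eth0 udp port 5004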

    Read the article

  • SQL Server backup/restore error: The Media Family on Device is Incorrectly Formed.

    - by Chris
    Basically, I'm having this issue: http://www.sqlcoffee.com/Troubleshooting047.htm What I'm doing is running a script I found online (http://pastebin.com/3n0ZfybL) to do a full backup, then rar'ing up the file and moving it to my computer. The CRC of the backup file inside the rar is correct on both computers, so there is no problem with data being corrupted when I transfer it. But then I go and try to restore the database on my dev computer here and I get the errors "sql server cannot process this media family" ... "msg 3013". Why is this happening? I'd test out the backup on the server I'm getting it from, but it's a production server.
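
    One way to narrow this down on the dev machine is to ask SQL Server to read just the backup's header and file list before attempting a restore; if these fail with the same media-family error, the file itself (or a version/edition mismatch between the two servers) is the likely culprit. A sketch with a placeholder path:

        -- check that SQL Server can read the backup's media header
        RESTORE HEADERONLY FROM DISK = N'C:\backups\mydb.bak';

        -- list the data and log files contained in the backup set
        RESTORE FILELISTONLY FROM DISK = N'C:\backups\mydb.bak';

        -- verify the backup set is readable end to end
        RESTORE VERIFYONLY FROM DISK = N'C:\backups\mydb.bak';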

    Read the article

  • How to correctly set up iWARP? Preferably on loopback

    - by ajdecon
    iWARP is a protocol for doing remote direct memory access (RDMA) on top of TCP/IP, so that it can work with Ethernet and other network types as opposed to Infiniband. It works with many of the standard IB interfaces - the IB verbs, for example - so it's all pretty transparent. I'm doing some IB-verbs programming (mostly for the sake of learning about how they work better), and it'd be wonderfully convenient for me if I could use iWARP to do RDMA over my loopback interface, so that I could test some of my code without getting on our IB-connected cluster. :-) But I cannot figure out how to get a "local development environment" set up: there are no tutorials I'm aware of for even setting up iWARP from scratch on a server or a network interface. Can anyone give me a tutorial or point me in the right direction? Environment is Fedora 16 running in VirtualBox.

    Read the article

  • Why does my UDP client only work when Wireshark is capturing?

    - by herzl shemuelian
    I have two machines, A and B, both running Windows 7. I connect them directly and try to run a performance test using tcpreplay. Step 1) I check connectivity between the two endpoints with netcat. On A I run nc -lvup 5432, and when I run nc -u 1.2.3.4 5432 on B, I can send data from B to A. Step 2) When I run tcpreplay on B (tcpreplay -i %0 myudp.pcap), A doesn't receive any data. But when I open Wireshark on A, my nc can suddenly read the data. Why? I checked the destination MAC and destination IP in the pcap file and they are correct. Do the source MAC or source IP in the pcap matter for UDP, given how I opened the UDP server?

    Read the article

  • Generating terminal escape sequences with AutoHotkey

    - by obeliksz
    I am trying to write an AutoHotkey script for PuTTY that sends the keycodes I want for certain key combinations, so that my program works over a terminal too. I already have a script with some key combinations working properly, put together by an awful lot of experimenting with examples from here and there, key tables, etc., but I still have not come up with a clear, logical method for calculating the escape sequence I want. An example: ^F1::SendInput ^[O5P gives 28 in my test program. I see that for ^[1 I get 377 and for ^[2 376... and I see that letters outside the hex digits (A-F) can be used, and also ; and ~, or a double [[. Do you understand how this works? Is there any good descriptive material on this? Thanks a lot!

    Read the article

  • How do I enable deflate on Apache2 on Debian?

    - by acidzombie24
    I looked at the other answers and those solutions did not help. I am completely confused about how to do this. I know almost nothing about Apache or how to use it, and nearly all the articles say "write this" but don't tell me where. So far I have done this:
        # a2enmod deflate
        Module deflate already enabled
    Then I tried writing this in httpd.conf:
        AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css
    Then I wrote a longer version which involved deflate.c. After those didn't work I deleted them all and wrote the AddOutputFilterByType line in apache2/sites-enabled/000-default, inside one of the blocks there. I used http://www.whatsmyip.org/http_compression/ to test, and every time it says no compression. I used another site as well, but I could never get compression. Also, I restart the server every time I save a file, so what could be the problem and how do I enable this properly?
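
    For reference, a minimal sketch of where this directive normally lives in Debian's stock Apache2 layout; the file path and MIME-type list are illustrative assumptions, and the module still has to be enabled and Apache restarted:

        # /etc/apache2/mods-available/deflate.conf (a2enmod deflate links it into mods-enabled)
        <IfModule mod_deflate.c>
            AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css
        </IfModule>

    After saving, running a2enmod deflate and /etc/init.d/apache2 restart and then requesting a page with curl -s -D - -o /dev/null -H "Accept-Encoding: gzip" http://localhost/ should show a Content-Encoding: gzip response header if compression is active.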

    Read the article

  • Why didn't cable select work?

    - by jldugger
    I got roped into doing tech support for a friend of the family. Obviously I'd already failed to hide my powers, a la Penny Arcade. Anyway, the guy bought an OEM DVD burner from Microcenter and asked me to install it. So I stopped by beforehand and thought I'd be slick and use Cable Select on the jumpers. I didn't get a chance to test it before I had to leave, and it seems that this didn't work. I came back this week to investigate, and he explained he was confused that none of the software he downloaded was able to burn. So on a whim I switched the drive to explicit master/slave, and it started working fine. Whoops. Well, at least it wasn't the extra crap he found and downloaded for free from the internet. Why doesn't setting both jumpers to Cable Select solve this?

    Read the article

  • What is the basic loadout for an open source web developer?

    - by DeveloperDon
    Thus far, I have mainly been an embedded developer, but I am interested in having the flexibility to do mobile and web development as well. I think my tools should include the following, but probably a lot more. LAMP stack. Java IDEs like Eclipse and IntelliJ. JS frameworks like Dojo, Node.js, AngularJS (is it better to mix or commit to one?). Cloud solutions like EC2 and Azure (again, OK to mix or better to commit to one?). Google APIs. Continuous integration server. Source control tools, with Git for new work and SVN, CVS, and others for imports. FTP server. Unit test runners. Bug trackers. OOAD modeling tools or plug-ins? Graphic design tools? Hosting services. XML / JSON / other markup? Content management, SEO? I am also interested to know if there are tools where it might be better to mix, match, or support all available (maybe for source control) and others where the full focus should be on one (maybe Java vs. C# or Windows vs. Linux vs. MacOS). Perhaps some of these questions need the context of whether the projects will be greenfield (just pick a favorite) or maintenance (no choice; each project continues its legacy, sometimes with poor tools).

    Read the article

  • How to tell if OpenGL is really working in Ubuntu 10.04

    - by Jonathan
    I have a Lenovo S9e running Intel integrated graphics. Here is my lspci output related to the graphics:
        00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03)
        Subsystem: Lenovo Device 3870
        Flags: bus master, fast devsel, latency 0
        Memory at f0580000 (32-bit, non-prefetchable) [size=512K]
        Capabilities: [d0] Power Management version 2
    I want to know how I can make sure OpenGL support is running in full on an Ubuntu 10.04 installation. I have a few hints that suggest it is not: the "Desktop Effects" will not load; apps such as Stardock, when attempting to use OpenGL rendering, display black boxes instead of transparency; in the game Pioneers, the number-tile icons are suspiciously just black circles; Windows games running under Wine will only support software rendering, not hardware rendering. When I boot into a Knoppix LiveCD, the desktop effects do work, splendidly, meaning compiz detects my computer as capable. My problem with troubleshooting is that Canonical has basically eliminated the conf-file-based mechanism of X11 as far as I can tell, making it even harder to ensure graphics modules are loading properly. How do I debug and test OpenGL on my Ubuntu 10.04 installation?
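
    A quick terminal-level check for whether hardware OpenGL is actually active, as a sketch (on Ubuntu 10.04 the glxinfo and glxgears tools ship in the mesa-utils package):

        sudo apt-get install mesa-utils
        glxinfo | grep -i "direct rendering"    # should report: direct rendering: Yes
        glxinfo | grep -i "opengl renderer"     # "Software Rasterizer" here means no hardware GL
        glxgears                                # rough sanity check that GL renders at all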

    Read the article

  • nginx serves broken characters (nginx on linux as guest system in vbox)

    - by Andrew123321
    I have nginx 1.2.0-1 on Debian 6.0.5. I have a file test.css. I fill it with "abcd1234" and open it in a browser. Then I change the content to "mnop", and I receive "abcd" in the response. All the files are in a folder shared between Windows (host) and Debian (guest) using VirtualBox. When I put the file elsewhere the problem does not occur! Any idea what can cause this? Thank you. (I've been editing the question as I discovered more about the problem.)
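
    For what it's worth, one nginx setting that is commonly checked when stale or truncated content is served out of a VirtualBox shared folder is sendfile; a sketch of the toggle for testing, assuming the stock Debian config layout:

        # /etc/nginx/nginx.conf, inside the http { } block - for testing only
        sendfile off;

    followed by /etc/init.d/nginx reload before fetching the file again.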

    Read the article

  • EMC CX3-10c Fault Condition won't clear

    - by ITGuy24
    We have an old CX3-10c from Dell that had both Standby Power Supplies (SPS) fail. This obviously caused a fault on the system and disabled the cache. We have replaced the SPSs and they test fine, as do all other components. The problem is there is still a fault on the "Enclosure SPE [SPE3]" despite all the components in the enclosure showing as good. I was on the line with Dell Gold support for 3 hours yesterday; they had me restart the SPs multiple times, reseat the power supplies, and even shut down the system completely and power it back on. All to no avail. The fault remains, and the cache cannot be re-enabled as long as the fault is present. Any suggestions on clearing this erroneous fault?

    Read the article

  • Using e-mail address as user name for SMTP and POP3

    - by PeterMmm
    I have exim4 set up as my SMTP server. My user naming scheme is to name all mail users for this server m001, m002, m003, ... and then redirect to a real e-mail address with virtual domains. How can I allow my users to authenticate with exim to send mail using either their system user name (m001) or the email address ([email protected])? Login information for m001 is stored in the Linux system files (passwd, shadow). The users are linked through entries in a virtual address table for each domain that this server serves:
        # /etc/exim4/virtual/example.com
        m001: [email protected]
        m002: [email protected]
        m003: [email protected]
    Can the same be applied to qpopper?

    Read the article

  • can't execute scripts compiled with shc

    - by serilain
    I'm trying to use shc to compile a shell script so that I can set the SUID bit on it and obfuscate what it's doing (I'm attempting to have it run as part of all new users' .bashrc). As a test, I wrote a script that's simply:
        #!/bin/bash
        env
    and compiled it using shc -r -f script.sh. However, when I try to run the resulting script by simply doing ./script.sh.x, even after setting it to 777 (just for testing purposes), I get "Operation not permitted; killed" unless I run it as sudo (which I don't want to have to do). Am I running afoul of some Ubuntu permissions that won't let me run binaries created by shc? Thanks!
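
    For context, a sketch of the whole sequence being described (the shc flags are the ones from the question; ownership and mode are assumptions), since the SUID bit only takes effect once the script has become a compiled binary owned by the privileged user:

        shc -r -f script.sh              # produces script.sh.x (binary) and script.sh.x.c
        sudo chown root:root script.sh.x
        sudo chmod 4755 script.sh.x      # set the SUID bit on the binary
        ./script.sh.x                    # run as an ordinary user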

    Read the article

  • Is it my motherboard?

    - by jarodrussell
    Early this week my home server (a Linux machine) threw a kernel panic. Yesterday it happened a couple of times. Then all of a sudden, when I plugged in a USB stick to run a memory test, the monitor stopped coming on. Now whenever I turn it on, the system gets power... I hear the drives spin, I see the processor fan spin, and the hard drive light comes on... but nothing happens. I put a video card in the AGP slot, but still nothing. The light on the power button that usually comes on stopped coming on. I took the memory out to see if it would beep, but nothing beeped. It's like it's getting power, but it's not coming on. Does this sound like a motherboard problem to anyone else? I think it is, but a second opinion would be greatly appreciated. Thanks.

    Read the article

  • ArchBeat Link-o-Rama for 2012-04-12

    - by Bob Rhubart
    2012 Real World Performance Tour Dates | Performance Tuning | Performance Engineering (www.ioug.org) - Coming to your town: a full day of real world database performance with Tom Kyte, Andrew Holdsworth, and Graham Wood. Rochester, NY - March 8; Los Angeles, CA - April 30; Orange County, CA - May 1; Redwood Shores, CA - May 3.
    Oracle Technology Network Developer Day: MySQL - New York (www.oracle.com) - Wednesday, May 02, 2012, 8:00 AM – 4:30 PM, Grand Hyatt New York, 109 East 42nd Street, Grand Central Terminal, New York, NY 10017.
    Webcast Series: Data Warehousing Best Practices (event.on24.com) - April 19, 2012: Best Practices for Workload Management of a Data Warehouse on Oracle Exadata; May 10, 2012: Best Practices for Extreme Data Warehouse Performance on Oracle Exadata.
    Webcast: Untangle Your Business with Oracle Unified SOA and Data Integration (event.on24.com) - Date: Tuesday, April 24, 2012; Time: 10:00 AM PT / 1:00 PM ET; Speakers: Mala Narasimharajan, Senior Product Marketing Manager, Oracle Data Integration, Oracle; Bruce Tierney, Director of Product Marketing, Oracle SOA Suite, Oracle.
    The Increasing Focus on Architecture (ArchBeat) (blogs.oracle.com) - As a "third wave" of computing, Cloud computing is changing how IT organizations and individuals within those organizations approach the creation of solutions.
    Updated SOA Documents now available in ITSO Reference Library (blogs.oracle.com) - Nine updated documents have just been added to the IT Strategies from Oracle library, including SOA Practitioner Guides, SOA Reference Architectures, and SOA White Papers and Data Sheets. Access to all documents within the ITSO library is free to those with a free Oracle.com membership.
    WebLogic JMS Clustering and Spring | Rene van Wijk (middlewaremagic.com) - Oracle ACE Rene van Wijk sets up a WebLogic cluster that includes a JMS environment, which will be used by Spring.
    Running Built-In Test Simulator with SOA Suite Healthcare 11g in PS4 and PS5 | Shub Lahiri (blogs.oracle.com) - Shub Lahiri shows how the pre-installed simulator that comes with the SOA Suite for Healthcare Integration pack can be used as an external endpoint to generate inbound and outbound HL7 traffic on specified MLLP ports.
    In the cloud era, let's start calling IT what it is: 'Innovation Team' | Joe McKendrick (www.zdnet.com) - Cloud, the third great shift in 50 years of computing, presents a golden opportunity for IT to get out in front and lead.
    Thought for the Day: "Why do we never have time to do it right, but always have time to do it over?" — Anonymous

    Read the article

  • Windows 2003 SMTP virtual server: why are emails not delivered?

    - by bardan
    I configured Windows 2003 to handle my email; everything works fine with POP3 (I'm able to receive emails). The problem is with SMTP, and I can't figure out where exactly the problem is, because the email looks like it is sent, but recipients don't receive anything... I had some problems with relaying but fixed everything, and now I have configured Outlook Express on the same machine to try to send emails. Everything looks fine: the email goes to the Sent folder with no errors, but the recipients (I tried several different ones) don't receive any messages... I tried to test from the same machine with telnet as described in http://support.microsoft.com/kb/153119 and everything looks OK...
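
    For reference, the kind of manual check the KB article above describes is a raw SMTP session over telnet; a sketch with placeholder addresses (the server should answer each command with a 2xx/3xx code and confirm queuing after the final "."):

        telnet localhost 25
        HELO example.com
        MAIL FROM:<sender@example.com>
        RCPT TO:<recipient@example.com>
        DATA
        Test message body.
        .
        QUIT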

    Read the article

  • Need to re-build an application - how?

    - by Tom
    For our main system, we have a small monitor application that sits outside our network and periodically tries to log in to verify the system still works. We have a problem with the monitor, though, in that the communications component set (Asta 3 inside Delphi applications) doesn't always connect through. Overall, I'd say it's about 95% reliable, but that other 5% kills the monitor since it will try to log in and hang on the connection attempt (no timeout in the component). This really isn't an issue on the client side of the system since the clients don't disconnect and reconnect repeatedly on the same application instance, but I need a way to make sure the monitor stays up and continues working even when the component fails on a run. I have a few ideas as to which way to have the program run, the main idea being to put the communications inside a threaded data module so that if one thread crashes, another thread can test later and the program keeps going. Does this sound like a valid way to go? Any other ideas on how to ensure a reliable monitoring application with a less than 100% reliable component? Thanks. P.S. Not sure these tags are the most appropriate. Tried including "system-reliability" as one, but not high enough rep to create it.

    Read the article

  • Configuring Samba to allow Use of CUPS printer

    - by Skizz
    Having trouble with samba printing. I have a CUPS printer installed on an Ubuntu 11.04 server and that works great. When I try to configure samba to allow an XP machine to use the printer, it fails when printing. I can install the printer drivers for XP from the server and the printer appears in the XP printer control panels. When I try to print a test page from the XP machine I get this error in the system event log:
        Jun 27 20:33:29 FatController smbd[3571]: [2012/06/27 20:33:29, 0] rpc_server/srv_netlog_nt.c:603(_netr_ServerAuthenticate3)
        Jun 27 20:33:29 FatController smbd[3571]: _netr_ServerAuthenticate3: netlogon_creds_server_check failed. Rejecting auth request from client JAMES machine account JAMES$
    Here's my smb.conf file:
        [global]
           server string = %h (Server)
           workgroup = SODOR
           encrypt passwords = true
           security = user
           os level = 255
           preferred master = yes
           domain master = yes
           local master = yes
           logon path = \\%L\profile\%U
           logon drive = S:
           logon home = \\%L\home\%U
           domain logons = yes
           map to guest = Never
           guest ok = no
           dns proxy = no
           time server = yes
           logon script = logon.bat
           load printers = yes
           printing = cups
           printcap name = cups
           nt acl support = no
           interfaces = eth1 lo
           bind interfaces only = yes
           smb ports = 445

        [netlogon]
           comment = Net Log On
           path = /home/samba/netlogon
           guest ok = no
           read only = yes
           browseable = no

        [profile]
           comment = User Profiles
           path = /home/samba/profiles
           read only = no
           create mask = 0600
           directory mask = 0700
           browseable = no
           store dos attributes = yes

        [printers]
           comment = All Printers
           path = /var/spool/samba
           browseable = yes
           guest ok = no
           printable = yes

        [print$]
           comment = Printer Drivers
           path = /var/lib/samba/printers
           browseable = yes
           guest ok = no
           read only = yes
           write list = root, skizz
    Anyone know what the problem is and how to fix it? In addition to the above, I also get this error:
        Jun 27 21:56:35 FatController smbd[3571]: [2012/06/27 21:56:35, 0] printing/print_cups.c:1027(cups_job_submit)
        Jun 27 21:56:35 FatController smbd[3571]: Unable to print file to `Edward' - client-error-not-authorized
    which I think is more relevant.

    Read the article

  • Is there a better way to consume an ASP.NET Web API call in an MVC controller?

    - by davidisawesome
    In a new project for my work I am creating a fairly large ASP.NET Web API. The API will be in a separate Visual Studio solution that also contains all of my business logic and database interactions, as well as the model classes. In the test application I am creating (which is ASP.NET MVC 4), I want to be able to hit an API URL I defined from the controller and cast the returned JSON to a model class. The reason behind this is that I want to take advantage of strongly typing my views to a model. This is all still in a proof-of-concept stage, so I have not done any performance testing on it, but I am curious whether what I am doing is a good practice, or if I am crazy for even going down this route. Here is the code on the client controller:
        public class HomeController : Controller
        {
            protected string dashboardUrlBase = "http://localhost/webapi/api/StudentDashboard/";

            public ActionResult Index() //This view is strongly typed against User
            {
                //testing against Joe Bob
                string adSAMName = "jBob";
                WebClient client = new WebClient();
                string url = dashboardUrlBase + "GetUserRecord?userName=" + adSAMName;
                //'User' is a Model class that I have defined.
                User result = JsonConvert.DeserializeObject<User>(client.DownloadString(url));
                return View(result);
            }
            . . .
        }
    If I choose to go this route, another thing to note is that I am loading several partial views on this page (as I will also do on subsequent pages). The partial views are loaded via an $.ajax call that hits this controller and does basically the same thing as the code above: instantiate a new WebClient, define the URL to hit, deserialize the result and cast it to a model class. So it is possible (and likely) I could be performing the same actions 4-5 times for a single page. Is there a better method to do this that will: let me keep strongly typed views, and do my work on the server rather than on the client (this is just a preference, since I can write C# faster than I can write JavaScript)?

    Read the article
