Search Results

Search found 2787 results on 112 pages for 'triple channel'.

Page 12/112

  • workaround for ORA-03113: end-of-file on communication channel

    - by Jefferstone
    The call to TEST_FUNCTION below fails with "ORA-03113: end-of-file on communication channel". A workaround is presented in TEST_FUNCTION2. I boiled down the code, as my actual function is far more complex. Tested on Oracle 11g. Anyone have any idea why the first function fails?

        CREATE OR REPLACE TYPE "EMPLOYEE" AS OBJECT (
          employee_id NUMBER(38),
          hire_date   DATE
        );

        CREATE OR REPLACE TYPE "EMPLOYEE_TABLE" AS TABLE OF EMPLOYEE;

        CREATE OR REPLACE FUNCTION TEST_FUNCTION RETURN EMPLOYEE_TABLE IS
          table1       EMPLOYEE_TABLE;
          table2       EMPLOYEE_TABLE;
          return_table EMPLOYEE_TABLE;
        BEGIN
          SELECT CAST(MULTISET (
            SELECT user_id, created FROM all_users WHERE LOWER(username) < 'm'
          ) AS EMPLOYEE_TABLE) INTO table1 FROM dual;

          SELECT CAST(MULTISET (
            SELECT user_id, created FROM all_users WHERE LOWER(username) >= 'm'
          ) AS EMPLOYEE_TABLE) INTO table2 FROM dual;

          SELECT CAST(MULTISET (
            SELECT employee_id, hire_date FROM TABLE(table1)
            UNION
            SELECT employee_id, hire_date FROM TABLE(table2)
          ) AS EMPLOYEE_TABLE) INTO return_table FROM dual;

          RETURN return_table;
        END TEST_FUNCTION;

        CREATE OR REPLACE FUNCTION TEST_FUNCTION2 RETURN EMPLOYEE_TABLE IS
          table1       EMPLOYEE_TABLE;
          table2       EMPLOYEE_TABLE;
          return_table EMPLOYEE_TABLE;
        BEGIN
          SELECT CAST(MULTISET (
            SELECT user_id, created FROM all_users WHERE LOWER(username) < 'm'
          ) AS EMPLOYEE_TABLE) INTO table1 FROM dual;

          SELECT CAST(MULTISET (
            SELECT user_id, created FROM all_users WHERE LOWER(username) >= 'm'
          ) AS EMPLOYEE_TABLE) INTO table2 FROM dual;

          WITH combined AS (
            SELECT employee_id, hire_date FROM TABLE(table1)
            UNION
            SELECT employee_id, hire_date FROM TABLE(table2)
          )
          SELECT CAST(MULTISET (
            SELECT * FROM combined
          ) AS EMPLOYEE_TABLE) INTO return_table FROM dual;

          RETURN return_table;
        END TEST_FUNCTION2;

        SELECT * FROM TABLE (TEST_FUNCTION());  -- Throws exception ORA-03113.
        SELECT * FROM TABLE (TEST_FUNCTION2()); -- Works.

  • SSH X11 not working

    - by azat
    I have a home and a work computer; the home computer has a static IP address. If I ssh from my work computer to my home computer, the ssh connection works but X11 applications are not displayed.

    In my /etc/ssh/sshd_config at home:

        X11Forwarding yes
        X11DisplayOffset 10
        X11UseLocalhost yes

    At work I have tried the following commands:

        xhost + home HOME_IP
        ssh -X home
        ssh -X HOME_IP
        ssh -Y home
        ssh -Y HOME_IP

    My /etc/ssh/ssh_config at work:

        Host *
            ForwardX11 yes
            ForwardX11Trusted yes

    My ~/.ssh/config at work:

        Host home
            HostName HOME_IP
            User azat
            PreferredAuthentications password
            ForwardX11 yes

    My ~/.Xauthority at work:

        -rw------- 1 azat azat 269 Jun 7 11:25 .Xauthority

    My ~/.Xauthority at home:

        -rw------- 1 azat azat 246 Jun 7 19:03 .Xauthority

    But it doesn't work. After I make an ssh connection to home:

        $ echo $DISPLAY
        localhost:10.0
        $ kate
        X11 connection rejected because of wrong authentication.
        (the line above repeats 8 times)
        kate: cannot connect to X server localhost:10.0

    I use iptables at home, but I've allowed port 22. According to what I've read, that's all I need.

    UPD: With -vvv:

        ...
        debug2: callback start
        debug2: x11_get_proto: /usr/bin/xauth list :0 2>/dev/null
        debug1: Requesting X11 forwarding with authentication spoofing.
        debug2: channel 1: request x11-req confirm 1
        debug2: client_session2_setup: id 1
        debug2: fd 3 setting TCP_NODELAY
        debug2: channel 1: request pty-req confirm 1
        ...

    When I try to launch kate:

        debug1: client_input_channel_open: ctype x11 rchan 2 win 65536 max 16384
        debug1: client_request_x11: request from 127.0.0.1 55486
        debug2: fd 8 setting O_NONBLOCK
        debug3: fd 8 is O_NONBLOCK
        debug1: channel 2: new [x11]
        debug1: confirm x11
        debug2: X11 connection uses different authentication protocol.
        X11 connection rejected because of wrong authentication.
        debug2: X11 rejected 2 i0/o0
        debug2: channel 2: read failed
        debug2: channel 2: close_read
        debug2: channel 2: input open -> drain
        debug2: channel 2: ibuf empty
        debug2: channel 2: send eof
        debug2: channel 2: input drain -> closed
        debug2: channel 2: write failed
        debug2: channel 2: close_write
        debug2: channel 2: output open -> closed
        debug2: X11 closed 2 i3/o3
        debug2: channel 2: send close
        debug2: channel 2: rcvd close
        debug2: channel 2: is dead
        debug2: channel 2: garbage collecting
        debug1: channel 2: free: x11, nchannels 3
        debug3: channel 2: status: The following connections are open:
          #1 client-session (t4 r0 i0/0 o0/0 fd 5/6 cc -1)
          #2 x11 (t7 r2 i3/0 o3/0 fd 8/8 cc -1)
        (the same block repeats about 7 times)
        kate: cannot connect to X server localhost:10.0

    UPD2 (answering questions asked in the comments):

    "Please provide your Linux distribution & version number. Are you using a default GNOME or KDE environment for X or something else you customized yourself?"

        azat:~$ kded4 -version
        Qt: 4.7.4
        KDE Development Platform: 4.6.5 (4.6.5)
        KDE Daemon: $Id$

    "Are you invoking ssh directly on a command line from a terminal window? What terminal are you using? xterm, gnome-terminal, or? How did you start the terminal running in the X environment? From a menu? Hotkey? or?"

    From the terminal emulator `yakuake`. I manually press `Ctrl + N` and type the commands.

    "Can you run xeyes from the same terminal window where the ssh -X fails?"

    `xeyes` is not installed, but `kate` or another KDE app runs.

    "Are you invoking the ssh command as the same user that you're logged into the X session as?"

    Yes, the same user.

    UPD3: I also downloaded the ssh sources and, using debug2(), logged why it reports that the authentication protocol is different. It sees two cookies; one of them is empty, the other is MIT-MAGIC-COOKIE-1.
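    A quick check one could run at this point (a hedged suggestion, not part of the original post): compare what xauth holds for the forwarded display against what the client is sending, and re-add the cookie by hand if the entry is empty. The cookie value below is a placeholder, not a real one.

        # On the home machine, inside the ssh session (assumes xauth is installed):
        xauth list "$DISPLAY"    # should show one MIT-MAGIC-COOKIE-1 entry for :10
        # If the entry is missing or empty, regenerate it manually:
        xauth add "$DISPLAY" MIT-MAGIC-COOKIE-1 00112233445566778899aabbccddeeff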

  • RHN Satellite / Spacewalk custom channels, best practice?

    - by tore-
    Hi, I'm currently setting up RHN Satellite, and all works well. I'm in the process of creating custom channels, since we have certain software that should be available to all nodes of the Satellite, e.g. puppet, facter, subversion, php (a newer version than is present in base). I've tried to find documentation on best practices for this: how the channels should be set up, how to handle different arches, how to handle noarch packages, and how to sync updated dependencies when updating a custom package in a custom channel (e.g. php is updated; how do I fetch all of its updated dependencies?). The channel management documentation from Red Hat (http://www.redhat.com/docs/en-US/Red_Hat_Network_Satellite/5.3/Channel_Management_Guide/html/Channel_Management_Guide-Custom_Channel_and_Package_Management.html) doesn't provide me with enough information on how to solve any of these issues. All tips, tricks and information regarding this would be great!
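    For concreteness, the usual starting point (a hedged sketch, not from the post; the server URL, channel label and package name here are made up, and the exact flags are worth checking against rhnpush --help) is to create one custom child channel per base channel/arch in the web UI and push packages into it with rhnpush, which ships with Satellite/Spacewalk:

        # Push a locally built RPM into a custom child channel
        rhnpush --server http://satellite.example.com/APP \
                --channel custom-tools-rhel5-x86_64 \
                php-5.2.x-1.el5.x86_64.rpm
        # noarch packages can be pushed into each per-arch channel so that
        # every subscribed system sees them regardless of its architecture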

  • Motherboard RAM question: can I mix old and new modules across four slots?

    - by winterwindz
    Firstly, I'm new at this and not a very computer-ish person, so I very much apologize if this is not the right place, but here it goes. My motherboard is an Asus P5LD2-SE. I'm running Windows 7 Ultimate 32-bit (x86) with 1 GB of RAM (2 x 512 MB). I'm planning to upgrade my OS to 64-bit, and because I know my motherboard supports dual-channel, I bought a dual-channel 2 GB RAM kit (2 pieces). My question is: am I still able to use my old RAM modules, since the board has 4 slots? In the end that would show 5 GB. Is it possible? Thank you for your time. =)

  • Drawing a texture with an alpha channel doesn't work -- draws black

    - by DevDevDev
    I am modifying GLPaint to use a different background, so in this case it is white. Anyway, the existing stamp they are using assumes the background is black, so I made a new background with an alpha channel. When I draw on the canvas it is still black; what gives? When I actually draw, I just bind the texture and it works. Something is wrong in this initialization. Here is the code:

        - (id)initWithCoder:(NSCoder*)coder {
            CGImageRef brushImage;
            CGContextRef brushContext;
            GLubyte *brushData;
            size_t width, height;

            if (self = [super initWithCoder:coder]) {
                CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
                eaglLayer.opaque = YES;
                // In this application, we want to retain the EAGLDrawable
                // contents after a call to presentRenderbuffer.
                eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                    [NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking,
                    kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];

                context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
                if (!context || ![EAGLContext setCurrentContext:context]) {
                    [self release];
                    return nil;
                }

                // Create a texture from an image.
                // First create a UIImage object from the data in an image file,
                // and then extract the Core Graphics image.
                brushImage = [UIImage imageNamed:@"test.png"].CGImage;

                // Get the width and height of the image
                width = CGImageGetWidth(brushImage);
                height = CGImageGetHeight(brushImage);

                // Texture dimensions must be a power of 2. If you write an application
                // that allows users to supply an image, you'll want to add code that
                // checks the dimensions and takes appropriate action if they are not
                // a power of 2.

                // Make sure the image exists
                if (brushImage) {
                    brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
                    brushContext = CGBitmapContextCreate(brushData, width, width, 8,
                        width * 4, CGImageGetColorSpace(brushImage),
                        kCGImageAlphaPremultipliedLast);
                    CGContextDrawImage(brushContext,
                        CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushImage);
                    CGContextRelease(brushContext);

                    glGenTextures(1, &brushTexture);
                    glBindTexture(GL_TEXTURE_2D, brushTexture);
                    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                        GL_RGBA, GL_UNSIGNED_BYTE, brushData);
                    free(brushData);
                }

                // Set up OpenGL states
                glMatrixMode(GL_PROJECTION);
                CGRect frame = self.bounds;
                glOrthof(0, frame.size.width, 0, frame.size.height, -1, 1);
                glViewport(0, 0, frame.size.width, frame.size.height);
                glMatrixMode(GL_MODELVIEW);

                glDisable(GL_DITHER);
                glEnable(GL_TEXTURE_2D);
                glEnable(GL_BLEND);
                glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA);

                glEnable(GL_POINT_SPRITE_OES);
                glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
                glPointSize(width / kBrushScale);
            }
            return self;
        }
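    One detail worth double-checking in the snippet above (an observation on the posted code, not a confirmed answer from the thread): CGBitmapContextCreate is called with width passed as both the width and the height argument, so any non-square brush image would get a corrupted bitmap. A corrected call would look like:

        // hypothetical fix: pass height, not width, as the third argument
        brushContext = CGBitmapContextCreate(brushData, width, height, 8,
            width * 4, CGImageGetColorSpace(brushImage),
            kCGImageAlphaPremultipliedLast);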

  • WCF service using duplex channel in different domains

    - by ds1
    I have a WCF service and a Windows client. They communicate via a duplex WCF channel, which runs fine when everything is within a single network domain, but when I put the server on a separate network domain I get the following message in the WCF server trace:

        The message with To 'net.tcp://abc:8731/ActiveAreaService/mex/mex' cannot be
        processed at the receiver, due to an AddressFilter mismatch at the
        EndpointDispatcher. Check that the sender and receiver's EndpointAddresses agree.

    So it looks like the communication only works in one direction (from client to server) when the components are in two separate domains. The network domains are fully trusted, so I'm a little confused as to what else could cause this.

    Server app.config:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.serviceModel>
            <behaviors>
              <serviceBehaviors>
                <behavior name="JobController.ActiveAreaBehavior">
                  <serviceMetadata httpGetEnabled="false" />
                  <serviceDebug includeExceptionDetailInFaults="true" />
                </behavior>
              </serviceBehaviors>
            </behaviors>
            <services>
              <service behaviorConfiguration="JobController.ActiveAreaBehavior"
                       name="JobController.ActiveAreaServer">
                <endpoint address="mex" binding="mexTcpBinding" bindingConfiguration=""
                          contract="IMetadataExchange" />
                <host>
                  <baseAddresses>
                    <add baseAddress="net.tcp://SERVER:8731/ActiveAreaService/" />
                  </baseAddresses>
                </host>
              </service>
            </services>
          </system.serviceModel>
        </configuration>

    But I also add an endpoint programmatically, in Visual C++:

        host = gcnew ServiceHost(ActiveAreaServer::typeid);
        NetTcpBinding^ binding = gcnew NetTcpBinding();
        binding->MaxBufferSize = Int32::MaxValue;
        binding->MaxReceivedMessageSize = Int32::MaxValue;
        binding->ReceiveTimeout = TimeSpan::MaxValue;
        binding->Security->Mode = SecurityMode::Transport;
        binding->Security->Transport->ClientCredentialType = TcpClientCredentialType::Windows;
        ServiceEndpoint^ ep = host->AddServiceEndpoint(IActiveAreaServer::typeid,
                                                       binding,
                                                       String::Empty); // Use the base address

    Client app.config:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.serviceModel>
            <bindings>
              <netTcpBinding>
                <binding name="NetTcpBinding_IActiveAreaServer" closeTimeout="00:01:00"
                         openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"
                         transactionFlow="false" transferMode="Buffered"
                         transactionProtocol="OleTransactions"
                         hostNameComparisonMode="StrongWildcard" listenBacklog="10"
                         maxBufferPoolSize="524288" maxBufferSize="65536" maxConnections="10"
                         maxReceivedMessageSize="65536">
                  <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                                maxBytesPerRead="4096" maxNameTableCharCount="16384" />
                  <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
                  <security mode="Transport">
                    <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign" />
                    <message clientCredentialType="Windows" />
                  </security>
                </binding>
              </netTcpBinding>
            </bindings>
            <client>
              <endpoint address="net.tcp://SERVER:8731/ActiveAreaService/" binding="netTcpBinding"
                        bindingConfiguration="NetTcpBinding_IActiveAreaServer"
                        contract="ActiveArea.IActiveAreaServer"
                        name="NetTcpBinding_IActiveAreaServer">
                <identity>
                  <userPrincipalName value="[email protected]" />
                </identity>
              </endpoint>
            </client>
          </system.serviceModel>
        </configuration>

    Any help is appreciated! Cheers

  • "Could not establish secure channel for SSL/TLS" in .NET CF application on smart phone

    - by Stefan Mohr
    I have a stubborn communications issue with an application running on the .NET Compact Framework 3.5 on Windows Mobile smartphones. I am constructing a web request using this code:

        UTF8Encoding encoding = new System.Text.UTF8Encoding();
        byte[] Data = encoding.GetBytes(HttpUtility.ConstructQueryString(parameters));
        httpRequest = WebRequest.Create((domain)) as HttpWebRequest;
        httpRequest.Timeout = 10000000;
        httpRequest.ReadWriteTimeout = 10000000;
        httpRequest.Credentials = CredentialCache.DefaultCredentials;
        httpRequest.Method = "POST";
        httpRequest.ContentType = "application/x-www-form-urlencoded";
        httpRequest.ContentLength = Data.Length;

        Stream SendReq = httpRequest.GetRequestStream();
        SendReq.Write(Data, 0, Data.Length);
        SendReq.Close();
        HttpWebResponse httpResponse = (HttpWebResponse)httpRequest.GetResponse();
        return httpResponse.GetResponseStream();

    The web service functions by receiving a JSON-encoded document as part of the URL (eg. https://site.com/ws/sync??document={"version":"1.0.0","items":[{"item_1":"item1"}]}&user=usr&password=pw), and it returns another JSON document as the response. This code runs fine on all emulators and PDAs running WM 5 and 6. We have seen an issue with a couple of customers running Treo smartphones (and only on the Sprint network). We have tested the code on an identical device on the AT&T network (via DeviceAnywhere) and once again the code worked as we expected. This has to be some sort of security policy on the phone, but we've been unable to determine a workaround or diagnose it thoroughly, as we cannot reproduce it in house and have had to resort to getting users to assist with running test drivers for us. When this code executes, the user's device throws the following exception:

        System.Net.WebException
        Could not establish secure channel for SSL/TLS
        Stack trace:
        at System.Net.HttpWebRequest.finishGetRequestStream()
        at System.Net.HttpWebRequest.GetRequestStream()
        at OurApp.GetResponseStream(String domain, Hashtable parameters)

        Inner exception: System.IO.IOException
        Authentication failed because the remote party has closed the transport stream.
        Stack trace:
        at System.Net.SslConnectionState.ClientSideHandshake()
        at System.Net.SslConnectionState.PerformClientHandShake()
        at System.Net.Connection.connect(Object ignored)
        at System.Threading.ThreadPool.WorkItem.doWork(Object o)
        at System.Threading.Timer.ring()

    Examining the server's Apache logs shows no hits from the user's IP; I don't think the device is even attempting to send a packet before failing. If relevant, the server runs Apache on Linux and is written using the TurboGears Python framework. The server certificate is issued by a CA and is still valid. The test driver this error was copied from was not code-signed; however, the same driver signed with a GeoTrust certificate produces the same error, so we don't believe this is a code-signing issue. The application installs and launches without issue on all phones; it's just establishing this SSL connection that is breaking for these users. One significant issue in troubleshooting is that there is a substantial inconvenience each time we try out a solution (we need to find a "volunteer" customer), so we're really looking for a silver bullet, or a better understanding of the handshaking process, so that we can be reasonably confident we only need to ask the user to test it one or two more times. One final mention: we have tried the sync both over ActiveSync and over GPRS, with identical results. Any thoughts would be greatly appreciated!

  • Issues printing through ssh tunnel and port forwarding

    - by simogasp
    I'm having some problems trying to print through an ssh tunnel. I'd like to print from my laptop to a network printer (a Toshiba e-STUDIO453, for what it matters) which is on a local network. I can reach the local network via a gateway. So far I did the following:

        ssh -N -L19100:<Printer_IP>:9100 <username>@<ssh_gateway>

    Basically I just mapped port 19100 of my laptop directly to the input port of the printer, passing through the gateway. So far, so good. Then I installed a new printer on my laptop with the GUI config tool of Ubuntu, so that the new printer is on localhost at port 19100 (as AppSocket/HP JetDirect), and provided the proper driver for the printer. In theory, once the tunnel is open, I should be able to print from any program just by selecting this printer. Of course, it does not work. :-) The document hangs in the queue with status Processing, while in the shell where I set up the tunnel I get these errors about failing to open channels:

        debug1: Local forwarding listening on ::1 port 19100.
        debug1: channel 0: new [port listener]
        debug1: Local forwarding listening on 127.0.0.1 port 19100.
        debug1: channel 1: new [port listener]
        debug1: Requesting no-more-sessions@openssh.com
        debug1: Entering interactive session.
        debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested.
        debug1: channel 2: new [direct-tcpip]
        debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested.
        debug1: channel 3: new [direct-tcpip]
        channel 2: open failed: connect failed: Connection timed out
        debug1: channel 2: free: direct-tcpip: listening port 19100 for 195.220.21.227 port 9100, connect from ::1 port 44434, nchannels 4
        debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested.
        debug1: channel 2: new [direct-tcpip]
        channel 3: open failed: connect failed: Connection timed out
        debug1: channel 3: free: direct-tcpip: listening port 19100 for 195.220.21.227 port 9100, connect from ::1 port 44443, nchannels 4
        channel 2: open failed: connect failed: Connection timed out
        debug1: channel 2: free: direct-tcpip: listening port 19100 for 195.220.21.227 port 9100, connect from ::1 port 44493, nchannels 3
        debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested.
        debug1: channel 2: new [direct-tcpip]

    As a further debugging test I tried the following: from a machine inside the local network I did telnet <IP_printer> 9100, got access, wrote some random thing, closed the connection, and correctly got a printout of what I had written. So the port and the IP of the printer should be correct. I tried the same from my laptop with the tunnel open; the telnet connection succeeded, but again the printer didn't print anything, and I got the usual "channel x: open failed" errors. I'm not a great expert on the matter; I just thought that in theory it was possible to do something like this, but maybe there is something I didn't consider or did wrong. Any clue? Thanks! Simone

    UPDATE: As a further debugging test, I tried to replicate the procedure from a machine in the local network. From that machine, I did

        ssh -N -L19100:<IP_printer>:9100 <username>@<ssh_gateway>

    (note that now the machine, the gateway and the printer are all on the same local network). Then I tried the telnet test again with telnet localhost 19100; I got access and everything, but I still didn't get the printout, just the usual error:

        channel 2: open failed: connect failed: Connection timed out

    Maybe I am missing some other connection that needs to be forwarded, or maybe this is not allowed by the administrators. Of course, if I connect via ssh to the local machine from my laptop through the gateway, I can successfully print using the lpr command (from the local machine). But this is what I would like to avoid (yes, I'm lazy... :-); I would like a more 'elegant' and transparent way to do it.
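    A quick way to separate tunnel problems from reachability problems (a hedged suggestion, not part of the original post): the "connect failed: Connection timed out" lines mean the gateway itself could not open a TCP connection to the printer, so testing that leg directly from the gateway should tell whether the traffic is being filtered:

        # Run on the gateway (assumes nc/netcat is installed there);
        # <IP_printer> is the same placeholder as above
        nc -vz <IP_printer> 9100
        # success means the gateway can reach the printer and the tunnel should
        # work; a timeout means the gateway's own traffic to port 9100 is blocked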

  • Getting NoClassDefFoundError on HMAC_SHA1 in WebSphere

    - by defjab
    We have WAS ND 6.0.2.43 (I know) running in multiple regions. Our Dev-B region runs fine, but Dev-C throws a Java exception when we make web calls (at least this is what the developer tells me). The same code runs in both regions, and I checked the obvious suspects (global security, SSL ciphers, etc.) and they all seem to match. Here's the stack trace from SystemErr:

        [8/1/12 4:02:31:758 EDT] 0000005c ServletWrappe E SRVE0068E: Could not invoke the service() method on servlet action. Exception thrown: java.lang.NoClassDefFoundError
            at javax.crypto.Mac.getInstance(DashoA12275)
            at net.oauth.signature.HMAC_SHA1.computeSignature(HMAC_SHA1.java:73)
            at net.oauth.signature.HMAC_SHA1.getSignature(HMAC_SHA1.java:39)
            at net.oauth.signature.OAuthSignatureMethod.getSignature(OAuthSignatureMethod.java:83)
            at net.oauth.signature.OAuthSignatureMethod.sign(OAuthSignatureMethod.java:54)
            at com.harcourt.hsp.utils.LTIUtil.generateSignature(LTIUtil.java:62)
            at com.harcourt.hsp.web.struts.lti.action.BaseLTIAction.generateSignature(BaseLTIAction.java:238)
            at com.harcourt.hsp.web.struts.lti.action.BaseLTIAction.execute(BaseLTIAction.java:96)
            at org.springframework.web.struts.DelegatingActionProxy.execute(DelegatingActionProxy.java:106)
            at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:419)
            at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:224)
            at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1194)
            at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:414)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:743)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
            at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1796)
            at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:887)
            at com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:90)
            at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:1937)
            at com.ibm.ws.webcontainer.channel.WCChannelLink.ready(WCChannelLink.java:130)
            at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:434)
            at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleNewInformation(HttpInboundLink.java:373)
            at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.ready(HttpInboundLink.java:253)
            at com.ibm.ws.tcp.channel.impl.NewConnectionInitialReadCallback.sendToDiscriminaters(NewConnectionInitialReadCallback.java:207)
            at com.ibm.ws.tcp.channel.impl.NewConnectionInitialReadCallback.complete(NewConnectionInitialReadCallback.java:109)
            at com.ibm.ws.tcp.channel.impl.WorkQueueManager.requestComplete(WorkQueueManager.java:566)
            at com.ibm.ws.tcp.channel.impl.WorkQueueManager.attemptIO(WorkQueueManager.java:619)
            at com.ibm.ws.tcp.channel.impl.WorkQueueManager.workerRun(WorkQueueManager.java:952)
            at com.ibm.ws.tcp.channel.impl.WorkQueueManager$Worker.run(WorkQueueManager.java:1039)
            at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1498)

    (The same trace is then printed a second time.) Thanks for your help. I'm sure it's a config that I'm missing.
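    Since the NoClassDefFoundError is thrown inside javax.crypto.Mac.getInstance, one low-effort check (a hedged suggestion; the class name and how it is deployed are assumptions, e.g. a scratch servlet or test JSP in Dev-C) is a tiny probe run under the failing region's JVM and classloader, to see whether an HmacSHA1 provider resolves at all:

        import javax.crypto.Mac;

        public class MacProbe {
            public static void main(String[] args) throws Exception {
                // Throws if no HmacSHA1 provider is visible to this classloader
                Mac mac = Mac.getInstance("HmacSHA1");
                System.out.println("HmacSHA1 provided by: " + mac.getProvider());
            }
        }

    If the probe fails too, the difference between Dev-B and Dev-C is more likely in the JDK's JCE setup (the java.security provider list, or missing/mismatched JCE jars) than in the application itself.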

  • Debian and Multipath IO problem

    - by tearman
    Basically the situation is: I have a box running Debian. The box internally has an Intel SCSI RAID controller, which controls two hard drives in RAID 1; this is where the OS is installed. Further, I have a QLogic Fibre Channel adapter that connects the unit to a Fibre Channel SAN. My installation process is: I install Debian to the local drives, leaving the QLogic firmware out of it for the time being. Then, once I get the unit online, I install the firmware drivers. This flops my internal drives from /dev/sda to /dev/sdc, which is a bit annoying but recoverable (I should probably address these by UUID anyway). Once I'm back online, I have to install multipath-tools (the SAN is set up for multipath I/O). However, once I reboot the machine again, it fails on boot after discovering multipath targets, saying my local drives are busy and cannot be mounted to /root. Any help on what may be the problem here? Or at least on how to keep multipath from starting until after the unit boots, and have it ignore the internal drives?
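    The usual way to keep multipathd's hands off local disks (a hedged sketch, not from the post; the wwid is a placeholder) is a blacklist stanza in /etc/multipath.conf:

        # /etc/multipath.conf -- blacklist the internal RAID1 volume so that
        # only the SAN LUNs are multipathed
        blacklist {
            # get the real wwid with: /lib/udev/scsi_id -g -u /dev/sda
            wwid "<wwid-of-internal-raid-volume>"
        }

    followed by regenerating the initramfs (update-initramfs -u on Debian) so the early-boot multipath configuration picks up the blacklist.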

  • NetApp FAS270 head doesn't see disks

    - by wfaulk
    I have an FAS270C. For months, I've been running it in a split-head manner (that is, with each head serving data totally independently, and without clustering even being enabled) in order to facilitate moving some data around. I finally got everything situated, moved all the data to one of the heads, and was trying to get clustering set back up. Now, when I try to install ONTAP onto the "new" head, it cannot see any of the disks in the head shelf (that is, the shelf into which the heads are inserted). I've booted into maintenance mode, and it shows me that the 0b adapter, which should be the adapter that shelf and its disks are presented on, is in "OFFLINE (physical)" state. If I try to enable it with either "storage enable adapter 0b" or "fcadmin online 0b", it waits for about 30 seconds and then says:

        [fci.initialization.failed:error]: Initialization failed on Fibre Channel adapter 0b.
        [fci.adapter.online.failed:error]: Fibre Channel adapter 0b failed to come online.

    There is currently nothing attached to its external 0b port. I've tried it with and without an SFP plugged into it, and with and without its internal termination switch on. The currently active head can see those disks, and can see that two of them are assigned to the other head. Before I started reconfiguring, the "new" head could see disks on that shelf. They may even be the same disks that ONTAP was installed on previously. Does anyone have any idea how to proceed?
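    A hedged avenue to rule out (not something the poster mentions, and the syntax is from memory, so worth checking against the 7-mode docs): on this generation of filers the onboard FC ports can run in either target or initiator mode, and a head that cannot see its own shelf is sometimes sitting in the wrong personality. From maintenance mode, something like:

        *> fcadmin config
        *> fcadmin config -t initiator 0b

    would show the current type of each adapter and, if needed, switch 0b back to initiator mode so it can address disks.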

  • Is 2 GB of RAM better than 2.5 GB?

    - by pibboater
    My laptop has two slots for RAM, and currently has two 512 MB chips, for 1 GB. Windows XP is running terribly slow on it, so I want to upgrade the RAM. I could buy two 1 GB chips to replace both of the current 512 MB chips, to give me 2 GB of RAM. Or, the price is the same to buy one 2 GB chip, to replace just one of the 512 MB chips, and give me 2.5 GB total. The RAM it takes is PC2-4200 533MHz DDR2. What do you think would be better: buying two 1 GB chips so it can take advantage of dual-channel operation, or buying one 2 GB chip to end up with more total RAM but not dual-channel operation? Like I said, price is the same, so performance is the only consideration. I'm not doing anything especially intensive like video or photo editing -- just having multiple Office programs open, playing music, browsers, etc., but currently even opening the first application takes forever. If it matters, the laptop is a Toshiba Qosmio G25-AV513 running Windows XP Media Center SP3. Thanks! Kevin

  • Why does my microwave kill the Wi-Fi?

    - by Ohlin
    Every time I start the microwave in the kitchen, our home Wi-Fi stops working and all devices lose their connection to our router! The kitchen and the Wi-Fi router are at opposite ends of the apartment, but devices are used a little here and there. We've been annoyed by the instability of the Wi-Fi for some time, and it wasn't until recently that we realized it was correlated with microwave usage. After some testing with the microwave on and off, we could narrow the problem down to occurring only when the router is in b/g/n mode and uses a fixed channel. If I change to b/g mode, or set the channel to auto, then there is no problem any more... but still! The router is a Zyxel P-661HNU ("802.11n Wireless ADSL2+ 4-port Security Gateway", with the latest firmware) and the microwave is made by Neff, with a rated power of 1000 W (if this information might be useful to anyone). There is an "internet connection" light on the router, and it doesn't go out when the interruption occurs, so I think this is only an internal Wi-Fi issue. Now to my questions: What parts of the Wi-Fi can possibly be affected by microwave usage? Frequency? Disturbances in the electrical system? How can setting the channel to Auto make a difference? I thought the different channels were just some kind of separation system within the same frequency spectrum. Could this be a sign that the microwave is malfunctioning and slowly roasting us all at home? Is there any need to be worried? Since we were able to find router settings that cooperate well with our microwave's demand for attention, this question is mainly out of curiosity. But as with most people out there... I just can't help the fact that I need to know how it's possible :-)

  • Google Chrome triples its market share in one year and helps Firefox become number one in Europe at the expense of Internet Explorer

    Google Chrome triples its market share in one year, and helps Firefox become number one in Europe at the expense of Internet Explorer. Internet Explorer has been dethroned in Europe, a first for a geographic zone of this size. Firefox is to blame, of course, but so too, above all, is Google's Chrome. The title of most-used browser in Europe therefore goes to Mozilla Firefox (according to the December figures published by StatCounter, the web statistics specialist). But looking at these results more closely, one notices that Firefox's market share did not increase at all between November and December. The browser even lost 0.4 points ...

  • Graphics and USB devices freezing soon after OS loads

    - by Andrew
    I run an Ubuntu/Windows dual boot. Last night I started the upgrade to Ubuntu 12.04, and my computer has not worked since, in either Windows or Ubuntu. Here's what I got when I rebooted after the upgrade, and continue to get every time I boot: It gets to the GRUB screen OK. If I choose Ubuntu: a black screen, or crazy purple lines. At first I assumed something went wrong with the upgrade (it often happens). If I choose Windows: it works fine and I log in, but soon after that the graphics freeze (sometimes with purple artifacts). The keyboard and mouse (both USB) also lose power at the same instant, and none of the USB ports have power to them. This happens sooner or later on every boot. Update: the HDD also appears to lose power at the same point. I have tried a live CD, but my computer refuses to boot any CD, even after disabling all other boot options in the BIOS. I have disconnected everything except the keyboard, mouse, graphics card with one monitor, one RAM stick and the HDD; no change. I also took the little battery out to reset the CMOS. I am pretty sure that no matter how badly the Ubuntu upgrade went, it wouldn't cause the above symptoms in Windows. So the only explanation I can think of is that a hardware failure occurred at the same time. Some possible causes I can think of: a couple of days before this, I added a third screen (which worked fine); about a week before, my house lost power in a storm (with no ill effects over the following days, though). What can I do, other than buy a new motherboard/CPU and hope it works? Unfortunately I don't have another box to swap parts into for testing at the moment.

  • Multi monitor setup with a tv as well?

    - by jasondavis
    I have always had a dual monitor setup for my PCs, for years now. I am now in the market to build a new PC and get some new stuff. I need some help or advice on how to run 3 monitors WITH a 4th display, which will be an LCD/plasma TV. I am thinking I can get 2 video cards, which will give me all the hookups to run 3 monitors instead of 2. I will mount all 3 monitors in a row, side by side, and above them on the wall I would like to mount a larger 30-40+ inch LCD or plasma TV. I would then like to hook up my satellite/cable box to this TV, just as I would normally hook up a TV, but I would also like the option to view my PC on this TV. I know that is possible, but would it be possible to view my PC on that TV while still viewing my 3 other monitors, with the TV acting as a 4th display, so that I could dock a different app/window in Windows on all 4 displays (3 monitors + TV)? Please tell me any tips/advice on how to do this, including what cables/software (if any)/converters/you name it. Thanks for any help.

  • USB to DVI adapter to get around two monitor limit of my graphics card?

    - by JavaJosh94
    My graphics card is an nVidia GTS 450, which will only run two monitors at a time, but I'd like to add a third one. Will I be able to do that using a USB-to-DVI adapter, or would the limit still apply? Also, I've been told the video quality is really bad with these adapters; is the quality of the video they output okay for things like web browsing and office work? And is it worth the $60, or would I be better off just buying a cheap second card?

  • Run 3 monitors on two different video cards?

    - by hullot
    Can I run 3 monitors on two different video cards? I have an ATI card and an Nvidia card. The ATI has 2 HDMI connections, and they both work. Both cards are picked up in Windows, one as the ATI and the other as the Nvidia, though the latter shows up as "VGA Controller" even though the card only has 2 DVI ports. So, one DVI cable goes into the Nvidia card. Of the 3 monitors, only the 2 HDMI ones on the ATI are picked up, not the third one, which is connected to the Nvidia via DVI. How can I run three monitors, then? I suppose I can't install both drivers, so I'm unsure what to do. Is this possible? I just want the Nvidia card to power the third screen; no gaming on it, nothing. Also, the ATI is picked up as the primary card, so no hurdle there. EDIT: Hm, I just installed the Nvidia drivers and it picked up the third screen, no problem. Hope there aren't any major conflicts. I will post this as an answer and mark it correct when I'm able; I can't yet as a new user.

  • Mapping of memory addresses to physical modules in Windows XP

    - by Josef Grahn
    I plan to run 32-bit Windows XP on a workstation with dual processors based on Intel's Nehalem microarchitecture, and triple-channel RAM. Even though XP is limited to 4 GB of RAM, my understanding is that it will function with more than 4 GB installed, but will only expose 4 GB (or slightly less). My question is: assuming that 6 GB of RAM is installed as six 1 GB modules, which physical 4 GB will Windows actually map into its address space? In particular: Will it use all six 1 GB modules, taking advantage of all memory channels? (My guess is yes, and that the mapping to individual modules within a group happens in hardware.) Will it map 2 GB of address space to each of the two NUMA nodes (as each processor has its own memory interface), or will one processor get fast access to 3 GB of RAM while the other only has 1 GB? Thanks!

  • WCF deadlock when using callback channel

    - by mafutrct
    This is probably a simple mistake, but I could not figure out what was wrong. I basically have a method like this:

        [ServiceBehavior(
            ConcurrencyMode = ConcurrencyMode.Reentrant,
            InstanceContextMode = InstanceContextMode.PerSession,
            IncludeExceptionDetailInFaults = true)]
        public class Impl : SomeContract
        {
            public string Foo()
            {
                _CallbackChannel.Blah();
                return "";
            }
        }

    Its interface is decorated:

        [ServiceContract(
            Namespace = "http://MyServiceInterface",
            SessionMode = SessionMode.Required,
            CallbackContract = typeof(WcfCallbackContract))]
        public interface SomeContract
        {
            [OperationContract]
            string Foo();
        }

    The service is hosted like this:

        ServiceHost host = new ServiceHost(typeof(Impl));
        var binding = new NetTcpBinding();
        var address = new Uri("net.tcp://localhost:8000/");
        host.AddServiceEndpoint(typeof(SomeContract), binding, address);
        host.Open();

    The client implements the callback interface and calls Foo. Foo runs, calls the callback method, and returns. However, the client is still stuck in the call to Foo and never returns, and the client callback method is never run. I guess I made a design mistake somewhere. If needed, I can post more code. Any help is appreciated.
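    For what it's worth, a frequently cited cause of exactly this hang (a hedged pointer, not confirmed by the post): if the client invokes Foo from a UI thread, WCF tries to dispatch the callback on the same synchronization context that the blocked call is holding. The usual experiment is to decorate the client-side callback implementation like this:

        // Client-side callback implementation (class name is hypothetical);
        // turning off the captured SynchronizationContext lets the callback
        // run while Foo is still outstanding
        [CallbackBehavior(
            ConcurrencyMode = ConcurrencyMode.Reentrant,
            UseSynchronizationContext = false)]
        public class CallbackImpl : WcfCallbackContract
        {
            public void Blah() { /* handle the callback */ }
        }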

  • channel factory null on debug?

    - by Garrith
    When I try to invoke a GetData contract using WCF REST in the WCF Test Client, I get this message:

        The Address property on ChannelFactory.Endpoint was null. The ChannelFactory's
        Endpoint must have a valid Address specified.
        at System.ServiceModel.ChannelFactory.CreateEndpointAddress(ServiceEndpoint endpoint)
        at System.ServiceModel.ChannelFactory`1.CreateChannel()
        at System.ServiceModel.ClientBase`1.CreateChannel()
        at System.ServiceModel.ClientBase`1.CreateChannelInternal()
        at System.ServiceModel.ClientBase`1.get_Channel()
        at Service1Client.GetData(String value)

    This is the config file for the host:

        <system.serviceModel>
          <services>
            <service name="WcfService1.Service1" behaviorConfiguration="WcfService1.Service1Behavior">
              <!-- Service Endpoints -->
              <endpoint address="http://localhost:26535/Service1.svc"
                        binding="webHttpBinding"
                        contract="WcfService1.IService1"
                        behaviorConfiguration="webHttp">
                <!-- Upon deployment, the following identity element should be removed or replaced
                     to reflect the identity under which the deployed service runs. If removed,
                     WCF will infer an appropriate identity automatically. -->
                <identity>
                  <dns value="localhost"/>
                </identity>
              </endpoint>
              <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
            </service>
          </services>
          <behaviors>
            <serviceBehaviors>
              <behavior name="WcfService1.Service1Behavior">
                <!-- To avoid disclosing metadata information, set the value below to false
                     and remove the metadata endpoint above before deployment -->
                <serviceMetadata httpGetEnabled="true"/>
                <!-- To receive exception details in faults for debugging purposes, set the value
                     below to true. Set to false before deployment to avoid disclosing exception
                     information -->
                <serviceDebug includeExceptionDetailInFaults="false"/>
              </behavior>
            </serviceBehaviors>
            <endpointBehaviors>
              <behavior name="webHttp">
                <webHttp/>
              </behavior>
            </endpointBehaviors>
          </behaviors>
        </system.serviceModel>
        </configuration>

    Code:

        [ServiceContract(Namespace = "")]
        public interface IService1
        {
            //[WebInvoke(Method = "POST", UriTemplate = "Data?value={value}")]
            [OperationContract]
            [WebGet(UriTemplate = "/{value}")]
            string GetData(string value);

            [OperationContract]
            CompositeType GetDataUsingDataContract(CompositeType composite);

            // TODO: Add your service operations here
        }

        public class Service1 : IService1
        {
            public string GetData(string value)
            {
                return string.Format("You entered: {0}", value);
            }
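    A hedged side note (not from the post): the WCF Test Client drives endpoints through metadata-generated proxies and does not really understand webHttpBinding REST endpoints, which is one way a generated ClientBase can end up with a null endpoint address. A quick sanity test that bypasses the generated client, with the address taken from the config above, would be:

        // Manual channel with an explicit address and the WebHttp behavior;
        // requires a reference to System.ServiceModel.Web
        var factory = new ChannelFactory<IService1>(
            new WebHttpBinding(),
            new EndpointAddress("http://localhost:26535/Service1.svc"));
        factory.Endpoint.Behaviors.Add(new WebHttpBehavior());
        IService1 proxy = factory.CreateChannel();
        Console.WriteLine(proxy.GetData("123"));   // issues GET /Service1.svc/123

    Alternatively, hitting http://localhost:26535/Service1.svc/123 in a browser exercises the WebGet template without any client at all.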

  • Dealing with security on IPC remoting channel

    - by leppie
    Hi, I am trying to run a service under a different user account from the application that will access the service via remoting. While both run under the same account everything is fine, but as soon as I use different accounts, I get an access-denied error while trying to open the IPC port. Is there something I am missing? I can't see from the MSDN docs what is supposed to be done. Thanks
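    One direction worth checking (a hedged pointer, not confirmed by the post): IPC ports created by .NET remoting are ACL'd to the creating account by default, and the server-side channel exposes an authorizedGroup property for widening access. In config form, a sketch would look like this, with the port name and group as placeholders to adjust:

        <!-- Server-side fragment inside <system.runtime.remoting>/<application> -->
        <channels>
          <channel ref="ipc" portName="MyServicePort"
                   authorizedGroup="BUILTIN\Users" />
        </channels>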

  • Getting AveragePower and PeakPower for a Channel in AVAudioRecorder

    - by Biranchi
    Hi all, I am annoyed with this piece of code. I am trying to get averagePowerForChannel and peakPowerForChannel while recording audio, but every time I get 0.0. Below is my code for recording:

        NSMutableDictionary *recordSetting = [[NSDictionary alloc] initWithObjectsAndKeys:
            [NSNumber numberWithFloat:22050.0], AVSampleRateKey,
            [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
            [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
            [NSNumber numberWithInt:AVAudioQualityMax], AVEncoderAudioQualityKey,
            [NSNumber numberWithInt:32], AVLinearPCMBitDepthKey,
            [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
            [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
            nil];

        recorder1 = [[AVAudioRecorder alloc] initWithURL:[NSURL fileURLWithPath:audioFilePath]
                                                settings:recordSetting
                                                   error:&err];
        recorder1.meteringEnabled = YES;
        recorder1.delegate = self;
        [recorder1 prepareToRecord];
        [recorder1 record];

        levelTimer = [NSTimer scheduledTimerWithTimeInterval:0.3f
                                                      target:self
                                                    selector:@selector(levelTimerCallback:)
                                                    userInfo:nil
                                                     repeats:YES];

        - (void)levelTimerCallback:(NSTimer *)timer {
            [recorder1 updateMeters];
            NSLog(@"Peak Power : %f , %f",
                  [recorder1 peakPowerForChannel:0], [recorder1 peakPowerForChannel:1]);
            NSLog(@"Average Power : %f , %f",
                  [recorder1 averagePowerForChannel:0], [recorder1 averagePowerForChannel:1]);
        }

    What is the error in the code?

