Search Results

Search found 7359 results on 295 pages for 'triple channel ram'.


  • My web server with 16GB of RAM shows all RAM as used, but is it really? See the 'top' output

    - by Alex
    I have some questions about my web server. It's a LAMP server running CentOS 5.5 with PHP 5 and MySQL 5. The server gets hundreds (maybe thousands) of concurrent users during peak hours. I'm trying to optimize a little and to understand "top". From what I can see, all 16GB of my RAM has been used up; does that mean my server needs more memory? My swap is only 2GB; should it be increased? During peak hours the first load-average number is usually about 2.5-3. What could I do to optimize the server so that the load average doesn't go above 1 even during peaks? In the past I was told a good working server should stay under a load of 1; is this still true? Even at a load of 2.5-3, pages and applications seem to load with pretty good speed. Finally, what should the memory limit in php.ini be set to?

        top - 14:30:18 up 2 days, 12:41, 5 users, load average: 1.25, 1.74, 2.92
        Tasks: 305 total, 2 running, 302 sleeping, 0 stopped, 1 zombie
        Cpu(s): 6.3%us, 0.9%sy, 0.0%ni, 92.5%id, 0.2%wa, 0.0%hi, 0.1%si, 0.0%st
        Mem: 16427200k total, 16111472k used, 315728k free, 3120316k buffers
        Swap: 2104496k total, 268k used, 2104228k free, 6216756k cached

          PID USER   PR NI VIRT  RES  SHR  S %CPU %MEM TIME+   COMMAND
        29080 apache 15  0 358m  36m  5192 S 20.2  0.2 2:08.40 httpd
        29093 apache 18  0 357m  36m  5192 S 18.2  0.2 2:02.52 httpd
        29079 apache 15  0 370m  49m  5832 S 10.0  0.3 2:32.14 httpd
         1812 apache 15  0 370m  49m  5196 S  7.3  0.3 2:25.30 httpd
         5204 apache 15  0 358m  36m  5168 S  5.3  0.2 0:59.28 httpd
        29075 apache 15  0 370m  48m  5184 S  3.3  0.3 2:15.93 httpd
         9712 apache 15  0 360m  38m  5180 S  3.0  0.2 0:54.81 httpd
        29072 apache 16  0 358m  36m  5192 S  2.7  0.2 2:24.43 httpd
         6310 apache 17  0 388m  67m  5180 S  2.3  0.4 0:58.85 httpd
         8674 apache 15  0 343m  21m  4980 S  2.0  0.1 0:07.91 httpd
        29085 apache 15  0 371m  49m  5224 S  2.0  0.3 2:16.86 httpd
        29083 apache 15  0 370m  48m  5196 S  1.7  0.3 2:10.64 httpd
         5575 apache 15  0 357m  36m  5228 S  1.3  0.2 0:53.78 httpd
        29066 apache 15  0 379m  59m  5860 R  1.3  0.4 2:11.93 httpd
        29078 apache 15  0 370m  48m  5188 S  1.3  0.3 2:14.52 httpd
        29084 apache 15  0 370m  48m  5208 S  1.0  0.3 2:02.49 httpd
        29089 apache 15  0 370m  48m  5188 S  1.0  0.3 2:27.61 httpd
        29082 apache 15  0 390m  68m  5188 S  0.7  0.4 2:32.48 httpd
        29984 apache 15  0 358m  36m  5228 S  0.7  0.2 2:08.32 httpd
         3571 root   16  0 13400 1792 848  S  0.3  0.0 2:37.89 top
         4419 mysql  15  0 668m  175m 7204 S  0.3  1.1 3:32.25 mysqld
        28181 root   15  0 90460 3624 2680 S  0.3  0.0 0:17.60 sshd
        29091 apache 15  0 390m  69m  5196 S  0.3  0.4 2:29.99 httpd
        32476 root   15  0 12900 1320 848  R  0.3  0.0 0:06.46 top
            1 root   15  0 10372 680  572  S  0.0  0.0 0:02.01 init
            2 root   RT -5  0    0    0    S  0.0  0.0 0:00.51 migration/0
            3 root   34 19  0    0    0    S  0.0  0.0 0:00.07 ksoftirqd/0
            4 root   RT -5  0    0    0    S  0.0  0.0 0:00.00 watchdog/0
            5 root   RT -5  0    0    0    S  0.0  0.0 0:00.12 migration/1
            6 root   34 19  0    0    0    S  0.0  0.0 0:00.03 ksoftirqd/1
            7 root   RT -5  0    0    0    S  0.0  0.0 0:00.00 watchdog/1
            8 root   RT -5  0    0    0    S  0.0  0.0 0:00.06 migration/2
            9 root   34 19  0    0    0    S  0.0  0.0 0:00.03 ksoftirqd/2
           10 root   RT -5  0    0    0    S  0.0  0.0 0:00.00 watchdog/2
           11 root   RT -5  0    0    0    S  0.0  0.0 0:00.06 migration/3
           12 root   34 19  0    0    0    S  0.0  0.0 0:00.04 ksoftirqd/3
           13 root   RT -5  0    0    0    S  0.0  0.0 0:00.00 watchdog/3
           14 root   RT -5  0    0    0    S  0.0  0.0 0:01.45 migration/4
           15 root   34 19  0    0    0    S  0.0  0.0 0:00.01 ksoftirqd/4
           16 root   RT -5  0    0    0    S  0.0  0.0 0:00.00 watchdog/4
           17 root   RT -5  0    0    0    S  0.0  0.0 0:00.22 migration/5
           18 root   34 19  0    0    0    S  0.0  0.0 0:00.01 ksoftirqd/5
           19 root   RT -5  0    0    0    S  0.0  0.0 0:00.00 watchdog/5
           20 root   RT -5  0    0    0    S  0.0  0.0 0:00.15 migration/6
           21 root   34 19  0    0    0    S  0.0  0.0 0:00.02 ksoftirqd/6
           22 root   RT -5  0    0    0    S  0.0  0.0 0:00.00 watchdog/6
           23 root   RT -5  0    0    0    S  0.0  0.0 0:00.15 migration/7
           24 root   34 19  0    0    0    S  0.0  0.0 0:00.01 ksoftirqd/7
           25 root   RT -5  0    0    0    S  0.0  0.0 0:00.00 watchdog/7
           26 root   RT -5  0    0    0    S  0.0  0.0 0:00.19 migration/8
           27 root   34 19  0    0    0    S  0.0  0.0 0:00.04 ksoftirqd/8
           28 root   RT -5  0    0    0    S  0.0  0.0 0:00.00 watchdog/8
           29 root   RT -5  0    0    0    S  0.0  0.0 0:00.34 migration/9
           30 root   34 19  0    0    0    S  0.0  0.0 0:00.03 ksoftirqd/9
           31 root   RT -5  0    0    0    S  0.0  0.0 0:00.00 watchdog/9
           32 root   RT -5  0    0    0    S  0.0  0.0 0:00.16 migration/10
           33 root   34 19  0    0    0    S  0.0  0.0 0:00.04 ksoftirqd/10
           34 root   RT -5  0    0    0    S  0.0  0.0 0:00.00 watchdog/10
           35 root   RT -5  0    0    0    S  0.0  0.0 0:00.12 migration/11
           36 root   34 19  0    0    0    S  0.0  0.0 0:00.05 ksoftirqd/11
           37 root   RT -5  0    0    0    S  0.0  0.0 0:00.00 watchdog/11
           38 root   RT -5  0    0    0    S  0.0  0.0 0:00.35 migration/12
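
    For reading those numbers: on Linux, "buffers" and "cached" are reclaimable page cache, so the memory applications actually hold is roughly used minus buffers minus cached. A minimal sketch, assuming the classic procps "free -k" column layout shown above:

        # application-held memory, treating buffers/cache as effectively free
        free -k | awk '/^Mem:/ { printf "apps: %d MB of %d MB total\n", ($3-$6-$7)/1024, $2/1024 }'

    On the figures above that works out to (16111472 - 3120316 - 6216756) kB, roughly 6.5GB actually held by processes, so the box is nowhere near out of memory.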

    Read the article

  • Calling CreateFile, specifying FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE

    - by alexander-daniels
    Before I describe my problem, here is a description of the program I'm writing. It is a C++ application whose purpose is to create a file in RAM. I read that if I specify FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE when creating a file, it will be kept directly in RAM. One of the blogs that talks about this is: http://blogs.msdn.com/larryosterman/archive/2004/04/19/116084.aspx I have built a mini-program, but it does not achieve the goal; instead, it creates a file on the hard drive in the directory I specify. Here's my program:

        void main()
        {
            LPCWSTR str = L"c:\\temp.txt";
            HANDLE fh = CreateFile(str, GENERIC_WRITE, 0, NULL, CREATE_ALWAYS,
                                   FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE, NULL);
            if (fh == INVALID_HANDLE_VALUE)
            {
                printf("Could not open TWO.TXT");
                return;
            }
            DWORD dwBytesWritten;
            for (long i = 0; i < 20000000; i++)
            {
                WriteFile(fh, "This is a test\r\n", 16, &dwBytesWritten, NULL);
            }
            return;
        }

    I think there is a problem in the CreateFile call, but I can't fix it. Please help me.

    Read the article

  • Itautec Accelerates Profitable High Tech Customer Service

    - by charles.knapp
    Itautec is a Brazilian-based global high-technology products and services firm with a strong performance in the global banking and commercial automation market, with more than 2,300 clients worldwide. It recently deployed Siebel CRM for sales, customer support, and field service. In the first year of use, Siebel CRM enabled 30% growth in services revenue. Siebel CRM also reduced support costs. "Oracle's Siebel CRM has minimized costs and made our customer service more agile," said Adriano Rodrigues da Silva, IT Manager. "Before deployment, 95% of our customer service contacts were made by phone. Siebel CRM made it possible to expand choices, so that now 55% of our customers contact our helpdesk through the newer communications channels." Read more here about Itautec's success, and learn more here about how Siebel CRM can help your firm grow customer service revenues, improve service levels, and reduce costs.

    Read the article

  • OPN PartnerCasts with Stein Surlin

    - by Claudia Costa
    Since April 2009, Stein Surlin, Senior VP Alliances & Channels EMEA, has conducted a series of interviews with Oracle EMEA executives, discussing how the Oracle EMEA organization is aligned to work together with its partners and to support them in winning new business opportunities. At this link you can watch the OPN PartnerCasts. Here you will find the most up-to-date news on Oracle's partner strategy, business opportunities, product strategies and much more! Keep an eye on the OPN PartnerCasts portal; more podcasts are on the way!

    Read the article

  • How to reduce RAM consumption when my server is idle

    - by Julien Genestoux
    We use Slicehost, with 512MB instances, and run Ubuntu 9.10 on them. I installed a few packages, and I'm now trying to optimize RAM consumption before running anything there. A simple ps gives me the list of running processes:

        # ps faux
        USER   PID   %CPU %MEM VSZ    RSS  TTY   STAT START TIME COMMAND
        root   2     0.0  0.0  0      0    ?     S<   Jan04 0:00 [kthreadd]
        root   3     0.0  0.0  0      0    ?     S<   Jan04 0:15  \_ [migration/0]
        root   4     0.0  0.0  0      0    ?     S<   Jan04 0:01  \_ [ksoftirqd/0]
        root   5     0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [watchdog/0]
        root   6     0.0  0.0  0      0    ?     S<   Jan04 0:04  \_ [events/0]
        root   7     0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [cpuset]
        root   8     0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [khelper]
        root   9     0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [async/mgr]
        root   10    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [xenwatch]
        root   11    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [xenbus]
        root   13    0.0  0.0  0      0    ?     S<   Jan04 0:02  \_ [migration/1]
        root   14    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [ksoftirqd/1]
        root   15    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [watchdog/1]
        root   16    0.0  0.0  0      0    ?     S<   Jan04 0:07  \_ [events/1]
        root   17    0.0  0.0  0      0    ?     S<   Jan04 0:02  \_ [migration/2]
        root   18    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [ksoftirqd/2]
        root   19    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [watchdog/2]
        root   20    0.0  0.0  0      0    ?     R<   Jan04 0:07  \_ [events/2]
        root   21    0.0  0.0  0      0    ?     S<   Jan04 0:04  \_ [migration/3]
        root   22    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [ksoftirqd/3]
        root   23    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [watchdog/3]
        root   24    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [events/3]
        root   25    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [kintegrityd/0]
        root   26    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [kintegrityd/1]
        root   27    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [kintegrityd/2]
        root   28    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [kintegrityd/3]
        root   29    0.0  0.0  0      0    ?     S<   Jan04 0:01  \_ [kblockd/0]
        root   30    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [kblockd/1]
        root   31    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [kblockd/2]
        root   32    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [kblockd/3]
        root   33    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [kseriod]
        root   34    0.0  0.0  0      0    ?     S    Jan04 0:00  \_ [khungtaskd]
        root   35    0.0  0.0  0      0    ?     S    Jan04 0:05  \_ [pdflush]
        root   36    0.0  0.0  0      0    ?     S    Jan04 0:06  \_ [pdflush]
        root   37    0.0  0.0  0      0    ?     S<   Jan04 1:02  \_ [kswapd0]
        root   38    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [aio/0]
        root   39    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [aio/1]
        root   40    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [aio/2]
        root   41    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [aio/3]
        root   42    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [jfsIO]
        root   43    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [jfsCommit]
        root   44    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [jfsCommit]
        root   45    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [jfsCommit]
        root   46    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [jfsCommit]
        root   47    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [jfsSync]
        root   48    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [xfs_mru_cache]
        root   49    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [xfslogd/0]
        root   50    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [xfslogd/1]
        root   51    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [xfslogd/2]
        root   52    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [xfslogd/3]
        root   53    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [xfsdatad/0]
        root   54    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [xfsdatad/1]
        root   55    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [xfsdatad/2]
        root   56    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [xfsdatad/3]
        root   57    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [xfsconvertd/0]
        root   58    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [xfsconvertd/1]
        root   59    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [xfsconvertd/2]
        root   60    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [xfsconvertd/3]
        root   61    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [glock_workqueue]
        root   62    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [glock_workqueue]
        root   63    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [glock_workqueue]
        root   64    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [glock_workqueue]
        root   65    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [delete_workqueu]
        root   66    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [delete_workqueu]
        root   67    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [delete_workqueu]
        root   68    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [delete_workqueu]
        root   69    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [kslowd]
        root   70    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [kslowd]
        root   71    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [crypto/0]
        root   72    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [crypto/1]
        root   73    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [crypto/2]
        root   74    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [crypto/3]
        root   77    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [net_accel/0]
        root   78    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [net_accel/1]
        root   79    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [net_accel/2]
        root   80    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [net_accel/3]
        root   81    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [sfc_netfront/0]
        root   82    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [sfc_netfront/1]
        root   83    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [sfc_netfront/2]
        root   84    0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [sfc_netfront/3]
        root   310   0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [kstriped]
        root   315   0.0  0.0  0      0    ?     S<   Jan04 0:00  \_ [ksnapd]
        root   1452  0.0  0.0  0      0    ?     S<   Jan04 4:31  \_ [kjournald]
        root   1     0.0  0.1  19292  948  ?     Ss   Jan04 0:15 /sbin/init
        root   1545  0.0  0.1  13164  1064 ?     S    Jan04 0:00 upstart-udev-bridge --daemon
        root   1547  0.0  0.1  17196  996  ?     S<s  Jan04 0:00 udevd --daemon
        root   1728  0.0  0.2  20284  1468 ?     S<   Jan04 0:00  \_ udevd --daemon
        root   1729  0.0  0.1  17192  792  ?     S<   Jan04 0:00  \_ udevd --daemon
        root   1881  0.0  0.0  8192   152  ?     Ss   Jan04 0:00 dd bs=1 if=/proc/kmsg of=/var/run/rsyslog/kmsg
        syslog 1884  0.0  0.2  185252 1200 ?     Sl   Jan04 1:00 rsyslogd -c4
        103    1894  0.0  0.1  23328  700  ?     Ss   Jan04 1:08 dbus-daemon --system --fork
        root   2046  0.0  0.0  136    32   ?     Ss   Jan04 4:05 runsvdir -P /etc/service log: gems/custom_require.rb:31:in `require'??from /mnt/app/superfeedr-firehoser/current/script/component:52?/opt/ruby-enterprise/lib/ruby/si
        root   2055  0.0  0.0  112    32   ?     Ss   Jan04 0:00  \_ runsv chef-client
        root   2060  0.0  0.0  132    40   ?     S    Jan04 0:02  |   \_ svlogd -tt ./main
        root   2056  0.0  0.0  112    28   ?     Ss   Jan04 0:20  \_ runsv superfeedr-firehoser_2
        root   2059  0.0  0.0  132    40   ?     S    Jan04 0:29  |   \_ svlogd /var/log/superfeedr-firehoser_2
        root   2057  0.0  0.0  112    28   ?     Ss   Jan04 0:20  \_ runsv superfeedr-firehoser_1
        root   2062  0.0  0.0  132    44   ?     S    Jan04 0:26      \_ svlogd /var/log/superfeedr-firehoser_1
        root   2058  0.0  0.0  18708  316  ?     Ss   Jan04 0:01 cron
        root   2095  0.0  0.1  49072  764  ?     Ss   Jan04 0:06 /usr/sbin/sshd
        root   9832  0.0  0.5  78916  3500 ?     Ss   00:37 0:00  \_ sshd: root@pts/0
        root   9846  0.0  0.3  17900  2036 pts/0 Ss   00:37 0:00      \_ -bash
        root   10132 0.0  0.1  15020  1064 pts/0 R+   09:51 0:00          \_ ps faux
        root   2180  0.0  0.0  5988   140  tty1  Ss+  Jan04 0:00 /sbin/getty -8 38400 tty1
        root   27610 0.0  1.4  47060  8436 ?     S    Apr04 2:21 python /usr/sbin/denyhosts --daemon --purge --config=/etc/denyhosts.conf --config=/etc/denyhosts.conf
        root   22640 0.0  0.7  119244 4164 ?     Ssl  Apr05 0:05 /usr/sbin/console-kit-daemon
        root   10113 0.0  0.0  3904   316  ?     Ss   09:46 0:00 /usr/sbin/collectdmon -P /var/run/collectdmon.pid -- -C /etc/collectd/collectd.conf
        root   10114 0.0  0.2  201084 1464 ?     Sl   09:46 0:00  \_ collectd -C /etc/collectd/collectd.conf -f

    As you can see, there is nothing serious here. If I sum up the RSS column over all of this, I get the following:

        # ps -aeo rss | awk '{sum+=$1} END {print sum}'
        30096

    which makes sense. However, I got a pretty big surprise when I ran free:

        # free
                     total       used       free     shared    buffers     cached
        Mem:        591180     343684     247496          0      25432     161256
        -/+ buffers/cache:     156996     434184
        Swap:      1048568          0    1048568

    As you can see, 60% of the available memory is already consumed, which leaves me with only 40% to run my own applications if I want to avoid swapping. Quite disappointing! Two questions arise: where is all this memory, and how can I take some of it back for my own apps?
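
    One way to see that most of that "used" memory is reclaimable page cache rather than application memory is to drop the caches and run free again. A throwaway sketch (harmless, but it empties the caches, so things will be briefly slower afterwards):

        # demonstration only: flush reclaimable caches, then re-check memory
        sync
        echo 3 > /proc/sys/vm/drop_caches
        free -m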

    Read the article

  • Protocol-specific channel handlers

    - by Mickael Marrache
    I'm writing an application server that will receive SIP and DNS messages from the network. When I receive a message, I understand from the documentation that, at first, I get a ChannelBuffer. I would like to determine which kind of message has been received (SIP or DNS) and to decode it. To determine the message type I could dedicate a port to each type of message, but I would be interested to know if there is another solution for that. My question is more about how to decode the ChannelBuffer. Is there a ChannelHandler provided by Netty to decode SIP or DNS messages? If not, what would be the right place in the type hierarchy for my custom ChannelHandler? To illustrate my question, let's take the HttpRequestDecoder as an example; its hierarchy is:

        java.lang.Object
          org.jboss.netty.channel.SimpleChannelUpstreamHandler
            org.jboss.netty.handler.codec.frame.FrameDecoder
              org.jboss.netty.handler.codec.replay.ReplayingDecoder<HttpMessageDecoder.State>
                org.jboss.netty.handler.codec.http.HttpMessageDecoder
                  org.jboss.netty.handler.codec.http.HttpRequestDecoder

    Also, do I need two different ChannelHandlers for decoding and encoding, or is there a possibility to use a single ChannelHandler for both? Thanks

    Read the article

  • How to specify the MQ channel table location for a .NET web application using web.config

    - by Matt
    I've been going around in circles for a while on this one now. I'm trying to connect to a distributed queue manager using a supplied channel table file. I can get this to work if I specify the environment variables MQCHLLIB and MQCHLTAB on my server. However, the IBM documentation states that the .NET config file can override these variables. Here is what I have placed in my web.config file:

        ...
        <configSections>
          <section name="CHANNELS" type="System.Configuration.NameValueSectionHandler" />
        </configSections>
        <CHANNELS>
          <add key="ChannelDefinitionDirectory" value="C:\temp"></add>
          <add key="ChannelDefinitionFile" value="DSM_MOM_TEST.tab"></add>
        </CHANNELS>
        ...

    And here is the code that is executing:

        Hashtable properties = new Hashtable();
        // Add managed connection type to parameters.
        const String connectionType = MQC.TRANSPORT_MQSERIES_CLIENT;
        properties.Add(MQC.TRANSPORT_PROPERTY, connectionType);
        return new MQQueueManager(queueManagerName, properties);

    queueManagerName is set to the generic queue manager "*Q101T". However, this isn't working and I get an error back: 2058 MQRC_Q_MGR_NAME_ERROR. I've been unable to find any more documentation on how to get this to work beyond the environment variables; the standard mqclient.ini should be overridden by the CHANNELS stanza in the web.config. Is there something I've missed in the code? Any tips would be greatly appreciated.

    Read the article

  • workaround for ORA-03113: end-of-file on communication channel

    - by Jefferstone
    The call to TEST_FUNCTION below fails with "ORA-03113: end-of-file on communication channel". A workaround is presented in TEST_FUNCTION2. I boiled down the code, as my actual function is far more complex. Tested on Oracle 11g. Anyone have any idea why the first function fails?

        CREATE OR REPLACE TYPE "EMPLOYEE" AS OBJECT (
          employee_id NUMBER(38),
          hire_date   DATE
        );

        CREATE OR REPLACE TYPE "EMPLOYEE_TABLE" AS TABLE OF EMPLOYEE;

        CREATE OR REPLACE FUNCTION TEST_FUNCTION RETURN EMPLOYEE_TABLE IS
          table1       EMPLOYEE_TABLE;
          table2       EMPLOYEE_TABLE;
          return_table EMPLOYEE_TABLE;
        BEGIN
          SELECT CAST(MULTISET (
            SELECT user_id, created FROM all_users WHERE LOWER(username) < 'm'
          ) AS EMPLOYEE_TABLE) INTO table1 FROM dual;

          SELECT CAST(MULTISET (
            SELECT user_id, created FROM all_users WHERE LOWER(username) >= 'm'
          ) AS EMPLOYEE_TABLE) INTO table2 FROM dual;

          SELECT CAST(MULTISET (
            SELECT employee_id, hire_date FROM TABLE(table1)
            UNION
            SELECT employee_id, hire_date FROM TABLE(table2)
          ) AS EMPLOYEE_TABLE) INTO return_table FROM dual;

          RETURN return_table;
        END TEST_FUNCTION;

        CREATE OR REPLACE FUNCTION TEST_FUNCTION2 RETURN EMPLOYEE_TABLE IS
          table1       EMPLOYEE_TABLE;
          table2       EMPLOYEE_TABLE;
          return_table EMPLOYEE_TABLE;
        BEGIN
          SELECT CAST(MULTISET (
            SELECT user_id, created FROM all_users WHERE LOWER(username) < 'm'
          ) AS EMPLOYEE_TABLE) INTO table1 FROM dual;

          SELECT CAST(MULTISET (
            SELECT user_id, created FROM all_users WHERE LOWER(username) >= 'm'
          ) AS EMPLOYEE_TABLE) INTO table2 FROM dual;

          WITH combined AS (
            SELECT employee_id, hire_date FROM TABLE(table1)
            UNION
            SELECT employee_id, hire_date FROM TABLE(table2)
          )
          SELECT CAST(MULTISET (
            SELECT * FROM combined
          ) AS EMPLOYEE_TABLE) INTO return_table FROM dual;

          RETURN return_table;
        END TEST_FUNCTION2;

        SELECT * FROM TABLE (TEST_FUNCTION());  -- Throws exception ORA-03113.
        SELECT * FROM TABLE (TEST_FUNCTION2()); -- Works

    Read the article

  • SSH X11 not working

    - by azat
    I have a home and a work computer; the home computer has a static IP address. If I ssh from my work computer to my home computer, the ssh connection works, but X11 applications are not displayed. In my /etc/ssh/sshd_config at home:

        X11Forwarding yes
        X11DisplayOffset 10
        X11UseLocalhost yes

    At work I have tried the following commands:

        xhost + home HOME_IP
        ssh -X home
        ssh -X HOME_IP
        ssh -Y home
        ssh -Y HOME_IP

    My /etc/ssh/ssh_config at work:

        Host *
            ForwardX11 yes
            ForwardX11Trusted yes

    My ~/.ssh/config at work:

        Host home
            HostName HOME_IP
            User azat
            PreferredAuthentications password
            ForwardX11 yes

    My ~/.Xauthority at work:

        -rw------- 1 azat azat 269 Jun 7 11:25 .Xauthority

    My ~/.Xauthority at home:

        -rw------- 1 azat azat 246 Jun 7 19:03 .Xauthority

    But it doesn't work. After I make an ssh connection to home:

        $ echo $DISPLAY
        localhost:10.0
        $ kate
        X11 connection rejected because of wrong authentication.
        X11 connection rejected because of wrong authentication.
        X11 connection rejected because of wrong authentication.
        X11 connection rejected because of wrong authentication.
        X11 connection rejected because of wrong authentication.
        X11 connection rejected because of wrong authentication.
        X11 connection rejected because of wrong authentication.
        X11 connection rejected because of wrong authentication.
        kate: cannot connect to X server localhost:10.0

    I use iptables at home, but I've allowed port 22. According to what I've read, that's all I need.

    UPD: With -vvv:

        ...
        debug2: callback start
        debug2: x11_get_proto: /usr/bin/xauth list :0 2>/dev/null
        debug1: Requesting X11 forwarding with authentication spoofing.
        debug2: channel 1: request x11-req confirm 1
        debug2: client_session2_setup: id 1
        debug2: fd 3 setting TCP_NODELAY
        debug2: channel 1: request pty-req confirm 1
        ...

    When I try to launch kate:

        debug1: client_input_channel_open: ctype x11 rchan 2 win 65536 max 16384
        debug1: client_request_x11: request from 127.0.0.1 55486
        debug2: fd 8 setting O_NONBLOCK
        debug3: fd 8 is O_NONBLOCK
        debug1: channel 2: new [x11]
        debug1: confirm x11
        debug2: X11 connection uses different authentication protocol.
        X11 connection rejected because of wrong authentication.
        debug2: X11 rejected 2 i0/o0
        debug2: channel 2: read failed
        debug2: channel 2: close_read
        debug2: channel 2: input open - drain
        debug2: channel 2: ibuf empty
        debug2: channel 2: send eof
        debug2: channel 2: input drain - closed
        debug2: channel 2: write failed
        debug2: channel 2: close_write
        debug2: channel 2: output open - closed
        debug2: X11 closed 2 i3/o3
        debug2: channel 2: send close
        debug2: channel 2: rcvd close
        debug2: channel 2: is dead
        debug2: channel 2: garbage collecting
        debug1: channel 2: free: x11, nchannels 3
        debug3: channel 2: status: The following connections are open:
          #1 client-session (t4 r0 i0/0 o0/0 fd 5/6 cc -1)
          #2 x11 (t7 r2 i3/0 o3/0 fd 8/8 cc -1)
        # the same as above repeats about 7 times
        kate: cannot connect to X server localhost:10.0

    UPD2 (answers to questions from the comments):

    Q: Please provide your Linux distribution and version number. Are you using a default GNOME or KDE environment for X, or something you customized yourself?
    A:

        azat:~$ kded4 -version
        Qt: 4.7.4
        KDE Development Platform: 4.6.5 (4.6.5)
        KDE Daemon: $Id$

    Q: Are you invoking ssh directly on a command line from a terminal window? What terminal are you using (xterm, gnome-terminal, ...)? How did you start the terminal running in the X environment: from a menu, a hotkey, or something else?
    A: From the terminal emulator yakuake; I manually press Ctrl+N and type the commands.

    Q: Can you run xeyes from the same terminal window where the ssh -X fails?
    A: xeyes is not installed, but kate or another KDE app does run.

    Q: Are you invoking the ssh command as the same user that you're logged into the X session as?
    A: Yes, the same user.

    UPD3: I also downloaded the ssh sources and, using debug2(), logged why it reports that the authentication protocol is different: it sees some cookies, and one of them is empty while the other is MIT-MAGIC-COOKIE-1.
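
    "X11 connection rejected because of wrong authentication" generally means the xauth cookie for the forwarded display is stale or mismatched, which fits the empty cookie seen in UPD3. A minimal cleanup sketch (assumes the xauth binary is present on the home machine, since sshd needs it to install the cookie for the forwarded display):

        # on the home machine: inspect cookies, then move the stale file aside;
        # sshd regenerates a cookie for localhost:10.0 at the next ssh -X login
        xauth list
        mv ~/.Xauthority ~/.Xauthority.bak
        # then reconnect from work with: ssh -X home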

    Read the article

  • C# class code loaded in RAM?

    - by Spi1988
    Hi, I would like to know whether the actual code of a C# class gets loaded into RAM when you instantiate the class. For example, say I have two classes, ClassA and ClassB, where ClassA has 10,000 lines of code but just one field (an int), and ClassB has 10 lines of code and also one int field. If I instantiate ClassA, will it take more RAM than ClassB because of its lines of code? A supplementary question: if the code is loaded into memory together with the class, will it be loaded for every instance of the class, or just once for all instances? Thanks in advance.

    Read the article

  • Are there pitfalls to using incompatible RAM (frequencies) in motherboards?

    - by osij2is
    I'd like to use 2 x 4GB DDR3-1600 DIMMs in a motherboard that only supports DDR3-1066. The DDR3-1600 is on sale and costs the same as the 1066 DIMMs, and it would be nice to have the faster sticks around should I upgrade the motherboard. I assume the RAM can underclock itself or be changed in the BIOS. While it's obviously a less-than-ideal situation, I don't know if there are unintended consequences in terms of stability, performance, or the longevity of the board and the RAM. Am I doing any damage to the memory controller or the RAM? I've always bought RAM at the maximum speed specified for the motherboard and never gone over, so I'm not sure if there are any caveats to this at all. Edit: I intend to use the RAM in matched pairs. I know that mixing RAM speeds is just a bad idea.

    Read the article

  • RHN Satellite / Spacewalk custom channels, best practice?

    - by tore-
    I'm currently setting up RHN Satellite, and all works well. I'm in the process of creating custom channels, since we have certain software that should be available to all nodes of the satellite, e.g. puppet, facter, subversion, and php (a newer version than present in the base channel). I've tried to find documentation on best practices for this: how the channels should be set up, how to handle different arches, how to handle noarch packages, and how to sync updated dependencies when updating a custom package in a custom channel (e.g. php is updated; how to fetch all of its updated dependencies). The channel management documentation from Red Hat (http://www.redhat.com/docs/en-US/Red_Hat_Network_Satellite/5.3/Channel_Management_Guide/html/Channel_Management_Guide-Custom_Channel_and_Package_Management.html) doesn't provide me with enough information to solve any of these issues. All tips, tricks and information regarding this would be great!
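
    As a hedged sketch of the usual package workflow (the channel label, server URL, and directory below are hypothetical; rhnpush comes from the Satellite/Spacewalk client tooling):

        # push locally built RPMs (including noarch) into a custom channel
        rhnpush --server=http://satellite.example.com/APP \
                --channel=custom-tools-el5 \
                --dir=/srv/rpms/el5

    Note that dependencies are not resolved at push time; clients resolve them at install time across all channels they are subscribed to, which is why updated dependencies (e.g. for a newer php) have to be pushed into the custom channel alongside the package itself.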

    Read the article

  • How much RAM does 64-bit Windows 8 reserve for internal OS use?

    - by Barleyman
    Windows reserves some memory for its internal use which is not normally allocated to applications. This reserve is seen most easily if you run without a page file or limit the page file to a relatively small size (such as 3GB). Windows will allocate primarily RAM up to the limit, fill up the remaining free space in the page file (if any), and issue a low-memory warning when there is no page file space left and the allocated RAM limit is exceeded. The limit appears to be a percentage of the total system RAM. The Windows 7 x64 limit is discussed here, and methods for circumventing the low-memory warning are discussed here. Disabling the low-memory warning has an advantage: you can use some 600MB more RAM on an 8GB machine. But there is a serious disadvantage: when you're out of RAM, programs will crash. How much RAM can you allocate on 8GB Windows 8 x64 before you get the low-memory warning? Is it possible to adjust the warning threshold?

    Read the article

  • How to monitor RAM usage for Hyper-V VMs?

    - by Mac
    A bit of context first: on Windows 2008 Standard x64 with 8GB of RAM, I have 5 VMs running which should take up 1664MB of RAM (3*256MB + 384MB + 512MB). There is nothing else running on this server except the basic OS components (this is not a Core installation). I know that each VM will use more RAM on the host than what has been configured in Hyper-V. But when I run Task Manager, it says 6.7GB used! If I sum up the RAM used by each process in Task Manager (showing all users' processes), I get something around 1GB... So: how can I check how much RAM each VM is really using on the host (it does not seem to be available via Task Manager)? Note that I am aware my problem could be unrelated to VM RAM usage, but I would still very much like to know how to do this.

    Read the article

  • Drawing a texture with an alpha channel doesn't work -- draws black

    - by DevDevDev
    I am modifying GLPaint to use a different background, in this case white. The existing stamp they are using assumes the background is black, so I made a new one with an alpha channel. When I draw on the canvas, it is still black; what gives? When I actually draw I just bind the texture, and that part works, so something must be wrong in this initialization. Here is the code:

        - (id)initWithCoder:(NSCoder*)coder {
            CGImageRef brushImage;
            CGContextRef brushContext;
            GLubyte *brushData;
            size_t width, height;

            if (self = [super initWithCoder:coder]) {
                CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
                eaglLayer.opaque = YES;
                // In this application, we want to retain the EAGLDrawable contents after a call to presentRenderbuffer.
                eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                    [NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking,
                    kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
                context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
                if (!context || ![EAGLContext setCurrentContext:context]) {
                    [self release];
                    return nil;
                }
                // Create a texture from an image
                // First create a UIImage object from the data in a image file, and then extract the Core Graphics image
                brushImage = [UIImage imageNamed:@"test.png"].CGImage;
                // Get the width and height of the image
                width = CGImageGetWidth(brushImage);
                height = CGImageGetHeight(brushImage);
                // Texture dimensions must be a power of 2. If you write an application that allows users to supply an image,
                // you'll want to add code that checks the dimensions and takes appropriate action if they are not a power of 2.
                // Make sure the image exists
                if (brushImage) {
                    brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
                    brushContext = CGBitmapContextCreate(brushData, width, width, 8, width * 4,
                        CGImageGetColorSpace(brushImage), kCGImageAlphaPremultipliedLast);
                    CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushImage);
                    CGContextRelease(brushContext);
                    glGenTextures(1, &brushTexture);
                    glBindTexture(GL_TEXTURE_2D, brushTexture);
                    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
                    free(brushData);
                }
                // Set up OpenGL states
                glMatrixMode(GL_PROJECTION);
                CGRect frame = self.bounds;
                glOrthof(0, frame.size.width, 0, frame.size.height, -1, 1);
                glViewport(0, 0, frame.size.width, frame.size.height);
                glMatrixMode(GL_MODELVIEW);
                glDisable(GL_DITHER);
                glEnable(GL_TEXTURE_2D);
                glEnable(GL_BLEND);
                glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA);
                glEnable(GL_POINT_SPRITE_OES);
                glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
                glPointSize(width / kBrushScale);
            }
            return self;
        }

    Read the article

  • WCF service using duplex channel in different domains

    - by ds1
    I have a WCF service and a Windows client. They communicate via a duplex WCF channel, which runs fine when everything is within a single network domain, but when I put the server on a separate network domain I get the following message in the WCF server trace:

        The message with To 'net.tcp://abc:8731/ActiveAreaService/mex/mex' cannot be processed at the receiver,
        due to an AddressFilter mismatch at the EndpointDispatcher. Check that the sender and receiver's
        EndpointAddresses agree.

    So it looks like communication works in only one direction (from client to server) if the components are in two separate domains. The network domains are fully trusted, so I'm a little confused as to what else could cause this. Server app.config:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.serviceModel>
            <behaviors>
              <serviceBehaviors>
                <behavior name="JobController.ActiveAreaBehavior">
                  <serviceMetadata httpGetEnabled="false" />
                  <serviceDebug includeExceptionDetailInFaults="true" />
                </behavior>
              </serviceBehaviors>
            </behaviors>
            <services>
              <service behaviorConfiguration="JobController.ActiveAreaBehavior" name="JobController.ActiveAreaServer">
                <endpoint address="mex" binding="mexTcpBinding" bindingConfiguration="" contract="IMetadataExchange" />
                <host>
                  <baseAddresses>
                    <add baseAddress="net.tcp://SERVER:8731/ActiveAreaService/" />
                  </baseAddresses>
                </host>
              </service>
            </services>
          </system.serviceModel>
        </configuration>

    But I also add an endpoint programmatically in Visual C++:

        host = gcnew ServiceHost(ActiveAreaServer::typeid);
        NetTcpBinding^ binding = gcnew NetTcpBinding();
        binding->MaxBufferSize = Int32::MaxValue;
        binding->MaxReceivedMessageSize = Int32::MaxValue;
        binding->ReceiveTimeout = TimeSpan::MaxValue;
        binding->Security->Mode = SecurityMode::Transport;
        binding->Security->Transport->ClientCredentialType = TcpClientCredentialType::Windows;
        ServiceEndpoint^ ep = host->AddServiceEndpoint(IActiveAreaServer::typeid, binding, String::Empty); // Use the base address

    Client app.config:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.serviceModel>
            <bindings>
              <netTcpBinding>
                <binding name="NetTcpBinding_IActiveAreaServer" closeTimeout="00:01:00" openTimeout="00:01:00"
                         receiveTimeout="00:10:00" sendTimeout="00:01:00" transactionFlow="false"
                         transferMode="Buffered" transactionProtocol="OleTransactions"
                         hostNameComparisonMode="StrongWildcard" listenBacklog="10" maxBufferPoolSize="524288"
                         maxBufferSize="65536" maxConnections="10" maxReceivedMessageSize="65536">
                  <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                                maxBytesPerRead="4096" maxNameTableCharCount="16384" />
                  <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
                  <security mode="Transport">
                    <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign" />
                    <message clientCredentialType="Windows" />
                  </security>
                </binding>
              </netTcpBinding>
            </bindings>
            <client>
              <endpoint address="net.tcp://SERVER:8731/ActiveAreaService/" binding="netTcpBinding"
                        bindingConfiguration="NetTcpBinding_IActiveAreaServer"
                        contract="ActiveArea.IActiveAreaServer" name="NetTcpBinding_IActiveAreaServer">
                <identity>
                  <userPrincipalName value="[email protected]" />
                </identity>
              </endpoint>
            </client>
          </system.serviceModel>
        </configuration>

    Any help is appreciated! Cheers

    Read the article

  • "Could not establish secure channel for SSL/TLS" in .NET CF application on smart phone

    - by Stefan Mohr
    I have a stubborn communications issue with an application running on the .NET Compact Framework 3.5 on Windows Mobile smartphones. I am constructing a web request using this code:

        UTF8Encoding encoding = new System.Text.UTF8Encoding();
        byte[] Data = encoding.GetBytes(HttpUtility.ConstructQueryString(parameters));
        httpRequest = WebRequest.Create((domain)) as HttpWebRequest;
        httpRequest.Timeout = 10000000;
        httpRequest.ReadWriteTimeout = 10000000;
        httpRequest.Credentials = CredentialCache.DefaultCredentials;
        httpRequest.Method = "POST";
        httpRequest.ContentType = "application/x-www-form-urlencoded";
        httpRequest.ContentLength = Data.Length;
        Stream SendReq = httpRequest.GetRequestStream();
        SendReq.Write(Data, 0, Data.Length);
        SendReq.Close();
        HttpWebResponse httpResponse = (HttpWebResponse)httpRequest.GetResponse();
        return httpResponse.GetResponseStream();

    The web service functions by receiving a JSON-encoded document as part of the URL (e.g. https://site.com/ws/sync??document={"version":"1.0.0","items":[{"item_1":"item1"}]}&user=usr&password=pw) and returns another JSON document as the response. This code runs fine on all emulators and PDAs running WM 5 and 6. We have seen an issue with a couple of customers running Treo smartphones (and only on the Sprint network). We have tested the code on an identical device on the AT&T network (via DeviceAnywhere), and once again the code worked as we expected. This has to be some sort of security policy on the phone, but we've been unable to determine a workaround or diagnose it thoroughly, as we cannot reproduce it in house and have had to resort to getting users to assist with running test drivers for us. When this code executes, the user's device throws the following exception:

        System.Net.WebException
        Could not establish secure channel for SSL/TLS
        Stack trace:
          at System.Net.HttpWebRequest.finishGetRequestStream()
          at System.Net.HttpWebRequest.GetRequestStream()
          at OurApp.GetResponseStream(String domain, Hashtable parameters)

        Inner exception: System.IO.IOException
        Authentication failed because the remote party has closed the transport stream.
        Stack trace:
          at System.Net.SslConnectionState.ClientSideHandshake()
          at System.Net.SslConnectionState.PerformClientHandShake()
          at System.Net.Connection.connect(Object ignored)
          at System.Threading.ThreadPool.WorkItem.doWork(Object o)
          at System.Threading.Timer.ring()

    Examining the server's Apache logs shows no hits from the user's IP; I don't think the device is even attempting to send a packet before failing. If relevant, the server runs Apache on Linux and is written using the TurboGears Python framework. The server certificate is issued by a CA and is still valid. The test driver this error was copied from was not code-signed; however, the same application showing the error is signed with a GeoTrust certificate, so we don't believe this is a code-signing issue. The application installs and launches without issue on all phones; it's just establishing this SSL connection that breaks for these users. One significant issue in troubleshooting is the substantial inconvenience each time we try out a solution (we need to find a "volunteer" customer), so we're really looking for a silver bullet, or a better understanding of the handshaking process, so we can be reasonably confident we only need to ask the user to test one or two more times. One final mention: we have tried the sync both over ActiveSync and over GPRS, with identical results. Any thoughts would be greatly appreciated!

    Read the article

  • 4GB of RAM in Mac OS X 10.5, only 3GB in Mac OS X 10.6

    - by Albert
    Hi, I was using Mac OS X 10.5 on my MacBook until today, and I had 4GB of memory there. Now I have updated to Mac OS X 10.6 and it only displays 3GB. Why is that? How can I fix it? Also, I am a bit puzzled that most people (well, most of the Google hits explained the 3GB issue that way, leaving out the fact that it worked earlier) are saying that a 32-bit system can under no circumstances access more than 3.2GB. Don't we have PAE nowadays in most systems? Thanks, Albert

    Read the article

  • Running out of LowMem with Ubuntu PAE Kernel and 32GB of RAM

    - by magneticMonster
    I'm running a Java data-import process on a 32-bit Ubuntu 10 PAE-kernel machine. After running the process for a while, the oom-killer zaps my Java process. After some Googling and digging through docs, it looks like the system is running out of LowMem. I started the process for the third time and am watching free -lm show me

        Low:    464    386    77

    with the free value (77MB) slowly decreasing. Why am I running out of LowMem, and how do I increase it? Some details:

        $ cat /proc/sys/vm/lowmem_reserve_ratio
        256 256 32

        $ free -lm
                     total       used       free     shared    buffers     cached
        Mem:         32086      24611       7475          0          0      24012
        Low:           464        407         57
        High:        31621      24204       7417
        -/+ buffers/cache:        598      31487
        Swap:         2047          0       2047
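
    To watch LowMem directly while the import runs, highmem (PAE) kernels expose LowTotal/LowFree in /proc/meminfo; a small sketch:

        # sample the low-memory zone every 5 seconds
        watch -n 5 'grep -E "^Low(Total|Free)" /proc/meminfo'

    Note that on a 32-bit kernel LowMem is capped below roughly 900MB no matter how much RAM is installed (and can be much less under some hypervisor or kernel configurations, as the 464MB here shows), so beyond tuning lowmem_reserve_ratio the durable fix is a 64-bit kernel.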

    Read the article

  • Monitor RAM usage on CentOS and restart Apache at a certain usage threshold

    - by Chris
    I'm running a CentOS 5.3 server with a basic LAMP stack. I've optimized LAMP and my code to run as efficiently as possible, but Apache has a memory leak somewhere that kills my server every hour or so. What is the best way to write a script that will monitor the memory usage and, if it peaks over, say, 450MB, kill all the Apache processes and restart Apache? I know C++/PHP and basic Linux server administration, but I'm not familiar with Perl or bash scripting. I'd be open to learning any solution, though, as a temporary measure while I find the underlying issue. A sketch follows below.
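
    A minimal bash watchdog sketch (the 450MB threshold, the log tag, and a cron cadence of every few minutes are placeholders to adjust; assumes CentOS's stock httpd service script):

        #!/bin/bash
        # restart Apache when the summed resident size of httpd processes exceeds a limit
        LIMIT_MB=450
        used_mb=$(ps -C httpd -o rss= | awk '{ sum += $1 } END { print int(sum / 1024) }')
        if [ "${used_mb:-0}" -gt "$LIMIT_MB" ]; then
            logger -t httpd-watchdog "httpd at ${used_mb}MB (limit ${LIMIT_MB}MB), restarting"
            /sbin/service httpd restart
        fi

    Summing RSS double-counts memory shared between Apache workers, so treat the threshold as approximate; a usual companion fix is lowering MaxRequestsPerChild so that leaking workers get recycled on their own.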

    Read the article

  • Issues printing through ssh tunnel and port forwarding

    - by simogasp
    I'm having some problems trying to print through an ssh tunnel. I'd like to print from my laptop to a network printer (a Toshiba eS453, for what it matters) which is on a local network. I can reach the local network using a gateway. So far I did the following:

        ssh -N -L19100:<Printer_IP>:9100 <username>@<ssh_gateway>

    Basically I just mapped port 19100 of my laptop directly to the input port of the printer, passing through the gateway. So far, so good. Then I installed a new printer on my laptop with the GUI config tool of Ubuntu, so that the new printer is on localhost at port 19100 (as AppSocket/HP JetDirect), and I provided the proper driver for the printer. In theory, once the tunnel is open, I should be able to print from any program just by selecting this printer. Of course, it does not work. :-) The document hangs in the queue with status Processing, while in the shell where I set up the tunnel I get these errors about failing to open channels:

        debug1: Local forwarding listening on ::1 port 19100.
        debug1: channel 0: new [port listener]
        debug1: Local forwarding listening on 127.0.0.1 port 19100.
        debug1: channel 1: new [port listener]
        debug1: Requesting no-more-sessions@openssh.com
        debug1: Entering interactive session.
        debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested.
        debug1: channel 2: new [direct-tcpip]
        debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested.
        debug1: channel 3: new [direct-tcpip]
        channel 2: open failed: connect failed: Connection timed out
        debug1: channel 2: free: direct-tcpip: listening port 19100 for 195.220.21.227 port 9100, connect from ::1 port 44434, nchannels 4
        debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested.
        debug1: channel 2: new [direct-tcpip]
        channel 3: open failed: connect failed: Connection timed out
        debug1: channel 3: free: direct-tcpip: listening port 19100 for 195.220.21.227 port 9100, connect from ::1 port 44443, nchannels 4
        channel 2: open failed: connect failed: Connection timed out
        debug1: channel 2: free: direct-tcpip: listening port 19100 for 195.220.21.227 port 9100, connect from ::1 port 44493, nchannels 3
        debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested.
        debug1: channel 2: new [direct-tcpip]

    As a further debugging test I tried the following. From a machine inside the local network I did telnet <IP_printer> 9100, got access, wrote some random text, closed the connection, and correctly got a printout of what I had written. So the port and the IP of the printer should be correct. I tried the same from my laptop with the tunnel open; the telnet succeeded, but the printer didn't print anything, and I got the usual "channel x: open failed" errors. I'm not a great expert on the matter; I just thought that in theory it was possible to do something like this, but maybe there is something I didn't consider or did wrong. Any clue? Thanks! Simone

    Update: As a further debugging test, I tried to replicate the procedure from a machine in the local network. From that machine I did

        ssh -N -L19100:<IP_printer>:9100 <username>@<ssh_gateway>

    (note that now the machine, the gateway and the printer are all in the same local network). Then I tried the telnet test again with telnet localhost 19100; I got access and everything, but I didn't get the printout, only the usual error:

        channel 2: open failed: connect failed: Connection timed out

    Maybe I am missing some other connection that needs to be forwarded, or maybe this is not allowed by the administrators. Of course, if I connect via ssh to the local machine from my laptop through the gateway, I can successfully print using the lpr command (from the local machine). But this is what I would like to avoid (yes, I'm lazy :-); I would like a more 'elegant' and transparent way to do it.
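
    Those "channel N: open failed: connect failed: Connection timed out" lines mean the gateway itself timed out while connecting to <Printer_IP>:9100, so the tunnel definition is fine but the gateway cannot reach the printer's raw-print port, most likely a firewall or routing restriction on the gateway side. A quick probe sketch (nc is from the netcat package; the names in angle brackets are the same placeholders as above):

        # run the connect test on the gateway itself
        ssh <username>@<ssh_gateway> 'nc -vz -w 5 <Printer_IP> 9100'

    If that times out too, printing via lpr on an inside machine (or asking the administrators to open port 9100 from the gateway) is the way out; no amount of local CUPS configuration will help.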

    Read the article

  • Database server hardware components (in order of importance): CPU speed vs CPU cache vs RAM vs disk

    - by nulltorpedo
    I am new to the database world and would like to know what the crucial hardware specs are when it comes to database performance. I have searched the internet and found this so far (in order of decreasing importance):

    1) Hard disk: get an SSD, basically (far more IOPS than spinners).
    2) Memory: get as much as you can afford.
    3) CPU: for the same money spent, prefer a larger cache size over speed.

    Are these findings sensible? EDIT: I would like to focus on CPU speed vs CPU cache size. EDIT2: The database is used to store some combination of ints and int arrays, with a few text fields. There are a lot of SELECT queries looking for existing entries; if an entry is not found, it is inserted. I would say most of the processing is trying to find a match across a table with 200 columns and 20k rows. The INSERT statements are very few. EDIT3: Also, we have a lot of views (basically SELECT queries).

    Read the article
