
  • A step-up from TiddlyWiki that is still 100% portable?

    - by Smandoli
    TiddlyWiki is a great idea, brilliantly implemented. I'm using it as a portable personal "knowledge manager," and these are its prize virtues:

    - It travels on my USB flash memory stick and runs on any computer, regardless of operating system.
    - No software installation is needed on the host computer (TiddlyWiki merely uses the web browser).
    - No Internet connection is needed.
    - For data retrieval, it mimics a relational database (through tags and internal links).

    Let's say I've got a million words of prose in 4,000 tiddlers (posts). I'm still testing, but at that scale TiddlyWiki appears to get very slow. Is there an app like TiddlyWiki that keeps all the virtues listed above but allows more storage?

    NOTE: Separation of content and presentation would be ideal. It's nifty that TiddlyWiki keeps everything in a single HTML document, but that design is unhelpful in many ways. I don't care if a directory of assorted files is needed (SQLite, XML?), as long as the result is functionally self-contained.
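    A minimal sketch of the kind of functionally self-contained store the question hints at: one SQLite file holding the content, queried with full-text search. This is only an illustration of the storage idea, not a specific app recommendation; the sqlite-jdbc driver, the notes.db file name, and the schema are all assumptions introduced here.

    ```java
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PortableNotes {
        public static void main(String[] args) throws Exception {
            // A single file on the USB stick holds all content;
            // presentation can live in whatever reads it.
            try (Connection db = DriverManager.getConnection("jdbc:sqlite:notes.db")) {
                try (Statement s = db.createStatement()) {
                    // FTS gives indexed full-text retrieval over thousands of
                    // "tiddlers" (FTS5 availability depends on the bundled SQLite build).
                    s.executeUpdate("CREATE VIRTUAL TABLE IF NOT EXISTS tiddler USING fts5(title, tags, body)");
                    s.executeUpdate("INSERT INTO tiddler VALUES ('GettingStarted', 'howto', 'A million words scale fine here')");
                }
                try (PreparedStatement q = db.prepareStatement("SELECT title FROM tiddler WHERE tiddler MATCH ?")) {
                    q.setString(1, "million");
                    try (ResultSet rs = q.executeQuery()) {
                        while (rs.next()) System.out.println(rs.getString("title"));
                    }
                }
            }
        }
    }
    ```

    A directory with one such file travels on a USB stick just as well as a single HTML document, but keeps the content separate from the presentation layer.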


  • Cannot SSH after resetting firewall on VPS

    - by Thomas Buckley
    I'm having trouble trying to SSH to my Debian 5 VPS at Blacknight. It was working fine until I did the following: I logged into Parallels Infrastructure Manager - Container - Firewall and set 'Normal Firewall settings'. It reported an error with the iptables rules and offered the option again with a checkbox to 'reset' the firewall settings, which I selected. I can see that the default rules have been applied (accept anything, from anyone, on any port). Whenever I attempt to SSH I get the following debug output:

    ```
    thomas@localmachine:~/.ssh$ ssh -v thomas@hostname
    OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: Applying options for *
    debug1: Connecting to hostname [***********] port 22.
    debug1: Connection established.
    debug1: identity file /home/thomas/.ssh/id_rsa type 1
    debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-4096
    debug1: Checking blacklist file /etc/ssh/blacklist.RSA-4096
    debug1: identity file /home/thomas/.ssh/id_rsa-cert type -1
    debug1: identity file /home/thomas/.ssh/id_dsa type -1
    debug1: identity file /home/thomas/.ssh/id_dsa-cert type -1
    debug1: identity file /home/thomas/.ssh/id_ecdsa type -1
    debug1: identity file /home/thomas/.ssh/id_ecdsa-cert type -1
    debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5
    debug1: match: OpenSSH_5.1p1 Debian-5 pat OpenSSH*
    debug1: Enabling compatibility mode for protocol 2.0
    debug1: Local version string SSH-2.0-OpenSSH_5.8p1 Debian-7ubuntu1
    debug1: SSH2_MSG_KEXINIT sent
    debug1: SSH2_MSG_KEXINIT received
    debug1: kex: server->client aes128-ctr hmac-md5 none
    debug1: kex: client->server aes128-ctr hmac-md5 none
    debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
    debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
    debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
    debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
    debug1: Server host key: RSA *************************************
    debug1: Host 'hostname' is known and matches the RSA host key.
    debug1: Found key in /home/thomas/.ssh/known_hosts:2
    debug1: ssh_rsa_verify: signature correct
    debug1: SSH2_MSG_NEWKEYS sent
    debug1: expecting SSH2_MSG_NEWKEYS
    debug1: SSH2_MSG_NEWKEYS received
    debug1: Roaming not allowed by server
    debug1: SSH2_MSG_SERVICE_REQUEST sent
    debug1: SSH2_MSG_SERVICE_ACCEPT received
    debug1: Authentications that can continue: publickey
    debug1: Next authentication method: publickey
    debug1: Offering RSA public key: /home/thomas/.ssh/id_rsa
    debug1: Authentications that can continue: publickey
    debug1: Trying private key: /home/thomas/.ssh/id_dsa
    debug1: Trying private key: /home/thomas/.ssh/id_ecdsa
    debug1: No more authentication methods to try.
    Permission denied (publickey).
    ```

    I had my public/private RSA keys set up and working fine before I reset the firewall settings. I had also made the following changes to the /etc/ssh/sshd_config file on the VPS:

    ```
    PermitRootLogin no
    PasswordAuthentication no
    X11Forwarding no
    UsePAM no
    UseDNS no
    AllowUsers thomas
    ```

    Could it be something to do with the SSH server and client being different versions on my local machine and the VPS? Any help appreciated. Output with ssh -vvv:

    ```
    thomas@localcomputer:~/.ssh$ ssh -vvv thomas@****************
    OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: Applying options for *
    debug2: ssh_connect: needpriv 0
    debug1: Connecting to ************ [*************] port 22.
    debug1: Connection established.
    debug3: Incorrect RSA1 identifier
    debug3: Could not load "/home/thomas/.ssh/id_rsa" as a RSA1 public key
    debug2: key_type_from_name: unknown key type '-----BEGIN'
    debug3: key_read: missing keytype
    debug2: key_type_from_name: unknown key type 'Proc-Type:'
    debug3: key_read: missing keytype
    debug2: key_type_from_name: unknown key type 'DEK-Info:'
    debug3: key_read: missing keytype
    debug3: key_read: missing whitespace
    [the line above repeats roughly 50 more times]
    debug2: key_type_from_name: unknown key type '-----END'
    debug3: key_read: missing keytype
    debug1: identity file /home/thomas/.ssh/id_rsa type 1
    debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-4096
    debug1: Checking blacklist file /etc/ssh/blacklist.RSA-4096
    debug1: identity file /home/thomas/.ssh/id_rsa-cert type -1
    debug1: identity file /home/thomas/.ssh/id_dsa type -1
    debug1: identity file /home/thomas/.ssh/id_dsa-cert type -1
    debug1: identity file /home/thomas/.ssh/id_ecdsa type -1
    debug1: identity file /home/thomas/.ssh/id_ecdsa-cert type -1
    debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5
    debug1: match: OpenSSH_5.1p1 Debian-5 pat OpenSSH*
    debug1: Enabling compatibility mode for protocol 2.0
    debug1: Local version string SSH-2.0-OpenSSH_5.8p1 Debian-7ubuntu1
    debug2: fd 3 setting O_NONBLOCK
    debug3: load_hostkeys: loading entries for host "*****************" from file "/home/thomas/.ssh/known_hosts"
    debug3: load_hostkeys: found key type RSA in file /home/thomas/.ssh/known_hosts:1
    debug3: load_hostkeys: loaded 1 keys
    debug3: order_hostkeyalgs: prefer hostkeyalgs: ssh-rsa-cert-v01@openssh.com,ssh-rsa-cert-v00@openssh.com,ssh-rsa
    debug1: SSH2_MSG_KEXINIT sent
    debug1: SSH2_MSG_KEXINIT received
    debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
    debug2: kex_parse_kexinit: ssh-rsa-cert-v01@openssh.com,ssh-rsa-cert-v00@openssh.com,ssh-rsa,ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ssh-dss-cert-v01@openssh.com,ssh-dss-cert-v00@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-dss
    debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se
    debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se
    debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
    debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
    debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
    debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
    debug2: kex_parse_kexinit:
    debug2: kex_parse_kexinit:
    debug2: kex_parse_kexinit: first_kex_follows 0
    debug2: kex_parse_kexinit: reserved 0
    debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
    debug2: kex_parse_kexinit: ssh-rsa,ssh-dss
    debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr
    debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr
    debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
    debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
    debug2: kex_parse_kexinit: none,zlib@openssh.com
    debug2: kex_parse_kexinit: none,zlib@openssh.com
    debug2: kex_parse_kexinit:
    debug2: kex_parse_kexinit:
    debug2: kex_parse_kexinit: first_kex_follows 0
    debug2: kex_parse_kexinit: reserved 0
    debug2: mac_setup: found hmac-md5
    debug1: kex: server->client aes128-ctr hmac-md5 none
    debug2: mac_setup: found hmac-md5
    debug1: kex: client->server aes128-ctr hmac-md5 none
    debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
    debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
    debug2: dh_gen_key: priv key bits set: 127/256
    debug2: bits set: 498/1024
    debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
    debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
    debug1: Server host key: RSA ***********************************************************
    debug3: load_hostkeys: loading entries for host "*********************" from file "/home/thomas/.ssh/known_hosts"
    debug3: load_hostkeys: found key type RSA in file /home/thomas/.ssh/known_hosts:1
    debug3: load_hostkeys: loaded 1 keys
    debug1: Host '****************' is known and matches the RSA host key.
    debug1: Found key in /home/thomas/.ssh/known_hosts:1
    debug2: bits set: 516/1024
    debug1: ssh_rsa_verify: signature correct
    debug2: kex_derive_keys
    debug2: set_newkeys: mode 1
    debug1: SSH2_MSG_NEWKEYS sent
    debug1: expecting SSH2_MSG_NEWKEYS
    debug2: set_newkeys: mode 0
    debug1: SSH2_MSG_NEWKEYS received
    debug1: Roaming not allowed by server
    debug1: SSH2_MSG_SERVICE_REQUEST sent
    debug2: service_accept: ssh-userauth
    debug1: SSH2_MSG_SERVICE_ACCEPT received
    debug2: key: /home/thomas/.ssh/id_rsa (0x7fa7028b6010)
    debug2: key: /home/thomas/.ssh/id_dsa ((nil))
    debug2: key: /home/thomas/.ssh/id_ecdsa ((nil))
    debug1: Authentications that can continue: publickey
    debug3: start over, passed a different list publickey
    debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password
    debug3: authmethod_lookup publickey
    debug3: remaining preferred: keyboard-interactive,password
    debug3: authmethod_is_enabled publickey
    debug1: Next authentication method: publickey
    debug1: Offering RSA public key: /home/thomas/.ssh/id_rsa
    debug3: send_pubkey_test
    debug2: we sent a publickey packet, wait for reply
    debug1: Authentications that can continue: publickey
    debug1: Trying private key: /home/thomas/.ssh/id_dsa
    debug3: no such identity: /home/thomas/.ssh/id_dsa
    debug1: Trying private key: /home/thomas/.ssh/id_ecdsa
    debug3: no such identity: /home/thomas/.ssh/id_ecdsa
    debug2: we did not send a packet, disable method
    debug1: No more authentication methods to try.
    Permission denied (publickey).
    ```

    sshd_config:

    ```
    # Package generated configuration file
    # See the sshd(8) manpage for details

    # What ports, IPs and protocols we listen for
    Port 22
    # Use these options to restrict which interfaces/protocols sshd will bind to
    #ListenAddress ::
    #ListenAddress 0.0.0.0
    Protocol 2
    # HostKeys for protocol version 2
    HostKey /etc/ssh/ssh_host_rsa_key
    HostKey /etc/ssh/ssh_host_dsa_key
    # Privilege Separation is turned on for security
    UsePrivilegeSeparation yes

    # Lifetime and size of ephemeral version 1 server key
    KeyRegenerationInterval 3600
    ServerKeyBits 768

    # Logging
    SyslogFacility AUTH
    LogLevel INFO

    # Authentication:
    LoginGraceTime 120
    PermitRootLogin no
    StrictModes yes

    RSAAuthentication yes
    PubkeyAuthentication yes
    #AuthorizedKeysFile %h/.ssh/authorized_keys

    # Don't read the user's ~/.rhosts and ~/.shosts files
    IgnoreRhosts yes
    # For this to work you will also need host keys in /etc/ssh_known_hosts
    RhostsRSAAuthentication no
    # similar for protocol version 2
    HostbasedAuthentication no
    # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
    #IgnoreUserKnownHosts yes

    # To enable empty passwords, change to yes (NOT RECOMMENDED)
    PermitEmptyPasswords no

    # Change to yes to enable challenge-response passwords (beware issues with
    # some PAM modules and threads)
    ChallengeResponseAuthentication no

    # Change to no to disable tunnelled clear text passwords
    PasswordAuthentication no

    # Kerberos options
    #KerberosAuthentication no
    #KerberosGetAFSToken no
    #KerberosOrLocalPasswd yes
    #KerberosTicketCleanup yes

    # GSSAPI options
    #GSSAPIAuthentication no
    #GSSAPICleanupCredentials yes

    X11Forwarding no
    X11DisplayOffset 10
    PrintMotd no
    PrintLastLog yes
    TCPKeepAlive yes
    #UseLogin no

    #MaxStartups 10:30:60
    #Banner /etc/issue.net

    # Allow client to pass locale environment variables
    AcceptEnv LANG LC_*

    Subsystem sftp /usr/lib/openssh/sftp-server

    UsePAM no
    UseDNS no
    AllowUsers thomas
    ```

    Thanks


  • Is post-sudden-power-loss filesystem corruption on an SSD drive's ext3 partition "expected behavior"?

    - by Jeremy Friesner
    My company makes an embedded Debian Linux device that boots from an ext3 partition on an internal SSD drive. Because the device is an embedded "black box", it is usually shut down the rude way: by simply cutting power via an external switch. This is normally okay, as ext3's journalling keeps things in order, so other than the occasional loss of part of a log file, things keep chugging along fine. However, we've recently seen a number of units where, after many hard power cycles, the ext3 partition starts to develop structural issues. In particular, we run e2fsck on the ext3 partition and it finds a number of issues like those shown in the output listing at the bottom of this question. Running e2fsck until it stops reporting errors (or reformatting the partition) clears the issues.

    My question is: what are the implications of seeing problems like this on an ext3/SSD system that has been subjected to lots of sudden/unexpected shutdowns? My feeling is that this might be a sign of a software or hardware problem in our system, since my understanding is that (barring a bug or hardware problem) ext3's journalling feature is supposed to prevent these sorts of filesystem-integrity errors. (Note: I understand that user data is not journalled, so munged/missing/truncated user files can happen; I'm specifically talking here about filesystem-metadata errors like those shown below.)

    My co-worker, on the other hand, says that this is known/expected behavior, because SSD controllers sometimes reorder write commands and that can cause the ext3 journal to get confused. In particular, he believes that even given normally functioning hardware and bug-free software, the ext3 journal only makes filesystem corruption less likely, not impossible, so we should not be surprised to see problems like this from time to time. Which of us is right?

    ```
    Embedded-PC-failsafe:~# ls
    Embedded-PC-failsafe:~# umount /mnt/unionfs
    Embedded-PC-failsafe:~# e2fsck /dev/sda3
    e2fsck 1.41.3 (12-Oct-2008)
    embeddedrootwrite contains a file system with errors, check forced.
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Invalid inode number for '.' in directory inode 46948. Fix<y>? yes
    Directory inode 46948, block 0, offset 12: directory corrupted Salvage<y>? yes
    Entry 'status_2012-11-26_14h13m41.csv' in /var/log/status_logs (46956) has deleted/unused inode 47075. Clear<y>? yes
    Entry 'status_2012-11-26_10h42m58.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47076. Clear<y>? yes
    Entry 'status_2012-11-26_11h29m41.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47080. Clear<y>? yes
    Entry 'status_2012-11-26_11h42m13.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47081. Clear<y>? yes
    Entry 'status_2012-11-26_12h07m17.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47083. Clear<y>? yes
    Entry 'status_2012-11-26_12h14m53.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47085. Clear<y>? yes
    Entry 'status_2012-11-26_15h06m49.csv' in /var/log/status_logs (46956) has deleted/unused inode 47088. Clear<y>? yes
    Entry 'status_2012-11-20_14h50m09.csv' in /var/log/status_logs (46956) has deleted/unused inode 47073. Clear<y>? yes
    Entry 'status_2012-11-20_14h55m32.csv' in /var/log/status_logs (46956) has deleted/unused inode 47074. Clear<y>? yes
    Entry 'status_2012-11-26_11h04m36.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47078. Clear<y>? yes
    Entry 'status_2012-11-26_11h54m45.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47082. Clear<y>? yes
    Entry 'status_2012-11-26_12h12m20.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47084. Clear<y>? yes
    Entry 'status_2012-11-26_12h33m52.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47086. Clear<y>? yes
    Entry 'status_2012-11-26_10h51m59.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47077. Clear<y>? yes
    Entry 'status_2012-11-26_11h17m09.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47079. Clear<y>? yes
    Entry 'status_2012-11-26_12h54m11.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47087. Clear<y>? yes
    Pass 3: Checking directory connectivity
    '..' in /etc/network/run (46948) is <The NULL inode> (0), should be /etc/network (46953). Fix<y>? yes
    Couldn't fix parent of inode 46948: Couldn't find parent directory entry
    Pass 4: Checking reference counts
    Unattached inode 46945 Connect to /lost+found<y>? yes
    Inode 46945 ref count is 2, should be 1. Fix<y>? yes
    Inode 46953 ref count is 5, should be 4. Fix<y>? yes
    Pass 5: Checking group summary information
    Block bitmap differences: -(208264--208266) -(210062--210068) -(211343--211491) -(213241--213250) -(213344--213393) -213397 -(213457--213463) -(213516--213521) -(213628--213655) -(213683--213688) -(213709--213728) -(215265--215300) -(215346--215365) -(221541--221551) -(221696--221704) -227517 Fix<y>? yes
    Free blocks count wrong for group #6 (17247, counted=17611). Fix<y>? yes
    Free blocks count wrong (161691, counted=162055). Fix<y>? yes
    Inode bitmap differences: +(47089--47090) +47093 +47095 +(47097--47099) +(47101--47104) -(47219--47220) -47222 -47224 -47228 -47231 -(47347--47348) -47350 -47352 -47356 -47359 -(47457--47488) -47985 -47996 -(47999--48000) -48017 -(48027--48028) -(48030--48032) -48049 -(48059--48060) -(48062--48064) -48081 -(48091--48092) -(48094--48096) Fix<y>? yes
    Free inodes count wrong for group #6 (7608, counted=7624). Fix<y>? yes
    Free inodes count wrong (61919, counted=61935). Fix<y>? yes

    embeddedrootwrite: ***** FILE SYSTEM WAS MODIFIED *****
    embeddedrootwrite: ********** WARNING: Filesystem still has errors **********
    embeddedrootwrite: 657/62592 files (24.4% non-contiguous), 87882/249937 blocks

    Embedded-PC-failsafe:~# e2fsck /dev/sda3
    e2fsck 1.41.3 (12-Oct-2008)
    embeddedrootwrite contains a file system with errors, check forced.
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Directory entry for '.' in ... (46948) is big. Split<y>? yes
    Missing '..' in directory inode 46948. Fix<y>? yes
    Setting filetype for entry '..' in ... (46948) to 2.
    Pass 3: Checking directory connectivity
    '..' in /etc/network/run (46948) is <The NULL inode> (0), should be /etc/network (46953). Fix<y>? yes
    Pass 4: Checking reference counts
    Inode 2 ref count is 12, should be 13. Fix<y>? yes
    Pass 5: Checking group summary information

    embeddedrootwrite: ***** FILE SYSTEM WAS MODIFIED *****
    embeddedrootwrite: 657/62592 files (24.4% non-contiguous), 87882/249937 blocks

    Embedded-PC-failsafe:~# e2fsck /dev/sda3
    e2fsck 1.41.3 (12-Oct-2008)
    embeddedrootwrite: clean, 657/62592 files, 87882/249937 blocks
    ```


  • Installing Oracle 11gR2 on RHEL 6.2

    - by Chris
    Hello all. I'm having some difficulty installing Oracle 11gR2 on RHEL 6.2. I have compiled a giant list of every single step I have taken so far. I installed RHEL 6.2 on VMware and let it do its easy install automatically, selecting 4 GB of memory, a max disk size of 80 GB, and 2 processors. (Sorry for the bad styling; copy-paste isn't working correctly.) The version of Oracle I downloaded is Linux x86-64 11.2.0.1, and I am installing it on a local machine, NOT a remote machine. I followed this documentation: http://docs.oracle.com/cd/E11882_01/install.112/e24326/toc.htm. I've called out the steps I was least sure about from my research.

    Easy-installed RHEL 6.2 for VMware; registered with Red Hat so I can get updates; reinstalled vmware-tools, pressing Enter at every choice; ran sudo yum update (at the end it said something about a GPG key, and I selected y, then y again).

    Checked the memory requirements:

    ```
    grep MemTotal /proc/meminfo
    MemTotal: 3921368 kB
    uname -m
    x86_64
    grep SwapTotal /proc/meminfo
    SwapTotal: 6160376 kB
    free
                 total       used       free     shared    buffers     cached
    Mem:       3921368    2032012    1889356          0      76216    1533268
    -/+ buffers/cache:     422528    3498840
    Swap:      6160376          0    6160376
    df -h /dev/shm
    Filesystem            Size  Used Avail Use% Mounted on
    tmpfs                 1.9G  276K  1.9G   1% /dev/shm
    df -h /tmp
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda2              73G  2.7G   67G   4% /
    df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda2              73G  2.7G   67G   4% /
    tmpfs                 1.9G  276K  1.9G   1% /dev/shm
    /dev/sda1             291M   58M  219M  21% /boot
    ```

    All looked fine to me, except maybe for swap?

    Checked the software requirements:

    ```
    cat /proc/version
    Linux version 2.6.32-220.el6.x86_64 (mockbuild@[redhat build host]) (gcc version 4.4.5 20110214 (Red Hat 4.4.5-6) (GCC) ) #1 SMP Wed Nov 9 08:03:13 EST 2011
    uname -r
    2.6.32-220.el6.x86_64
    ```

    (The same kernel as above, but whatever.) According to the tutorial, on Red Hat Enterprise Linux 6 it should be 2.6.32-71.el6.x86_64 or later. These are the versions of the required packages I have installed:

    ```
    binutils-2.20.51.0.2-5.28.el6.x86_64
    compat-libcap1-1.10-1.x86_64
    compat-libstdc++-33-3.2.3-69.el6.x86_64
    compat-libstdc++-33.i686 0:3.2.3-69.el6
    gcc-4.4.6-3.el6.x86_64
    gcc-c++.x86_64 0:4.4.6-3.el6
    glibc-2.12-1.47.el6_2.12.x86_64
    glibc-2.12-1.47.el6_2.12.i686
    glibc-devel-2.12-1.47.el6_2.12.x86_64
    glibc-devel.i686 0:2.12-1.47.el6_2.12
    ksh.x86_64 0:20100621-12.el6_2.1
    libgcc-4.4.6-3.el6.x86_64
    libgcc-4.4.6-3.el6.i686
    libstdc++-4.4.6-3.el6.x86_64
    libstdc++.i686 0:4.4.6-3.el6
    libstdc++-devel.i686 0:4.4.6-3.el6
    libstdc++-devel-4.4.6-3.el6.x86_64
    libaio-0.3.107-10.el6.x86_64
    libaio-0.3.107-10.el6.i686
    libaio-devel-0.3.107-10.el6.x86_64
    libaio-devel-0.3.107-10.el6.i686
    make-3.81-19.el6.x86_64
    sysstat-9.0.4-18.el6.x86_64
    unixODBC-2.2.14-11.el6.x86_64
    unixODBC-devel-2.2.14-11.el6.x86_64
    unixODBC-devel-2.2.14-11.el6.i686
    unixODBC-2.2.14-11.el6.i686
    ```

    Step 8 (I probably screwed up here or in step 9):

    ```
    /usr/sbin/groupadd oinstall
    /usr/sbin/groupadd dba    (not sure why this isn't in the tutorial)
    /usr/sbin/useradd -g oinstall -G dba oracle
    passwd oracle
    /sbin/sysctl -a | grep sem
    Xkernel.sem = 250 32000 32 128
    /sbin/sysctl -a | grep shm
    kernel.shmmax = 68719476736
    kernel.shmall = 4294967296
    kernel.shmmni = 4096
    vm.hugetlb_shm_group = 0
    /sbin/sysctl -a | grep file-max
    Xfs.file-max = 384629
    /sbin/sysctl -a | grep ip_local_port_range
    Xnet.ipv4.ip_local_port_range = 32768 61000
    /sbin/sysctl -a | grep rmem_default
    Xnet.core.rmem_default = 124928
    /sbin/sysctl -a | grep rmem_max
    Xnet.core.rmem_max = 131071
    /sbin/sysctl -a | grep wmem_max
    Xnet.core.wmem_max = 131071
    /sbin/sysctl -a | grep wmem_default
    Xnet.core.wmem_default = 124928
    ```

    Here is my sysctl.conf file; I only changed the items that needed to be bigger:

    ```
    # Kernel sysctl configuration file for Red Hat Linux
    #
    # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
    # sysctl.conf(5) for more details.

    # Controls IP packet forwarding
    net.ipv4.ip_forward = 0

    # Controls source route verification
    net.ipv4.conf.default.rp_filter = 1

    # Do not accept source routing
    net.ipv4.conf.default.accept_source_route = 0

    # Controls the System Request debugging functionality of the kernel
    kernel.sysrq = 0

    # Controls whether core dumps will append the PID to the core filename.
    # Useful for debugging multi-threaded applications.
    kernel.core_uses_pid = 1

    # Controls the use of TCP syncookies
    net.ipv4.tcp_syncookies = 1

    # Disable netfilter on bridges.
    net.bridge.bridge-nf-call-ip6tables = 0
    net.bridge.bridge-nf-call-iptables = 0
    net.bridge.bridge-nf-call-arptables = 0

    # Controls the maximum size of a message, in bytes
    kernel.msgmnb = 65536

    # Controls the default maximum size of a message queue
    kernel.msgmax = 65536

    # Controls the maximum shared segment size, in bytes
    kernel.shmmax = 68719476736

    # Controls the maximum number of shared memory segments, in pages
    kernel.shmall = 4294967296

    fs.aio-max-nr = 1048576
    fs.file-max = 6815744
    kernel.sem = 250 32000 100 128
    net.ipv4.ip_local_port_range = 9000 65500
    net.core.rmem_default = 262144
    net.core.rmem_max = 4194304
    net.core.wmem_default = 262144
    net.core.wmem_max = 1048576
    ```

    ```
    /sbin/sysctl -p
    net.ipv4.ip_forward = 0
    net.ipv4.conf.default.rp_filter = 1
    net.ipv4.conf.default.accept_source_route = 0
    kernel.sysrq = 0
    kernel.core_uses_pid = 1
    net.ipv4.tcp_syncookies = 1
    error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
    error: "net.bridge.bridge-nf-call-iptables" is an unknown key
    error: "net.bridge.bridge-nf-call-arptables" is an unknown key
    kernel.msgmnb = 65536
    kernel.msgmax = 65536
    kernel.shmmax = 68719476736
    kernel.shmall = 4294967296
    fs.aio-max-nr = 1048576
    fs.file-max = 6815744
    kernel.sem = 250 32000 100 128
    net.ipv4.ip_local_port_range = 9000 65500
    net.core.rmem_default = 262144
    net.core.rmem_max = 4194304
    net.core.wmem_default = 262144
    net.core.wmem_max = 1048576
    ```

    ```
    su - oracle
    ulimit -Sn
    1024
    ulimit -Hn
    1024
    ulimit -Su
    1024
    ulimit -Hu
    30482
    ulimit -Su
    1024
    ulimit -Ss
    10240
    ulimit -Hs
    unlimited
    su -
    nano /etc/security/limits.conf
    ```

    I added the following to the end of limits.conf:

    ```
    oracle soft nproc 2047
    oracle hard nproc 16384
    oracle soft nofile 1024
    oracle hard nofile 65536
    oracle soft stack 10240
    ```

    ```
    exit
    exit
    su -
    mkdir -p /app/
    chown -R oracle:oinstall /app/
    chmod -R 775 /app/
    ```

    Step 9 (THIS IS PROBABLY WHERE I MESSED UP): I then exited out of the root account, so I was back in my account, chris; then:

    ```
    su - oracle
    echo $SHELL
    /bin/bash
    umask
    0022
    ```

    So umask should already be set to what is necessary. Also, from what I have read, I do not need to set the DISPLAY variable because I'm installing this on the localhost. I then opened oracle's .bash_profile and changed it to the following:

    ```
    # Get the aliases and functions
    if [ -f ~/.bashrc ]; then
        . ~/.bashrc
    fi

    # User specific environment and startup programs
    PATH=$PATH:$HOME/bin; export PATH
    ORACLE_BASE=/app/oracle
    ORACLE_SID=orcl
    export ORACLE_BASE ORACLE_SID
    ```

    I then shut down the virtual machine, shared my desktop folder from my Windows 7 host, and turned the virtual machine back on. I logged in as chris and opened a terminal; for some reason the shared folder didn't appear, so I reinstalled vmware-tools again and restarted. Then, same as before:

    ```
    su -
    cp -R linux_oracle/database /db; chown -R oracle:oinstall /db; chmod -R 775 /db; ll /db
    drwxrwxr-x. 8 oracle oinstall 4096 Jun 5 06:20 database
    exit
    su - oracle
    cd /db/database
    ./runInstaller
    ```

    AND FINALLY THE INFAMOUS JAVA:132 ERROR MESSAGE:

    ```
    Starting Oracle Universal Installer...

    Checking Temp space: must be greater than 80 MB.   Actual 65646 MB    Passed
    Checking swap space: must be greater than 150 MB.   Actual 6015 MB    Passed
    Checking monitor: must be configured to display at least 256 colors.   Actual 16777216    Passed
    Preparing to launch Oracle Universal Installer from /tmp/OraInstall2012-06-05_06-47-12AM. Please wait ...
    [oracle@localhost database]$ Exception in thread "main" java.lang.UnsatisfiedLinkError: /tmp/OraInstall2012-06-05_06-47-12AM/jdk/jre/lib/i386/xawt/libmawt.so: libXext.so.6: cannot open shared object file: No such file or directory
        at java.lang.ClassLoader$NativeLibrary.load(Native Method)
        at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1751)
        at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1647)
        at java.lang.Runtime.load0(Runtime.java:769)
        at java.lang.System.load(System.java:968)
        at java.lang.ClassLoader$NativeLibrary.load(Native Method)
        at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1751)
        at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1668)
        at java.lang.Runtime.loadLibrary0(Runtime.java:822)
        at java.lang.System.loadLibrary(System.java:993)
        at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:50)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.awt.Toolkit.loadLibraries(Toolkit.java:1509)
        at java.awt.Toolkit.<clinit>(Toolkit.java:1530)
        at com.jgoodies.looks.LookUtils.isLowResolution(Unknown Source)
        at com.jgoodies.looks.LookUtils.<clinit>(Unknown Source)
        at com.jgoodies.looks.plastic.PlasticLookAndFeel.<clinit>(PlasticLookAndFeel.java:122)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:242)
        at javax.swing.SwingUtilities.loadSystemClass(SwingUtilities.java:1783)
        at javax.swing.UIManager.setLookAndFeel(UIManager.java:480)
        at oracle.install.commons.util.Application.startup(Application.java:758)
        at oracle.install.commons.flow.FlowApplication.startup(FlowApplication.java:164)
        at oracle.install.commons.flow.FlowApplication.startup(FlowApplication.java:181)
        at oracle.install.commons.base.driver.common.Installer.startup(Installer.java:265)
        at oracle.install.ivw.db.driver.DBInstaller.startup(DBInstaller.java:114)
        at oracle.install.ivw.db.driver.DBInstaller.main(DBInstaller.java:132)
    ```


  • Tomcat IIS 7 Integration gives 503 errors on all requests.

    - by Yvan JANSSENS
    Hi, after many attempts to install Tomcat behind IIS 7, I finally managed to get it working. At least I think so. I got rid of the 500 errors by setting the correct permissions. The only thing that doesn't work is... serving anything: neither regular content (like ASP, HTML files, or directory browsing) nor Tomcat applications work. Here are my configs.

    Worker.properties:

    ```
    # The workers that your plugins should create and work with
    worker.list=worker1

    #------ DEFAULT ajp13 WORKER DEFINITION ------------------------------
    #---------------------------------------------------------------------
    # Defining a worker named ajp13 and of type ajp13
    # Note that the name and the type do not have to match.
    worker.worker1.port=8009
    worker.worker1.host=127.0.0.1
    worker.worker1.type=ajp13
    ```

    URIWorkerMap.properties:

    ```
    /|/*=worker1
    # Exclude the subdirectory static:
    !/static|/*=worker1
    # Exclude some suffixes:
    !*.html=worker1
    !*.asp=worker1
    ```

    Server.xml:

    ```xml
    <?xml version='1.0' encoding='utf-8'?>
    <!-- [standard Apache License 2.0 header comment omitted] -->
    <!-- Note: A "Server" is not itself a "Container", so you may not define
         subcomponents such as "Valves" at this level.
         Documentation at /docs/config/server.html -->
    <Server port="8005" shutdown="SHUTDOWN">
      <!-- APR library loader. Documentation at /docs/apr.html -->
      <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
      <!-- Initialize Jasper prior to webapps being loaded. Documentation at /docs/jasper-howto.html -->
      <Listener className="org.apache.catalina.core.JasperListener" />
      <!-- Prevent memory leaks due to use of particular java/javax APIs -->
      <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
      <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html -->
      <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
      <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />

      <!-- Global JNDI resources. Documentation at /docs/jndi-resources-howto.html -->
      <GlobalNamingResources>
        <!-- Editable user database that can also be used by UserDatabaseRealm
             to authenticate users -->
        <Resource name="UserDatabase" auth="Container"
                  type="org.apache.catalina.UserDatabase"
                  description="User database that can be updated and saved"
                  factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
                  pathname="conf/tomcat-users.xml" />
      </GlobalNamingResources>

      <!-- A "Service" is a collection of one or more "Connectors" that share
           a single "Container". Documentation at /docs/config/service.html -->
      <Service name="Catalina">
        <!-- [commented-out stock examples omitted: shared Executor thread pool,
             SSL HTTP/1.1 Connector on port 8443] -->

        <!-- Define a non-SSL HTTP/1.1 Connector on port 8080 -->
        <Connector port="8080" protocol="HTTP/1.1"
                   connectionTimeout="20000"
                   redirectPort="8443" />
        <!-- Define an AJP 1.3 Connector on port 8009 -->
        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

        <!-- An Engine represents the entry point (within Catalina) that
             processes every request. Documentation at /docs/config/engine.html -->
        <Engine name="Catalina" defaultHost="localhost">
          <!-- [commented-out stock examples omitted: Cluster, RequestDumperValve] -->

          <!-- This Realm uses the UserDatabase configured in the global JNDI
               resources under the key "UserDatabase". -->
          <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
                 resourceName="UserDatabase" />

          <!-- Define the default virtual host.
               Note: XML Schema validation will not work with Xerces 2.2. -->
          <Host name="localhost" appBase="webapps"
                unpackWARs="true" autoDeploy="true"
                xmlValidation="false" xmlNamespaceAware="false">
            <!-- [commented-out stock examples omitted: SingleSignOn valve,
                 AccessLogValve] -->
          </Host>
        </Engine>
      </Service>
    </Server>
    ```

    http://localhost:8080 is working; I can view the apps and configure them there. I'm quite new to IIS 7; I used to work with IIS 6. Thanks in advance, Yvan


  • Why are my hard drives failing?

    - by WishCow
    I have a small Ubuntu server running at home, with 2 HDDs. There are two software RAID1 arrays on the disks, managed by mdadm, which I believe is irrelevant, but I'm mentioning it anyway. Both of the HDDs are Western Digital and had been in use for around 2 years when one of them started making clicking noises and died. I figured that maybe it was natural after 2 years, so I bought a new one and resynced the RAID arrays. After about a month, the other drive also died. I didn't get suspicious: since both drives had been bought at the same time, it's not that surprising to see them fail near each other. So I bought another one.

    So far: 2 old drives failed, and 2 brand-new ones in the system. After one month, one of the new drives died. This is when it started getting suspicious. Since the PC was put together from some really old parts (think AthlonXP), I figured that maybe the motherboard's SATA controller was the culprit. Of course you can't switch parts easily in an old PC like this, so I bought a whole new system: new motherboard, new CPU, new RAM. I took the just-failed drive back, since it was under warranty, and got it replaced. So that's 2 failed drives from the old ones and 1 failed drive from the new ones.

    No problems, for 1 month. After that, errors were creeping up again in /var/log/messages, and mdadm was reporting RAID array failures. I started tearing my hair out. Everything in the system is new; it's up to the third brand-new HDD. It's simply not possible that all of the new drives I bought were faulty. Let's see what is still common... the cables. Okay, long shot: let's replace the SATA cables. I took the HDD back, smiled at the guy at the counter, and said that I'm really unlucky. He replaced the HDD. I came home, one month passed, and one of the HDDs failed, again. I'm not joking: two of the brand-new HDDs had failed.

    Maybe it's a bug in the OS. Let's see what the manufacturer's testing tool says. I downloaded the testing tool, burned it to a CD, rebooted, and left the HDD testing overnight. The test says that the drive is faulty and that I should back up everything, if I still can. I don't know what's happening, but it does not look like a software problem; something is definitely thrashing the HDDs.

    I should mention now that the whole system is in a shoebox. Since there is a load of "build your own IKEA case" stuff around, I thought there shouldn't be any problem throwing the thing in a box and stuffing it away somewhere. The box is well ventilated, but I thought that just maybe the drives were overheating. There seemed to be no other possible answer. So I took the HDD back and got it replaced (for the 3rd time), and bought HDD coolers. And just now, I have heard the sound of doom: click click whizzzzzzzzz. SSH into the box:

    ```
    You have new mail!
    mail
    r 1
    DegradedArrayEvent on /dev/md0 ...
    ```

    dmesg output:

    ```
    [47128.000051] ata3: lost interrupt (Status 0x50)
    [47128.000097] end_request: I/O error, dev sda, sector 58588863
    [47128.000134] md: super_written gets error=-5, uptodate=0
    [48043.976054] ata3: lost interrupt (Status 0x50)
    [48043.976086] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
    [48043.976132] ata3.00: cmd c8/00:18:bf:40:52/00:00:00:00:00/e1 tag 0 dma 12288 in
    [48043.976135]          res 40/00:00:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
    [48043.976208] ata3.00: status: { DRDY }
    [48043.976241] ata3: soft resetting link
    [48044.148446] ata3.00: configured for UDMA/133
    [48044.148457] ata3.00: device reported invalid CHS sector 0
    [48044.148477] ata3: EH complete
    ```

    Recap:

    - No possibility of overheating.
    - 6 drives have failed, 4 of them brand new. I'm not sure now whether the original two were faulty or suffered the same thing as the new ones.
    - There is nothing common in the system apart from the OS, which is Ubuntu Karmic now (it started with Jaunty). New MB, new CPU, new RAM, new SATA cables.
    - No, the little holes on the HDD are not covered.

    I'm crying. Really. I don't have the face to return to the store now; it's not possible for 4 drives to fail in under 4 months. A few ideas that I have been thinking about:

    - Is it possible that I fuck up something when I partition and resync the drives? Can it be so bad that it physically wrecks the drive (since the vendor-supplied tool says the drive is damaged)? I do the partitioning with fdisk and use the same block size for the raid1 partitions (I check the exact block sizes with fdisk -lu).
    - Is it possible that the Linux kernel or mdadm, or something else, is not compatible with this exact brand of HDD and thrashes them?
    - Is it possible that it may be the shoebox? Should I try placing it somewhere else? It's under a shelf now, so humidity is not a problem either.
    - Is it possible that a normal PC case will solve my problem (I'm going to shoot myself then)? I will get a picture tomorrow.
    - Am I just simply cursed?

    Any help or speculation is greatly appreciated.

    Edit: The power strip is guarded against overvoltage.

    Edit 2: I moved house in between these 4 months, so the possibility of the cause being "dirty" electricity in both places is very low.

    Edit 3: I have checked the voltages in the BIOS (I couldn't borrow a multimeter), and they all seem correct; the biggest discrepancy is in the 12V rail, which is supplying 11.3. Should I be worried about that?

    Edit 4: I put my desktop PC's PSU into the server. The BIOS reported much more accurate voltage readings, and it has successfully rebuilt the raid1 array, which took some 3-4 hours, so I feel a little positive now. I will get a new PSU tomorrow to test with. Also, attaching the picture of the box (disregard the 3rd drive):


  • Cannot access localhost using IP

    - by Robert
    I have done a small web development project using Eclipse. It runs well in the browser with the URL localhost:8080/myproject/home.html. But when I try to access it from another machine (laptop, mobile, etc. on the same wifi) it cannot connect. After Googling for a while, I found out that I have to use the IP address instead of 'localhost', so I tried 10.0.0.4:8080/myproject/home.html, but that still does not work. In fact, I am unable to open that URL even on the machine where localhost:8080/myproject/home.html works fine. I also added a new inbound rule in the Control Panel firewall settings, allowing access to all ports for the TCP protocol, but I still cannot reach the application at 10.0.0.4:8080/myproject/home.html (either on the same machine or from the laptop and mobile). FYI, I am using Eclipse Indigo and Apache Tomcat 6.0, and the contents of my server.xml file are as below:

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <!-- [standard Apache License 2.0 header comment omitted] -->
    <!-- Note: A "Server" is not itself a "Container", so you may not define
         subcomponents such as "Valves" at this level.
         Documentation at /docs/config/server.html -->
    <Server port="8005" shutdown="SHUTDOWN">
      <!-- APR library loader. Documentation at /docs/apr.html -->
      <Listener SSLEngine="on" className="org.apache.catalina.core.AprLifecycleListener"/>
      <!-- Initialize Jasper prior to webapps being loaded. Documentation at /docs/jasper-howto.html -->
      <Listener className="org.apache.catalina.core.JasperListener"/>
      <!-- Prevent memory leaks due to use of particular java/javax APIs -->
      <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener"/>
      <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html -->
      <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener"/>
      <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener"/>

      <!-- Global JNDI resources. Documentation at /docs/jndi-resources-howto.html -->
      <GlobalNamingResources>
        <!-- Editable user database that can also be used by UserDatabaseRealm
             to authenticate users -->
        <Resource auth="Container"
                  description="User database that can be updated and saved"
                  factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
                  name="UserDatabase" pathname="conf/tomcat-users.xml"
                  type="org.apache.catalina.UserDatabase"/>
      </GlobalNamingResources>

      <!-- A "Service" is a collection of one or more "Connectors" that share
           a single "Container". Documentation at /docs/config/service.html -->
      <Service name="Catalina">
        <!-- [commented-out stock examples omitted: shared Executor thread pool,
             SSL HTTP/1.1 Connector on port 8443] -->

        <!-- Define a non-SSL HTTP/1.1 Connector on port 8080 -->
        <Connector port="8080" protocol="HTTP/1.1" address="10.0.0.4"
                   connectionTimeout="20000"
                   redirectPort="8443" />
        <!-- Define an AJP 1.3 Connector on port 8009 -->
        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443"/>

        <!-- An Engine represents the entry point (within Catalina) that
             processes every request. Documentation at /docs/config/engine.html -->
        <Engine defaultHost="localhost" name="Catalina">
          <!-- [commented-out stock examples omitted: Cluster, RequestDumperValve] -->

          <!-- This Realm uses the UserDatabase configured in the global JNDI
               resources under the key "UserDatabase". -->
          <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
                 resourceName="UserDatabase"/>

          <!-- Define the default virtual host.
               Note: XML Schema validation will not work with Xerces 2.2. -->
          <Host appBase="webapps" autoDeploy="true" name="localhost"
                unpackWARs="true" xmlNamespaceAware="false" xmlValidation="false">
            <!-- [commented-out stock examples omitted: SingleSignOn valve,
                 AccessLogValve] -->
            <Context docBase="myproject" path="/myproject" reloadable="true"
                     source="org.eclipse.jst.jee.server:myproject"/>
          </Host>
        </Engine>
      </Service>
    </Server>
    ```


  • MSAcpi_ThermalZoneTemperature class not showing actual temperature

    - by jchoudhury
    I want to fetch CPU performance data in real time, including temperature. I used the following code to get the CPU temperature:

    ```csharp
    try
    {
        ManagementObjectSearcher searcher = new ManagementObjectSearcher(
            "root\\WMI", "SELECT * FROM MSAcpi_ThermalZoneTemperature");

        foreach (ManagementObject queryObj in searcher.Get())
        {
            // WMI reports these values in tenths of a kelvin.
            double temp = Convert.ToDouble(queryObj["CurrentTemperature"].ToString());
            double temp_critical = Convert.ToDouble(queryObj["CriticalTripPoint"].ToString());
            double temp_cel = (temp / 10 - 273.15);
            double temp_critical_cel = temp_critical / 10 - 273.15;
            lblCurrentTemp.Text = temp_cel.ToString();
            lblCriticalTemp.Text = temp_critical_cel.ToString();
        }
    }
    catch (ManagementException e)
    {
        MessageBox.Show("An error occurred while querying for WMI data: " + e.Message);
    }
    ```

    But this code reports a temperature that is not correct. It usually shows 49.5-50.5 degrees centigrade, whereas OpenHardwareMonitor reports a CPU temperature of over 71 degrees centigrade, changing fraction by fraction over time. Is there anything I am missing in the code? I call the code above from a timer_click event at a 500 ms interval to refresh the temperature reading, but it always shows the same temperature from the beginning of execution. That is, if the application shows 49 degrees when it starts, then after a one-hour session it will still show 49 degrees. Where is the problem? Please help. Thanks in advance.
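    For reference, the arithmetic in the snippet above: the WMI class reports temperature in tenths of a kelvin, so a hypothetical raw reading of 3226 (an illustrative value, not from the original post) converts as:

    ```
    3226 / 10 = 322.6 K
    322.6 K - 273.15 = 49.45 °C
    ```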


  • Faster implementation of sum (for Codility test)

    - by Oscar Reyes
    How can the following simple implementation of sum be made faster?

    ```java
    private long sum(int[] a, int begin, int end) {
        if (a == null) {
            return 0;
        }
        long r = 0;
        for (int i = begin; i < end; i++) {
            r += a[i];
        }
        return r;
    }
    ```

    EDIT: Some background is in order. Reading the latest entry on Coding Horror, I came to this site: http://codility.com, which has an interesting programming test. Anyway, I got 60 out of 100 on my submission, and basically (I think) it is because of this implementation of sum, because the parts where I failed are the performance parts. I'm getting TIME_OUT_ERRORs. So, I was wondering if an optimization of the algorithm is possible. No built-in functions or assembly would be allowed. It may be done in C, C++, C#, Java, or pretty much any other language.

    EDIT: As usual, mmyers was right. I did profile the code and saw that most of the time was spent in that function, but I didn't understand why. So what I did was throw away my implementation and start with a new one. This time I've got an optimal solution [according to San Jacinto, O(n) -- see the comments to MSN below]. This time I've got 81% on Codility, which I think is good enough. The problem is that I didn't take 30 minutes but around 2 hours, but I guess that still leaves me as a good programmer, for I could work on the problem until I found an optimal solution. Here's my result. I never understood what those "combinations of..." are, nor how to test "extreme_first".
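    The underlying Codility task isn't shown, but for the classic equilibrium-index style of problem the usual O(n) fix is to stop re-summing ranges and keep one running total. A sketch under that assumption, not the asker's actual submission:

    ```java
    /**
     * Returns the indices i where the sum of a[0..i-1] equals the sum of
     * a[i+1..n-1]. One pass computes the total, one pass walks it: O(n)
     * overall, instead of the O(n^2) that calling sum(a, begin, end) for
     * every index produces.
     */
    static java.util.List<Integer> equilibriumIndices(int[] a) {
        java.util.List<Integer> result = new java.util.ArrayList<>();
        long total = 0;
        for (int x : a) total += x;           // sum of the whole array
        long left = 0;                        // sum of elements before index i
        for (int i = 0; i < a.length; i++) {
            long right = total - left - a[i]; // sum of elements after index i
            if (left == right) result.add(i);
            left += a[i];
        }
        return result;
    }
    ```

    The comparison at each index becomes constant work, which is exactly what removes the TIME_OUT_ERRORs on large inputs.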


  • Is it OK to set "Cache-Control: public" when sending “304 Not Modified” for images stored in the datastore?

    - by Emilien
    After asking a question about sending “304 Not Modified” for images stored in the Google App Engine datastore, I now have a question about Cache-Control. My app now sends Last-Modified and ETag, but by default GAE also sends Cache-Control: no-cache. According to this page: "The 'no-cache' directive, according to the RFC, tells the browser that it should revalidate with the server before serving the page from the cache. [...] In practice, IE and Firefox have started treating the no-cache directive as if it instructs the browser not to even cache the page."

    As I DO want browsers to cache the image, I've added the following line to my code:

    ```python
    self.response.headers['Cache-Control'] = "public"
    ```

    According to the same page as before: "The 'cache-control: public' directive [...] tells the browser and proxies [...] that the page may be cached. This is good for non-sensitive pages, as caching improves performance."

    The question is whether this could be harmful to the application in some way. Would it be best to send Cache-Control: must-revalidate to "force" the browser to revalidate? (I suppose that is the behavior that was originally the reason for sending Cache-Control: no-cache.) "This directive insists that the browser must revalidate the page against the server before serving it from cache. Note that it implicitly lets the browser cache the page."
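    To illustrate the header interplay being discussed -- cacheable, but revalidated before reuse -- here is the same logic sketched as a generic Java servlet. This is purely an illustration of the HTTP semantics, not the asker's code (the app in question is Python on GAE), and the fixed timestamp is a made-up stand-in for the image's datastore modification time:

    ```java
    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class ImageServlet extends HttpServlet {
        // Hypothetical fixed timestamp standing in for the stored image's mtime.
        private static final long LAST_MODIFIED = 1_300_000_000_000L;

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // "public" lets browsers and proxies keep a copy; "must-revalidate"
            // forces a conditional request before any cached copy is reused.
            resp.setHeader("Cache-Control", "public, must-revalidate");
            resp.setDateHeader("Last-Modified", LAST_MODIFIED);

            long ifModifiedSince = req.getDateHeader("If-Modified-Since"); // -1 if absent
            if (ifModifiedSince >= LAST_MODIFIED) {
                resp.setStatus(HttpServletResponse.SC_NOT_MODIFIED); // 304, empty body
                return;
            }
            resp.setContentType("image/png");
            // ... write the image bytes ...
        }
    }
    ```

    With these headers the browser keeps the bytes locally but pays only a cheap conditional GET per reuse, which is the trade-off the question is weighing against plain no-cache.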


  • winforms databinding best practices

    - by Kaiser Soze
    Demands / problems:

    - I would like to bind multiple properties of an entity to controls in a form, some of which are read-only from time to time (according to business logic).
    - When using an entity that implements INotifyPropertyChanged as the DataSource, every change notification refreshes all the controls bound to that data source. (This is easy to verify: bind two properties to two controls and raise a change notification for one of them; you will see that both properties are hit and re-evaluated.)
    - There should be user-friendly error notifications (the entity implements IDataErrorInfo), probably via an ErrorProvider.

    Using the entity as the DataSource of the controls leads to performance issues and makes life harder when it's time for a control to be read-only. I thought of creating some kind of wrapper that holds the entity and a specific property, so that each control would be bound to a different DataSource. Moreover, that wrapper could hold the read-only indicator for that property, so the control would be bound directly to that value. The wrapper could look like this:

    ```csharp
    interface IPropertyWrapper : INotifyPropertyChanged, IDataErrorInfo
    {
        object Value { get; set; }
        bool IsReadOnly { get; }
    }
    ```

    But this also means a different ErrorProvider for each property (property wrapper). I feel like I'm trying to reinvent the wheel... What is the 'proper' way of handling complex binding demands like these? Thanks ahead.

    Read the article

  • Rendering a UIWebView in drawRect with loadHTMLString

    - by Nick Weaver
    Hello there, I am having a problem with UIWebView. I'd like to render my own HTML code in it. When I add a webview as a subview and put in some HTML code, it renders just fine. The problem pops up when it gets down to optimized drawing of a table view cell with the drawRect method. Drawing UIView descendants works pretty well this way. It's even possible to load a URL with the loadRequest method, setting the delegate, conforming to the UIWebViewDelegate protocol and redrawing the table cell with setNeedsDisplay when webViewDidFinishLoad is called. That does show, but when it comes to loadHTMLString, nothing shows up -- only a white rect. For performance reasons I have to do the drawing in the drawRect method. Any ideas? Thanks in advance, Nick. Example snippet for the HTML code being loaded by the UIWebView:

        NSString *html = @"<html><head><title>My fancy webview</title></head><body style='background-color:green;'><p>It somehow seems<h2 style='color:black;'>this does not show up in drawRect</h2>!</p></body></html>";
        [webView loadHTMLString:html baseURL:nil];

    Snippet for the drawRect method:

        - (void)drawRect:(CGRect)aRect {
            CGContextRef context = UIGraphicsGetCurrentContext();
            [[webView layer] renderInContext:context];
        }

    Read the article

  • Why were namespaces removed from ECMAScript consideration?

    - by Bob
    Namespaces were once a consideration for ECMAScript (the old ECMAScript 4) but were taken out. As Brendan Eich says in this message: "One of the use-cases for namespaces in ES4 was early binding (use namespace intrinsic), both for performance and for programmer comprehension -- no chance of runtime name binding disagreeing with any earlier binding. But early binding in any dynamic code loading scenario like the web requires a prioritization or reservation mechanism to avoid early versus late binding conflicts. Plus, as some JS implementors have noted with concern, multiple open namespaces impose runtime cost unless an implementation works significantly harder. For these reasons, namespaces and early binding (like packages before them, this past April) must go." But I'm not sure I understand all of that. What exactly is a prioritization or reservation mechanism, and why would either of those be needed? Also, must early binding and namespaces go hand in hand? For some reason I can't wrap my head around the issues involved. Can anyone attempt a more fleshed-out explanation? Also, why would namespaces impose runtime costs? In my mind I can't help but see little conceptual difference between a namespace and a function using closures. For instance, Yahoo and Google both have YAHOO and google objects that "act like" namespaces in that they contain all of their public and private variables, functions, and objects within a single access point. So why, then, would a namespace be so significantly different in implementation? Maybe I just have a misconception as to what a namespace is exactly.

    Read the article

  • How to use Android's CacheManager?

    - by punnie
    I'm currently developing an Android application that fetches images using HTTP requests. It would be quite swell if I could cache those images in order to improve performance and bandwidth use. I came across the CacheManager class in the Android reference, but I don't really know how to use it, or what it really does. I already looked through this example, but I need some help understanding it: /core/java/android/webkit/gears/ApacheHttpRequestAndroid.java Also, the reference states: "Network requests are provided to this component and if they can not be resolved by the cache, the HTTP headers are attached, as appropriate, to the request for revalidation of content." I'm not sure what this means or how it would work for me, since CacheManager's getCacheFile accepts only a String URL and a Map containing the headers; I am not sure what the attachment mentioned refers to. An explanation or a simple code example would really make my day. Thanks! Update Here's what I have right now. I am clearly doing it wrong, I just don't know where:

        public static Bitmap getRemoteImage(String imageUrl) {
            URL aURL = null;
            URLConnection conn = null;
            Bitmap bmp = null;
            CacheResult cache_result = CacheManager.getCacheFile(imageUrl, new HashMap());
            if (cache_result == null) {
                try {
                    aURL = new URL(imageUrl);
                    conn = aURL.openConnection();
                    conn.connect();
                    InputStream is = conn.getInputStream();
                    cache_result = new CacheManager.CacheResult();
                    copyStream(is, cache_result.getOutputStream());
                    CacheManager.saveCacheFile(imageUrl, cache_result);
                } catch (Exception e) {
                    return null;
                }
            }
            bmp = BitmapFactory.decodeStream(cache_result.getInputStream());
            return bmp;
        }

    Read the article

  • Putting WPF Controls into canvas with Visuals

    - by Mikhail
    I am writing a WPF chart and use Visuals for performance. The code looks like this:

        public class DrawingCanvas2 : Canvas
        {
            private List<Visual> _visuals = new List<Visual>();
            private List<Visual> _hits = new List<Visual>();

            protected override Visual GetVisualChild( int index )
            {
                return _visuals[index];
            }

            protected override int VisualChildrenCount
            {
                get { return _visuals.Count; }
            }

            public void AddVisual( Visual visual )
            {
                _visuals.Add( visual );
                base.AddVisualChild( visual );
                base.AddLogicalChild( visual );
            }
        }

    Besides DrawingVisual elements (lines, text), I need a ComboBox in the chart. So I tried this:

        public DrawingCanvas2()
        {
            ComboBox box = new ComboBox();
            AddVisual( box );
            box.Width = 100;
            box.Height = 30;
            Canvas.SetLeft( box, 10 );
            Canvas.SetTop( box, 10 );
        }

    but it does not work; no ComboBox is displayed. What am I missing?
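    One plausible explanation (an assumption, not a verified answer): a ComboBox is a UIElement, and unlike a bare DrawingVisual it draws nothing until layout has run on it. Because this canvas bypasses the normal Children collection, nothing ever measures or arranges the control. A minimal sketch of that fix, with AddUIElement being an illustrative name rather than part of the original class:

        // Sketch: explicitly run layout on UIElements added via AddVisualChild.
        public void AddUIElement(UIElement element, double left, double top)
        {
            _visuals.Add(element);
            AddVisualChild(element);
            AddLogicalChild(element);

            // Without these two calls a UIElement stays at zero size and
            // renders nothing, which matches the symptom described above.
            element.Measure(new Size(double.PositiveInfinity, double.PositiveInfinity));
            element.Arrange(new Rect(new Point(left, top), element.DesiredSize));
        }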

    Read the article

  • Self hosted WCF ServiceHost/WebServiceHost Concurrency/Performance Design Options (.NET 3.5)

    - by Kyle
    So I'll be providing a few functions via a self-hosted (in a Windows Service) WebServiceHost (not sure how to process HTTP GET/POST with a plain ServiceHost), one of which may be called a large amount of the time. This function will also rely on a connection in the appdomain (hosted by the Windows Service so it can stay alive over multiple requests). I have the following concerns and would be oh so thankful for any input/thoughts/comments:

    1. Concurrent access - how does the WebServiceHost handle a bunch of concurrent requests? Are they queued and processed sequentially, or are new instances of the contracts automagically created?
    2. WebServiceHost - WindowsService communication - I need some form of communication from the WebServiceHost to the hosting Windows Service for things like requesting a new session if one does not exist. Perhaps implementing a class which extends the WebServiceHost with events the Windows Service subscribes to... (unless there is another way to set off an event in the Windows Service when a request is made...)
    3. Multiple WebServiceHosts or Contracts - would it give any real performance gain to run multiple WebServiceHost instances in different threads (one per endpoint, perhaps)? A better understanding of the first point would probably help here.
    4. WSDL - I'm not sure why (probably just need to do more reading), but I don't know how to get the WebServiceHost base endpoint to respond with a WSDL document describing the available contract. Not required, as all the operations will be done via GET requests which will not likely change, but it would be nice to have...

    That's about it for the moment ;) I've been reading a lot on WCF and wish I'd gotten into it long ago, but I'm definitely still learning.
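    On the first point, a hedged sketch: with WebServiceHost, queuing versus parallel dispatch is governed by the standard ServiceBehavior attributes rather than anything HTTP-specific, so something like the following (illustrative names, not the poster's actual contract) yields one instance per request with parallel processing:

        using System.ServiceModel;
        using System.ServiceModel.Web;

        [ServiceContract]
        public interface IReportService
        {
            [OperationContract]
            [WebGet(UriTemplate = "status")]     // plain HTTP GET endpoint
            string GetStatus();
        }

        // PerCall: a fresh instance per request, so no shared instance state.
        // ConcurrencyMode.Multiple: requests are dispatched in parallel,
        // not queued behind one another.
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                         ConcurrencyMode = ConcurrencyMode.Multiple)]
        public class ReportService : IReportService
        {
            public string GetStatus()
            {
                return "ok";
            }
        }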

    Read the article

  • Write-only collections in MongoDB

    - by rcoder
    I'm currently using MongoDB to record application logs, and while I'm quite happy with both the performance and with being able to dump arbitrary structured data into log records, I'm troubled by the mutability of log records once stored. In a traditional database, I would structure the grants for my log tables such that the application user had INSERT and SELECT privileges, but not UPDATE or DELETE. Similarly, in CouchDB, I could write an update validator function that rejected all attempts to modify an existing document. However, I've been unable to find a way to restrict operations on a MongoDB database or collection beyond the three access levels (no access, read-only, "god mode") documented in the security topic on the MongoDB wiki. Has anyone else deployed MongoDB as a document store in a setting where immutability (or at least change tracking) for documents was a requirement? What tricks or techniques did you use to ensure that poorly-written or malicious application code could not modify or destroy existing log records? Do I need to wrap my MongoDB logging in a service layer that enforces the write-only policy, or can I use some combination of configuration, query hacking, and replication to ensure a consistent, auditable record is maintained?
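    As a hedged illustration of the service-layer option from the last sentence (not a MongoDB feature, just an application-side guard): if the rest of the code only ever receives an interface like the one below, there is simply no update or delete to call. All names are hypothetical.

        using System;
        using System.Collections.Generic;

        // The only handle application code gets; the concrete class owns the
        // MongoDB connection and deliberately exposes no update/remove methods.
        public interface IAppendOnlyLog
        {
            void Append(string collection, IDictionary<string, object> record);

            IEnumerable<IDictionary<string, object>> Query(
                string collection, IDictionary<string, object> criteria);
        }

    This only addresses the "poorly-written" half of the question, of course; code holding raw database credentials can still bypass it.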

    Read the article

  • Stresstesting ASP.NET/IIS with WCAT

    - by MartinHN
    I'm trying to set up a stress/load test using the WCAT toolkit included in the IIS Resources. Using LogParser, I've processed a UBR file with configuration. It looks something like this:

        [Configuration]
        NumClientMachines: 1      # number of distinct client machines to use
        NumClientThreads:  100    # number of threads per machine
        AsynchronousWait:  TRUE   # asynchronous wait for think and delay
        Duration:          5m     # length of experiment (m = minutes, s = seconds)
        MaxRecvBuffer:     8192K  # suggested maximum received buffer
        ThinkTime:         0s     # maximum think-time before next request
        WarmupTime:        5s     # time to warm up before taking statistics
        CooldownTime:      6s     # time to cool down at the end of the experiment

        [Performance]

        [Script]
        SET RequestHeader = "Accept: */*\r\n"
        APP RequestHeader = "Accept-Language: en-us\r\n"
        APP RequestHeader = "User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; .NET CLR 1.0.3705)\r\n"
        APP RequestHeader = "Host: %HOST%\r\n"

        NEW TRANSACTION
            classId = 1
            NEW REQUEST HTTP
                ResponseStatusCode = 200
                Weight = 45117
                verb = "GET"
                URL = "http://Url1.com"

        NEW TRANSACTION
            classId = 3
            NEW REQUEST HTTP
                ResponseStatusCode = 200
                Weight = 13662
                verb = "GET"
                URL = "http://Url1.com/test.aspx"

    Does it look OK? I execute the controller with this command:

        wcctl -z StressTest.ubr -a localhost

    The client(s) are executed like this:

        wcclient localhost

    When the client is executed, I get this error:

        main client thread Connect Attempt 0 Failed. Error = 10061

    Has anyone in this world ever used WCAT?

    Read the article

  • problem with radgrid in pageindexchange event

    - by user203127
    Hi, I have a RadGrid on my page. When I turn off view state, I get nothing in the PageIndexChanged event when clicking to the next page -- simply a blank page. But when I turn on view state, I get data on the next pages. Is there any way to get the data? I can't turn on view state due to a performance issue. Please see the code below for reference.

    .aspx:

        <telerik:RadGrid ID="RadGrid1" OnSortCommand="RadGrid1_SortCommand"
            OnPageIndexChanged="RadGrid1_PageIndexChanged" AllowSorting="True"
            PageSize="20" ShowGroupPanel="True" AllowPaging="True"
            AllowMultiRowSelection="True" AllowFilteringByColumn="true"
            AutoGenerateColumns="false" EnableViewState="false" runat="server"
            GridLines="None" OnItemUpdated="RadGrid1_ItemUpdated"
            OnDataBound="RadGrid1_DataBound">

    aspx.cs:

        public partial class _Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                LoadData();
            }

            private void LoadData()
            {
                SqlConnection SqlConn = new SqlConnection("uid=tempuser;password=tempuser;data source=USWASHL10015\\SQLEXPRESS;database=CCOM;");
                SqlCommand cmd = new SqlCommand();
                cmd.Connection = SqlConn;
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.CommandText = "usp_testing";
                SqlDataAdapter da = new SqlDataAdapter(cmd);
                DataSet ds = new DataSet();
                da.Fill(ds);
                RadGrid1.DataSource = ds;
                RadGrid1.DataBind();
                //RadGrid1.ClientSettings.AllowDragToGroup = true;
            }

            protected void RadGrid1_PageIndexChanged(object source, Telerik.Web.UI.GridPageChangedEventArgs e)
            {
                //RadGrid1.Rebind();
                LoadData();
            }
        }
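    A hedged guess at the usual fix for this symptom (an assumption, not a verified answer): with ViewState off, RadGrid has to be handed its data again on every postback, and Telerik's advanced binding via the NeedDataSource event does exactly that:

        // Sketch: bind through NeedDataSource instead of Page_Load + DataBind().
        // The grid raises this event whenever it needs data (paging, sorting,
        // and so on), so the page can work without ViewState.
        protected void RadGrid1_NeedDataSource(object sender, GridNeedDataSourceEventArgs e)
        {
            RadGrid1.DataSource = GetDataSet();   // same DataSet LoadData() built;
                                                  // GetDataSet is an illustrative name
            // Note: no RadGrid1.DataBind() here -- the grid binds itself afterwards.
        }

    This would be wired up with OnNeedDataSource="RadGrid1_NeedDataSource" in the markup, replacing the explicit binding in Page_Load.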

    Read the article

  • FileNotFoundException when running WiX CustomAction with COMAdmin interop

    - by dabcabc
    I am trying to create a WiX custom action which will allow me to shut down and clear out a COM+ package as part of an upgrade installation, or create and configure a new COM+ package as part of the initial installation. I previously had this running as a CustomAction within a standard Visual Studio MSI, but that only allows the custom action to execute after the files have been copied -- which will fail, as the package will still be running. The COMAdmin.dll has been added as a reference to the CustomAction project and is set CopyLocal=true. In the bin folder for the custom action project, Interop.COMAdmin.dll is present. The answer to this question seems to suggest that it should work. I am getting the following exception in the MSI log when trying to install:

        MSI (s) (C4:04) [10:40:34:205]: Invoking remote custom action. DLL: C:\WINDOWS\Installer\MSI119.tmp, Entrypoint: BeforeInstall
        SFXCA: Extracting custom action to temporary directory: C:\WINDOWS\Installer\MSI119.tmp-\
        SFXCA: Binding to CLR version v2.0.50727
        Calling custom action MyCustomAction!MyCustomAction.CustomActions.BeforeInstall
        Exception thrown by custom action:
        System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.IO.FileNotFoundException: Could not load file or assembly 'Interop.COMAdmin, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified.
        File name: 'Interop.COMAdmin, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'
            at MyCustomAction.CustomActions.BeforeInstall(Session session)
        WRN: Assembly binding logging is turned OFF.
        To enable assembly bind failure logging, set the registry value (DWORD) to 1.
        Note: There is some performance penalty associated with assembly bind failure logging. To turn this feature off, remove the registry value .
            --- End of inner exception stack trace ---
            at System.RuntimeMethodHandle._InvokeMethodFast(Object target, Object arguments, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeTypeHandle typeOwner)
            at System.RuntimeMethodHandle.InvokeMethodFast(Object target, Object arguments, Signature sig, MethodAttributes methodAttributes, RuntimeTypeHandle typeOwner)
            at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object parameters, CultureInfo culture, Boolean skipVisibilityChecks)
            at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object parameters, CultureInfo culture)
            at Microsoft.Deployment.WindowsInstaller.CustomActionProxy.InvokeCustomAction(Int32 sessionHandle, String entryPoint, IntPtr remotingDelegatePtr)

    Read the article

  • Javascript object dependencies

    - by Anurag
    In complex client-side projects, the number of JavaScript files can get very large. For performance reasons, however, it's good to concatenate these files and compress the result for sending over the wire. I am having problems concatenating them, as the dependencies are sometimes included after they are needed. For instance, there are two files:

        /modules/Module.js   <requires Core.js>
        /modules/core/Core.js

    The directories are recursively traversed, and Module.js gets included before Core.js, which causes errors. This is just a simple example; dependencies can span directories, and there could be other complex cases. There are no circular dependencies, though. The JavaScript structure I follow is similar to Java packages, where each file defines a single object (I'm using MooTools, but that's irrelevant). The structure of each JavaScript file and its dependencies is always consistent:

        // Module.js
        var Module = new Class({
            Implements: Core,
            ...
        });

        // Core.js
        var Core = new Class({
            ...
        });

    What practices do you usually follow to handle dependencies in projects where the number of JavaScript files is huge and there are inter-file dependencies?
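    A hedged sketch of the usual remedy, in C# only because a build step can be written in anything: record each file's requirements in a manifest and topologically sort before concatenating, so Core.js always lands ahead of Module.js. The names and the manifest shape are invented, and the sketch assumes the manifest is complete and cycle-free (the post says there are no circular dependencies).

        using System;
        using System.Collections.Generic;

        static class DependencyOrder
        {
            // requires["Module.js"] = { "Core.js" }, etc.; assumes every
            // dependency also appears as a key and there are no cycles.
            public static List<string> Sort(Dictionary<string, string[]> requires)
            {
                var ordered = new List<string>();
                var visited = new HashSet<string>();

                Action<string> visit = null;
                visit = file =>
                {
                    if (!visited.Add(file))
                        return;                 // already emitted
                    foreach (var dep in requires[file])
                        visit(dep);             // dependencies come first
                    ordered.Add(file);
                };

                foreach (var file in requires.Keys)
                    visit(file);

                return ordered;                 // concatenate in this order
            }
        }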

    Read the article

  • MP3 Decoding on Android

    - by Rob Szumlakowski
    Hi. We're implementing a program for Android phones that plays audio streamed from the internet. Here's approximately what we do:

    1. Download a custom encrypted format.
    2. Decrypt to get chunks of regular MP3 data.
    3. Decode the MP3 data to raw PCM data in a memory buffer.
    4. Pipe the raw PCM data to an AudioTrack.

    Our target devices so far are the Droid and the Nexus One. Everything works great on the Nexus One, but the MP3 decode is too slow on the Droid: the audio playback starts to skip if we put the Droid under load. We are not permitted to decode the MP3 data to the SD card, but I know that's not our problem anyway. We didn't write our own MP3 decoder, but used MPADEC (http://sourceforge.net/projects/mpadec/). It's free and was easy to integrate with our program. We compile it with the NDK. After exhaustive analysis with various profiling tools, we're convinced that it's this decoder that is falling behind. Here are the options we're thinking about:

    1. Find another MP3 decoder that we can compile with the Android NDK. This decoder would have to be optimized to run on mobile ARM devices, perhaps using integer-only math or other optimizations to increase performance.
    2. Since the built-in Android MediaPlayer service will take URLs, we might be able to implement a tiny HTTP server in our program and serve the MediaPlayer with the decrypted MP3s. That way we can take advantage of the built-in MP3 decoder.
    3. Get access to the built-in MP3 decoder through the NDK. I don't know if this is possible.

    Does anyone have any suggestions on what we can do to speed up our MP3 decoding? -- Rob Sz

    Read the article

  • Dealing with huge SQL resultset

    - by Dave McClelland
    I am working with a rather large MySQL database (several million rows) with a column storing blob images. The application attempts to grab a subset of the images and run some processing algorithms on them. The problem I'm running into is that, due to the rather large dataset, the resultset my query returns is too large to store in memory. For the time being, I have changed the query to not return the images; while iterating over the resultset, I run another select which grabs the individual image that relates to the current record. This works, but the tens of thousands of extra queries have resulted in an unacceptable performance decrease. My next idea is to limit the original query to 10,000 results or so, and then keep querying over spans of 10,000 rows. This seems like a middle-of-the-road compromise between the two approaches. I feel there is probably a better solution that I am not aware of. Is there another way to keep only portions of a gigantic resultset in memory at a time? Cheers, Dave McClelland
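    One more option, hedged since the post doesn't show the data-access code and assumes plain ADO.NET with the MySQL connector: a forward-only reader opened with CommandBehavior.SequentialAccess lets each blob be pulled in chunks, so only the current buffer has to fit in memory -- no paging loop and no per-row follow-up query. Table, column, and helper names below are invented, and `connection` is assumed to be an open MySqlConnection.

        using System.Data;
        using MySql.Data.MySqlClient;

        // Sketch: stream blobs row by row instead of materializing the resultset.
        using (var cmd = new MySqlCommand("SELECT id, image FROM photos", connection))
        using (var reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
        {
            var buffer = new byte[8192];
            while (reader.Read())
            {
                long id = reader.GetInt64(0);   // columns must be read in order
                long offset = 0;
                long read;
                // GetBytes pulls the image in chunks; nothing larger than the
                // buffer is ever held for this column.
                while ((read = reader.GetBytes(1, offset, buffer, 0, buffer.Length)) > 0)
                {
                    ProcessChunk(id, buffer, (int)read);   // hypothetical hook
                    offset += read;
                }
            }
        }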

    Read the article

  • IDataRecord.IsDBNull causes an System.OverflowException (Arithmetic Overflow)

    - by Ciddan
    Hi! I have an OdbcDataReader that gets data from a database and returns a set of records. The code that executes the query looks as follows:

        OdbcDataReader reader = command.ExecuteReader();
        while (reader.Read())
        {
            yield return reader.AsMovexProduct();
        }

    The method returns an IEnumerable of a custom type (MovexProduct). The conversion from an IDataRecord to my custom type MovexProduct happens in an extension method that looks like this (abbreviated):

        public static MovexProduct AsMovexProduct(this IDataRecord record)
        {
            var movexProduct = new MovexProduct
            {
                ItemNumber = record.GetString(0).Trim(),
                Name = record.GetString(1).Trim(),
                Category = record.GetString(2).Trim(),
                ItemType = record.GetString(3).Trim()
            };

            if (!record.IsDBNull(4))
                movexProduct.Status1 = int.Parse(record.GetString(4).Trim());

            // Additional properties with IsDBNull checks follow here.
            return movexProduct;
        }

    As soon as I hit the if (!record.IsDBNull(4)) check, I get an OverflowException with the message "Arithmetic operation resulted in an overflow."

    StackTrace:

        System.OverflowException was unhandled by user code
        Message=Arithmetic operation resulted in an overflow.
        Source=System.Data
        StackTrace:
            at System.Data.Odbc.OdbcDataReader.GetSqlType(Int32 i)
            at System.Data.Odbc.OdbcDataReader.GetValue(Int32 i)
            at System.Data.Odbc.OdbcDataReader.IsDBNull(Int32 i)
            at JulaAil.DataService.Movex.Data.ExtensionMethods.AsMovexProduct(IDataRecord record)
            [...]

    I've never encountered this problem before and cannot figure out why I'm getting it. I have verified that the record exists, that it contains data, and that the indexes I provide are correct. I should also mention that I get the same exception if I change the if-statement to this: if (record.GetString(4) != null). What does work is encapsulating the property assignment in a try {} catch (NullReferenceException) {} block -- but that can lead to performance loss (can it not?). I am running the x64 version of Visual Studio and using a 64-bit ODBC driver. Has anyone else come across this? Any suggestions as to how I could solve or get around this issue? Many thanks!
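    A hedged workaround sketch rather than a root-cause fix: since the stack trace shows the overflow starting inside the driver's GetSqlType lookup, the try/catch trick the poster already found may be the only reliable containment -- but it can at least be centralized in one extension method so every call site stays clean. Catching OverflowException (the exception actually reported) keeps the filter narrow; all names here are illustrative, and `using System; using System.Data;` are assumed.

        public static class DataRecordExtensions
        {
            // Illustrative helper; confines the driver quirk to one place and
            // treats an unreadable column as NULL, mirroring the poster's own
            // workaround.
            public static string GetStringOrNull(this IDataRecord record, int ordinal)
            {
                try
                {
                    return record.IsDBNull(ordinal) ? null : record.GetString(ordinal);
                }
                catch (OverflowException)
                {
                    return null;
                }
            }
        }

    Status1 would then be assigned only when GetStringOrNull(4) returns a non-null value. This does not remove the per-exception cost when the fault actually fires, so it remains a stopgap until the driver issue itself is resolved.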

    Read the article

  • Windows services with windows forms in the same process

    - by andrecarlucci
    Hello, I have a C# application that runs as a Windows service, controlling socket connections and other things. There is also a Windows Forms application to control and configure this service (a systray app with start, stop, and a form with configuration parameters). I'm using .NET Remoting for the IPC, and that was fine, but now I want to show some real traffic and other reports, and Remoting will not meet my performance requirements. So I want to combine both applications into one. Here is the problem: when I started the form from the Windows service, nothing happened. Googling around, I found that I have to right-click the service, go to Log On, and check the "Allow service to interact with desktop" option. Since I don't want to ask my users to do that, I found some more code to set this option in the registry during installation. The problem is that even after setting this option, it doesn't work: I have to open the Log On options of the service (the box is checked), uncheck it and check it again. So, how can I solve this? What is the best way to have a Windows service with a systray control in the same process, available to any user logging in? UPDATE: Thanks for the comments so far, guys. I agree it is better to use IPC, and I know it is bad to mix Windows services and user interfaces. Even so, I want to know how to do it.
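    A hedged aside on the original performance complaint rather than the merge itself: if .NET Remoting is the bottleneck, System.IO.Pipes (new in .NET 3.5) offers a much cheaper local channel between the service and a separate tray app, sidestepping the interactive-service problem entirely (on Vista and later, session 0 isolation blocks service UI anyway). A minimal one-message sketch with an invented pipe name:

        using System.IO;
        using System.IO.Pipes;

        // Service side: push a stats line to whichever tray app connects.
        using (var server = new NamedPipeServerStream("MyServiceStats"))
        {
            server.WaitForConnection();
            using (var writer = new StreamWriter(server))
            {
                writer.AutoFlush = true;
                writer.WriteLine("traffic: 1234 msgs/s");
            }
        }

    The tray app would connect with new NamedPipeClientStream(".", "MyServiceStats") and read lines; a real implementation would loop and handle reconnects.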

    Read the article

< Previous Page | 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072  | Next Page >