Search Results

  • SAPPHIRE HD 7770 no audio on HDMI TV display

    - by zeroconf
    I have a SAPPHIRE HD 7770 and cannot get audio to work over HDMI. http://www.sapphiretech.com/presentation/product/?cid=1&gid=3&sgid=1159&lid=1&pid=1452&leg=0 I use Ubuntu 12.04 LTS 64-bit with all current updates. I tried adding this to /etc/default/grub: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash radeon.audio=1" ... it didn't help. That is probably because I use the proprietary driver - radeon.audio seems to apply to the open-source driver. I use the driver that jockey-gtk (Additional Drivers) offered me: ATI/AMD proprietary FGLRX graphics driver <---- I installed that one; ATI/AMD proprietary FGLRX graphics driver (post-release update). I installed the first one because installing the second version failed. Everything went fine, but there is no sound on the TV display over HDMI. Even the GNOME sound mixer doesn't show an HDMI choice. I am using a 32" Samsung B530 LCD TV - http://www.lcdbesttv.com/2010/02/samsung-b530-series-lcd-tv/ - and an Asus P8Z77-M motherboard - http://www.asus.com/Motherboards/Intel_Socket_1155/P8Z77M/ - which also has integrated HDMI. When I plug the HDMI cord into that port, the GNOME sound mixer does show HDMI audio, but it doesn't work either. I have set the BIOS to use the SAPPHIRE HD 7770 in the PCIe slot. My lspci output:
    00:00.0 Host bridge: Intel Corporation Ivy Bridge DRAM Controller (rev 09)
    00:01.0 PCI bridge: Intel Corporation Ivy Bridge PCI Express Root Port (rev 09)
    00:02.0 Display controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09)
    00:14.0 USB controller: Intel Corporation Panther Point USB xHCI Host Controller (rev 04)
    00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04)
    00:1a.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04)
    00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
    00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4)
    00:1c.5 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 6 (rev c4)
    00:1c.6 PCI bridge: Intel Corporation 82801 PCI Bridge (rev c4)
    00:1d.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04)
    00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04)
    00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA Controller [AHCI mode] (rev 04)
    00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04)
    01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Device 683d
    01:00.1 Audio device: Advanced Micro Devices [AMD] nee ATI Device aab0
    03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 09)
    04:00.0 PCI bridge: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge (rev 03)
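
    A few generic checks that can help narrow this down (a sketch, not a known fix for this card; the card/device numbers are assumptions to be read off aplay -l). Note that radeon.audio=1 only affects the open-source radeon driver, so with fglrx it has no effect:

        aplay -l                          # ALSA playback devices; look for an HDMI output on the Radeon (e.g. card 1, device 3)
        speaker-test -D plughw:1,3 -c 2   # test that HDMI device directly, bypassing PulseAudio
        pactl list short sinks            # check whether PulseAudio exposes an HDMI sink at all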

    Read the article

  • Passing a parameter so that it cannot be changed – C#

    - by nmarun
    I read this requirement of not allowing a user to change the value of a property passed as a parameter to a method. In C++, as far as I could recall (it’s been over 10 yrs, so I had to refresh memory), you can pass ‘const’ to a function parameter and this ensures that the parameter cannot be changed inside the scope of the function. There’s no such direct way of doing this in C#, but that does not mean it cannot be done!! Ok, so this ‘not-so-direct’ technique depends on the type of the parameter – a simple property or a collection. Parameter as a simple property: This is quite easy (and you might have guessed it already). Bulent Ozkir clearly explains how this can be done here. Parameter as a collection property: Obviously the above does not work if the parameter is a collection of some type. Let’s dig-in. Suppose I need to create a collection of type KeyTitle as defined below. 1: public class KeyTitle 2: { 3: public int Key { get; set; } 4: public string Title { get; set; } 5: } My class is declared as below: 1: public class Class1 2: { 3: public Class1() 4: { 5: MyKeyTitleList = new List<KeyTitle>(); 6: } 7: 8: public List<KeyTitle> MyKeyTitleList { get; set; } 9: public ReadOnlyCollection<KeyTitle> ReadonlyKeyTitleCollection 10: { 11: // .AsReadOnly creates a ReadOnlyCollection<> type 12: get { return MyKeyTitleList.AsReadOnly(); } 13: } 14: } See the .AsReadOnly() method used in the second property? As MSDN says it: “Returns a read-only IList<T> wrapper for the current collection.” Knowing this, I can implement my code as: 1: public static void Main() 2: { 3: Class1 class1 = new Class1(); 4: class1.MyKeyTitleList.Add(new KeyTitle { Key = 1, Title = "abc" }); 5: class1.MyKeyTitleList.Add(new KeyTitle { Key = 2, Title = "def" }); 6: class1.MyKeyTitleList.Add(new KeyTitle { Key = 3, Title = "ghi" }); 7: class1.MyKeyTitleList.Add(new KeyTitle { Key = 4, Title = "jkl" }); 8:  9: TryToModifyCollection(class1.MyKeyTitleList.AsReadOnly()); 10:  11: Console.ReadLine(); 12: } 13:  14: private static void TryToModifyCollection(ReadOnlyCollection<KeyTitle> readOnlyCollection) 15: { 16: // can only read 17: for (int i = 0; i < readOnlyCollection.Count; i++) 18: { 19: Console.WriteLine("{0} - {1}", readOnlyCollection[i].Key, readOnlyCollection[i].Title); 20: } 21: // Add() - not allowed 22: // even the indexer does not have a setter 23: } The output is as expected: The below image shows two things. In the first line, I’ve tried to access an element in my read-only collection through an indexer. It shows that the ReadOnlyCollection<> does not have a setter on the indexer. The second line tells that there’s no ‘Add()’ method for this type of collection. The capture below shows there’s no ‘Remove()’ method either, there-by eliminating all ways of modifying a collection. Mission accomplished… right? Now, even if you have a collection of different type, all you need to do is to somehow cast (used loosely) it to a List<> and then do a .AsReadOnly() to get a ReadOnlyCollection of your custom collection type. As an example, if you have an IDictionary<int, string>, you can create a List<T> of this type with a wrapper class (KeyTitle in our case). 1: public IDictionary<int, string> MyDictionary { get; set; } 2:  3: public ReadOnlyCollection<KeyTitle> ReadonlyDictionary 4: { 5: get 6: { 7: return (from item in MyDictionary 8: select new KeyTitle 9: { 10: Key = item.Key, 11: Title = item.Value, 12: }).ToList().AsReadOnly(); 13: } 14: } Cool huh? 
Just one thing you need to know about the .AsReadOnly() method is that the only way to modify your ReadOnlyCollection<> is to modify the original collection. So doing: 1: public static void Main() 2: { 3: Class1 class1 = new Class1(); 4: class1.MyKeyTitleList.Add(new KeyTitle { Key = 1, Title = "abc" }); 5: class1.MyKeyTitleList.Add(new KeyTitle { Key = 2, Title = "def" }); 6: class1.MyKeyTitleList.Add(new KeyTitle { Key = 3, Title = "ghi" }); 7: class1.MyKeyTitleList.Add(new KeyTitle { Key = 4, Title = "jkl" }); 8: TryToModifyCollection(class1.MyKeyTitleList.AsReadOnly()); 9:  10: Console.WriteLine(); 11:  12: class1.MyKeyTitleList.Add(new KeyTitle { Key = 5, Title = "mno" }); 13: class1.MyKeyTitleList[2] = new KeyTitle{Key = 3, Title = "GHI"}; 14: TryToModifyCollection(class1.MyKeyTitleList.AsReadOnly()); 15:  16: Console.ReadLine(); 17: } Gives me the output of: See that the second element’s Title is changed to upper-case and the fifth element also gets displayed even though we’re still looping through the same ReadOnlyCollection<KeyTitle>. Verdict: Now you know of a way to implement ‘Method(const param1)’ in your code!

    Read the article

  • Ubuntu 12.10 graphics does not work properly

    - by madox2
    Graphics on Ubuntu 12.10 do not work as well for me as on 12.04. After the upgrade I installed the driver for my Nvidia GTS 450 graphics card: sudo apt-add-repository ppa:ubuntu-x-swat/x-updates; sudo apt-get update; sudo apt-get install nvidia-current. But sometimes I see slight lag in videos played in VLC, some of the desktop and window effects are lagging, sometimes I see an indescribable mess of pixels on my screen at the start of Ubuntu, and so on. I feel the difference between 12.04 and 12.10 is in favour of the former version. Does anyone know what's wrong or what I am missing? Here is the output of lspci -k:
    00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
    00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09)
        Kernel driver in use: pcieport
        Kernel modules: shpchp
    00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04)
        Subsystem: Giga-byte Technology Device 1c3a
        Kernel driver in use: mei
        Kernel modules: mei
    00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05)
        Subsystem: Giga-byte Technology Device 5006
        Kernel driver in use: ehci_hcd
    00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05)
        Subsystem: Giga-byte Technology Device a000
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd-hda-intel
    00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5)
        Kernel driver in use: pcieport
        Kernel modules: shpchp
    00:1c.4 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5 (rev b5)
        Kernel driver in use: pcieport
        Kernel modules: shpchp
    00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05)
        Subsystem: Giga-byte Technology Device 5006
        Kernel driver in use: ehci_hcd
    00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a5)
    00:1f.0 ISA bridge: Intel Corporation H61 Express Chipset Family LPC Controller (rev 05)
        Subsystem: Giga-byte Technology Device 5001
        Kernel driver in use: lpc_ich
        Kernel modules: lpc_ich
    00:1f.2 IDE interface: Intel Corporation 6 Series/C200 Series Chipset Family 4 port SATA IDE Controller (rev 05)
        Subsystem: Giga-byte Technology Device b002
        Kernel driver in use: ata_piix
    00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05)
        Subsystem: Giga-byte Technology Device 5001
        Kernel modules: i2c-i801
    00:1f.5 IDE interface: Intel Corporation 6 Series/C200 Series Chipset Family 2 port SATA IDE Controller (rev 05)
        Subsystem: Giga-byte Technology Device b002
        Kernel driver in use: ata_piix
    01:00.0 VGA compatible controller: NVIDIA Corporation GF116 [GeForce GTS 450] (rev a1)
        Subsystem: CardExpert Technology Device 0401
        Kernel driver in use: nvidia
        Kernel modules: nvidia_current, nouveau, nvidiafb
    01:00.1 Audio device: NVIDIA Corporation GF116 High Definition Audio Controller (rev a1)
        Subsystem: CardExpert Technology Device 0401
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd-hda-intel
    03:00.0 Ethernet controller: Atheros Communications Inc. AR8151 v2.0 Gigabit Ethernet (rev c0)
        Subsystem: Giga-byte Technology Device e000
        Kernel driver in use: atl1c
        Kernel modules: atl1c
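
    A quick way to confirm which driver is actually rendering before digging further (a sketch; it assumes the ubuntu-x-swat PPA setup described above, and glxinfo comes from the mesa-utils package):

        lsmod | grep -e nvidia -e nouveau          # which kernel module is loaded
        glxinfo | grep "OpenGL renderer"           # should name the GeForce GTS 450, not llvmpipe / software rasterizer
        sudo apt-get install --reinstall nvidia-current && sudo nvidia-xconfig   # regenerate xorg.conf if the renderer looks wrong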

    Read the article

  • Learning Issued Token in Federated Service

    - by Lijo
    I would like to learn about federated WCF services. I have the following on my system: • Windows XP • Visual Studio 2010 Express • SQL Server 2008 Express. Is it possible to create a federated service sample with this infrastructure? Is there an article for that? UPDATE - Federation: http://msdn.microsoft.com/en-us/library/ms730908.aspx Federation sample: http://msdn.microsoft.com/en-us/library/aa355045.aspx

    Read the article

  • Help: Visual Basic Setup Problems (26 replies)

    I picked up a book to learn Visual Basic but cannot install it on my system after downloading it from this site: http://www.microsoft.com/express/Windows/. I'm assuming my PC is the problem, but I have no idea where to start. It seems to start installing, but then I get a pop-up that says: "Microsoft Visual Basic 2008 Express Edition with SP1 ENU has encountered a problem during setup. Setup did not c...

    Read the article

  • --log-slave-updates is OFF but some updates are still logged to the slave binary log?

    - by quanta
    MySQL version 5.5.14 According to the document, by the default, slave does not log to its binary log any updates that are received from a master server. Here are my config. on the slave: # egrep 'bin|slave' /etc/my.cnf relay-log=mysqld-relay-bin log-bin = /var/log/mysql/mysql-bin binlog-format=MIXED sync_binlog = 1 log-bin-trust-function-creators = 1 mysql> show global variables like 'log_slave%'; +-------------------+-------+ | Variable_name | Value | +-------------------+-------+ | log_slave_updates | OFF | +-------------------+-------+ 1 row in set (0.01 sec) mysql> select @@log_slave_updates; +---------------------+ | @@log_slave_updates | +---------------------+ | 0 | +---------------------+ 1 row in set (0.00 sec) but slave still logs the some changes to its binary logs, let's see the file size: -rw-rw---- 1 mysql mysql 37M Apr 1 01:00 /var/log/mysql/mysql-bin.001256 -rw-rw---- 1 mysql mysql 25M Apr 2 01:00 /var/log/mysql/mysql-bin.001257 -rw-rw---- 1 mysql mysql 46M Apr 3 01:00 /var/log/mysql/mysql-bin.001258 -rw-rw---- 1 mysql mysql 115M Apr 4 01:00 /var/log/mysql/mysql-bin.001259 -rw-rw---- 1 mysql mysql 105M Apr 4 18:54 /var/log/mysql/mysql-bin.001260 and the sample query when reading these binary files with mysqlbinlog utility: #120404 19:08:57 server id 3 end_log_pos 110324763 Query thread_id=382435 exec_time=0 error_code=0 SET TIMESTAMP=1333541337/*!*/; INSERT INTO norep_SplitValues VALUES ( NAME_CONST('cur_string',_utf8'118212' COLLATE 'utf8_general_ci')) /*!*/; # at 110324763 Did I miss something? Reply to @RolandoMySQLDBA: If replication brought this over, then the same query has to be in the relay logs. Please go find the relay log that has the INSERT query with the same TIMESTAMP (1333541337). There is no such query with the same TIMESTAMP in the relay logs. If you cannot find it in the relay logs, then look and see if Infobright is posting the INSERT query. In that instance, the INSERT should be recorded in the binary logs of the Slave. Looking more deeply into the binary logs, I see that almost of the queries are CREATE/INSERT/UPDATE/DROP "temporary" tables, something like this: # at 123873315 #120405 0:42:04 server id 3 end_log_pos 123873618 Query thread_id=395373 exec_time=0 error_code=0 SET TIMESTAMP=1333561324/*!*/; SET @@session.pseudo_thread_id=395373/*!*/; CREATE TEMPORARY TABLE `norep_tmpcampaign` ( `campaignid` INTEGER(11) NOT NULL DEFAULT '0' , `status` INTEGER(11) NOT NULL DEFAULT '0' , `updated` DATETIME, KEY `campaignid` (`campaignid`) )ENGINE=MEMORY /*!*/; # at 123873618 #120405 0:42:04 server id 3 end_log_pos 123873755 Query thread_id=395373 exec_time=0 error_code=0 SET TIMESTAMP=1333561324/*!*/; DROP TABLE IF EXISTS `norep_tmpcampaign1` /* generated by server */ "temporary" here means that they are dropped after calculation is done. 
I also tells the slave not to replicate any statement matches the norep_ wildcard pattern: replicate-wild-ignore-table=%.norep_% But there is an exception table in the binary logs: # at 123828094 #120405 0:37:21 server id 3 end_log_pos 123828495 Query thread_id=395209 exec_time=0 error_code=0 SET TIMESTAMP=1333561041/*!*/; INSERT INTO sessions (SessionId, ApplicationName, Created, Expires, LockDate, LockId, Timeout, Locked, SessionItems, Fla gs) Values('pgv2exo4y4vo4ccz44vwznu0', '/', '2012-04-05 00:37:21', '2012-04-05 00:57:21', '2012-04-05 00:37:21', 0, 20, 0, 'AwAAAP////8IdXNlcm5hbWUGdXNlcmlkCHBlcm1pdGlkAgAAAAQAAAAGAAAAAQABAAEA', 0) /*!*/; Description: mysql> desc reportingdb.sessions; +-----------------+------------------+------+-----+---------------------+-------+ | Field | Type | Null | Key | Default | Extra | +-----------------+------------------+------+-----+---------------------+-------+ | SessionId | varchar(64) | NO | PRI | | | | ApplicationName | varchar(255) | NO | | | | | Created | timestamp | NO | | 0000-00-00 00:00:00 | | | Expires | timestamp | NO | | 0000-00-00 00:00:00 | | | LockDate | timestamp | NO | | 0000-00-00 00:00:00 | | | LockId | int(11) unsigned | NO | | NULL | | | Timeout | int(11) unsigned | NO | | NULL | | | Locked | bit(1) | NO | | NULL | | | SessionItems | varchar(255) | YES | | NULL | | | Flags | int(11) | NO | | NULL | | +-----------------+------------------+------+-----+---------------------+-------+ I'm sure all these queries are posting by MySQL, not Infobright: $ mysql-ib -u root -p Enter password: Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 48971 Server version: 5.1.40 build number (revision)=IB_4.0.5_r15240_15370(ice) (static) Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. mysql> select * from information_schema.tables where table_name='sessions'; Empty set (0.02 sec) I've been trying some INSERT/UPDATE queries with testing tables on the master, it is copied to the relay logs, not binary logs on slave: # at 311664029 #120405 0:15:23 server id 1 end_log_pos 311664006 Query thread_id=10458250 exec_time=0 error_code=0 use testuser/*!*/; SET TIMESTAMP=1333559723/*!*/; update users set email2='[email protected]' where id=22 /*!*/; Pay attention to the server id, in the relay logs, server id is master's (1) and in the binary log, server id is slave's (3 in this case). Reply to @RolandoMySQLDBA: Thu Apr 5 10:06:00 ICT 2012 Run CREATE DATABASE quantatest; on the Master now, please. Tell me if CREATE DATABASE quantatest; showed up in the Slave's Binary Logs. As I said above: I've been trying some INSERT/UPDATE queries with testing tables on the master, it is copied to the relay logs, not binary logs and you can guess, IO thread copied it to the relay logs, not binary logs. #120405 10:07:25 server id 1 end_log_pos 347573819 Query thread_id=10480775 exec_time=0 error_code=0 SET TIMESTAMP=1333595245/*!*/; /*!\C latin1 *//*!*/; SET @@session.character_set_client=8,@@session.collation_connection=8,@@session.collation_server=8/*!*/; create database quantatest /*!*/; The question must probably change to: why some update queries still logged to the slave binary logs althrough --log-slave-updates is disabled? Where they come from? 
Here are few last: /*!*/; # at 27492197 #120405 10:12:45 server id 3 end_log_pos 27492370 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; CREATE TEMPORARY TABLE norep_SplitValues ( value VARCHAR(1000) NOT NULL ) ENGINE=MEMORY /*!*/; # at 27492370 #120405 10:12:45 server id 3 end_log_pos 27492445 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; BEGIN /*!*/; # at 27492445 #120405 10:12:45 server id 3 end_log_pos 27492619 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; INSERT INTO norep_SplitValues VALUES ( NAME_CONST('cur_string',_utf8'119577' COLLATE 'utf8_general_ci')) /*!*/; # at 27492619 #120405 10:12:45 server id 3 end_log_pos 27492695 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; COMMIT /*!*/; # at 27492918 #120405 10:12:46 server id 3 end_log_pos 27493115 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595566/*!*/; SELECT `reportingdb`.`selfserving_get_locationad`(_utf8'3' COLLATE 'utf8_general_ci',_utf8'' COLLATE 'utf8_general_ci') /*!*/; # at 27493199 #120405 10:12:46 server id 3 end_log_pos 27493353 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595566/*!*/; /*!\C utf8 *//*!*/; SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=8/*!*/; DROP TEMPORARY TABLE IF EXISTS `norep_SplitValues` /* generated by server */ /*!*/;
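
    One way to confirm where these events come from (a sketch; the binary log file name is just taken from the listing above): statements executed directly on the slave - by an application connecting to it, or by stored routines run locally - are always written to the slave's own binary log with the slave's server id, regardless of log_slave_updates, which only controls whether replicated events are re-logged. Counting events per server id makes this visible:

        mysqlbinlog /var/log/mysql/mysql-bin.001260 | grep -c 'server id 1'   # events that came from the master via replication
        mysqlbinlog /var/log/mysql/mysql-bin.001260 | grep -c 'server id 3'   # events executed locally on this slave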

    Read the article

  • Root certificate authority works windows/linux but not mac osx - (malformed)

    - by AKwhat
    I have created a self-signed root certificate authority which if I install onto windows, linux, or even using the certificate store in firefox (windows/linux/macosx) will work perfectly with my terminating proxy. I have installed it into the system keychain and I have set the certificate to always trust. Within the chrome browser details it says "The certificate that Chrome received during this connection attempt is not formatted correctly, so Chrome cannot use it to protect your information. Error type: Malformed certificate" I used this code to create the certificate: openssl genrsa -des3 -passout pass:***** -out private/server.key 4096 openssl req -batch -passin pass:***** -new -x509 -nodes -sha1 -days 3600 -key private/server.key -out server.crt -config ../openssl.cnf If the issue is NOT that it is malformed (because it works everywhere else) then what else could it be? Am I installing it incorrectly? To be clear: Within the windows/linux OS, all browsers work perfectly. Within mac only firefox works if it uses its internal certificate store and not the keychain. It's the keychain method of importing a certificate that causes the issue. Thus, all browsers using the keychain will not work. Root CA Cert: -----BEGIN CERTIFICATE----- **some base64 stuff** -----END CERTIFICATE----- Intermediate CA Cert: Certificate: Data: Version: 3 (0x2) Serial Number: 1 (0x1) Signature Algorithm: sha1WithRSAEncryption Issuer: C=*****, ST=*******, L=******, O=*******, CN=******/emailAddress=****** Validity Not Before: May 21 13:57:32 2014 GMT Not After : Jun 20 13:57:32 2014 GMT Subject: C=*****, ST=********, O=*******, CN=*******/emailAddress=******* Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public Key: (4096 bit) Modulus (4096 bit): 00:e7:2d:75:38:23:02:8e:b9:8d:2f:33:4c:2a:11: 6d:d4:f8:29:ab:f3:fc:12:00:0f:bb:34:ec:35:ed: a5:38:10:1e:f3:54:c2:69:ae:3b:22:c0:0d:00:97: 08:da:b9:c9:32:c0:c6:b1:8b:22:7e:53:ea:69:e2: 6d:0f:bd:f5:96:b2:d0:0d:b2:db:07:ba:f1:ce:53: 8a:5e:e0:22:ce:3e:36:ed:51:63:21:e7:45:ad:f9: 4d:9b:8f:7f:33:4c:ed:fc:a6:ac:16:70:f5:96:36: 37:c8:65:47:d1:d3:12:70:3e:8d:2f:fb:9f:94:e0: c9:5f:d0:8c:30:e0:04:23:38:22:e5:d9:84:15:b8: 31:e7:a7:28:51:b8:7f:01:49:fb:88:e9:6c:93:0e: 63:eb:66:2b:b4:a0:f0:31:33:8b:b4:04:84:1f:9e: d5:ed:23:cc:bf:9b:8e:be:9a:5c:03:d6:4f:1a:6f: 2d:8f:47:60:6c:89:c5:f0:06:df:ac:cb:26:f8:1a: 48:52:5e:51:a0:47:6a:30:e8:bc:88:8b:fd:bb:6b: c9:03:db:c2:46:86:c0:c5:a5:45:5b:a9:a3:61:35: 37:e9:fc:a1:7b:ae:71:3a:5c:9c:52:84:dd:b2:86: b3:2e:2e:7a:5b:e1:40:34:4a:46:f0:f8:43:26:58: 30:87:f9:c6:c9:bc:b4:73:8b:fc:08:13:33:cc:d0: b7:8a:31:e9:38:a3:a9:cc:01:e2:d4:c2:a5:c1:55: 52:72:52:2b:06:a3:36:30:0c:5c:29:1a:dd:14:93: 2b:9d:bf:ac:c1:2d:cd:3f:89:1f:bc:ad:a4:f2:bd: 81:77:a9:f4:f0:b9:50:9e:fb:f5:da:ee:4e:b7:66: e5:ab:d1:00:74:29:6f:01:28:32:ea:7d:3f:b3:d7: 97:f2:60:63:41:0f:30:6a:aa:74:f4:63:4f:26:7b: 71:ed:57:f1:d4:99:72:61:f4:69:ad:31:82:76:67: 21:e1:32:2f:e8:46:d3:28:61:b1:10:df:4c:02:e5: d3:cc:22:30:a4:bb:81:10:dc:7d:49:94:b2:02:2d: 96:7f:e5:61:fa:6b:bd:22:21:55:97:82:18:4e:b5: a0:67:2b:57:93:1c:ef:e5:d2:fb:52:79:95:13:11: 20:06:8c:fb:e7:0b:fd:96:08:eb:17:e6:5b:b5:a0: 8d:dd:22:63:99:af:ad:ce:8c:76:14:9a:31:55:d7: 95:ea:ff:10:6f:7c:9c:21:00:5e:be:df:b0:87:75: 5d:a6:87:ca:18:94:e7:6a:15:fe:27:dd:28:5e:c0: ad:d2:91:d3:2d:8e:c3:c0:9f:fb:ff:c0:36:7e:e2: d7:bc:41 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:localhost, DNS:dropbox.com, DNS:*.dropbox.com, DNS:filedropper.com, DNS:*.filedropper.com X509v3 Subject Key Identifier: 
F3:E5:38:5B:3C:AF:1C:73:C1:4C:7D:8B:C8:A1:03:82:65:0D:FF:45 X509v3 Authority Key Identifier: keyid:2B:37:39:7B:9F:45:14:FE:F8:BC:CA:E0:6E:B4:5F:D6:1A:2B:D7:B0 DirName:/C=****/ST=******/L=*******/O=*******/CN=******/emailAddress=******* serial:EE:8C:A3:B4:40:90:B0:62 X509v3 Basic Constraints: CA:TRUE Signature Algorithm: sha1WithRSAEncryption 46:2a:2c:e0:66:e3:fa:c6:80:b6:81:e7:db:c3:29:ab:e7:1c: f0:d9:a0:b7:a9:57:8c:81:3e:30:8f:7d:ef:f7:ed:3c:5f:1e: a5:f6:ae:09:ab:5e:63:b4:f6:d6:b6:ac:1c:a0:ec:10:19:ce: dd:5a:62:06:b4:88:5a:57:26:81:8e:38:b9:0f:26:cd:d9:36: 83:52:ec:df:f4:63:ce:a1:ba:d4:1c:ec:b6:66:ed:f0:32:0e: 25:87:79:fa:95:ee:0f:a0:c6:2d:8f:e9:fb:11:de:cf:26:fa: 59:fa:bd:0b:74:76:a6:5d:41:0d:cd:35:4e:ca:80:58:2a:a8: 5d:e4:d8:cf:ef:92:8d:52:f9:f2:bf:65:50:da:a8:10:1b:5e: 50:a7:7e:57:7b:94:7f:5c:74:2e:80:ae:1e:24:5f:0b:7b:7e: 19:b6:b5:bd:9d:46:5a:e8:47:43:aa:51:b3:4b:3f:12:df:7f: ef:65:21:85:c2:f6:83:84:d0:8d:8b:d9:6d:a8:f9:11:d4:65: 7d:8f:28:22:3c:34:bb:99:4e:14:89:45:a4:62:ed:52:b1:64: 9a:fd:08:cd:ff:ca:9e:3b:51:81:33:e6:37:aa:cb:76:01:90: d1:39:6f:6a:8b:2d:f5:07:f8:f4:2a:ce:01:37:ba:4b:7f:d4: 62:d7:d6:66:b8:78:ad:0b:23:b6:2e:b0:9a:fc:0f:8c:4c:29: 86:a0:bc:33:71:e5:7f:aa:3e:0e:ca:02:e1:f6:88:f0:ff:a2: 04:5a:f5:d7:fe:7d:49:0a:d2:63:9c:24:ed:02:c7:4d:63:e6: 0c:e1:04:cd:a4:bf:a8:31:d3:10:db:b4:71:48:f7:1a:1b:d9: eb:a7:2e:26:00:38:bd:a8:96:b4:83:09:c9:3d:79:90:e1:61: 2c:fc:a0:2c:6b:7d:46:a8:d7:17:7f:ae:60:79:c1:b6:5c:f9: 3c:84:64:7b:7f:db:e9:f1:55:04:6e:b5:d3:5e:d3:e3:13:29: 3f:0b:03:f2:d7:a8:30:02:e1:12:f4:ae:61:6f:f5:4b:e9:ed: 1d:33:af:cd:9b:43:42:35:1a:d4:f6:b9:fb:bf:c9:8d:6c:30: 25:33:43:49:32:43:a5:a8:d8:82:ef:b0:a6:bd:8b:fb:b6:ed: 72:fd:9a:8f:00:3b:97:a3:35:a4:ad:26:2f:a9:7d:74:08:82: 26:71:40:f9:9b:01:14:2e:82:fb:2f:c0:11:51:00:51:07:f9: e1:f6:1f:13:6e:03:ee:d7:85:c2:64:ce:54:3f:15:d4:d7:92: 5f:87:aa:1e:b4:df:51:77:12:04:d2:a5:59:b3:26:87:79:ce: ee:be:60:4e:87:20:5c:7f -----BEGIN CERTIFICATE----- **some base64 stuff** -----END CERTIFICATE-----
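
    One thing worth checking (a sketch based on the generation commands above; the v3_ca section name is taken from the stock openssl.cnf and is an assumption here): openssl req -x509 only marks the result as a CA if the CA extensions (basicConstraints = CA:TRUE, subject/authority key identifiers) are actually applied, and the OS X keychain tends to be stricter about this than the Firefox/NSS store. Inspecting the root and regenerating it with the extensions explicitly requested would look roughly like:

        openssl x509 -in server.crt -noout -text | grep -A1 'Basic Constraints'   # the root should show CA:TRUE
        openssl req -batch -passin pass:***** -new -x509 -nodes -sha1 -days 3600 \
            -key private/server.key -out server.crt \
            -config ../openssl.cnf -extensions v3_ca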

    Read the article

  • High Server Load cannot figure out why

    - by Tim Bolton
    My server is currently running CentOS 5.2, with WHM 11.34. Currently, we're at 6.43 to 12 for a load average. The sites that we're hosting are taking a lot time to respond and resolve. top doesn't show anything out of the ordinary and iftop doesn't show a lot of traffic. We have many resellers, and some not so good at writing code, how can we find the culprit? vmstat output: vmstat procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------ r b swpd free buff cache si so bi bo in cs us sy id wa st 0 2 84 78684 154916 1021080 0 0 72 274 0 14 6 3 80 12 0 top output (ordered by %CPU) top - 21:44:43 up 5 days, 10:39, 3 users, load average: 3.36, 4.18, 4.73 Tasks: 222 total, 3 running, 219 sleeping, 0 stopped, 0 zombie Cpu(s): 5.8%us, 2.3%sy, 0.2%ni, 79.6%id, 11.8%wa, 0.0%hi, 0.2%si, 0.0%st Mem: 2074580k total, 1863044k used, 211536k free, 174828k buffers Swap: 2040212k total, 84k used, 2040128k free, 987604k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 15930 mysql 15 0 138m 46m 4380 S 4 2.3 1:45.87 mysqld 21772 igniteth 17 0 23200 7152 3932 R 4 0.3 0:00.02 php 1586 root 10 -5 0 0 0 S 2 0.0 11:45.19 kjournald 21759 root 15 0 2416 1024 732 R 2 0.0 0:00.01 top 1 root 15 0 2156 648 560 S 0 0.0 0:26.31 init 2 root RT 0 0 0 0 S 0 0.0 0:00.35 migration/0 3 root 34 19 0 0 0 S 0 0.0 0:00.32 ksoftirqd/0 4 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/0 5 root RT 0 0 0 0 S 0 0.0 0:02.00 migration/1 6 root 34 19 0 0 0 S 0 0.0 0:00.11 ksoftirqd/1 7 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/1 8 root RT 0 0 0 0 S 0 0.0 0:01.29 migration/2 9 root 34 19 0 0 0 S 0 0.0 0:00.26 ksoftirqd/2 10 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/2 11 root RT 0 0 0 0 S 0 0.0 0:00.90 migration/3 12 root 34 19 0 0 0 R 0 0.0 0:00.20 ksoftirqd/3 13 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/3 top output (ordered by CPU time) top - 21:46:12 up 5 days, 10:41, 3 users, load average: 2.88, 3.82, 4.55 Tasks: 217 total, 1 running, 216 sleeping, 0 stopped, 0 zombie Cpu(s): 3.7%us, 2.0%sy, 2.0%ni, 67.2%id, 25.0%wa, 0.0%hi, 0.1%si, 0.0%st Mem: 2074580k total, 1959516k used, 115064k free, 183116k buffers Swap: 2040212k total, 84k used, 2040128k free, 1090308k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ TIME COMMAND 32367 root 16 0 215m 212m 1548 S 0 10.5 62:03.63 62:03 tailwatchd 1586 root 10 -5 0 0 0 S 0 0.0 11:45.27 11:45 kjournald 1576 root 10 -5 0 0 0 S 0 0.0 2:37.86 2:37 kjournald 27722 root 16 0 2556 1184 800 S 0 0.1 1:48.94 1:48 top 15930 mysql 15 0 138m 46m 4380 S 4 2.3 1:48.63 1:48 mysqld 2932 root 34 19 0 0 0 S 0 0.0 1:41.05 1:41 kipmi0 226 root 10 -5 0 0 0 S 0 0.0 1:34.33 1:34 kswapd0 2671 named 25 0 74688 7400 2116 S 0 0.4 1:23.58 1:23 named 3229 root 15 0 10300 3348 2724 S 0 0.2 0:40.85 0:40 sshd 1580 root 10 -5 0 0 0 S 0 0.0 0:30.62 0:30 kjournald 1 root 17 0 2156 648 560 S 0 0.0 0:26.32 0:26 init 2616 root 15 0 1816 576 480 S 0 0.0 0:23.50 0:23 syslogd 1584 root 10 -5 0 0 0 S 0 0.0 0:18.67 0:18 kjournald 4342 root 34 19 27692 11m 2116 S 0 0.5 0:18.23 0:18 yum-updatesd 8044 bollingp 15 0 3456 2036 740 S 1 0.1 0:15.56 0:15 imapd 26 root 10 -5 0 0 0 S 0 0.0 0:14.18 0:14 kblockd/1 7989 gmailsit 16 0 3196 1748 736 S 0 0.1 0:10.43 0:10 imapd iostat -xtk 1 10 output [root@server1 tmp]# iostat -xtk 1 10 Linux 2.6.18-53.el5 12/18/2012 Time: 09:51:06 PM avg-cpu: %user %nice %system %iowait %steal %idle 5.83 0.19 2.53 11.85 0.00 79.60 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 1.37 118.83 18.70 54.27 131.47 692.72 22.59 4.90 67.19 3.10 22.59 sdb 0.35 39.33 
20.33 61.43 158.79 403.22 13.75 5.23 63.93 3.77 30.80 Time: 09:51:07 PM avg-cpu: %user %nice %system %iowait %steal %idle 1.50 0.00 0.50 24.00 0.00 74.00 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 25.00 2.00 2.00 128.00 108.00 118.00 0.03 7.25 4.00 1.60 sdb 0.00 16.00 41.00 145.00 200.00 668.00 9.33 107.92 272.72 5.38 100.10 Time: 09:51:08 PM avg-cpu: %user %nice %system %iowait %steal %idle 2.00 0.00 1.50 29.50 0.00 67.00 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 95.00 3.00 33.00 12.00 480.00 27.33 0.07 1.72 1.31 4.70 sdb 0.00 14.00 1.00 228.00 4.00 960.00 8.42 143.49 568.01 4.37 100.10 Time: 09:51:09 PM avg-cpu: %user %nice %system %iowait %steal %idle 13.28 0.00 2.76 21.30 0.00 62.66 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 21.00 1.00 19.00 16.00 192.00 20.80 0.06 3.55 1.30 2.60 sdb 0.00 36.00 28.00 181.00 124.00 884.00 9.65 121.16 617.31 4.79 100.10 Time: 09:51:10 PM avg-cpu: %user %nice %system %iowait %steal %idle 4.74 0.00 1.50 25.19 0.00 68.58 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 20.00 3.00 15.00 12.00 136.00 16.44 0.17 7.11 3.11 5.60 sdb 0.00 0.00 103.00 60.00 544.00 248.00 9.72 52.35 545.23 6.14 100.10 Time: 09:51:11 PM avg-cpu: %user %nice %system %iowait %steal %idle 1.24 0.00 1.24 25.31 0.00 72.21 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 75.00 4.00 28.00 16.00 416.00 27.00 0.08 3.72 2.03 6.50 sdb 2.00 9.00 124.00 17.00 616.00 104.00 10.21 3.73 213.73 7.10 100.10 Time: 09:51:12 PM avg-cpu: %user %nice %system %iowait %steal %idle 1.00 0.00 0.75 24.31 0.00 73.93 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 24.00 1.00 9.00 4.00 132.00 27.20 0.01 1.20 1.10 1.10 sdb 4.00 40.00 103.00 48.00 528.00 212.00 9.80 105.21 104.32 6.64 100.20 Time: 09:51:13 PM avg-cpu: %user %nice %system %iowait %steal %idle 2.50 0.00 1.75 23.25 0.00 72.50 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 125.74 3.96 46.53 15.84 689.11 27.92 0.20 4.06 2.41 12.18 sdb 2.97 0.00 91.09 84.16 419.80 471.29 10.17 85.85 590.78 5.66 99.11 Time: 09:51:14 PM avg-cpu: %user %nice %system %iowait %steal %idle 0.75 0.00 0.50 24.94 0.00 73.82 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 88.00 1.00 7.00 4.00 380.00 96.00 0.04 4.38 3.00 2.40 sdb 3.00 7.00 111.00 44.00 540.00 208.00 9.65 18.58 581.79 6.46 100.10 Time: 09:51:15 PM avg-cpu: %user %nice %system %iowait %steal %idle 11.03 0.00 3.26 26.57 0.00 59.15 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 145.00 7.00 53.00 28.00 792.00 27.33 0.15 2.50 1.55 9.30 sdb 1.00 0.00 155.00 0.00 800.00 0.00 10.32 2.85 18.63 6.46 100.10 [root@server1 tmp]# MySQL Show Full Processlist mysql> show full processlist; 
+------+---------------+-----------+-----------------------+----------------+------+----------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Id | User | Host | db | Command | Time | State | Info | +------+---------------+-----------+-----------------------+----------------+------+----------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | 1 | DB_USER_ONE | localhost | DB_ONE | Query | 3 | waiting for handler insert | INSERT DELAYED INTO defers (mailtime,msgid,email,transport_method,message,host,ip,router,deliveryuser,deliverydomain) VALUES(FROM_UNIXTIME('1355879748'),'1TivwL-0003y8-8l','[email protected]','remote_smtp','SMTP error from remote mail server after initial connection: host mx1.mail.tw.yahoo.com [203.188.197.119]: 421 4.7.0 [TS01] Messages from 75.125.90.146 temporarily deferred due to user complaints - 4.16.55.1; see http://postmaster.yahoo.com/421-ts01.html','mx1.mail.tw.yahoo.com','203.188.197.119','lookuphost','','') | | 2 | DELAYED | localhost | DB_ONE | Delayed insert | 52 | insert | | | 3 | DELAYED | localhost | DB_ONE | Delayed insert | 68 | insert | | | 911 | DELAYED | localhost | DB_ONE | Delayed insert | 99 | Waiting for INSERT | | | 993 | DB_USER_TWO | localhost | DB_TWO | Sleep | 832 | | NULL | | 994 | DB_USER_ONE | localhost | DB_ONE | Query | 185 | Locked | delete from failures where FROM_UNIXTIME(UNIX_TIMESTAMP(NOW())-1296000) > mailtime | | 1102 | DB_USER_THREE | localhost | DB_THREE | Query | 29 | NULL | commit | | 1249 | DB_USER_FOUR | localhost | DB_FOUR | Query | 13 | NULL | commit | | 1263 | root | localhost | DB_FIVE | Query | 0 | NULL | show full processlist | | 1264 | DB_USER_SIX | localhost | DB_SIX | Query | 3 | NULL | commit | +------+---------------+-----------+-----------------------+----------------+------+----------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 10 rows in set (0.00 sec)
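
    Given that iostat shows sdb pinned at ~100% utilisation with large await times while the CPUs are mostly idle, the load average is being driven by I/O wait rather than CPU. Two generic tools for finding which processes generate that I/O (a sketch; both need to be installed, and iotop additionally needs a kernel with per-task I/O accounting, which the stock CentOS 5.2 kernel may lack):

        iotop -oPa      # iotop package: accumulated I/O per process, showing only processes actually doing I/O
        pidstat -d 5    # sysstat package: per-process read/write rates sampled every 5 seconds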

    Read the article

  • Puppet's automatically generated certificates failing

    - by gparent
    I am running a default configuration of Puppet on Debian Squeeze 6.0.4. The server's FQDN is master.example.com. The client's FQDN is client.example.com. I am able to contact the puppet master and send a CSR. I sign it using puppetca -sa but the client will still not connect. Date of both machines is within 2 seconds of Tue Apr 3 20:59:00 UTC 2012 as I wrote this sentence. This is what appears in /var/log/syslog: Apr 3 17:03:52 localhost puppet-agent[18653]: Reopening log files Apr 3 17:03:52 localhost puppet-agent[18653]: Starting Puppet client version 2.6.2 Apr 3 17:03:53 localhost puppet-agent[18653]: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed Apr 3 17:03:53 localhost puppet-agent[18653]: Using cached catalog Apr 3 17:03:53 localhost puppet-agent[18653]: Could not retrieve catalog; skipping run Here is some interesting output: OpenSSL client test: client:~# openssl s_client -host master.example.com -port 8140 -cert /var/lib/puppet/ssl/certs/client.example.com.pem -key /var/lib/puppet/ssl/private_keys/client.example.com.pem -CAfile /var/lib/puppet/ssl/certs/ca.pem CONNECTED(00000003) depth=1 /CN=Puppet CA: master.example.com verify return:1 depth=0 /CN=master.example.com verify error:num=7:certificate signature failure verify return:1 depth=0 /CN=master.example.com verify return:1 18509:error:1409441B:SSL routines:SSL3_READ_BYTES:tlsv1 alert decrypt error:s3_pkt.c:1102:SSL alert number 51 18509:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:188: client:~# master's certificate: root@master:/etc/puppet# openssl x509 -text -noout -in /etc/puppet/ssl/certs/master.example.com.pem Certificate: Data: Version: 3 (0x2) Serial Number: 2 (0x2) Signature Algorithm: sha1WithRSAEncryption Issuer: CN=Puppet CA: master.example.com Validity Not Before: Apr 2 20:01:28 2012 GMT Not After : Apr 2 20:01:28 2017 GMT Subject: CN=master.example.com Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public Key: (1024 bit) Modulus (1024 bit): 00:a9:c1:f9:4c:cd:0f:68:84:7b:f4:93:16:20:44: 7a:2b:05:8e:57:31:05:8e:9c:c8:08:68:73:71:39: c1:86:6a:59:93:6e:53:aa:43:11:83:5b:2d:8c:7d: 54:05:65:c1:e1:0e:94:4a:f0:86:58:c3:3d:4f:f3: 7d:bd:8e:29:58:a6:36:f4:3e:b2:61:ec:53:b5:38: 8e:84:ac:5f:a3:e3:8c:39:bd:cf:4f:3c:ff:a9:65: 09:66:3c:ba:10:14:69:d5:07:57:06:28:02:37:be: 03:82:fb:90:8b:7d:b3:a5:33:7b:9b:3a:42:51:12: b3:ac:dd:d5:58:69:a9:8a:ed Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:FALSE Netscape Comment: Puppet Ruby/OpenSSL Internal Certificate X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Subject Key Identifier: 8C:2F:14:84:B6:A1:B5:0C:11:52:36:AB:E5:3F:F2:B9:B3:25:F3:1C X509v3 Extended Key Usage: critical TLS Web Server Authentication, TLS Web Client Authentication Signature Algorithm: sha1WithRSAEncryption 7b:2c:4f:c2:76:38:ab:03:7f:c6:54:d9:78:1d:ab:6c:45:ab: 47:02:c7:fd:45:4e:ab:b5:b6:d9:a7:df:44:72:55:0c:a5:d0: 86:58:14:ae:5f:6f:ea:87:4d:78:e4:39:4d:20:7e:3d:6d:e9: e2:5e:d7:c9:3c:27:43:a4:29:44:85:a1:63:df:2f:55:a9:6a: 72:46:d8:fb:c7:cc:ca:43:e7:e1:2c:fe:55:2a:0d:17:76:d4: e5:49:8b:85:9f:fa:0e:f6:cc:e8:28:3e:8b:47:b0:e1:02:f0: 3d:73:3e:99:65:3b:91:32:c5:ce:e4:86:21:b2:e0:b4:15:b5: 22:63 root@master:/etc/puppet# CA's certificate: root@master:/etc/puppet# openssl x509 -text -noout -in /etc/puppet/ssl/certs/ca.pem Certificate: Data: Version: 3 (0x2) Serial Number: 1 (0x1) Signature Algorithm: sha1WithRSAEncryption 
Issuer: CN=Puppet CA: master.example.com Validity Not Before: Apr 2 20:01:05 2012 GMT Not After : Apr 2 20:01:05 2017 GMT Subject: CN=Puppet CA: master.example.com Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public Key: (1024 bit) Modulus (1024 bit): 00:b5:2c:3e:26:a3:ae:43:b8:ed:1e:ef:4d:a1:1e: 82:77:78:c2:98:3f:e2:e0:05:57:f0:8d:80:09:36: 62:be:6c:1a:21:43:59:1d:e9:b9:4d:e0:9c:fa:09: aa:12:a1:82:58:fc:47:31:ed:ad:ad:73:01:26:97: ef:d2:d6:41:6b:85:3b:af:70:00:b9:63:e9:1b:c3: ce:57:6d:95:0e:a6:d2:64:bd:1f:2c:1f:5c:26:8e: 02:fd:d3:28:9e:e9:8f:bc:46:bb:dd:25:db:39:57: 81:ed:e5:c8:1f:3d:ca:39:cf:e7:f3:63:75:f6:15: 1f:d4:71:56:ed:84:50:fb:5d Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:TRUE Netscape Comment: Puppet Ruby/OpenSSL Internal Certificate X509v3 Key Usage: critical Certificate Sign, CRL Sign X509v3 Subject Key Identifier: 8C:2F:14:84:B6:A1:B5:0C:11:52:36:AB:E5:3F:F2:B9:B3:25:F3:1C Signature Algorithm: sha1WithRSAEncryption 1d:cd:c6:65:32:42:a5:01:62:46:87:10:da:74:7e:8b:c8:c9: 86:32:9e:c2:2e:c1:fd:00:79:f0:ef:d8:73:dd:7e:1b:1a:3f: cc:64:da:a3:38:ad:49:4e:c8:4d:e3:09:ba:bc:66:f2:6f:63: 9a:48:19:2d:27:5b:1d:2a:69:bf:4f:f4:e0:67:5e:66:84:30: e5:85:f4:49:6e:d0:92:ae:66:77:50:cf:45:c0:29:b2:64:87: 12:09:d3:10:4d:91:b6:f3:63:c4:26:b3:fa:94:2b:96:18:1f: 9b:a9:53:74:de:9c:73:a4:3a:8d:bf:fa:9c:c0:42:9d:78:49: 4d:70 root@master:/etc/puppet# Client's certificate: client:~# openssl x509 -text -noout -in /var/lib/puppet/ssl/certs/client.example.com.pem Certificate: Data: Version: 3 (0x2) Serial Number: 3 (0x3) Signature Algorithm: sha1WithRSAEncryption Issuer: CN=Puppet CA: master.example.com Validity Not Before: Apr 2 20:01:36 2012 GMT Not After : Apr 2 20:01:36 2017 GMT Subject: CN=client.example.com Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public Key: (1024 bit) Modulus (1024 bit): 00:ae:88:6d:9b:e3:b1:fc:47:07:d6:bf:ea:53:d1: 14:14:9b:35:e6:70:43:e0:58:35:76:ac:c5:9d:86: 02:fd:77:28:fc:93:34:65:9d:dd:0b:ea:21:14:4d: 8a:95:2e:28:c9:a5:8d:a2:2c:0e:1c:a0:4c:fa:03: e5:aa:d3:97:98:05:59:3c:82:a9:7c:0e:e9:df:fd: 48:81:dc:33:dc:88:e9:09:e4:19:d6:e4:7b:92:33: 31:73:e4:f2:9c:42:75:b2:e1:9f:d9:49:8c:a7:eb: fa:7d:cb:62:22:90:1c:37:3a:40:95:a7:a0:3b:ad: 8e:12:7c:6e:ad:04:94:ed:47 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:FALSE Netscape Comment: Puppet Ruby/OpenSSL Internal Certificate X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Subject Key Identifier: 8C:2F:14:84:B6:A1:B5:0C:11:52:36:AB:E5:3F:F2:B9:B3:25:F3:1C X509v3 Extended Key Usage: critical TLS Web Server Authentication, TLS Web Client Authentication Signature Algorithm: sha1WithRSAEncryption 33:1f:ec:3c:91:5a:eb:c6:03:5f:a1:58:60:c3:41:ed:1f:fe: cb:b2:40:11:63:4d:ba:18:8a:8b:62:ba:ab:61:f5:a0:6c:0e: 8a:20:56:7b:10:a1:f9:1d:51:49:af:70:3a:05:f9:27:4a:25: d4:e6:88:26:f7:26:e0:20:30:2a:20:1d:c4:d3:26:f1:99:cf: 47:2e:73:90:bd:9c:88:bf:67:9e:dd:7c:0e:3a:86:6b:0b:8d: 39:0f:db:66:c0:b6:20:c3:34:84:0e:d8:3b:fc:1c:a8:6c:6c: b1:19:76:65:e6:22:3c:bf:ff:1c:74:bb:62:a0:46:02:95:fa: 83:41 client:~#
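
    A detail that stands out in the dumps above: the CA, master, and client certificates all carry the same X509v3 Subject Key Identifier even though their public keys differ, which is not how a freshly generated CA should look and fits the "certificate signature failure" seen in the s_client test. The usual blunt fix (a sketch for Puppet 2.6 with the default paths shown above) is to discard the client's SSL state, clean its record on the master, and let a new CSR be signed; if the master's own certificate is the one whose signature fails, its SSL directory may need the same treatment:

        # on the master
        puppetca --clean client.example.com
        # on the client
        rm -r /var/lib/puppet/ssl
        puppetd --test --waitforcert 60 --server master.example.com
        # on the master again
        puppetca --sign client.example.com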

    Read the article

  • Why my VPN doesn't work anymore?

    - by xx77aBs
    I have openvpn server running on debian lenny. There is only one client - and it is running Windows 7 64-bit. This has worked for few months without any problems. And now, let's say for the last 7 days, it doesn't work at all. I connect successfully from client to the server, but I can't access anything through VPN. I have set it up so that all internet traffic is routed through VPN, and now when I connect with the client, the client can't do anything on the net (open any webpage, ping google, anything ...). Can you help me to figure out what's wrong ? I don't know where to start. I've also tried to connect to another openvpn server (I've installed and configured openvpn on another server, and when I try to connect to it result is the same). So I think there's something wrong with client ... Here is my connection log: Wed Apr 04 21:35:59 2012 OpenVPN 2.3-alpha1 Win32-MSVC++ [SSL (OpenSSL)] [LZO2] [PF_INET6] [IPv6 payload 20110522-1 (2.2.0)] built on Feb 21 2012 Enter Management Password: Wed Apr 04 21:35:59 2012 MANAGEMENT: TCP Socket listening on [AF_INET]127.0.0.10:25340 Wed Apr 04 21:35:59 2012 Need hold release from management interface, waiting... Wed Apr 04 21:36:00 2012 MANAGEMENT: Client connected from [AF_INET]127.0.0.10:25340 Wed Apr 04 21:36:00 2012 MANAGEMENT: CMD 'state on' Wed Apr 04 21:36:00 2012 MANAGEMENT: CMD 'log all on' Wed Apr 04 21:36:00 2012 MANAGEMENT: CMD 'hold off' Wed Apr 04 21:36:00 2012 MANAGEMENT: CMD 'hold release' Wed Apr 04 21:36:00 2012 WARNING: No server certificate verification method has been enabled. See http://openvpn.net/howto.html#mitm for more info. Wed Apr 04 21:36:00 2012 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables Wed Apr 04 21:36:00 2012 Socket Buffers: R=[8192->8192] S=[8192->8192] Wed Apr 04 21:36:00 2012 MANAGEMENT: >STATE:1333568160,RESOLVE,,, Wed Apr 04 21:36:00 2012 UDPv4 link local: [undef] Wed Apr 04 21:36:00 2012 UDPv4 link remote: [AF_INET]11.22.33.44:1234 Wed Apr 04 21:36:00 2012 MANAGEMENT: >STATE:1333568160,WAIT,,, Wed Apr 04 21:36:00 2012 MANAGEMENT: >STATE:1333568160,AUTH,,, Wed Apr 04 21:36:00 2012 TLS: Initial packet from [AF_INET]11.22.33.44:1234, sid=ee329574 f15e9e04 Wed Apr 04 21:36:00 2012 VERIFY OK: depth=1, C=US, ST=CA, L=SanFrancisco, O=Fort-Funston, CN=Fort-Funston CA, [email protected] Wed Apr 04 21:36:00 2012 VERIFY OK: depth=0, C=US, ST=CA, L=SanFrancisco, O=Fort-Funston, CN=server_key, [email protected] Wed Apr 04 21:36:01 2012 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key Wed Apr 04 21:36:01 2012 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Wed Apr 04 21:36:01 2012 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key Wed Apr 04 21:36:01 2012 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Wed Apr 04 21:36:01 2012 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA Wed Apr 04 21:36:01 2012 [server_key] Peer Connection Initiated with [AF_INET]11.22.33.44:1234 Wed Apr 04 21:36:02 2012 MANAGEMENT: >STATE:1333568162,GET_CONFIG,,, Wed Apr 04 21:36:03 2012 SENT CONTROL [server_key]: 'PUSH_REQUEST' (status=1) Wed Apr 04 21:36:03 2012 PUSH: Received control message: 'PUSH_REPLY,redirect-gateway def1,route 172.16.100.1,topology net30,ping 10,ping-restart 120,ifconfig 172.16.100.6 172.16.100.5' Wed Apr 04 21:36:03 2012 OPTIONS IMPORT: timers and/or timeouts modified Wed Apr 04 21:36:03 2012 OPTIONS IMPORT: --ifconfig/up options modified 
Wed Apr 04 21:36:03 2012 OPTIONS IMPORT: route options modified Wed Apr 04 21:36:03 2012 ROUTE_GATEWAY 192.168.1.1/255.255.255.0 I=15 HWADDR=00:1f:1f:3f:61:55 Wed Apr 04 21:36:03 2012 do_ifconfig, tt->ipv6=0, tt->did_ifconfig_ipv6_setup=0 Wed Apr 04 21:36:03 2012 MANAGEMENT: >STATE:1333568163,ASSIGN_IP,,172.16.100.6, Wed Apr 04 21:36:03 2012 open_tun, tt->ipv6=0 Wed Apr 04 21:36:03 2012 TAP-WIN32 device [VPN] opened: \\.\Global\{E28FD52B-F6C3-4094-A36A-30CB02FAC7E8}.tap Wed Apr 04 21:36:03 2012 TAP-Win32 Driver Version 9.9 Wed Apr 04 21:36:03 2012 Notified TAP-Win32 driver to set a DHCP IP/netmask of 172.16.100.6/255.255.255.252 on interface {E28FD52B-F6C3-4094-A36A-30CB02FAC7E8} [DHCP-serv: 172.16.100.5, lease-time: 31536000] Wed Apr 04 21:36:03 2012 Successful ARP Flush on interface [31] {E28FD52B-F6C3-4094-A36A-30CB02FAC7E8} Wed Apr 04 21:36:08 2012 TEST ROUTES: 2/2 succeeded len=1 ret=1 a=0 u/d=up Wed Apr 04 21:36:08 2012 C:\Windows\system32\route.exe ADD 11.22.33.44 MASK 255.255.255.255 192.168.1.1 Wed Apr 04 21:36:08 2012 ROUTE: CreateIpForwardEntry succeeded with dwForwardMetric1=25 and dwForwardType=4 Wed Apr 04 21:36:08 2012 Route addition via IPAPI succeeded [adaptive] Wed Apr 04 21:36:08 2012 C:\Windows\system32\route.exe ADD 0.0.0.0 MASK 128.0.0.0 172.16.100.5 Wed Apr 04 21:36:08 2012 ROUTE: CreateIpForwardEntry succeeded with dwForwardMetric1=30 and dwForwardType=4 Wed Apr 04 21:36:08 2012 Route addition via IPAPI succeeded [adaptive] Wed Apr 04 21:36:08 2012 C:\Windows\system32\route.exe ADD 128.0.0.0 MASK 128.0.0.0 172.16.100.5 Wed Apr 04 21:36:08 2012 ROUTE: CreateIpForwardEntry succeeded with dwForwardMetric1=30 and dwForwardType=4 Wed Apr 04 21:36:08 2012 Route addition via IPAPI succeeded [adaptive] Wed Apr 04 21:36:08 2012 MANAGEMENT: >STATE:1333568168,ADD_ROUTES,,, Wed Apr 04 21:36:08 2012 C:\Windows\system32\route.exe ADD 172.16.100.1 MASK 255.255.255.255 172.16.100.5 Wed Apr 04 21:36:08 2012 ROUTE: CreateIpForwardEntry succeeded with dwForwardMetric1=30 and dwForwardType=4 Wed Apr 04 21:36:08 2012 Route addition via IPAPI succeeded [adaptive] Wed Apr 04 21:36:08 2012 Initialization Sequence Completed Wed Apr 04 21:36:08 2012 MANAGEMENT: >STATE:1333568168,CONNECTED,SUCCESS,172.16.100.6,11.22.33.44 Client's route table after connection with OpenVPN: IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.1.1 192.168.1.41 281 0.0.0.0 128.0.0.0 172.16.100.1 172.16.100.6 31 94.23.53.45 255.255.255.255 192.168.1.1 192.168.1.41 25 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 128.0.0.0 128.0.0.0 172.16.100.1 172.16.100.6 31 172.16.100.4 255.255.255.252 On-link 172.16.100.6 286 172.16.100.6 255.255.255.255 On-link 172.16.100.6 286 172.16.100.7 255.255.255.255 On-link 172.16.100.6 286 192.168.1.0 255.255.255.0 On-link 192.168.1.41 281 192.168.1.41 255.255.255.255 On-link 192.168.1.41 281 192.168.1.255 255.255.255.255 On-link 192.168.1.41 281 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 192.168.1.41 281 224.0.0.0 240.0.0.0 On-link 172.16.100.6 286 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 192.168.1.41 281 255.255.255.255 255.255.255.255 On-link 172.16.100.6 286 =========================================================================== Persistent Routes: Network Address Netmask 
Gateway Address Metric 0.0.0.0 0.0.0.0 192.168.1.1 Default =========================================================================== IPv6 Route Table =========================================================================== Active Routes: If Metric Network Destination Gateway 13 58 ::/0 On-link 1 306 ::1/128 On-link 13 58 2001::/32 On-link 13 306 2001:0:5ef5:79fd:3cc3:6b9:ac7c:14db/128 On-link 15 281 fe80::/64 On-link 31 286 fe80::/64 On-link 13 306 fe80::/64 On-link 13 306 fe80::3cc3:6b9:ac7c:14db/128 On-link 31 286 fe80::7d72:9515:7213:35e3/128 On-link 15 281 fe80::9cec:ce3f:89de:a123/128 On-link 1 306 ff00::/8 On-link 13 306 ff00::/8 On-link 15 281 ff00::/8 On-link 31 286 ff00::/8 On-link =========================================================================== Persistent Routes: None
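
    The tunnel clearly comes up and the routes are installed, so the next things to check are on the server side: whether it forwards and NATs the VPN clients' traffic at all (a sketch; the interface name and subnet are assumptions taken from the log above, substitute the server's real values):

        cat /proc/sys/net/ipv4/ip_forward            # must print 1 on the OpenVPN server
        iptables -t nat -L POSTROUTING -n -v         # look for a MASQUERADE/SNAT rule covering the VPN subnet
        # if either is missing, for example:
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -s 172.16.100.0/24 -o eth0 -j MASQUERADE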

    Read the article

  • How to install missing Sound Drivers in Ubuntu?

    - by Sakamoto Kazuma
    I seem to be missing drivers for my Gateway laptop MA7. I have looked in System-Admin-Hardware Drivers, but it does not show up in there.There are also no devices listed in Sound-Hardware. I'm guessing at this point that I don't have the driver installed. However, I get the following output: admin@machine001:~$ cat /proc/asound/cards 0 [Intel ]: HDA-Intel - HDA Intel HDA Intel at 0xd8240000 irq 22 admin@machine001:~$ And my lspci shows: 00:00.0 Host bridge: Intel Corporation Mobile 945GM/PM/GMS, 943/940GML and 945GT Express Memory Controller Hub (rev 03) 00:02.0 VGA compatible controller: Intel Corporation Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller (rev 03) 00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03) 00:1b.0 Audio device: Intel Corporation 82801G (ICH7 Family) High Definition Audio Controller (rev 02) 00:1c.0 PCI bridge: Intel Corporation 82801G (ICH7 Family) PCI Express Port 1 (rev 02) 00:1c.1 PCI bridge: Intel Corporation 82801G (ICH7 Family) PCI Express Port 2 (rev 02) 00:1d.0 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #1 (rev 02) 00:1d.1 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #2 (rev 02) 00:1d.2 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #3 (rev 02) 00:1d.3 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #4 (rev 02) 00:1d.7 USB Controller: Intel Corporation 82801G (ICH7 Family) USB2 EHCI Controller (rev 02) 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev e2) 00:1f.0 ISA bridge: Intel Corporation 82801GBM (ICH7-M) LPC Interface Bridge (rev 02) 00:1f.1 IDE interface: Intel Corporation 82801G (ICH7 Family) IDE Controller (rev 02) 00:1f.2 SATA controller: Intel Corporation 82801GBM/GHM (ICH7 Family) SATA AHCI Controller (rev 02) 00:1f.3 SMBus: Intel Corporation 82801G (ICH7 Family) SMBus Controller (rev 02) 02:00.0 Ethernet controller: Marvell Technology Group Ltd. 88E8038 PCI-E Fast Ethernet Controller (rev 14) 03:00.0 Network controller: Intel Corporation PRO/Wireless 3945ABG [Golan] Network Connection (rev 02) 04:09.0 CardBus bridge: Texas Instruments PCIxx12 Cardbus Controller 04:09.1 FireWire (IEEE 1394): Texas Instruments PCIxx12 OHCI Compliant IEEE 1394 Host Controller 04:09.2 Mass storage controller: Texas Instruments 5-in-1 Multimedia Card Reader (SD/MMC/MS/MS PRO/xD) I have also checked alsamixer, and nothing is muted. No headphones plugged into headphone jack either. So the question now is, how do I get sound to work on my laptop? It doesn't work for any application.
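
    Since /proc/asound/cards already lists the HDA Intel device, the snd-hda-intel driver is loaded and this is more likely a codec/mixer routing issue than a missing driver. Some generic things to try (a sketch; the model= value is an assumption to experiment with, not a known fix for this laptop):

        aplay -l                            # confirm ALSA exposes a playback device on card 0
        speaker-test -D plughw:0,0 -c 2     # test output while watching the levels in alsamixer
        echo "options snd-hda-intel model=auto" | sudo tee /etc/modprobe.d/alsa-base-hda.conf
        # then reboot (or reload snd-hda-intel) and re-test; other model= values are listed in the ALSA HD-Audio documentation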

    Read the article

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB-Disk: kaefert@blechmobil:~$ lsusb -s 2:3 Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC As can be seen in this dmesg output, there is some problem that prevents that disk from beeing mounted: kaefert@blechmobil:~$ dmesg ... [ 113.084079] usb 2-1: new high-speed USB device number 3 using ehci_hcd [ 113.217783] usb 2-1: New USB device found, idVendor=0bc2, idProduct=3320 [ 113.217787] usb 2-1: New USB device strings: Mfr=2, Product=3, SerialNumber=1 [ 113.217790] usb 2-1: Product: Expansion Desk [ 113.217792] usb 2-1: Manufacturer: Seagate [ 113.217794] usb 2-1: SerialNumber: NA4J4N6K [ 113.435404] usbcore: registered new interface driver uas [ 113.455315] Initializing USB Mass Storage driver... [ 113.468051] scsi5 : usb-storage 2-1:1.0 [ 113.468180] usbcore: registered new interface driver usb-storage [ 113.468182] USB Mass Storage support registered. [ 114.473105] scsi 5:0:0:0: Direct-Access Seagate Expansion Desk 070B PQ: 0 ANSI: 6 [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00 [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.501649] sdb: sdb1 [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519) [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted! ... So I went and fired up my favorite partition manager - gparted, and told it to verify and repair the partition sdb1. 
This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)) e2fsck -f -y -v /dev/sdb1 Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bugreport https://bugzilla.gnome.org/show_bug.cgi?id=467925 ) I started this whole thing on Sunday (2012-11-04_2200) evening, so about 48 hours ago, this is what htop says about it now (2012-11-06-1900): PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command 3704 root 39 19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1 Now I found a few posts on the internet that discuss e2fsck running slow, for example: http://gparted-forum.surf4.info/viewtopic.php?id=13613 where they write that its a good idea to see if the disk is just that slow because maybe its damaged, and I think these outputs tell me that this is not the case in my case: kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 3562 MB in 2.00 seconds = 1783.29 MB/sec Timing buffered disk reads: 82 MB in 3.01 seconds = 27.26 MB/sec kaefert@blechmobil:~$ sudo hdparm /dev/sdb /dev/sdb: multcount = 0 (off) readonly = 0 (off) readahead = 256 (on) geometry = 364801/255/63, sectors = 5860533160, start = 0 However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, considering tools like gkrellm or iotop or this: kaefert@blechmobil:~$ iostat -x Linux 3.2.0-2-amd64 (blechmobil) 2012-11-06 _x86_64_ (2 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 14,24 47,81 14,63 0,95 0,00 22,37 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0,59 8,29 2,42 5,14 43,17 160,17 53,75 0,30 39,80 8,72 54,42 3,95 2,99 sdb 137,54 5,48 9,23 0,20 587,07 22,73 129,35 0,07 7,70 7,51 16,18 2,17 2,04 Now I researched a little bit on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this: kaefert@blechmobil:~$ sudo strace -p3704 lseek(4, 41026998272, SEEK_SET) = 41026998272 write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096 lseek(4, 48404766720, SEEK_SET) = 48404766720 read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096 lseek(4, 41027002368, SEEK_SET) = 41027002368 write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096 lseek(4, 48404770816, SEEK_SET) = 48404770816 read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096 lseek(4, 41027006464, SEEK_SET) = 41027006464 write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096 lseek(4, 48404774912, SEEK_SET) = 48404774912 read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096 ^CProcess 3704 detached around 16 of these lines every second, so 4 read and 4 write operations every second, which I don't consider to be a lot.. And finally, my question: Will this process ever finish? If those numbers from fseek (48404774912) represent bytes, that would be something like 45 gigabytes, with this beeing a 3 terrabyte disk, which would give me 134 days to go, if the speed stays constant, and e2fsck scans the disk like this completly and only once. Do you have some advice for me? 
I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it to this disk, so I would prefer to getting this disk up and running again, without formatting it anew. I don't think that the hardware is damaged since the disk is only a few months and since I can't see any I/O errors in the dmesg output. UPDATE: I just looked at the strace output again (2012-11-06_2300), now it looks like this: lseek(4, 1419860611072, SEEK_SET) = 1419860611072 read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096 lseek(4, 43018145792, SEEK_SET) = 43018145792 write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096 lseek(4, 1419860615168, SEEK_SET) = 1419860615168 read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096 lseek(4, 43018149888, SEEK_SET) = 43018149888 write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096 lseek(4, 1419860619264, SEEK_SET) = 1419860619264 read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096 lseek(4, 43018153984, SEEK_SET) = 43018153984 write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096 So the numbers in the lseek lines before the reads, like 1419860619264 are already a lot bigger, standing for 1.29 terabytes if those numbers are bytes, so it doesn't seem to be a linear progress on a big scale, maybe there are only some areas that need work, that have big gaps in between them. UPDATE2: Okey, big disappointment, the numbers are back to very small again (2012-11-07_0720) lseek(4, 52174548992, SEEK_SET) = 52174548992 read(4, "\374\312\22\\\325\215\213\23\0357U\222\246\370v^f(\312|f\212\362\343\375\373\342\4\204mU6"..., 4096) = 4096 lseek(4, 46603526144, SEEK_SET) = 46603526144 write(4, "\370\261\223\227\23?\4\4\217\264\320_Am\246CQ\313^\203U\253\274\204\277\2564n\227\177\267\343"..., 4096) = 4096 so either e2fsck goes over the data multiple times, or it just hops back and forth multiple times. Or my assumption that those numbers are bytes is wrong. UPDATE3: Since it's mentioned here http://forums.fedoraforum.org/showthread.php?t=282125&page=2 that you can testisk while e2fsck is running, i tried that, though not with a lot of success. When asking testdisk to display the data of my partition, this is what I get: TestDisk 6.13, Data Recovery Utility, November 2011 Christophe GRENIER <[email protected]> http://www.cgsecurity.org 1 P Linux 0 4 5 45600 40 8 732566272 Can't open filesystem. Filesystem seems damaged. And this is what strace currently gives me (2012-11-07_1030) lseek(4, 212460343296, SEEK_SET) = 212460343296 read(4, "\315Mb\265v\377Gn \24\f\205EHh\2349~\330\273\203\3375\206\10\r3=W\210\372\352"..., 4096) = 4096 lseek(4, 47347830784, SEEK_SET) = 47347830784 write(4, "]\204\223\300I\357\4\26\33+\243\312G\230\250\371*m2U\t_\215\265J \252\342Pm\360D"..., 4096) = 4096 (times are in CET)
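To make that back-of-the-envelope estimate concrete, here is a minimal (hypothetical) Java sketch of the same arithmetic. It assumes the lseek offsets really are byte positions and that e2fsck walks the disk linearly and only once - both of which, as the updates above show, may well be wrong - and it lands in the same ballpark as the 134-day figure mentioned above:

public class FsckEta {
    public static void main(String[] args) {
        long offset = 48404774912L;          // current lseek position (assumed to be a byte offset)
        long diskSize = 3000000000000L;      // 3 TB disk
        double elapsedHours = 48.0;          // how long e2fsck has been running so far
        double bytesPerHour = offset / elapsedHours;
        double remainingHours = (diskSize - offset) / bytesPerHour;
        System.out.printf("Roughly %.0f days to go%n", remainingHours / 24.0);
    }
}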

    Read the article

  • How to resolve java.lang.IllegalStateException?

    - by Roman Kagan
    We are using OC4J 10.1.3.5 and ADF. I have a popup form and when closing we got error below. I wonder what am I missing and how can I resolve it? Jun 15, 2010 8:26:49 AM com.sun.faces.lifecycle.ApplyRequestValuesPhase execute SEVERE: java.lang.IllegalStateException: popView(): No view has been pushed. javax.faces.el.EvaluationException: java.lang.IllegalStateException: popView(): No view has been pushed. at com.sun.faces.el.MethodBindingImpl.invoke(MethodBindingImpl.java:150) at oracle.adf.view.faces.component.UIXComponentBase.__broadcast(UIXComponentBase.java:1087) at oracle.adf.view.faces.component.UIXCommand.broadcast(UIXCommand.java:204) at javax.faces.component.UIViewRoot.broadcastEvents(UIViewRoot.java:269) at javax.faces.component.UIViewRoot.processDecodes(UIViewRoot.java:327) at com.sun.faces.lifecycle.ApplyRequestValuesPhase.execute(ApplyRequestValuesPhase.java:99) at com.sun.faces.lifecycle.LifecycleImpl.phase(LifecycleImpl.java:245) at com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:110) at javax.faces.webapp.FacesServlet.service(FacesServlet.java:213) at com.evermind.server.http.ResourceFilterChain.doFilter(ResourceFilterChain.java:64) at oracle.adfinternal.view.faces.webapp.AdfFacesFilterImpl._invokeDoFilter(AdfFacesFilterImpl.java:233) at oracle.adfinternal.view.faces.webapp.AdfFacesFilterImpl._doFilterImpl(AdfFacesFilterImpl.java:202) at oracle.adfinternal.view.faces.webapp.AdfFacesFilterImpl.doFilter(AdfFacesFilterImpl.java:12 3)

    Read the article

  • Tutorial: Creating a Component for the UCM

    - by Denisd
    So you have already installed UCM by following the tutorial http://blogs.oracle.com/ecmbrasil/2009/05/tutorial_de_instalao_do_ucm.html, have already worked through the hands-on at http://blogs.oracle.com/ecmbrasil/2009/10/tutorial_de_ucm.html, and now you want to go beyond the basics? You want to start building functionality for UCM? You want to become a UCM developer? You want to shape the Content Server in your own image and likeness?! Then today is your lucky day! In this tutorial we will learn how to create a component for the Content Server. Our first component, although not exactly simple, will be built using only Content Server resources. In a future tutorial we will learn how to use Java classes as part of our components. In this tutorial we will develop a Favorites feature, where users can mark certain documents as their favorites and later review those documents in a list. We will not build the component with every possible feature, but with what you will see here it will be easy to extend it, even for production environments.
The MyFavorites Component
A few characteristics of our Favorites component:
- For reasons of space, we will build this component in a "quick and dirty" way, that is, without necessarily following the best practices of component development. To better understand component development practices, I recommend reading the Working With Components guide.
- It will be developed for Brazilian Portuguese only. Other languages can be added later.
- It will add an "Add to Favorites" option to the "Content Actions" menu (Content Information page), so that the user can mark a file as one of their favorites.
- When clicking that link, the user is taken to a page where they can type a comment about the favorite, to make it easier to read later.
- The favorites will be saved in a database table that we will create as part of the component.
- The "My Content Server" tab will get a new option called "My Favorites", which brings up a page that lists the favorites and lets the user delete the links.
- Some features will be left out of this exercise, again for reasons of space, but we will list them at the end as complementary exercises.
Resources of our Component
The Favorites component will be built from a few resources. Let's take a closer look at what these resources are and what they do:
- Query: a query is any activity I need to execute against the database, the famous CRUD: Create, Read, Update, Delete. There are different ways of calling a query, depending on its purpose:
  Select Query: executes a SQL command but discards the result. Used only to test whether the database connection is OK. It will not be used in our exercise.
  Execute Query: executes a SQL command that changes data in the database. It can be an INSERT, UPDATE or DELETE, and it discards the results. We will use Execute Queries to create, update and delete the favorites.
  Select Cache Query: executes a SQL SELECT and stores the results in a ResultSet. This ResultSet is returned as the result of the service and can be manipulated in IDOC, Java or other languages. We will use a Select Cache Query to return a user's list of favorites.
- Service: services are responsible for executing the queries (or Java classes, but that is a subject for another tutorial...).
The service receives the input parameters, executes the query and returns the ResultSet (in the case of a SELECT). Services can be invoked from templates, from IDOC pages, from other applications (through an API), or directly from the browser URL. In this exercise we will create services to Create, Edit, Delete and List a user's favorites.
- Template: templates are the graphical interfaces (pages) that will be presented to the users. For example, before executing the service that deletes a document from the favorites, I want the user to see a page with the document ID and a Confirm button, so that they are sure they are deleting the right record. That page can be created as a template. In this exercise we will build templates for the main services, plus the page that lists all of a user's favorites and exposes the edit and delete actions. Templates are nothing more than HTML pages with IDOC script.
Our sequence of activities for developing this component will be:
- Create the database table
- Create the component using the Component Wizard
- Create the Queries to insert, edit, delete and list the favorites
- Create the Services that execute these Queries
- Create the templates, which are the pages that will interact with the users
- Create the links, on the content information page and in the My Content Server panel
Well then, let's get started! Read this tutorial in full by clicking this link: http://blogs.oracle.com/ecmbrasil/Tutorial_Componente_Banco.pdf   Happy coding!  :-)
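(A quick aside: once the component and its services are in place, an external Java application could also call them through the RIDC API. The sketch below is only an illustration and not part of the tutorial - the service name GET_MY_FAVORITES, the result set name FAVORITES, the column names, and the connection details are all assumptions:)

import oracle.stellent.ridc.IdcClient;
import oracle.stellent.ridc.IdcClientManager;
import oracle.stellent.ridc.IdcContext;
import oracle.stellent.ridc.model.DataBinder;
import oracle.stellent.ridc.model.DataObject;
import oracle.stellent.ridc.model.DataResultSet;
import oracle.stellent.ridc.protocol.ServiceResponse;

public class ListFavorites {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details for a local Content Server instance
        IdcClientManager manager = new IdcClientManager();
        IdcClient client = manager.createClient("idc://localhost:4444");
        IdcContext user = new IdcContext("weblogic", "welcome1");

        // Call the custom service created by the component (service name is an assumption)
        DataBinder binder = client.createBinder();
        binder.putLocal("IdcService", "GET_MY_FAVORITES");
        ServiceResponse response = client.sendRequest(user, binder);

        // Iterate over the result set returned by the Select Cache Query
        DataBinder result = response.getResponseAsBinder();
        DataResultSet favorites = result.getResultSet("FAVORITES");
        for (DataObject row : favorites.getRows()) {
            System.out.println(row.get("dDocName") + " - " + row.get("xComment"));
        }
    }
}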

    Read the article

  • How to propagate http response code from back-end to client

    - by Manoj Neelapu
    Oracle service bus can be used as for pass through casses. Some use cases require propagating the http-response code back to the caller. http://forums.oracle.com/forums/thread.jspa?messageID=4326052&#4326052 is one such example we will try to accomplish in this tutorial.We will try to demonstrate this feature using Oracle Service Bus (11.1.1.3.0. We will also use commons-logging-1.1.1, httpcomponents-client-4.0.1, httpcomponents-core-4.0.1 for writing the client to demonstrate.First we create a simple JSP which will always set response code to 304.The JSP snippet will look like <%@ page language="java"     contentType="text/xml;     charset=UTF-8"        pageEncoding="UTF-8" %><%      System.out.println("Servlet setting Responsecode=304");    response.setStatus(304);    response.flushBuffer();%>We will now deploy this JSP on weblogic server with URI=http://localhost:7021/reponsecode/For this JSP we will create a simple Any XML BS We will also create proxy service as shown below Once the proxy is created we configure pipeline for the proxy to use route node, which invokes the BS(JSPCaller) created in the first place. So now we will create a error handler for route node and will add a stage. When a HTTP BS sends a request, the JSP sends the response back. If the response code is not 200, then the http BS will consider that as error and the above configured error handler is invoked. We will print $outbound to show the response code sent by the JSP. The next actions. To test this I had create a simple clientimport org.apache.http.Header;import org.apache.http.HttpEntity;import org.apache.http.HttpHost;import org.apache.http.HttpResponse;import org.apache.http.HttpVersion;import org.apache.http.client.methods.HttpGet;import org.apache.http.conn.ClientConnectionManager;import org.apache.http.conn.scheme.PlainSocketFactory;import org.apache.http.conn.scheme.Scheme;import org.apache.http.conn.scheme.SchemeRegistry;import org.apache.http.impl.client.DefaultHttpClient;import org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager;import org.apache.http.params.BasicHttpParams;import org.apache.http.params.HttpParams;import org.apache.http.params.HttpProtocolParams;import org.apache.http.util.EntityUtils;/** * @author MNEELAPU * */public class TestProxy304{    public static void main(String arg[]) throws Exception{     HttpHost target = new HttpHost("localhost", 7021, "http");     // general setup     SchemeRegistry supportedSchemes = new SchemeRegistry();     // Register the "http" protocol scheme, it is required     // by the default operator to look up socket factories.     
supportedSchemes.register(new Scheme("http",              PlainSocketFactory.getSocketFactory(), 7021));     // prepare parameters     HttpParams params = new BasicHttpParams();     HttpProtocolParams.setVersion(params, HttpVersion.HTTP_1_1);     HttpProtocolParams.setContentCharset(params, "UTF-8");     HttpProtocolParams.setUseExpectContinue(params, true);     ClientConnectionManager connMgr = new ThreadSafeClientConnManager(params,              supportedSchemes);     DefaultHttpClient httpclient = new DefaultHttpClient(connMgr, params);     HttpGet req = new HttpGet("/HttpResponseCode/ProxyExposed");     System.out.println("executing request to " + target);     HttpResponse rsp = httpclient.execute(target, req);     HttpEntity entity = rsp.getEntity();     System.out.println("----------------------------------------");     System.out.println(rsp.getStatusLine());     Header[] headers = rsp.getAllHeaders();     for (int i = 0; i < headers.length; i++) {         System.out.println(headers[i]);     }     System.out.println("----------------------------------------");     if (entity != null) {         System.out.println(EntityUtils.toString(entity));     }     // When HttpClient instance is no longer needed,      // shut down the connection manager to ensure     // immediate deallocation of all system resources     httpclient.getConnectionManager().shutdown();     }}On compiling and executing this we see the below output in STDOUT which clearly indicates the response code was propagated from Business Service to Proxy serviceexecuting request to http://localhost:7021----------------------------------------HTTP/1.1 304 Not ModifiedDate: Tue, 08 Jun 2010 16:13:42 GMTContent-Type: text/xml; charset=UTF-8X-Powered-By: Servlet/2.5 JSP/2.1----------------------------------------  

    Read the article

  • SQL Developer Data Modeler v3.3 Early Adopter: Collaborative Design via Excel?

    - by thatjeffsmith
    As you may have heard last week, we have a new version of Oracle SQL Developer Data Modeler now available as an Early Adopter release. Version 3.3 has quite a few new features and I’ll be previewing them here. Today’s topic is our new Excel integration. It builds off of last week’s lesson: Search, so you may want to go read that first. They say it takes a village to raise a child. I say it takes a team to build a data model. You have your techie folks, your business folks, your in-betweeners, and your database geeks. Who gets to define how customers are represented and stored in your database? That data lives forever, so you better get it right from the beginning, or you’ll be living in a hacker’s paradise for years to come. Lots of good rantings, ravings, and advice on this topic in general on Karen Lopez’s (@datachick) blog. But let’s say you are the primary modeler on a project. You dutifully interview the business folks for their requirements. You sit down and start to model and think you’re pretty close. Now you need someone to confirm your assumptions and provide some feedback. Do you send your model over? Take a screenshot and blow it up on a whiteboard? Export to HTML and let them take a magic marker to their monitors? Or maybe you bite the bullet and install your modeling software on their desktops and take the hours or days required to train them up on how to use the the tool. Wouldn’t it be nice if they could just mark up their corrections in Excel and let you suck the updates back in? This is what we have started to build in Oracle SQL Developer Data Modeler. Let’s say you have a new table called ‘UT_STARTUPS.’ It looks a little something like this: A table in Oracle SQL Developer Data Modeler What I would like to do is have my team or co-worker review how I have defined those columns. Perhaps TIMESTAMP is overkill or maybe the column names themselves aren’t up to snuff. What I am going to do is now search for all the columns in my table, then export that to Excel. So do a search for UT_STARTUPS. Search, filter, then Report With the filter set to ‘Columns,’ if I do a report I’ll be only getting the columns that are resolving to my search term. So as long as my table name is unique in the model, I should get what I’m looking for. Here’s what I see when I click on the Report button: XLS or XLSX, either format is just fine I want to decide how the Column data is exported to Excel though, so I’m going to create a report template that I can use going forward. So click the ‘Manage’ button and setup a new template. I’m going to call mine ‘CollaborativeDevelopment.’ The templates allow me to define what properties are included in the reports. Once this is set, I’ll have the XLS file generated, and get to work Now let the Excel junkies do their stuff Note that not ALL of the report properties are update-able (yes, I made up a new word there) via Excel. We’ll have the full list of properties documented going forward, but in my Excel sheet, note that I can’t change the table name or the data types for the columns. I’m going to update some column names and supply ‘nice’ comments so the database users know what’s what. Here’s my input for the designer/architect/database dude: Be kind, please rew…use comments. Save the file, email it back to your modeler. Update the model from Excel That’s right, it’s a right mouse click from your model in the tree If everything goes right, you’ll see a nice confirmation message: It’s alive! 
Another to-do item on tap – making this dialog more informative. We’ll be showing exactly what in your model was updated from Excel. Let’s take another look at the model now Voila! Why are we doing this again? The goal is to reduce the number of round-trips from the modeler and the business process owner. One is used to working with Excel – why not allow them to mark up their changes in the tool they already know? This is an early adopter release and I anticipate this feature getting a good bit of tuning up before we release. Why don’t you download 3.3, give it a whirl, and let us know what you think?

    Read the article

  • How-to filter table filter input to only allow numeric input

    - by frank.nimphius
    In a previous ADF Code Corner post, I explained how to change the table filter behavior by intercepting the query condition in a query filter. See sample #30 at http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html In this OTN Harvest post I explain how to prevent users from providing invalid character entries as table filter criteria to avoid problems upon re-querying the table. In the example shown next, only numeric values are allowed for a table column filter. To create a table that allows data filtering, drag a View Object – or a data collection of a Web Service or JPA business service – from the DataControls panel and drop it as a table. Choose the Enable Filtering option in the Edit Table Columns dialog so the table renders with the column filter boxes displayed. The table filter fields are created using implicit af:inputText components that need to be customized for you to apply a custom filter input component, or to change the input behavior. To change the input filter, so only a defined set of input keys is allowed, you need to change the default filter field with your own af:inputText field to which you apply an af:clientListener tag that filters user keyboard entries. For this, in the Oracle JDeveloper visual editor, select the column which filter you want to change and expand the column node in the Oracle JDeveloper Structure Window. Part of the column definition is the Column facet node. Expand the facets so you see the filter facet entry. The filter facet is grayed out as there is no custom facet defined. In a next step, open theComponent Palette (ctrl+shift+P) and drag an Input Text component onto the facet. This demarks the first part in the filter customization. To make the custom filter component work, you need to map the af:inputText component value property to the ADF filter criteria that is exposed in the Expression Builder. Open the Expression Builder for the filter input component value property by clicking the arrow icon to its right. In the Expression Builder expand the JSP Objects | vs | filterCriteria node to select the attribute name represented by the table column. The vs entry is the name of a variable that is defined on the table and that grants you access to the table attributes. Now that the filter works as before – though using a custom filter input component – you can add the af:clientListener tag to your custom filter component – af:inputText – to call out to JavaScript when users type in the column filter field Point the client filter method property to a JavaScript function that you reference or add through using the af:resource tag and set the type property value to keyDown. <af:document id="d1">     <af:resource type="javascript" source="/js/filterHandler.js"/> … The filter definition looks as shown below <af:inputText label="Label 1" id="it1"                         value="#{vs.filterCriteria.Employe        <af:clientListener method="suppressCharacterInput"                                     type="keyDown"/> </af:inputText> The JavaScript code that you can use to either filter character inputs or numeric inputs is shown below. Just store this code in an external JavaScript (.js) file and reference it from the af:resource tag. 
//Allow numbers, cursor control keys and delete keys function suppressCharacterInput(evt) {     var _keyCode = evt.getKeyCode();     var _filterField = evt.getCurrentTarget();     var _oldValue = _filterField.getValue();     if (!((_keyCode < 57) ||(_keyCode > 96 && _keyCode < 105))) {         _filterField.setValue(_oldValue);         evt.cancel();     } } //Allow characters, cursor control keys and delete keys function suppressNumericInput(evt) {  var _keyCode = evt.getKeyCode();  var _filterField = evt.getCurrentTarget();  var _oldValue = _filterField.getValue();  //check for numbers  if ((_keyCode < 57 && _keyCode > 47) ||      (_keyCode > 96 && _keyCode < 105)){     _filterField.setValue(_oldValue);     evt.cancel();   } } But what if browsers don't allow JavaScript ? Don't worry about this. If browsers would not support JavaScript then ADF Faces as a whole would not work and you had a different problem.

    Read the article

  • Is Linear Tape File System (LTFS) Best For Transportable Storage?

    - by rickramsey
    Those of us in tape storage engineering take a lot of pride in what we do, but understand that tape is the right answer to a storage problem only some of the time. And, unfortunately for a storage medium with such a long history, it has built up a few preconceived notions that are no longer valid. When I hear customers debate whether to implement tape vs. disk, one of the common strikes against tape is its perceived lack of usability. If you could go back a few generations of corporate acquisitions, you would discover that StorageTek engineers recognized this problem and started developing a solution where a tape drive could look just like a memory stick to a user. The goal was to not have to care about where files were on the cartridge, but to simply see the list of files that were on the tape, and click on them to open them up. Eventually, our friends in tape over at IBM built upon our work at StorageTek and Sun Microsystems and released the Linear Tape File System (LTFS) feature for the current LTO5 generation of tape drives as an open specification. LTFS is really a wonderful feature and we’re proud to have taken part in its beginnings and, as you’ll soon read, its future. Today we offer LTFS-Open Edition, which is free for you to use in your in Oracle Enterprise Linux 5.5 environment - not only on your LTO5 drives, but also on your Oracle StorageTek T10000C drives. You can download it free from Oracle and try it out. LTFS does exactly what its forefathers imagined. Now you can see immediately which files are on a cartridge. LTFS does this by splitting a cartridge into two partitions. The first holds all of the necessary metadata to create a directory structure for you to easily view the contents of the cartridge. The second partition holds all of the files themselves. When tape media is loaded onto a drive, a complete file system image is presented to the user. Adding files to a cartridge can be as simple as a drag-and-drop just as you do today on your laptop when transferring files from your hard drive to a thumb drive or with standard POSIX file operations. You may be thinking all of this sounds nice, but asking, “when will I actually use it?” As I mentioned at the beginning, tape is not the right solution all of the time. However, if you ever need to physically move data between locations, tape storage with LTFS should be your most cost-effective and reliable answer. I will give you a few use cases examples of when LTFS can be utilized. Media and Entertainment (M&E), Oil and Gas (O&G), and other industries have a strong need for their storage to be transportable. For example, an O&G company hunting for new oil deposits in remote locations takes very large underground seismic images which need to be shipped back to a central data center. M&E operations conduct similar activities when shooting video for productions. M&E companies also often transfers files to third-parties for editing and other activities. These companies have three highly flawed options for transporting data: electronic transfer, disk storage transport, or tape storage transport. The first option, electronic transfer, is impractical because of the expense of the bandwidth required to transfer multi-terabyte files reliably and efficiently. If there’s one place that has bandwidth, it’s your local post office so many companies revert to physically shipping storage media. Typically, M&E companies rely on transporting disk storage between sites even though it, too, is expensive. 
Tape storage should be the preferred format because as IDC points out, “Tape is more suitable for physical transportation of large amounts of data as it is less vulnerable to mechanical damage during transportation compared with disk" (See note 1, below). However, tape storage has not been used in the past because of the restrictions created by proprietary formats. A tape may only be readable if both the sender and receiver have the same proprietary application used to write the file. In addition, the workflows may be slowed by the need to read the entire tape cartridge during recall. LTFS solves both of these problems, clearing the way for tape to become the standard platform for transferring large files. LTFS is open and, as long as you’ve downloaded the free reader from our website or that of anyone in the LTO consortium, you can read the data. So if a movie studio ships a scene to a third-party partner to add, for example, sounds effects or a music score, it doesn’t have to care what technology the third-party has. If it’s written back to an LTFS-formatted tape cartridge, it can be read. Some tape vendors like to claim LTFS is a “standard,” but beauty is in the eye of the beholder. It’s a specification at this point, not a standard. That said, we’re already seeing application vendors create functionality to write in an LTFS format based on the specification. And it’s my belief that both customers and the tape storage industry will see the most benefit if we all follow the same path. As such, we have volunteered to lead the way in making LTFS a standard first with the Storage Network Industry Association (SNIA), and eventually through to standard bodies such as American National Standards Institute (ANSI). Expect to hear good news soon about our efforts. So, if storage transportability is one of your requirements, I recommend giving LTFS a look. It makes tape much more user-friendly and it’s free, which allows tape to maintain all of its cost advantages over disk! Note 1 - IDC Report. April, 2011. “IDC’s Archival Storage Solutions Taxonomy, 2011” - Brian Zents Website Newsletter Facebook Twitter
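Because an LTFS-mounted cartridge presents itself as an ordinary file system, no special SDK is needed to move data onto it or to browse it. The sketch below is just an illustration of that point, not anything LTFS-specific - the mount point and file names are made up:

import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class CopyToLtfs {
    public static void main(String[] args) throws Exception {
        // Hypothetical paths: /mnt/ltfs is where the LTFS cartridge is mounted
        Path source = Paths.get("/data/seismic/survey-0042.segy");
        Path target = Paths.get("/mnt/ltfs/survey-0042.segy");

        // Copying to tape is the same call as copying to any other disk or thumb drive
        Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING);

        // Listing the cartridge contents is an ordinary directory listing
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(Paths.get("/mnt/ltfs"))) {
            for (Path entry : entries) {
                System.out.println(entry.getFileName());
            }
        }
    }
}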

    Read the article

  • 7 reasons you had to be at JavaOne Latin America 2012

    - by Bruno.Borges
    Yesterday was 12/12/12, and everybody went crazy on Twitter with cool memes like this one. And maybe you are now wondering why I mentioned 7 (seven) on the blog title. Because I want to play numbers? Yes! Today is 7 days after JavaOne Latin America 2012 is over (... and I had to figure out an excuse for taking so long to blog about it...). So unless you were at JavaOne Latin America this year, here are 7 things you missed: OTN Lounge mini-theatreThere was a mini-theatre holding several lightning talks. We had people from SouJava JUG, GoJava JUG, Globalcode, and several other Java gurus and companies running demos, talks, and even more. For example, @drspockbr talked about the ScrumToys project, that demonstrates the power of JSF. Hands On Lab for JAX-RS and WebSocketsOne of the cool things to do during JavaOne is to come to these Hands On labs and really do something using new technologies with the help of experts. This one in particular, was covered by me, Arun Gupta, and Reza Rahman. The HOL had more people than laptops (and we had 48 laptops!) interested on understanding and learning about the new stuff that is coming within Java EE 7. Things like JAX-RS, Server-sent Events and WebSockets. Hey, if you want to try this HOL by yourself, it is available on Github, so go for it! If you have questions, just let me know! Java Community KeynoteThis keynote presented a lot of cool things like startups using Java in their projects, the Duke Awards, SouJava winning the JCP Outstanding Award, the Java Band, and even more! It was really a space where the Java community could present what they are doing and what they want to do. There's a lot of interest on the Adopt-a-JSR program and the Adopt-OpenJDK. There's also an Adopt-a-JavaEE-JSR program! Take a look if you want to participate and Make the Future Java. Java EE (JMS, JAX-RS) sessions from Reza Rahman, the HeavyMetal guyReza is a well know professional and Java EE enthusiast from the communitty who just joined Oracle this year. His sessions were very well attended, perhaps because of a high interest on the new things coming to Java EE 7 like JMS 2.0 and JAX-RS 2.0. If you want to look at what he did at this JavaOne edition, read his blog post. By the way, if you like Java and heavymetal, you should follow him on Twitter as well! :-) Java EE (WebSockets, HTML5) sessions from Arun Gupta, the GlassFish guyIf you don't know Arun Gupta, no worries. You will have time to know about him while you read his Java EE 6 Pocket Guide. Arun has been evangelizing Java EE for a long time, and is now spreading his word about the new upcoming version Java EE 7. He gave one talk about HTML5 Productivity on the Java EE 7 platform, and another one on building web apps with WebSockets. Pretty neat! Arun blogged about JavaOne Latin America as well. Read it here. Java Embedded and JavaFXIf there are two things that are really trending in the Java World right now besides Java EE 7, certainly they are JavaFX and Java Embedded. There were 14 talks covering Java Embedded, from Java Cards to Raspberry.pi, from Java ME to Java on your TV with Ginga-J. The Internet of Things is becoming true, and Java is the only platform today that can connect it all in an standardized and concise way. JavaFX gained a lot of attention too. There were 8 sessions covering what the platform has to offer in terms of Rich User Experience. 
The JavaFX Scene Builder is an awesome tool to start playing designing an UI, and coding for JavaFX is like coding Swing with 8 hands, one holding your coffee cup. You can achieve a lot, with your two hands (unless, you really have 8 hands, then you can achieve 4 times more :-). If you want to read more about JavaFX, go to Stephen Chin's blog post. GlassFish and Friends Party, 1st edition at JavaOne Lating AmericaThis is probably the thing that I'm most proud. We brought to Brasil the tradition of holding a happy hour for all GlassFish, Java EE friends. This party started almost 7 years ago in San Francisco, and it was about time to bring it to Brazil! The party happened on Tuesday night, right after JavaOne General Keynote, at the Tribeca Pub. We had about 80 attendees and met a lot of Java EE developers there! People from JUGs, Oracle, Locaweb and Red Hat showed up too, including some execs from Oracle that didn't resist and could not miss a party like this one.Lots of caipirinhas, beer and food to everyone, some cool music... even The Fish walking around the party with Juggy!You can see more photos from the party on an album I shared with the recently created GlassFish Brasil community on Google+ here (but you may be more interested in joining the GlassFish english community). There's also more pictures that Arun took and shared on this link. So now you may want to consider coming to Brazil next year! Java EE 7 is on its way, and Brazil is happily and patiently waiting for it, with a lot of enthusiasm. By the way, GlassFish and Java EE 6 just celebrated a Happy Birthday!

    Read the article

  • Agent versus Agentless management

    - by Owen Allen
    I got a couple of questions about Agentless asset management: "What does agentless management do for an asset?" Agentless management is one of the two ways that you can manage an operating system. Rather than installing an Agent Controller on the OS, agentless management uses SSH to regularly check the system and gather monitoring data. Many of the actions that would be available on an agent-managed system are available on an agentless system, but actions such as running reports or updating an Oracle Solaris 10 or Linux OS are not available. A table showing the capabilities of agentless management is here. "What permissions does agentless management require?" Agentless management still requires root credentials. If you can't log into the system as root, you can provide one set of credentials for the login, and then a set of root credentials to switch to.

    Read the article

  • Monitoring Events in your BPEL Runtime - RSS Feeds?

    - by Ramkumar Menon
    @10g - It had been a while since I'd tried something different. so here's what I did this week!Whenever our Developers deployed processes to the BPEL runtime, or perhaps when a process gets turned off due to connectivity issues, or maybe someone retired a process, I needed to know. So here's what I did. Step 1: Downloaded Quartz libraries and went through the documentation to understand what it takes to schedule a recurring job. Step 2: Cranked out two components using Oracle JDeveloper. [Within a new Web Project] a) A simple Java Class named FeedUpdater that extends org.quartz.Job. All this class does is to connect to your BPEL Runtime [via opmn:ormi] and fetch all events that occured in the last "n" minutes. events? - If it doesn't ring a bell - its right there on the BPEL Console. If you click on "Administration > Process Log" - what you see are events.The API to retrieve the events is //get the locator reference for the domain you are interested in.Locator l = .... //Predicate to retrieve events for last "n" minutesWhereCondition wc = new WhereCondition(...) //get all those events you needed.BPELProcessEvent[] events = l.listProcessEvents(wc); After you get all these events, write out these into an RSS Feed XML structure and stream it into a file that resides either in your Apache htdocs, or wherever it can be accessed via HTTP.You can read all about RSS 2.0 here. At a high level, here is how it looks like. <?xml version = '1.0' encoding = 'UTF-8'?><rss version="2.0">  <channel>    <title>Live Updates from the Development Environment</title>    <link>http://soadev.myserver.com/feeds/</link>    <description>Live Updates from the Development Environment</description>    <lastBuildDate>Fri, 19 Nov 2010 01:03:00 PST</lastBuildDate>    <language>en-us</language>    <ttl>1</ttl>    <item>      <guid>1290213724692</guid>      <title>Process compiled</title>      <link>http://soadev.myserver.com/BPELConsole/mdm_product/administration.jsp?mode=processLog&amp;processName=&amp;dn=all&amp;eventType=all&amp;eventDate=600&amp;Filter=++Filter++</link>      <pubDate>Fri Nov 19 00:00:37 PST 2010</pubDate>      <description>SendPurchaseOrderRequestService: 3.0 Time : Fri Nov 19 00:00:37                   PST 2010</description>    </item>   ...... </channel> </rss> For writing ut XML content, read through Oracle XML Parser APIs - [search around for oracle.xml.parser.v2] b) Now that my "Job" was done, my job was half done. Next, I wrote up a simple Scheduler Servlet that schedules the above "Job" class to be executed ever "n" minutes. It is very straight forward. Here is the primary section of the code.           try {        Scheduler sched = StdSchedulerFactory.getDefaultScheduler();         //get n and make a trigger that executes every "n" seconds        Trigger trigger = TriggerUtils.makeSecondlyTrigger(n);        trigger.setName("feedTrigger" + System.currentTimeMillis());        trigger.setGroup("feedGroup");                JobDetail job = new JobDetail("SOA_Feed" + System.currentTimeMillis(), "feedGroup", FeedUpdater.class);        sched.scheduleJob(job,trigger);         }catch(Exception ex) {            ex.printStackTrace();            throw new ServletException(ex.getMessage());        } Look up the Quartz API and documentation. It will make this look much simpler.   Now that both components were ready, I packaged the Application into a war file and deployed it onto my Application Server. When the servlet initialized, the "n" second schedule was set/initialized. 
From then on, the servlet kept populating the RSS feed file. I just ensured that my "Job" code keeps only the 30 latest events in it, so that the feed file stays small and under control [a few KBs].   Next I opened up the feed XML in my browser - it requested a subscription - and here I was, watching new deployments and life-cycle events pop up on my browser toolbar every 5 (actually n) minutes!   Of course, you could do this in a browser or feed reader of your choice - or perhaps read the updates like you read email in Thunderbird.
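If you want to try something similar without the Oracle XML parser, here is a minimal sketch of the feed-writing step using the standard JAXP APIs instead of oracle.xml.parser.v2. It is not the code used above - the channel title, the output path, and the idea of passing in one pre-formatted string per event are all simplifications:

import java.io.File;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class FeedWriter {

    // titles holds one short description per BPEL event, e.g. "Process compiled: ..."
    public void writeFeed(List<String> titles, String outputPath) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();
        Element rss = doc.createElement("rss");
        rss.setAttribute("version", "2.0");
        doc.appendChild(rss);
        Element channel = doc.createElement("channel");
        rss.appendChild(channel);
        append(doc, channel, "title", "Live Updates from the Development Environment");
        for (String title : titles) {
            Element item = doc.createElement("item");
            append(doc, item, "title", title);
            append(doc, item, "pubDate", new java.util.Date().toString());
            channel.appendChild(item);
        }
        // Serialize the DOM to the file served by the web server (path is a placeholder)
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.transform(new DOMSource(doc), new StreamResult(new File(outputPath)));
    }

    private void append(Document doc, Element parent, String name, String value) {
        Element child = doc.createElement(name);
        child.setTextContent(value);
        parent.appendChild(child);
    }
}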

    Read the article

  • About Me

    - by Jeffrey West
    I’m new to blogging.  This is the second blog post that I have written, and before I go too much further I wanted the readers of my blog to know a bit more about me… Kid’s Stuff By trade, I am a programmer (or coder, developer, engineer, architect, etc).  I started programming when I was 12 years old.  When I was 7, we got our first ‘family’ computer – an Apple IIc.  It was great to play games on, and of course what else was a 7-year-old going to do with it.  I did have one problem with it, though.  When I put in my 5.25” floppy to play a game, sometimes, instead loading my game I would get a mysterious ‘]’ on the screen with a flashing cursor.  This, of course, was not my game.  Much like the standard ‘Microsoft fix’ is to reboot, back then you would take the floppy out, shake it, and restart the computer and pray for a different result. One day, I learned at school that I could topple my nemesis – the ‘]’ and flashing cursor – by typing ‘load’ and pressing enter.  Most of the time, this would load my game and then I would get to play.  Problem solved.  However, I began to wonder – what else can I make it do? When I was in 5th grade my dad got a bright idea to buy me a Tandy 1000HX.  He didn’t know what I was going to do with it, and neither did I.  Least of all, my mom wasn’t happy about buying a 5th grader a $1,000 computer.  Nonetheless, Over time, I learned how to write simple basic programs out of the back of my Math book: 10 x=5 20 y=6 30 PRINT x+y That was fun for all of about 5 minutes.  I needed more – more challenges, more things that I could make the computer do.  In order to quench this thirst my parents sent me to National Computer Camps in Connecticut.  It was one of the best experiences of my childhood, and I spent 3 weeks each summer after that learning BASIC, Pascal, Turbo C and some C++.  There weren’t many kids at the time who knew anything about computers, and lets just say my knowledge of and interest in computers didn’t score me many ‘cool’ points.  My experiences at NCC set me on the path that I find myself on now, and I am very thankful for the experience.  Real Life I have held various positions in the past at different levels within the IT layer cake.  I started out as a Software Developer for a startup in the Dallas, TX area building software for semiconductor testing statistical process control and sampling.  I was the second Java developer that was hired, and the ninth employee overall, so I got a great deal of experience developing software.  Since there weren’t that many people in the organization, I also got a lot of field experience which meant that if I screwed up the code, I got yelled at (figuratively) by both my boss AND the customer.  Fun Times!  What made it better was that I got to help run pilot programs in Taiwan, Singapore, Malaysia and Malta.  Getting yelled at in Taiwan is slightly less annoying that getting yelled at in Dallas… I spent the next 5 years at Accenture doing systems integration in the ‘SOA’ group.  I joined as a Consultant and left as a Senior Manager.  I started out writing code in WebLogic Integration and left after I wrapped up project where I led a team of 25 to develop the next generation of a digital media platform to deliver HD content in a digital format.  At Accenture, I had the pleasure of working with some truly amazing people – mentoring some and learning from many others – and on some incredible real-world IT projects.  
Given my background with the BEA stack of products I was often called in to troubleshoot and tune WebLogic, ALBPM and ALSB installations and have logged many hours digging through thread dumps, running performance tests with SoapUI and decompiling Java classes we didn’t have the source for so I could see what was going on in the code. I am now a Senior Principal Product Manager at Oracle in the Application Grid practice.  The term ‘Application Grid’ refers to a collection of software and hardware products within Oracle that enables customers to build horizontally scalable systems.  This collection of products includes WebLogic, GlassFish, Coherence, Tuxedo and the JRockit/HotSpot JVMs (HotSprocket, maybe?).  Now, with the introduction of Exalogic it has grown to include hardware as well. Wrapping it up… I love technology and have a diverse background ranging from software development to HW and network architecture & tuning.  I have held certifications for being an Oracle Certified DBA, MSCE and Cisco Certified Network Professional (CCNP), among others and I have put those to great use over my career.  I am excited about programming & technology and I enjoy helping people learn and be successful.  If you are having challenges with WebLogic, BPM or Service Bus feel free to reach out to me and I’ll be happy to help as I have time. Thanks for stopping by!   --Jeff

    Read the article

  • A Plea for Doug

    - by user12652314
    Doug was a key leader in the JCP and did all his research on SPARC/Solaris. That is, until we changed the free patch policy that supported academics & research post-CIC, and he and many others left in droves, entirely pissed off. Well, we're working on a fix now so that all faculty can set up a server environment, get free patch support, and innovate on our stack (from OS to virtualization to toolsets) in support of research, academic use, and teaching. Hopefully, just maybe, we can start to bring Doug and the others back home as a result.

    Read the article

  • JavaOne 2011: Content review process and Tips for submissions

    - by arungupta
    The Technical Sessions, Birds-of-a-Feather sessions, Panels, and Hands-on Labs (basically all the content delivered at JavaOne) form the backbone of the conference. At this year's JavaOne conference you'll have access to the rock star speakers, the ability to engage with luminaries in the hallways, and have a beer (or 2) with community peers in designated areas. Even though the conference is Oct 2-6, 2011, and will be bigger and better than last year's conference, the Call for Papers submission and review/selection evaluation started much earlier. In previous years, I've participated in the review process, and this year I was honored to serve as co-lead for the "Enterprise Service Architecture and Cloud" track with Ludovic Champenois. We had a stellar review team with an equal mix of Oracle and external community reviewers. The review process is very overwhelming, with the reviewers going through multiple voting iterations on each submission in order to ensure that the selected content is the BEST of the submitted lot. Our ultimate goal was to ensure that the content best represented the track and, most importantly, would draw interest and excitement from attendees. As always, the number and quality of submissions were just superb, making for a truly challenging (and rewarding) experience for the reviewers. As co-lead I tried to ensure that I applied a fair and balanced process in the evaluation of content in my track. Here are some key steps followed by all track leads:
Vote on sessions - Each reviewer is required to vote on the sessions on a scale of 1-5 and also provide a justifying comment.
Create buckets - Divide the submissions into different buckets to ensure a fair representation of different topics within a track. This ensures that if a particular bucket got higher votes, the track is not exclusively skewed towards it.
Top 7 - The review committee provides a list of the top 7 talks that can be used in the promotional material by the JavaOne team. Generally these talks are easy to identify and a consensus is reached on them fairly quickly.
First cut - Each track is allocated a total number of sessions (including panels), BoFs, and Hands-on Labs that can be approved. The track leads then start creating the first cut of the approvals using the cast votes coupled with their prior experience in the subject matter. In our case, Ludo and I have been attending/speaking at JavaOne (and other popular Java-focused conferences) for double-digit years.
The Grind - The first cut is then refined and refined and refined using multiple selection criteria such as sorting on the bucket, speaker quality, topic popularity, cumulative vote total, and individual vote scale. The sessions that don't make the cut are reviewed again as well, to see whether they need to replace one of the selected ones as a potential alternate.
I would like to thank the entire Java community for all the submissions, and many thanks to the reviewers who spent countless hours reading each abstract, voting on it, and helping us refine the list. I think approximately 3-4 hours cumulative were spent on each submission to reach an evaluation, specifically on the borderline cases. We gave our recommendations to the JavaOne Program Committee Chairperson (Sharat Chander), and accept/decline notifications should show up in submitter inboxes in the next few weeks.
Here are some points to keep in mind when submitting a session to JavaOne next time:
JavaOne is a technology-focused conference, so any product, marketing, or seemingly marketish talk is put at the bottom of the list. Oracle Open World and Oracle Develop are better options for submitting product-specific talks.
Make your title catchy. Remember that attendees are more likely to read the abstract if they like the title.
We try our best to recategorize a talk to a different track if it needs it, but please ensure that you are filing in the right track to have all the right eyeballs looking at it. Also, it does not hurt to mark an alternate track if your talk meets the criteria.
Make sure to coordinate within your team before the submission - multiple sessions from the same team or company do not ensure that the best speaker is picked. In such cases we rely upon your "google presence" and/or the review committee's prior knowledge of the speaker.
The reviewers may not know you or your product at all, and you get 750 characters to pitch your idea. Make sure to use all of them, to the last 750th character.
Make sure to read your abstract multiple times to ensure that you are giving all the relevant information. Think through your presentation and see if you are leaving out any important aspects. Also check whether the abstract has any redundant information that will not be required by the reviewers.
There are additional sections that allow you to share information about the speaker and the presentation summary. Use them to blow the horn about yourself and any other relevant details. Please don't say "call me at xxx-xxx-xxxx to find out the details" :-)
The review committee enjoyed reviewing the submissions and we certainly hope you'll have a great time attending them. Happy JavaOne!

    Read the article
