Search Results

Search found 418 results on 17 pages for 'db2'.


  • Why does Hibernate ignore the name attribute of the @Column annotation?

    - by svachon
    I'm using Hibernate 3.3.1 and Hibernate Annotations 3.4; the database is DB2/400 V6R1, running on WebSphere 7.0.0.9. I have the following class @Entity public class Ciinvhd implements Serializable { @Id private String ihinse; @Id @Column(name="IHINV#") private BigDecimal ihinv; .... } For reasons I can't figure out, Hibernate ignores the specified column name and uses 'ihinv' to generate the SQL: select ciinvhd0_.ihinse as ihinse13_, ciinvhd0_.ihinv as ihinv13_, ... Which of course gives me the following error: Column IHINV not in table CIINVHD Has anyone had this problem before? I have other entities that are very similar, in that they also use # in database field names that are part of the PK, and I don't have this problem with them.

    Read the article

  • How should I set up these tables for searching?

    - by thewebguy
    My PHP site is an online store with about 5k products. Products belong to a vendor, a category, and possibly a subcategory. Each of those items has a name, and the products have descriptions. The search queries we've set up work wonderfully, but tend to run pretty slowly. They range between 0.20s and 30s (yes, 30 seconds). We've optimized like crazy and I'm starting to think we're out of room to improve on that front, so we're caching them and that's making life a lot easier. But when they do run they are still killing the server, because of what appears to be all the table locking that comes with MyISAM. So on to my question: Is there a way for us to use InnoDB (row-level locking) and still maintain FULLTEXT? Should we move our DB offsite and use a service like DB2? Is there some other search engine type software we should use instead? Any help is greatly appreciated :)
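    One possible direction (not from the original question, and assuming a newer MySQL than the post may have been written against): MySQL 5.6 and later support FULLTEXT indexes on InnoDB tables, so row-level locking and full-text search can coexist without MyISAM. A minimal sketch, where the table and column names are illustrative assumptions:

        -- Assumes MySQL 5.6+; "products", "name" and "description" are placeholder names.
        ALTER TABLE products ENGINE = InnoDB;

        ALTER TABLE products
          ADD FULLTEXT INDEX ft_products_name_description (name, description);

        -- Example full-text query against the new index:
        SELECT id, name
        FROM products
        WHERE MATCH(name, description) AGAINST ('widget' IN NATURAL LANGUAGE MODE);

    If staying on an older MySQL is a constraint, an external search engine (for example Sphinx or Lucene/Solr) in front of InnoDB tables is the other common route.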

    Read the article

  • SharePoint web performance optimization

    - by hertzel
    We are running on SSL with the following server topology: 1 ISA (SSL termination/cache/proxy + AD authentication), 1 SharePoint, 1 IBM DB2 database as the enterprise/corporate DB, and 1 MS SQL Server as the local DB. We have recently optimized caching, compression, minification, and other ASP.NET best practices such as viewstate and cookie sizes, minimizing round trips, parallel connections/domain sharding, and a lot more. Now we are not convinced that we are in an optimal position, as the network resources, i.e. bandwidth and especially latency, are out of our control. The client/browser-to-server/SharePoint traffic is intercontinental (Asia, USA, Europe). As far as I understand, the only ways to improve the network (latency) are: TCP/SSL optimization (hardware/software?) and CDNs (the cloud or our own?). Your opinion and insights would be much appreciated. Best regards, Hertzel

    Read the article

  • Running a jar file with multiple arguments in Perl

    - by compiler9999
    Hi all, I'm trying to run a jar file. This jar file asks multiple questions on the console, and I want to eliminate the console interaction by supplying the values it needs in order to proceed. e.g.: A. Choose value 1: [1] Windows [2] Unix Input: 2 B. Choose value 2: [1] Oracle [2] DB2 Input: 1 I'm trying: "java -jar program.jar < abc.txt", where abc.txt has values of: 2 1 3 etc., but it's not working; it's only getting the first value. Please help. Thanks. By the way, I've also tried: OPEN PIPE, "| java -jar program.jar"; open (FH, /abc.txt) print PIPE "$res"; close FH; close PIPE; Regards

    Read the article

  • Empty files generated from running `mysqldump` using PHP

    - by alex
    I keep getting empty files generated from running $command = 'mysqldump --opt -h localhost -u username -p \'password\' dbname > \'backup 2009-04-15 09-57-13.sql\''; command($command); Anyone know what might be causing this? My password has strange characters in it, but it works fine when connecting to the db. I've run exec($command, $return) and output the $return array, and it is finding the command. I've also run it with mysqldump > file.sql and the file contains Usage: mysqldump [OPTIONS] database [tables] OR mysqldump [OPTIONS] --databases [OPTIONS] DB1 [DB2 DB3...] OR mysqldump [OPTIONS] --all-databases [OPTIONS] For more options, use mysqldump --help So it would seem like the command is working.

    Read the article

  • SQL Authority News – Download Microsoft SQL Server 2014 Feature Pack and Microsoft SQL Server Developer’s Edition

    - by Pinal Dave
    Yesterday I attended the SQL Server Community Launch in Bangalore and presented on Performing an Effective Presentation. It was a fun presentation and people received it very well. No matter what subject I present on, I always end up talking about SQL. Here are two of the questions I received during the event. Q1) I want to install SQL Server on my development server; where can I get it for free or at an economical price (I do not have MSDN)? A1) If you are not going to use your server in a production environment, you can just get SQL Server Developer's Edition, and you can read more about it over here. Here is another favorite question which I keep receiving during events. Q2) I already have SQL Server installed on my machine; which of the different feature packs should I install and where can I get them from? A2) Just download and install the Microsoft SQL Server 2014 Feature Pack. Here is the link for downloading it. The Microsoft SQL Server 2014 Feature Pack is a collection of stand-alone packages which provide additional value for Microsoft SQL Server. It includes tools and components for Microsoft SQL Server 2014 and add-on providers for Microsoft SQL Server 2014. Here is the list of components this product contains: Microsoft SQL Server Backup to Windows Azure Tool, Microsoft SQL Server Cloud Adapter, Microsoft Kerberos Configuration Manager for Microsoft SQL Server, Microsoft SQL Server 2014 Semantic Language Statistics, Microsoft SQL Server Data-Tier Application Framework, Microsoft SQL Server 2014 Transact-SQL Language Service, Microsoft Windows PowerShell Extensions for Microsoft SQL Server 2014, Microsoft SQL Server 2014 Shared Management Objects, Microsoft Command Line Utilities 11 for Microsoft SQL Server, Microsoft ODBC Driver 11 for Microsoft SQL Server – Windows, Microsoft JDBC Driver 4.0 for Microsoft SQL Server, Microsoft Drivers 3.0 for PHP for Microsoft SQL Server, Microsoft SQL Server 2014 Transact-SQL ScriptDom, Microsoft SQL Server 2014 Transact-SQL Compiler Service, Microsoft System CLR Types for Microsoft SQL Server 2014, Microsoft SQL Server 2014 Remote Blob Store (SQL RBS codeplex samples page, SQL Server Remote Blob Store blogs), Microsoft SQL Server Service Broker External Activator for Microsoft SQL Server 2014, Microsoft OData Source for Microsoft SQL Server 2014, Microsoft Balanced Data Distributor for Microsoft SQL Server 2014, Microsoft Change Data Capture Designer and Service for Oracle by Attunity for Microsoft SQL Server 2014, Microsoft SQL Server 2014 Master Data Service Add-in for Microsoft Excel, Microsoft SQL Server StreamInsight, Microsoft Connector for SAP BW for Microsoft SQL Server 2014, Microsoft SQL Server Migration Assistant, Microsoft SQL Server 2014 Upgrade Advisor, Microsoft OLEDB Provider for DB2 v5.0 for Microsoft SQL Server 2014, Microsoft SQL Server 2014 PowerPivot for Microsoft SharePoint 2013, Microsoft SQL Server 2014 ADOMD.NET, Microsoft Analysis Services OLE DB Provider for Microsoft SQL Server 2014, Microsoft SQL Server 2014 Analysis Management Objects, Microsoft SQL Server Report Builder for Microsoft SQL Server 2014, and Microsoft SQL Server 2014 Reporting Services Add-in for Microsoft SharePoint. Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL

    Read the article

  • Using linked servers, OPENROWSET and OPENQUERY

    - by BuckWoody
    SQL Server has a few mechanisms to reach out to another server (even another server type) and query data from within a Transact-SQL statement. Among them are a set of stored credentials and information (called a Linked Server), a statement that uses a linked server called OPENQUERY, another called OPENROWSET, and one called OPENDATASOURCE. This post isn't about those particular functions or statements – hit the links for more if you're new to those topics. I'm actually more concerned about where I see these used than the particular method. In many cases, a linked server isn't another Relational Database Management System (RDBMS) like Oracle or DB2 (which is possible with a linked server), but another SQL Server. My concern is that linked servers are the new Data Transformation Services (DTS) from SQL Server 2000 – something that was designed for one purpose but which is being morphed into something much more. In the case of DTS, most of us turned that feature into a full-fledged job system. What was designed as a simple data import and export system has been pressed into service doing logic, routing and timing. And of course we all know how painful it was to move off of a complex DTS system onto SQL Server Integration Services. In the case of linked servers, what should be used as a method of running a simple query or two on another server, where you have an occasional connection or need a quick import of a small data set, is morphing into a full federation strategy. In some cases I've seen a complex web of linked servers, and when credentials, names or anything else change there are huge problems. Now don't get me wrong – linked servers and other forms of distributed queries are a fantastic set of tools that we have to move data around. I'm just saying that when you start having lots of workarounds and when things get really complicated, you might want to step back a little and ask if there's a better way. Are you able to tolerate some latency? Perhaps you're able to use Service Broker. Would you like to be platform-independent on the data source? Perhaps a middle tier might make more sense, abstracting the queries there and sending them to the proper server. Designed properly, I've seen these systems scale further and be more resilient than loading up on linked servers.
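    As a quick illustration of the constructs being discussed (a hedged sketch; the linked server name and the remote database, schema and table are assumptions, not taken from the post), a linked server can be queried either with OPENQUERY or with four-part naming:

        -- "LINKEDSRV" and the remote objects below are illustrative assumptions.
        -- OPENQUERY ships the inner query to the linked server and returns its result set.
        SELECT *
        FROM OPENQUERY(LINKEDSRV,
             'SELECT OrderID, OrderDate FROM Sales.Orders WHERE OrderDate >= ''2012-01-01''');

        -- The same data through four-part naming, resolved via the linked server definition:
        SELECT OrderID, OrderDate
        FROM LINKEDSRV.SalesDB.Sales.Orders
        WHERE OrderDate >= '2012-01-01';

    OPENQUERY evaluates the inner query remotely, while four-part naming can pull more of the work back to the local server, which is one reason these constructs deserve the caution described above.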

    Read the article

  • Monitoring with Oracle Enterprise Manager

    - by fernando.galdino
    The figure below offers an overview of the monitoring possibilities provided by Oracle Enterprise Manager (OEM), a tool that lets you manage the company's IT infrastructure. An important component of the solution is called OEM Grid Control. This component lets you manage, view and monitor many different elements from a single console. And which elements can be monitored? In the concept used by OEM, the elements that can be monitored are called Targets, and these targets cover the monitoring of hosts (Windows, Linux, Solaris), databases, middleware, web applications, services that can be customized by the administrator, systems and groups of targets, in addition to Oracle applications. Each monitored element is enabled through management packs. In other words, there is a series of packs that can be acquired as needed to allow monitoring from OEM Grid Control itself. There are special monitoring packs for the Oracle database, and monitoring packs for Tomcat, JBoss, WebLogic, SOA Suite, Identity Management. The list is quite extensive and I will give more details in a new post. But if you want to take a look, see: http://download.oracle.com/docs/cd/B16240_01/doc/nav/overview.htm Besides the monitoring packs, there are also plugins and connectors. Plugins allow the management of additional elements, such as network devices, servers, third-party databases (DB2, SQL Server), VMware, etc. Connectors, in turn, allow integration with other software, such as helpdesk request managers, so that the alerts generated by the tool can be integrated and tickets created in tools such as CA Service Desk, BMC Remedy and others. The range of functionality is really quite vast. In an upcoming post I will comment on Ops Center, a new component that appeared after the Sun acquisition. Besides Grid Control and Ops Center, there are other very interesting components. The figure below illustrates the various layers where the Oracle tooling can be used for monitoring. There is a pack that lets you manage service levels across all the layers shown. For a given request, the SLA data can be broken down per layer. There is also Real User Monitoring, which measures the end-user experience. I will talk about it in a new post, but basically the tool lets you follow all the network traffic generated from the end users to the web servers, and with that trace how each user uses the application, how long they browse the site, whether they ran into any kind of problem, and whether any request was left unfinished due to an infrastructure problem. It is a very interesting tool, and I will say a bit more about it later. And of course, there are also components for functional and load testing. Coming soon, here on the blog :)

    Read the article

  • Configuring Oracle iPlanet WebServer / Oracle Traffic Director to use crypto accelerators on T4-1 servers

    - by mv
    Configuring Oracle iPlanet Web Server / Oracle Traffic Director to use crypto accelerators on T4-1 servers. Jyri had written a technical article on Configuring Solaris Cryptographic Framework and Sun Java System Web Server 7 on Systems With UltraSPARC T1 Processors. I tried to find out what has changed since then in T4. I have used a T4-1 SPARC system with Solaris 10. Results vary slightly for Solaris 11. For Solaris 11, the T4 optimization was implemented in libsoftcrypto.so, while it was in pkcs11_softtoken_extra.so for Solaris 10. An overview of T4 processors is here in this blog. Many thanks to Chi-Chang Lin and Julien for their help.
    1. Install Oracle iPlanet Web Server / Oracle Traffic Director. Go to the instance/config directory.
    # cd /opt/oracle/webserver7/https-hostname.fqdn/config
    2. List the default PKCS#11 modules:
    # ../../bin/modutil -dbdir . -list
    Listing of PKCS #11 Modules
    -----------------------------------------------------------
    1. NSS Internal PKCS #11 Module
    slots: 2 slots attached
    status: loaded
    slot: NSS Internal Cryptographic Services
    token: NSS Generic Crypto Services
    slot: NSS User Private Key and Certificate Services
    token: NSS Certificate DB
    2. Root Certs
    library name: libnssckbi.so
    slots: 1 slot attached
    status: loaded
    slot: NSS Builtin Objects
    token: Builtin Object Token
    -----------------------------------------------------------
    3. Initialize the soft token data store in the $HOME/.sunw/pkcs11_softtoken/ directory:
    # pktool setpin keystore=pkcs11
    Enter token passphrase: olderpassword
    Create new passphrase: password
    Re-enter new passphrase: password
    Passphrase changed.
    4. Offload crypto operations to the Solaris Crypto Framework on T4:
    $ ../../bin/modutil -dbdir . -nocertdb -add SCF -libfile /usr/lib/libpkcs11.so -mechanisms RSA:AES:SHA1:MD5
    Module "SCF" added to database.
    Note that -nocertdb means modutil won't try to open the NSS softoken key database. It doesn't even have to be present. The PKCS#11 library used is /usr/lib/libpkcs11.so. If the server is running in 64-bit mode, we have to use /usr/lib/64/libpkcs11.so. Unlike T1 and T2, on T4 we do not have to disable mechanisms in the softtoken provider using cryptoadm.
    5. List again to check that the new module SCF has been added:
    # ../../bin/modutil -dbdir . -list
    Listing of PKCS #11 Modules
    -----------------------------------------------------------
    1. NSS Internal PKCS #11 Module
    slots: 2 slots attached
    status: loaded
    slot: NSS Internal Cryptographic Services
    token: NSS Generic Crypto Services
    slot: NSS User Private Key and Certificate Services
    token: NSS Certificate DB
    2. SCF
    library name: /usr/lib/libpkcs11.so
    slots: 2 slots attached
    status: loaded
    slot: Sun Metaslot
    token: Sun Metaslot
    slot: n2rng/0 SUNW_N2_Random_Number_Generator
    token: n2rng/0 SUNW_N2_RNG
    3. Root Certs
    library name: libnssckbi.so
    slots: 1 slot attached
    status: loaded
    slot: NSS Builtin Objects
    token: Builtin Object Token
    -----------------------------------------------------------
    6. Create the certificate in “Sun Metaslot”: I have used certutil, but you should use the Admin Server CLI / GUI.
    # ../../bin/certutil -S -x -n "Server-Cert" -t "CT,CT,CT" -s "CN=*.fqdn" -d . -h "Sun Metaslot"
    Enter Password or Pin for "Sun Metaslot": password
    7. Verify that the certificate was created properly in “Sun Metaslot”:
    # ../../bin/certutil -L -d . -h "Sun Metaslot"
    Certificate Nickname    Trust Attributes    SSL,S/MIME,JAR/XPI
    Enter Password or Pin for "Sun Metaslot": password
    Sun Metaslot:Server-Cert    CTu,Cu,Cu
    #
    8. Associate this newly created certificate with the HTTP listener using the Admin CLI/GUI.
After that server.xml should have <http-listener> ...    <ssl>        <server-cert-nickname>Sun Metaslot:Server-Cert</server-cert-nicknamer>    </ssl> Note the prefix "Sun Metaslot" 9. Disable PKCS#11 bypass To use the accelerated AES algorithm, turn off PKCS#11 bypass, and configure modutil to have the AES mechanism go to the Metaslot. After you disable PKCS#11 bypasss using Admin GUI/CLI,  check that server.xml should have <server> ....    <pkcs11>         <enabled>1</enabled>         <allow-bypass>0</allow-bypass>     </pkcs11> With PKCS#11 bypass enabled, Oracle iPlanet Web Server will only use the RSA capability of the T4, provided certificate and key are stored in the T4 slot (Metaslot). Actually, the RSA op is never bypassed in NSS, it's always done with PKCS#11 calls. So the bypass settings won't affect the behavior of the probes for RSA at all. The only thing that matters if where the RSA key and certificate live, ie. which PKCS#11 token, and thus which PKCS#11 module gets called to do the work. If your certificate/key are in the NSS certificate/key db, you will see libsoftokn3/libfreebl libraries doing the RSA work. If they are in the Sun Metaslot, it should be the Solaris code. 10. Start the server instance # ../bin/startserv Oracle iPlanet Web Server 7.0.16 B09/14/2012 03:33Please enter the PIN for the "Sun Metaslot" token: password...info: HTTP3072: http-listener-1: https://hostname.fqdn:80 ready to accept requestsinfo: CORE3274: successful server startup 11. Figure out which process to run this DTrace script on # ps -eaf | grep webservd | grep -v dogwebservd 18224 18223 0 13:17:25 ? 0:07 webservd -d /opt/oracle/webserver7/https-hostname.fqdn/config -r /opt/root 18225 18224 0 13:17:25 ? 0:00 webservd -d /opt/oracle/webserver7/https-hostname.fqdn/config -r /opt/ (For Oracle Traffic Director look for process named "trafficd") We see that the child process id is “18225” 12. Clients for testing : You can use any browser. I used NSS tool tstclnt for testing $cat > req.txtGET /index.html HTTP/1.0 For checking both RSA and AES, I used cipher “:0035” which is TLS_RSA_WITH_AES_256_CBC_SHA $./tstclnt -h hostname -p 80 -d . -T -f -o -v -c “:0035” < req.txt 13. How do I make sure that crypto accelerator is being used 13.1 Create DTrace script The following D script should be able to uncover whether T4-specific crypto routine are being called or not. It also displays stats per second. # cat > t4crypto.d#!/usr/sbin/dtrace -spid$target::*rsa*:entry,pid$target::*yf*:entry{    @ops[probemod, probefunc] = count();}tick-1sec{    printa(@ops);    trunc(@ops);} Invoke with './t4crypto.d -p <pid> ' 13.2 EXPECTED PROBES FOR Solaris 10 : If offloading to T4 HW are correctly set up, the expected DTrace output would have these probes and libraries library Operations PROBES pkcs11_softtoken_extra.so RSA soft_decrypt_rsa_pkcs_decode, soft_encrypt_rsa_pkcs_encode soft_rsa_crypt_init_common soft_rsa_decrypt, soft_rsa_encrypt soft_rsa_decrypt_common, soft_rsa_encrypt_common AES yf_aes_instructions_present yf_aes_expand256, yf_aes256_cbc_decrypt, yf_aes256_cbc_encrypt, yf_aes256_load_keys_for_decrypt, yf_aes256_load_keys_for_encrypt, Note that these are for 256, same for 128, 192... these are for cbc, same for ecb, ctr, cfb128... 
DES yf_des_expand, yf_des_instructions_present yf_des_encrypt libmd_psr.so MD5 yf_md5_multiblock, yf_md5_instruction_present SHA1 yf_sha1_instruction_present, yf_sha1_multibloc 13.3 SAMPLE OUTPUT FOR CIPHER TLS_RSA_WITH_AES_256_CBC_SHA (0x0035) ON T4 SPARC SOLARIS 10 WITHOUT PKCS#11 BYPASS # ./t4crypto.d -p 18225 pkcs11_softtoken_extra.so.1   soft_decrypt_rsa_pkcs_decode    1 pkcs11_softtoken_extra.so.1   soft_rsa_crypt_init_common      1 pkcs11_softtoken_extra.so.1   soft_rsa_decrypt                1 pkcs11_softtoken_extra.so.1   big_mp_mul_yf                   2 pkcs11_softtoken_extra.so.1   mpm_yf_mpmul                    2 pkcs11_softtoken_extra.so.1   mpmul_arr_yf                    2 pkcs11_softtoken_extra.so.1   rijndael_key_setup_enc_yf       2 pkcs11_softtoken_extra.so.1   soft_rsa_decrypt_common         2 pkcs11_softtoken_extra.so.1   yf_aes_expand256                2 pkcs11_softtoken_extra.so.1   yf_aes256_cbc_decrypt           3 pkcs11_softtoken_extra.so.1   yf_aes256_load_keys_for_decrypt 3 pkcs11_softtoken_extra.so.1   big_mont_mul_yf                 6 pkcs11_softtoken_extra.so.1   mm_yf_montmul                   6 pkcs11_softtoken_extra.so.1   yf_des_instructions_present     6 pkcs11_softtoken_extra.so.1   yf_aes256_cbc_encrypt           8 pkcs11_softtoken_extra.so.1   yf_aes256_load_keys_for_encrypt 8 pkcs11_softtoken_extra.so.1   yf_mpmul_present                8 pkcs11_softtoken_extra.so.1   yf_aes_instructions_present    13 pkcs11_softtoken_extra.so.1   yf_des_encrypt                 18 libmd_psr.so.1                yf_md5_multiblock              41 libmd_psr.so.1                yf_md5_instruction_present     72 libmd_psr.so.1                yf_sha1_instruction_present    82 libmd_psr.so.1                yf_sha1_multiblock             82 This indicates that both RSA and AES ops are done in Solaris Crypto Framework. 13.4 SAMPLE OUTPUT FOR CIPHER TLS_RSA_WITH_AES_256_CBC_SHA (0x0035) ON T4 SPARC SOLARIS 10 WITH PKCS#11 BYPASS # ./t4crypto.d -p 18225 pkcs11_softtoken_extra.so.1   soft_decrypt_rsa_pkcs_decode 1 pkcs11_softtoken_extra.so.1   soft_rsa_crypt_init_common   1 pkcs11_softtoken_extra.so.1   soft_rsa_decrypt             1 pkcs11_softtoken_extra.so.1   soft_rsa_decrypt_common      1 pkcs11_softtoken_extra.so.1   big_mp_mul_yf                2 pkcs11_softtoken_extra.so.1   mpm_yf_mpmul                 2 pkcs11_softtoken_extra.so.1   mpmul_arr_yf                 2 pkcs11_softtoken_extra.so.1   big_mont_mul_yf              6 pkcs11_softtoken_extra.so.1   mm_yf_montmul                6 pkcs11_softtoken_extra.so.1   yf_mpmul_present             8 For this cipher, when I enable PKCS#11 bypass, Only RSA probes are being hit AES probes are not being hit. 13.5 ustack() for RSA operations / probefunc == "soft_rsa_decrypt" / Shows that libnss3.so is calling C_* functions of libpkcs11.so which is calling functions of pkcs11_softtoken_extra.so for both cases with and without bypass. 
When PKCS#11 bypass is disabled (allow-bypass is 0) pkcs11_softtoken_extra.so.1`soft_rsa_decrypt pkcs11_softtoken_extra.so.1`soft_rsa_decrypt_common+0x94 pkcs11_softtoken_extra.so.1`soft_unwrapkey+0x258 pkcs11_softtoken_extra.so.1`C_UnwrapKey+0x1ec libpkcs11.so.1`meta_unwrap_key+0x17c libpkcs11.so.1`meta_UnwrapKey+0xc4 libpkcs11.so.1`C_UnwrapKey+0xfc libnss3.so`pk11_AnyUnwrapKey+0x6b8 libnss3.so`PK11_PubUnwrapSymKey+0x8c libssl3.so`ssl3_HandleRSAClientKeyExchange+0x1a0 libssl3.so`ssl3_HandleClientKeyExchange+0x154 libssl3.so`ssl3_HandleHandshakeMessage+0x440 libssl3.so`ssl3_HandleHandshake+0x11c libssl3.so`ssl3_HandleRecord+0x5e8 libssl3.so`ssl3_GatherCompleteHandshake+0x5c libssl3.so`ssl_GatherRecord1stHandshake+0x30 libssl3.so`ssl_Do1stHandshake+0xec libssl3.so`ssl_SecureRecv+0x1c8 libssl3.so`ssl_Recv+0x9c libns-httpd40.so`__1cNDaemonSessionDrun6M_v_+0x2dc When PKCS#11 bypass is enabled (allow-bypass is 1) pkcs11_softtoken_extra.so.1`soft_rsa_decrypt pkcs11_softtoken_extra.so.1`soft_rsa_decrypt_common+0x94 pkcs11_softtoken_extra.so.1`C_Decrypt+0x164 libpkcs11.so.1`meta_do_operation+0x27c libpkcs11.so.1`meta_Decrypt+0x4c libpkcs11.so.1`C_Decrypt+0xcc libnss3.so`PK11_PrivDecryptPKCS1+0x1ac libssl3.so`ssl3_HandleRSAClientKeyExchange+0xe4 libssl3.so`ssl3_HandleClientKeyExchange+0x154 libssl3.so`ssl3_HandleHandshakeMessage+0x440 libssl3.so`ssl3_HandleHandshake+0x11c libssl3.so`ssl3_HandleRecord+0x5e8 libssl3.so`ssl3_GatherCompleteHandshake+0x5c libssl3.so`ssl_GatherRecord1stHandshake+0x30 libssl3.so`ssl_Do1stHandshake+0xec libssl3.so`ssl_SecureRecv+0x1c8 libssl3.so`ssl_Recv+0x9c libns-httpd40.so`__1cNDaemonSessionDrun6M_v_+0x2dc libnsprwrap.so`ThreadMain+0x1c libnspr4.so`_pt_root+0xe8 13.6 ustack() FOR AES operations / probefunc == "yf_aes256_cbc_encrypt" / When PKCS#11 bypass is disabled (allow-bypass is 0) pkcs11_softtoken_extra.so.1`yf_aes256_cbc_encrypt pkcs11_softtoken_extra.so.1`aes_block_process_contiguous_whole_blocks+0xb4 pkcs11_softtoken_extra.so.1`aes_crypt_contiguous_blocks+0x1cc pkcs11_softtoken_extra.so.1`soft_aes_encrypt_common+0x22c pkcs11_softtoken_extra.so.1`C_EncryptUpdate+0x10c libpkcs11.so.1`meta_do_operation+0x1fc libpkcs11.so.1`meta_EncryptUpdate+0x4c libpkcs11.so.1`C_EncryptUpdate+0xcc libnss3.so`PK11_CipherOp+0x1a0 libssl3.so`ssl3_CompressMACEncryptRecord+0x264 libssl3.so`ssl3_SendRecord+0x300 libssl3.so`ssl3_FlushHandshake+0x54 libssl3.so`ssl3_SendFinished+0x1fc libssl3.so`ssl3_HandleFinished+0x314 libssl3.so`ssl3_HandleHandshakeMessage+0x4ac libssl3.so`ssl3_HandleHandshake+0x11c libssl3.so`ssl3_HandleRecord+0x5e8 libssl3.so`ssl3_GatherCompleteHandshake+0x5c libssl3.so`ssl_GatherRecord1stHandshake+0x30 libssl3.so`ssl_Do1stHandshake+0xec Shows that libnss3.so is calling C_* functions of libpkcs11.so which is calling functions of pkcs11_softtoken_extra.so However when PKCS#11 bypass is disabled (allow-bypass is 1) this stack isn't getting called. 14. LIST OF ALL THE PROBES MATCHED BY D SCRIPT FOR REFERENCE # ./t4crypto.d -p 18225 -l ID PROVIDER MODULE FUNCTION NAME ... 
55720 pid18225 libmd_psr.so.1 yf_md5_instruction_present entry 55721 pid18225 libmd_psr.so.1 yf_sha256_instruction_present entry 55722 pid18225 libmd_psr.so.1 yf_sha512_instruction_present entry 55723 pid18225 libmd_psr.so.1 yf_sha1_instruction_present entry 55724 pid18225 libmd_psr.so.1 yf_sha256 entry 55725 pid18225 libmd_psr.so.1 yf_sha256_multiblock entry 55726 pid18225 libmd_psr.so.1 yf_sha512 entry 55727 pid18225 libmd_psr.so.1 yf_sha512_multiblock entry 55728 pid18225 libmd_psr.so.1 yf_sha1 entry 55729 pid18225 libmd_psr.so.1 yf_sha1_multiblock entry 55730 pid18225 libmd_psr.so.1 yf_md5 entry 55731 pid18225 libmd_psr.so.1 yf_md5_multiblock entry 55732 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_instructions_present entry 55733 pid18225 pkcs11_softtoken_extra.so.1 rijndael_key_setup_enc_yf entry 55734 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_expand128 entry 55735 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_encrypt128 entry 55736 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_decrypt128 entry 55737 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_expand192 entry 55738 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_encrypt192 entry 55739 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_decrypt192 entry 55740 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_expand256 entry 55741 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_encrypt256 entry 55742 pid18225 pkcs11_softtoken_extra.so.1 yf_aes_decrypt256 entry 55743 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_load_keys_for_encrypt entry 55744 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_load_keys_for_encrypt entry 55745 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_load_keys_for_encrypt entry 55746 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_ecb_encrypt entry 55747 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_ecb_encrypt entry 55748 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_ecb_encrypt entry 55749 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_cbc_encrypt entry 55750 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_cbc_encrypt entry 55751 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_cbc_encrypt entry 55752 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_ctr_crypt entry 55753 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_ctr_crypt entry 55754 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_ctr_crypt entry 55755 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_cfb128_encrypt entry 55756 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_cfb128_encrypt entry 55757 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_cfb128_encrypt entry 55758 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_load_keys_for_decrypt entry 55759 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_load_keys_for_decrypt entry 55760 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_load_keys_for_decrypt entry 55761 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_ecb_decrypt entry 55762 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_ecb_decrypt entry 55763 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_ecb_decrypt entry 55764 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_cbc_decrypt entry 55765 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_cbc_decrypt entry 55766 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_cbc_decrypt entry 55767 pid18225 pkcs11_softtoken_extra.so.1 yf_aes128_cfb128_decrypt entry 55768 pid18225 pkcs11_softtoken_extra.so.1 yf_aes192_cfb128_decrypt entry 55769 pid18225 pkcs11_softtoken_extra.so.1 yf_aes256_cfb128_decrypt entry 55771 pid18225 pkcs11_softtoken_extra.so.1 yf_des_instructions_present entry 55772 pid18225 pkcs11_softtoken_extra.so.1 yf_des_expand entry 55773 
pid18225 pkcs11_softtoken_extra.so.1 yf_des_encrypt entry 55774 pid18225 pkcs11_softtoken_extra.so.1 yf_mpmul_present entry 55775 pid18225 pkcs11_softtoken_extra.so.1 yf_montmul_present entry 55776 pid18225 pkcs11_softtoken_extra.so.1 mm_yf_montmul entry 55777 pid18225 pkcs11_softtoken_extra.so.1 mm_yf_montsqr entry 55778 pid18225 pkcs11_softtoken_extra.so.1 mm_yf_restore_func entry 55779 pid18225 pkcs11_softtoken_extra.so.1 mm_yf_ret_from_mont_func entry 55780 pid18225 pkcs11_softtoken_extra.so.1 mm_yf_execute_slp entry 55781 pid18225 pkcs11_softtoken_extra.so.1 big_modexp_ncp_yf entry 55782 pid18225 pkcs11_softtoken_extra.so.1 big_mont_mul_yf entry 55783 pid18225 pkcs11_softtoken_extra.so.1 mpmul_arr_yf entry 55784 pid18225 pkcs11_softtoken_extra.so.1 big_mp_mul_yf entry 55785 pid18225 pkcs11_softtoken_extra.so.1 mpm_yf_mpmul entry 55786 pid18225 libns-httpd40.so nsapi_rsa_set_priv_fn entry ... 55795 pid18225 libnss3.so prepare_rsa_priv_key_export_for_asn1 entry 55796 pid18225 libresolv.so.2 sunw_dst_rsaref_init entry 55797 pid18225 libnssutil3.so NSS_Get_SEC_UniversalStringTemplate entry ... 55813 pid18225 libsoftokn3.so prepare_low_rsa_priv_key_for_asn1 entry 55814 pid18225 libsoftokn3.so rsa_FormatOneBlock entry 55815 pid18225 libsoftokn3.so rsa_FormatBlock entry 55816 pid18225 libnssdbm3.so lg_prepare_low_rsa_priv_key_for_asn1 entry 55817 pid18225 libfreebl_32fpu_3.so rsa_build_from_primes entry 55818 pid18225 libfreebl_32fpu_3.so rsa_is_prime entry 55819 pid18225 libfreebl_32fpu_3.so rsa_get_primes_from_exponents entry 55820 pid18225 libfreebl_32fpu_3.so rsa_PrivateKeyOpNoCRT entry 55821 pid18225 libfreebl_32fpu_3.so rsa_PrivateKeyOpCRTNoCheck entry 55822 pid18225 libfreebl_32fpu_3.so rsa_PrivateKeyOpCRTCheckedPubKey entry 55823 pid18225 pkcs11_kernel.so.1 key_gen_rsa_by_value entry 55824 pid18225 pkcs11_kernel.so.1 get_rsa_private_key entry 55825 pid18225 pkcs11_kernel.so.1 get_rsa_public_key entry 55826 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_encrypt entry 55827 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_decrypt entry 55828 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_crypt_init_common entry 55829 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_encrypt_common entry 55830 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_decrypt_common entry 55831 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_sign_verify_init_common entry 55832 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_sign_common entry 55833 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_verify_common entry 55834 pid18225 pkcs11_softtoken_extra.so.1 generate_rsa_key entry 55835 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_genkey_pair entry 55836 pid18225 pkcs11_softtoken_extra.so.1 get_rsa_sha1_prefix entry 55837 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_digest_sign_common entry 55838 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_digest_verify_common entry 55839 pid18225 pkcs11_softtoken_extra.so.1 soft_rsa_verify_recover entry 55840 pid18225 pkcs11_softtoken_extra.so.1 rsa_pri_to_asn1 entry 55841 pid18225 pkcs11_softtoken_extra.so.1 asn1_to_rsa_pri entry 55842 pid18225 pkcs11_softtoken_extra.so.1 soft_encrypt_rsa_pkcs_encode entry 55843 pid18225 pkcs11_softtoken_extra.so.1 soft_decrypt_rsa_pkcs_decode entry 55844 pid18225 pkcs11_softtoken_extra.so.1 soft_sign_rsa_pkcs_encode entry 55845 pid18225 pkcs11_softtoken_extra.so.1 soft_verify_rsa_pkcs_decode entry 55770 profile tick-1sec

    Read the article

  • ORM Profiler v1.1 has been released!

    - by FransBouma
    We've released ORM Profiler v1.1, which has the following new features: Real-time profiling. A real-time viewer (RTV) has been added, which gives insight into the activity as it is received by the client, in two views: a chronological connection overview and an activity graph overview. This RTV allows the user to directly record to a snapshot using record buttons, pause the view, mark a range to create a snapshot from that range, and view graphs of the # of connection open actions and # of commands per second. The RTV has a 'range' in which it keeps live data and auto-cleans data that's older than this range. Screenshot of the activity graphs part of the real-time viewer: Low-level activity tab. A new tab has been added to the Application tabs: the Low-level activity tab. This tab shows the main activity as it has been received over the named pipe. It can help to get insight into the chronological activity without the grouping over connections, so multiple connections at the same time per thread are easier to spot. Clicking a command will sync the rest of the application tabs, and clicking a row will show the details below the splitter bar, as is done with the other application tabs as well. Default application name in interceptor. When an empty string or null is passed for the application name to the Initialize method of the interceptor, the AppDomain's friendly name is used instead. Copy call stack to clipboard. A call stack viewed in a grid in various parts of the UI is now copyable to the clipboard by clicking a button. Enable/Disable interceptor from the config file. It's now possible to enable/disable the interceptor initialization from the application's config file, using: Code: <appSettings> <add key="ORMProfilerEnabled" value="true"/> </appSettings> If the value is true, the interceptor's Initialize method will proceed. If the value is false, the interceptor's Initialize method will not proceed and initialization won't be performed, meaning no interception will take place. If the setting is absent or misconfigured, the Initialize method will proceed as normal and perform the initialization. Stored procedure calls for select databases are now properly displayed as a call. For the databases SQL Server, Oracle, DB2, Sybase ASA, Sybase ASE and Informix, a stored procedure call is displayed as an execute/call statement and copy to clipboard works as-is. I'm especially happy with the new real-time profiling feature in ORM Profiler, which is the flagship feature for this release: it offers a completely new way to use the profiler, namely directly during debugging: you can immediately see what's going on without the necessity of a snapshot. The activity graph feature, combined with the auto-cleanup of older data, allows you to keep the profiler open for a long period of time and see any spike of activity on the profiled application.

    Read the article

  • ODI 11g - Cleaning control characters and User Functions

    - by David Allan
    In ODI, user functions have a poor name really; they should be called user expressions - a way of wrapping common expressions that you may wish to reuse many times - and being usable across many different technologies is an added bonus. To illustrate, look at the problem of how to remove control characters from text. Users ask these types of questions across all technologies - Microsoft SQL Server, Oracle, DB2 - and have for many years: how do I clean a string, how do I tokenize a string, and so on. After some searching around you will find a few ways of doing this; in Oracle there is a convenient way of using the TRANSLATE and REPLACE functions. So you can convert some text using the following SQL; replace( translate('This is my string'||chr(9)||' which has a control character', chr(3)||chr(4)||chr(5)||chr(9), chr(3) ), chr(3), '' ) If you had many columns to perform this kind of transformation on, in the Oracle database the natural solution you'd go to would be to code this as a PL/SQL function, since you don't want the code splattered everywhere. Someone telling you that there is another control character that needs to be added equals a maintenance headache. Coding it as a PL/SQL function will also incur a context switch between SQL and PL/SQL, which could prove costly. In ODI, user functions let you capture this expression text and reference it many times across your mappings. This will protect the expression from being copy-pasted by developers and make maintenance much simpler - change the expression definition in one place. First, define a name and a syntax for the user function; I am calling it UF_STRIP_BAD_CHARACTERS and it has one parameter, an input string. We can then define an implementation for each technology we will use it with; I will define Oracle's using the inputString parameter and the TRANSLATE and REPLACE functions with whatever control characters I want to replace. I can then use this inside mapping expressions in ODI; below I am cleaning the ENAME column - a fabricated example, but you get the gist. Note that when I use the user function, the function name remains in the text of the mapping; the actual expression is not substituted until I generate the scenario. If you generate the scenario and export the scenario you can have a peek at the code that is processed in the runtime - below you can see a snippet of my exported scenario. That's all for now, hopefully a useful snippet of info.
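    For reference, a hedged sketch of what the Oracle implementation text of UF_STRIP_BAD_CHARACTERS could look like; the $(inputString) parameter reference and the exact set of control characters are assumptions based on the example above, not a copy of the post's screenshots:

        -- ODI user function implementations are expressions, not statements.
        -- $(inputString) stands for the single input parameter defined for the function (an assumption).
        REPLACE(
          TRANSLATE($(inputString),
                    CHR(3)||CHR(4)||CHR(5)||CHR(9),   -- control characters to strip
                    CHR(3)),                          -- collapse them all onto CHR(3)
          CHR(3), '')                                 -- then remove CHR(3) itself

    A mapping expression such as UF_STRIP_BAD_CHARACTERS(EMP.ENAME) would then expand to this text when the scenario is generated.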

    Read the article

  • Maximum Availability with Oracle GoldenGate

    - by Irem Radzik
    Oracle Database offers a variety of built-in and optional products for maximum availability, and it is well known for its robust high availability and disaster recovery solutions. With its heterogeneous, real-time, transactional data movement capabilities, Oracle GoldenGate is a key part of the Maximum Availability Architecture for Oracle Database. This week, on Thursday Dec. 13th, we will be presenting in a live webcast how Oracle GoldenGate fits into the Oracle Database Maximum Availability Architecture (MAA). Joe Meeks from the Oracle Database High Availability team will discuss how Oracle GoldenGate complements other key products within MAA such as Active Data Guard. Nick Wagner from the GoldenGate PM team will present how to upgrade to the latest Oracle Database release without any downtime. Nick will also cover two new features of Oracle GoldenGate 11gR2: Integrated Capture for Oracle Database and Automated Conflict Detection and Resolution. Nick will provide an in-depth review of these new features with examples. Oracle GoldenGate also offers maximum availability for non-Oracle databases, such as HP NonStop, SQL Server, DB2 (LUW, iSeries, or zSeries) and more. The same robust, reliable real-time, bidirectional data movement capabilities apply to all supported databases. I'd like to invite you to join us on Thursday Dec. 13th at 10am PT/1pm ET to hear from the product experts on how to use GoldenGate for maximizing database availability and to ask your questions. You can find the registration link below. Webcast: Maximum Availability with Oracle GoldenGate, Thursday Dec. 13th, 10am PT/1pm ET. Looking forward to another great webcast with lots of interaction with the audience.

    Read the article

  • Vagrant ssh fails with VirtualBox

    - by lukewm
    vagrant up fails when it gets to the ssh part: myterminal$ vagrant up [default] VM already created. Booting if its not already running... [default] Running any VM customizations... [default] Clearing any previously set forwarded ports... [default] Forwarding ports... [default] -- ssh: 22 => 2222 (adapter 1) [default] -- db2: 30003 => 30003 (adapter 1) [default] Cleaning previously set shared folders... [default] Creating shared folders metadata... [default] Booting VM... [default] Waiting for VM to boot. This can take a few minutes. [default] Failed to connect to VM! Failed to connect to VM via SSH. Please verify the VM successfully booted by looking at the VirtualBox GUI. Then when I subsequently try and connect using vagrant ssh or vagrant reload or similar, I get this: myterminal$ vagrant reload [default] Attempting graceful shutdown of linux... SSH connection was refused! This usually happens if the VM failed to boot properly. Some steps to try to fix this: First, try reloading your VM with `vagrant reload`, since a simple restart sometimes fixes things. If that doesn't work, destroy your VM and recreate it with a `vagrant destroy` followed by a `vagrant up`. If that doesn't work, contact a Vagrant maintainer (support channels listed on the website) for more assistance. Please help! I'm really stumped. Kind regards, Luke

    Read the article

  • Lost partition after restarting

    - by nxhoaf
    I have Windows 7 Professional with the service pack installed on my Lenovo ThinkPad T420 laptop. After formatting the disk and installing Windows 7 (as detailed above), I went to Computer -- Manage -- Storage -- Disk Management to split my 300 GB C partition into 2 partitions: C (which is 162 GB) and E (which is 140 GB). It worked fine for about 2 days. Today, when I turned on my computer, I was very surprised to find that the E partition had disappeared. I can confirm that I didn't do anything stupid yesterday, and before I shut down my computer, everything was fine. In general, here is what I did over the last two days (from the point that I formatted the disk and installed Windows): Format the 300 GB hard disk. Install Windows 7. Install Eclipse, DB2, ... (I'm a developer). Install some other tools (OpenOffice, Skype...). Install PGP (http://www.symantec.com/encryption) <--- I'm forced to use that due to my company policy. Use Computer -- Manage -- Storage -- Disk Management to split my 300 GB C partition into 2 partitions as described above. It worked quite well for the last two days. Until today... Can you please help me recover my lost partition? Thank you! For more info, here is my partition info: You can also see the image here

    Read the article

  • Running two different website domains on one IP address

    - by Akshar Prabhu Desai
    Here is my Apache configuration file. I have two domain names running on the same IP, but I want them to point to different webapps. In this case, however, both point to the one intended for e-yantra.org. If I copy-paste the akshar.co.in part before the e-yantra.org part, both start pointing to akshar.co.in. I have two A DNS entries (one per domain name) pointing to the same IP.
    NameVirtualHost *:80
    <VirtualHost *:80>
    ServerName www.e-yantra.org
    ServerAdmin [email protected]
    DocumentRoot /var/www
    <Directory />
    Options FollowSymLinks
    AllowOverride All
    </Directory>
    <Directory /var/www/>
    Options Indexes FollowSymLinks MultiViews
    AllowOverride All
    Order allow,deny
    allow from all
    </Directory>
    <Directory /var/www/ci/>
    Options Indexes FollowSymLinks MultiViews
    AllowOverride All
    Order allow,deny
    allow from all
    </Directory>
    <Directory /var/www/db2/>
    Options Indexes FollowSymLinks MultiViews
    AllowOverride All
    Order allow,deny
    allow from all
    </Directory>
    ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
    <Directory "/usr/lib/cgi-bin">
    AllowOverride None
    Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
    Order allow,deny
    Allow from all
    </Directory>
    ErrorLog /var/log/apache2/error.log
    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn
    CustomLog /var/log/apache2/access.log combined
    Alias /doc/ "/usr/share/doc/"
    <Directory "/usr/share/doc/">
    Options Indexes MultiViews FollowSymLinks
    AllowOverride None
    Order deny,allow
    Deny from all
    Allow from 127.0.0.0/255.0.0.0 ::1/128
    </Directory>
    </VirtualHost>
    <VirtualHost *:80>
    ServerName www.akshar.co.in
    ServerAdmin [email protected]
    DocumentRoot /var/akshar.co.in
    <Directory /var/akshar.co.in/>
    Options Indexes FollowSymLinks MultiViews
    AllowOverride All
    Order allow,deny
    allow from all
    </Directory>
    </VirtualHost>

    Read the article

  • How to implement a virtual server running Ubuntu inside a fileserver in Windows?

    - by user541445
    I work in a company that has some limitations regarding their budget. They need a client/server application. I can code the application; I've made mini tests on primordial applications that work. The thing is that they only have fileservers, and the application they need must handle concurrency, so the database must be on their local network (a fileserver is the only choice). So far, I have explored almost every option available, starting with: Desktop databases. Access (we have a license) (but concurrency is just not effective, besides it's Windows software, yuck). SQLite (nice, but since they manage a lot of information, I've performed some concurrency tests with INSERTs and SELECTs at the same time; it fails, somehow it just stops inserting). OpenOffice Base (I dismembered a Base install only to see that it was a file-mode HSQLDB; I've tried it, not concurrent at all). Etc. (you name the open source desktop database manager, and yes, I've tried that one). Server databases. Call me a stubborn person, but the documentation of some server databases says that they will work in a file mode. I've tried a lot of products: Postgres, MySQL, Firebird, H2, Derby, Oracle Express, IBM DB2 Express, etc. So, I really need a hand here. I've been doing this install/delete/depression thing with a lot of databases for 3 weeks, until I came up with a crazy idea and I just came here to express it. So, my question comes down to this: will installing lightweight virtualization software like Virtual PC and then installing a server OS like Ubuntu Server inside it work? Will it work 24/7, or only while the Virtual PC software is open? Will this work on a fileserver? Any suggestion, answer, criticism of the place I work, or crazy new concurrent database that will work on a fileserver will be more than welcome.

    Read the article

  • Join multiple consecutive SQLite database dump files into 1 common database? Purpose: Search through ENTIRE Chrome Browsing History

    - by porg
    Google Chrome's default web browsing history search engine only lets you access the records of the most recent 100 days. Nevertheless, in your application data, Chrome keeps your entire browsing history in SQLite database files, with the file naming scheme of "History Index YYYY-MM". I am looking for a way to search… …through my entire browsing history, …with sophisticated filters (limit search terms to certain fields such as URL, domain, title, body text; wildcard or regex terms, date ranges). … in … …either some ready-made software. eHistory came close, as it can limit terms to fields, but it lacks wildcards/regexes, and has the same limited time horizon as the default search. Beyond that, I could not find any suitable Chrome extension or standalone (Mac) app. …or a command line to join multiple SQLite database files into one database, which I can then query (with the full syntax power). In the spirit of the pseudo code below: Preferred this way: sqlite --targetDatabase ChromeHistoryAll --importFiles /path/to/ChromeAppData/History\ Index* --importOnlyYetUnknownFiles Or, if my desired feature --importOnlyYetUnknownFiles is not possible (the feature could also be called "avoid duplicate imports by checking UIDs"), then by explicitly importing only those files which I know have not yet been imported into the ChromeHistoryAll database: cd ChromeAppData; sqlite --databaseTarget ChromeHistoryAll --importFiles YetNotImported1 YetNotImported2 YetNotImported3 All my queries I would then perform in the database "ChromeHistoryAll" P.S.: Additional question of general interest: Is there a way to perform a database query in a temporary database which was created on-the-fly from multiple files? Like: sqlite --query="SQL query" --targetDatabase DbAll --DBtemporaryInRAM --importFiles db1 db2 db3 This is surely not applicable to my Chrome question, as these History Index files have a combined size of 500 MB, so such a query would perform badly. But it could come in handy in other situations.
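    For the command-line route, a hedged sketch with the stock sqlite3 shell (run .schema on one of the files first: the table and column names below, "pages" and "url", are assumptions rather than Chrome's actual schema):

        -- Run inside: sqlite3 ChromeHistoryAll.db
        ATTACH DATABASE 'History Index 2012-01' AS m01;
        ATTACH DATABASE 'History Index 2012-02' AS m02;

        -- Seed the combined table from one month, then append the others.
        CREATE TABLE all_pages AS SELECT * FROM m01.pages;
        INSERT INTO all_pages SELECT * FROM m02.pages;

        -- Queries then run against the merged data with full SQL syntax.
        SELECT * FROM all_pages WHERE url LIKE '%example.com%';

        DETACH DATABASE m01;
        DETACH DATABASE m02;

    ATTACH is also the closest thing to the "temporary database from multiple files" idea in the P.S.: attached databases can be queried together in a single statement without a permanent merge.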

    Read the article

  • Sun Fire X4800 M2 Posts World Record x86 SPECjEnterprise2010 Result

    - by Brian
    Oracle's Sun Fire X4800 M2 using the Intel Xeon E7-8870 processor and Sun Fire X4470 M2 using the Intel Xeon E7-4870 processor, produced a world record single application server SPECjEnterprise2010 benchmark result of 27,150.05 SPECjEnterprise2010 EjOPS. The Sun Fire X4800 M2 server ran the application tier and the Sun Fire X4470 M2 server was used for the database tier. The Sun Fire X4800 M2 server demonstrated 63% better performance compared to IBM P780 server result of 16,646.34 SPECjEnterprise2010 EjOPS. The Sun Fire X4800 M2 server demonstrated 4% better performance than the Cisco UCS B440 M2 result, both results used the same number of processors. This result used Oracle WebLogic Server 12c, Java HotSpot(TM) 64-Bit Server 1.7.0_02, and Oracle Database 11g. This result was produced using Oracle Linux. Performance Landscape Complete benchmark results are at the SPEC website, SPECjEnterprise2010 Results. The table below compares against the best results from IBM and Cisco. SPECjEnterprise2010 Performance Chart as of 3/12/2012 Submitter EjOPS* Application Server Database Server Oracle 27,150.05 1x Sun Fire X4800 M2 8x 2.4 GHz Intel Xeon E7-8870 Oracle WebLogic 12c 1x Sun Fire X4470 M2 4x 2.4 GHz Intel Xeon E7-4870 Oracle Database 11g (11.2.0.2) Cisco 26,118.67 2x UCS B440 M2 Blade Server 4x 2.4 GHz Intel Xeon E7-4870 Oracle WebLogic 11g (10.3.5) 1x UCS C460 M2 Blade Server 4x 2.4 GHz Intel Xeon E7-4870 Oracle Database 11g (11.2.0.2) IBM 16,646.34 1x IBM Power 780 8x 3.86 GHz POWER 7 WebSphere Application Server V7 1x IBM Power 750 Express 4x 3.55 GHz POWER 7 IBM DB2 9.7 Workgroup Server Edition FP3a * SPECjEnterprise2010 EjOPS, bigger is better. Configuration Summary Application Server: 1 x Sun Fire X4800 M2 8 x 2.4 GHz Intel Xeon processor E7-8870 256 GB memory 4 x 10 GbE NIC 2 x FC HBA Oracle Linux 5 Update 6 Oracle WebLogic Server 11g Release 1 (10.3.5) Java HotSpot(TM) 64-Bit Server VM on Linux, version 1.7.0_02 (Java SE 7 Update 2) Database Server: 1 x Sun Fire X4470 M2 4 x 2.4 GHz Intel Xeon E7-4870 512 GB memory 4 x 10 GbE NIC 2 x FC HBA 2 x Sun StorageTek 2540 M2 4 x Sun Fire X4270 M2 4 x Sun Storage F5100 Flash Array Oracle Linux 5 Update 6 Oracle Database 11g Enterprise Edition Release 11.2.0.2 Benchmark Description SPECjEnterprise2010 is the third generation of the SPEC organization's J2EE end-to-end industry standard benchmark application. The SPECjEnterprise2010 benchmark has been designed and developed to cover the Java EE 5 specification's significantly expanded and simplified programming model, highlighting the major features used by developers in the industry today. This provides a real world workload driving the Application Server's implementation of the Java EE specification to its maximum potential and allowing maximum stressing of the underlying hardware and software systems. The workload consists of an end to end web based order processing domain, an RMI and Web Services driven manufacturing domain and a supply chain model utilizing document based Web Services. The application is a collection of Java classes, Java Servlets, Java Server Pages, Enterprise Java Beans, Java Persistence Entities (pojo's) and Message Driven Beans. The SPECjEnterprise2010 benchmark heavily exercises all parts of the underlying infrastructure that make up the application environment, including hardware, JVM software, database software, JDBC drivers, and the system network. 
The primary metric of the SPECjEnterprise2010 benchmark is jEnterprise Operations Per Second ("SPECjEnterprise2010 EjOPS"). This metric is calculated by adding the metrics of the Dealership Management Application in the Dealer Domain and the Manufacturing Application in the Manufacturing Domain. There is no price/performance metric in this benchmark. Key Points and Best Practices Sixteen Oracle WebLogic server instances were started using numactl, binding 2 instances per chip. Eight Oracle database listener processes were started, binding 2 instances per chip using taskset. Additional tuning information is in the report at http://spec.org. See Also Oracle Press Release -- SPECjEnterprise2010 Results Page Sun Fire X4800 M2 Server oracle.com OTN Sun Fire X4270 M2 Server oracle.com OTN Sun Storage 2540-M2 Array oracle.com OTN Oracle Linux oracle.com OTN Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN WebLogic Suite oracle.com OTN Disclosure Statement SPEC and the benchmark name SPECjEnterprise are registered trademarks of the Standard Performance Evaluation Corporation. Sun Fire X4800 M2, 27,150.05 SPECjEnterprise2010 EjOPS; IBM Power 780, 16,646.34 SPECjEnterprise2010 EjOPS; Cisco UCS B440 M2, 26,118.67 SPECjEnterprise2010 EjOPS. Results from www.spec.org as of 3/27/2012.

    Read the article

  • It was worth the wait… Welcome Oracle GoldenGate 11g Release 2

    - by Irem Radzik
    It certainly was worth the wait to meet Oracle GoldenGate 11gR2, because it is full of new features on multiple fronts. In fact, this release has the longest and strongest list of new features in Oracle GoldenGate's history. The new release brings GoldenGate closer to the Oracle Database while expanding the support for global implementations and heterogeneous systems. It is more secure, more flexible, and faster. We announced the availability of Oracle GoldenGate 11gR2 via a press release. If you haven't seen it yet, please check it out. As covered in this announcement, there are a variety of improvements in the product: Integrated Capture for Oracle Database: brings Oracle GoldenGate's Capture process closer to the Oracle Database engine and enables support for Advanced Compression among other benefits. Enhanced Conflict Detection & Resolution: speeds and simplifies the conflict detection and resolution process for Active-Active deployments. Globalization: meaning Oracle GoldenGate can be deployed for databases that use multi-byte/Unicode character sets. Security and Performance Improvements: includes support for the Federal Information Processing Standard (FIPS). Increased Extensibility: by kicking off actions based on an event record in the transaction log or in the Trail file. Integration with Oracle Enterprise Manager 12c, in addition to the Oracle GoldenGate Monitor product. Expanded Heterogeneity: including capture from IBM DB2 for i on iSeries (AS/400) and delivery to Postgres. We will explain these new features in more detail at our upcoming launch webcast: Harness the Power of the New Release of Oracle GoldenGate 11g (Sept 12, 8am/10am PT). In addition to learning more about these new features, the webcast will allow you to ask your questions to product management via the live Q&A section. So, I hope you will not miss this opportunity to explore the new release of Oracle GoldenGate 11g and see how it can deliver enterprise-class real-time data integration solutions. I look forward to a great webcast to unveil GoldenGate's new capabilities.

    Read the article

  • Whether to use UNION or OR in SQL Server Queries

    - by Dinesh Asanka
    Recently I came across an article on DB2 about using UNION instead of OR, so I thought of carrying out some research on SQL Server to see in which scenarios UNION is optimal and in which scenarios OR would be best. I will analyze this with a few scenarios, using samples taken from the AdventureWorks database Sales.SalesOrderDetail table.

    Scenario 1: Selecting all columns
    We are going to select all columns, and there is a non-clustered index on the ProductID column.

    --Query 1 : OR
    SELECT * FROM Sales.SalesOrderDetail
    WHERE ProductID = 714 OR ProductID = 709 OR ProductID = 998
       OR ProductID = 875 OR ProductID = 976 OR ProductID = 874

    --Query 2 : UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714
    UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 709
    UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 998
    UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 875
    UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 976
    UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 874

    Query 1 is using OR and the latter is using UNION. Let us analyze the execution plans for these queries. As expected, Query 1 uses a Clustered Index Scan, but Query 2 uses all sorts of operators. In this case, since it is using multiple CPUs, you might see CX_PACKET waits as well. Let's look at the Profiler results for these two queries:

             CPU   Reads   Duration   Row Counts
    OR        78    1252        389         3854
    UNION    250    7495        660         3854

    You can see from the above table that the UNION query does not perform as well as the OR query, though both return the same number of rows (3854). These results indicate that, for the above scenario, OR should be used.

    Scenario 2: Non-Clustered and Clustered Index Columns only

    --Query 1 : OR
    SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail
    WHERE ProductID = 714 OR ProductID = 709 OR ProductID = 998
       OR ProductID = 875 OR ProductID = 976 OR ProductID = 874
    GO

    --Query 2 : UNION
    SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 714
    UNION
    SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 709
    UNION
    SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 998
    UNION
    SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 875
    UNION
    SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 976
    UNION
    SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 874
    GO

    This time we are selecting only index columns, which means these queries avoid a data page lookup. As in the previous case, we analyze the execution plans; again, Query 2 is more complex than Query 1. Let us look at the Profiler analysis:

             CPU   Reads   Duration   Row Counts
    OR         0      24        208         3854
    UNION      0      38        193         3854

    In this analysis, there is only a slight difference between OR and UNION.

    Scenario 3: Selecting all columns for different fields
    Up to now, we were using only one column (ProductID) in the where clause. What if we have two columns in the where clause? Let us assume both are covered by non-clustered indexes.
    --Query 1 : OR
    SELECT * FROM Sales.SalesOrderDetail
    WHERE ProductID = 714 OR CarrierTrackingNumber LIKE 'D0B8%'

    --Query 2 : UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714
    UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE CarrierTrackingNumber LIKE 'D0B8%'

    As we can see, the query plan for the second query has improved. Let us see the Profiler results:

             CPU   Reads   Duration   Row Counts
    OR        47    1278        443         1228
    UNION     31    1334        400         1228

    So in this case too, there is little difference between OR and UNION.

    Scenario 4: Selecting Clustered index columns for different fields
    Now let us go only with clustered indexes:

    --Query 1 : OR
    SELECT * FROM Sales.SalesOrderDetail
    WHERE ProductID = 714 OR CarrierTrackingNumber LIKE 'D0B8%'

    --Query 2 : UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714
    UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE CarrierTrackingNumber LIKE 'D0B8%'

    Now both execution plans are almost identical, except that an additional Stream Aggregate is used in the first query. This means UNION has an advantage over OR in this scenario. Let us see the Profiler results for these queries again:

             CPU   Reads   Duration   Row Counts
    OR         0     319        366         1228
    UNION      0      50        193         1228

    Now see the difference: in this scenario, UNION has somewhat of an advantage over OR.

    Conclusion
    Using UNION or OR depends on the scenario you are faced with, so you need to do your own analysis before selecting the appropriate method. Also, the four scenarios above are not an exhaustive list; I selected them for broad illustration purposes only.
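    If you want a rough client-side complement to Profiler before settling on one form, a small C# harness like the sketch below (the connection string is a placeholder and the queries are shortened to two predicates) times both versions and confirms they return the same number of rows. It is no substitute for the CPU and read figures above, just a quick sanity check.

    // Rough client-side comparison of the OR and UNION forms. It only measures
    // elapsed time and row counts; use Profiler or SET STATISTICS IO for reads/CPU.
    using System;
    using System.Data.SqlClient;
    using System.Diagnostics;

    class OrVersusUnion
    {
        // Placeholder connection string - point it at your AdventureWorks instance.
        const string ConnectionString = "Server=.;Database=AdventureWorks;Integrated Security=true";

        static void Main()
        {
            Measure("OR",
                "SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714 OR ProductID = 709");
            Measure("UNION",
                "SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714 " +
                "UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 709");
        }

        static void Measure(string label, string sql)
        {
            using (var connection = new SqlConnection(ConnectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                connection.Open();
                var stopwatch = Stopwatch.StartNew();
                var rows = 0;
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                        rows++;
                }
                stopwatch.Stop();
                Console.WriteLine("{0}: {1} rows in {2} ms", label, rows, stopwatch.ElapsedMilliseconds);
            }
        }
    }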

    Read the article

  • Entity Framework - Single EDMX Mapping Multiple Databases

    - by michaelalisonalviar
    Because of my recent craze for Entity Framework, thanks to Sir Humprey, I have continuously searched the Internet for tutorials on how to apply it to our current system. I've come to learn that with EF I can eliminate the numerous methods/functions I used to write for CRUD operations, along with my overused assigning of connection strings, Data Adapters, and Data Readers: Entity Framework maps my desired database, does its magic to create entities for each table I want (using the EF Power Tools), and provides the methods/functions for my CRUD operations. But as I began applying it to a new project I was assigned to, I realized our current server is designed to keep similar entities in different databases. For example, our lookup tables are stored in LookupDB, accounting-related tables in AccountingDB, and sales-related tables in SalesDB. My dilemma is that I have to use an existing table from LookupDB as a look-up for my new table. Then I found Miss Rachel's blog (here) - thank you, Miss Rachel! - which enables me to let EF think that my TableLookup1 is in AccountingDB, using the following steps. I'm on VS 2010, using C#, Entity Framework 5, and SQL Server 2008 as our DB server.

    Step 1: Create a SQL synonym. If you want a more detailed discussion of synonyms, this is what I read (link here). Simply put, a synonym lets me simplify my query for the look-up table when I'm using AccountingDB, from
    SELECT [columns] FROM LookupDB.dbo.TableLookup1
    to
    SELECT [columns] FROM TableLookup1
    Syntax: CREATE SYNONYM TableLookup1 (1) FOR LookupDB.dbo.TableLookup1 (2)
    (1) What you want to call the table on your other DB
    (2) DatabaseName.schema.TableName

    Step 2: Now we follow Miss Rachel's steps. You can either visit the link on the original topic I posted earlier or just follow the steps I made:
    1. I created a Visual Basic solution that will contain the 4 projects needed to complete the merging.
    2. The first project will contain the edmx file pointing to AccountingDB.
    3. The second project will contain the edmx file pointing to LookupDB.
    4. The third project will be our repository for the merged edmx file. Create an edmx file pointing to AccountingDB, as this is the database we created the synonym on. Reminder: aside from using the same name for the entities, please make sure that you have the same Model Namespace for all your entities.
    5. The fourth project will contain the EDMX merger that Miss Rachel created, which frees you from hard-coding the merge or reworking the edmx file of the third project every time a change is made to either of the first two projects' edmx files.
    6. Run the solution, but make sure that in the solution properties "Single startup project" is selected and the project containing the EDMX merger is chosen.
    7. After running the solution, double-click the EDMX file of the third project and set Lazy Loading Enabled = False. This lets you use the tables/entities that you see in that EDMX file.
    8. Feel free to do your CRUD operations.
    I don't know if EF 5 already has a feature to support synonyms, as I am still a newbie on that aspect, but I have seen a link with suggested Entity Framework improvements, and one of them is "Support for multiple databases". So that's it! Thanks for reading!
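    As a quick usage sketch (the context and entity names below are assumptions that stand in for whatever your merged EDMX generates), querying the synonym-backed look-up table then looks no different from querying a table that really lives in AccountingDB:

    // Minimal usage sketch. "AccountingEntities" and "TableLookup1" are assumed
    // names for the context and entity generated from the merged EDMX; replace
    // them with the names your own model produces.
    using System;
    using System.Linq;

    class LookupUsage
    {
        static void Main()
        {
            using (var context = new AccountingEntities())
            {
                // Lazy loading was disabled in step 7, so pull the rows explicitly.
                var lookups = context.TableLookup1.ToList();
                Console.WriteLine("Loaded {0} look-up rows.", lookups.Count);
            }
        }
    }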

    Read the article

  • SQL Server Transaction Marks: Restoring multiple databases to a common relative point

    - by Mladen Prajdic
    We’re all familiar with the ability to restore a database to a point in time using the RESTORE WITH STOPAT statement. But what if we have multiple databases that are accessed from one application or are modifying each other? And over multiple instances? And all databases have different workloads? And we want to restore all of the databases to some known common relative point? The catch here is that this common relative point isn’t the same point in time for all databases. This common relative point in time might be now in DB1, now-1 hour in DB2 and yesterday in DB3. And we don’t know the exact times. Let me introduce you to Transaction Marks. When we run a marked transaction using the WITH MARK option, a flag is set in the transaction log and a row is added to the msdb..logmarkhistory table. When restoring a transaction log backup we can restore to either before or after that marked transaction. The best thing is that we don’t even need to have one database modifying another database. All we have to do is use a marked transaction with the same name in the different databases. Let’s see how this works with an example. The code comments say what’s going on.

    USE master
    GO
    CREATE DATABASE TestTxMark1
    GO
    USE TestTxMark1
    GO
    CREATE TABLE TestTable1 (ID INT, VALUE UNIQUEIDENTIFIER)
    -- insert some data into the table so we can have a starting point
    INSERT INTO TestTable1
    SELECT ROW_NUMBER() OVER(ORDER BY number) AS RN, NULL
    FROM master..spt_values
    ORDER BY RN
    SELECT * FROM TestTable1
    GO
    -- TAKE A FULL BACKUP of the database
    BACKUP DATABASE TestTxMark1 TO DISK = 'c:\TestTxMark1.bak'
    GO

    USE master
    GO
    CREATE DATABASE TestTxMark2
    GO
    USE TestTxMark2
    GO
    CREATE TABLE TestTable2 (ID INT, VALUE UNIQUEIDENTIFIER)
    -- insert some data into the table so we can have a starting point
    INSERT INTO TestTable2
    SELECT ROW_NUMBER() OVER(ORDER BY number) AS RN, NEWID()
    FROM master..spt_values
    ORDER BY RN
    SELECT * FROM TestTable2
    GO
    -- TAKE A FULL BACKUP of our database
    BACKUP DATABASE TestTxMark2 TO DISK = 'c:\TestTxMark2.bak'
    GO

    -- start a marked transaction that modifies both databases
    BEGIN TRAN TxDb WITH MARK
        -- update values from NULL to random value
        UPDATE TestTable1 SET VALUE = NEWID();
        -- update first 100 values from random value
        -- to NULL in different DB
        UPDATE TestTxMark2.dbo.TestTable2 SET VALUE = NULL WHERE ID <= 100;
    COMMIT
    GO

    -- some time goes by here
    -- with various database activity...

    -- We see two entries for marks in each database.
    -- This is just informational and has no bearing on the restore itself.
    SELECT * FROM msdb..logmarkhistory

    USE master
    GO
    -- create a log backup to restore to mark point
    BACKUP LOG TestTxMark1 TO DISK = 'c:\TestTxMark1.trn'
    GO
    -- drop the database so we can restore it back
    DROP DATABASE TestTxMark1
    GO

    USE master
    GO
    -- create a log backup to restore to mark point
    BACKUP LOG TestTxMark2 TO DISK = 'c:\TestTxMark2.trn'
    GO
    -- drop the database so we can restore it back
    DROP DATABASE TestTxMark2
    GO

    -- RESTORE THE DATABASE BACK BEFORE OUR TRANSACTION
    -- restore the full backup
    RESTORE DATABASE TestTxMark1 FROM DISK = 'c:\TestTxMark1.bak' WITH NORECOVERY;
    -- restore the log backup to the transaction mark
    RESTORE LOG TestTxMark1 FROM DISK = 'c:\TestTxMark1.trn'
    WITH RECOVERY,
         -- recover to state before the transaction
         STOPBEFOREMARK = 'TxDb';
         -- recover to state after the transaction
         -- STOPATMARK = 'TxDb';
    GO

    -- RESTORE THE DATABASE BACK BEFORE OUR TRANSACTION
    -- restore the full backup
    RESTORE DATABASE TestTxMark2 FROM DISK = 'c:\TestTxMark2.bak' WITH NORECOVERY;
    -- restore the log backup to the transaction mark
    RESTORE LOG TestTxMark2 FROM DISK = 'c:\TestTxMark2.trn'
    WITH RECOVERY,
         -- recover to state before the transaction
         STOPBEFOREMARK = 'TxDb';
         -- recover to state after the transaction
         -- STOPATMARK = 'TxDb';
    GO

    USE TestTxMark1
    -- we restored to time before the transaction
    -- so we have NULL values in our table
    SELECT * FROM TestTable1

    USE TestTxMark2
    -- we restored to time before the transaction
    -- so we DON'T have NULL values in our table
    SELECT * FROM TestTable2

    Transaction marks can be used like a crude sync mechanism for cross-database operations. With them we can mark our databases with a common “restore to” point so we know we have a valid state between all databases to restore to.

    Read the article

  • SQL Azure: Notes on Building a Shard Technology

    - by Herve Roggero
    In Chapter 10 of the book on SQL Azure (http://www.apress.com/book/view/9781430229612) that I am co-authoring, I am digging deeper into what it takes to write a shard. It's actually a pretty cool exercise, and I wanted to share some thoughts on how I am designing the technology. A shard is a technology that spreads the load of database requests over multiple databases, as transparently as possible. The type of shard I am building is called a Vertical Partition Shard (VPS). A VPS is a mechanism by which the data is stored in one or more databases behind the scenes, but your code has no idea at design time which data is in which database. It's like having a mini cloud for records instead of services. Imagine you have three SQL Azure databases that have the same schema (DB1, DB2 and DB3), and you would like to issue a SELECT * FROM Users against all three databases, concatenate the results into a single resultset, and order by last name. Imagine you want to ensure your code doesn't need to change if you add a new database to the shard (DB4). Now imagine that you want to make sure all three databases are queried at the same time, in a multi-threaded manner, so your code doesn't have to wait for three database calls to run sequentially. Then, imagine you would like to obtain a breadcrumb (in the form of a new, virtual column) that gives you a hint as to which database a record came from, so that you could update it if needed. Now imagine all that is done through the standard SqlClient library... and you have the shard I am currently building.

    Here are some lessons learned and techniques I am using with this shard:
    - Parallel Processing: Querying databases in parallel is not too hard using the Task Parallel Library; all you need is to lock your resources when needed.
    - Deleting/Updating Data: That's not too bad either as long as you have a breadcrumb. However, it becomes more difficult if you need to update a single record and you don't know which database it is in.
    - Inserting Data: I am using a round-robin approach in which each new insert request is directed to the next database in the shard. Not sure how to deal with bulk loads just yet...
    - Shard Databases: I use a static collection of SqlConnection objects which needs to be loaded once; from there on, all the shard commands use this collection.
    - Extension Methods: In order to make it look like the shard commands are part of the SqlClient classes, I use extension methods. For example, I added ExecuteShardQuery and ExecuteShardNonQuery methods to SqlClient.
    - Exceptions: Capturing exceptions in multi-threaded code is interesting... but I kept it simple for now. I am using a ConcurrentQueue to store my exceptions.
    - Database GUID: Every database in the shard is given a GUID, which is calculated based on the connection string's values.
    - DataTable: The shard methods return a DataTable object which can be bound to objects.

    I will be sharing the code soon as an open-source project on CodePlex. Please stay tuned on Twitter to know when it will be available (@hroggero), or check www.bluesyntax.net for updates on the shard. Thanks!
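    To make the design above a little more concrete, here is a rough sketch of the fan-out query piece. This is illustrative only, not the author's actual code: the class, method, and breadcrumb column names are assumptions, and the connection strings are placeholders.

    // Sketch of a vertical-partition shard query over a static list of connections.
    using System;
    using System.Collections.Concurrent;
    using System.Data;
    using System.Data.SqlClient;
    using System.Threading.Tasks;

    public static class ShardQuery
    {
        // Hypothetical shard: one connection string per SQL Azure database.
        static readonly string[] ConnectionStrings =
        {
            "connection-string-for-DB1",
            "connection-string-for-DB2",
            "connection-string-for-DB3"
        };

        // Runs the same query against every database in parallel and merges the rows.
        public static DataTable ExecuteShardQuery(string sql)
        {
            var results = new ConcurrentQueue<DataTable>();
            var errors = new ConcurrentQueue<Exception>();

            Parallel.ForEach(ConnectionStrings, connectionString =>
            {
                try
                {
                    using (var connection = new SqlConnection(connectionString))
                    using (var command = new SqlCommand(sql, connection))
                    {
                        connection.Open();
                        var table = new DataTable();
                        table.Load(command.ExecuteReader());

                        // Breadcrumb column: remember which database each row came from.
                        var crumb = table.Columns.Add("__ShardSource", typeof(string));
                        foreach (DataRow row in table.Rows)
                            row[crumb] = connection.Database;

                        results.Enqueue(table);
                    }
                }
                catch (Exception ex)
                {
                    errors.Enqueue(ex);
                }
            });

            if (!errors.IsEmpty)
                throw new AggregateException(errors);

            // Concatenate the per-database results into a single DataTable.
            var merged = new DataTable();
            foreach (var table in results)
                merged.Merge(table);
            return merged;
        }
    }

    A round-robin insert and the breadcrumb-driven update/delete paths would follow the same general pattern.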

    Read the article

  • GoldenGate 12c - MySQL Active-Active Replication Setup

    - by Jinyu Wang-Oracle
    Active-active (also called Master-Master or Bi-Directional) replication captures data changes from two or more systems and replicates the changes to synchronize the data. Active-active replication is often needed for high availability, load balancing and scale-out purposes. Oracle GoldenGate is known to be one of the first and best replication tools for handling active-active replication. As of Oracle GoldenGate 12c, it provides the following (refer to Oracle GoldenGate 12.1.2 Documentation - Configuring Oracle GoldenGate for Active-Active High Availability for more information):
    - Robust loop-back prevention
    - Comprehensive conflict detection and resolution support
    - Heterogeneous support across different database versions and operating systems
    Oracle GoldenGate supports active-active configurations for DB2 on z/OS, LUW, and IBM i, MySQL, Oracle, SQL/MX, SQL Server, Sybase, and Teradata. However, the setup is different from database to database. In this example, I will show you how to set up active-active data replication between two MySQL database instances. The example below sets up active-active replication between a MySQL 5.5 and a MySQL 5.6 instance, with the following parameter files:

    MySQL 5.5 (Manager Port: 15105)

    Extract
    EXTRACT demoex01
    SETENV (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.5.38/data/mysql.sock')
    DBOPTIONS CONNECTIONPORT 3305
    DBOPTIONS HOST oraclelinux6.localdomain
    SOURCEDB test USERID root, PASSWORD mysql
    EXTTRAIL ./dirdat/extract/de
    TRANLOGOPTIONS ALTLOGDEST "/home/oracle/software/mysql_5.5.38/data/binlog/bin-log.index" FILTERTABLE test.checkpoint_tbl
    REPORTROLLOVER AT 05:30 ON saturday
    TABLE test.TCUSTMER;
    TABLE test.TCUSTORD;

    Pump
    EXTRACT demopm01
    RMTHOST localhost, MGRPORT 15106, COMPRESS, TIMEOUT 30
    RMTTRAIL ./dirdat/replicat/ps
    PASSTHRU
    TABLE test.TCUSTMER;
    TABLE test.TCUSTORD;

    Replicat
    replicat demorp01
    setenv (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.5.38/data/mysql.sock')
    dboptions host oraclelinux6.localdomain, connectionport 3305
    targetdb test, userid root, password mysql
    sourcedefs ./dirdat/replicat/democust.def
    discardfile ./dirrpt/demprp01.dsc, purge
    REPERROR (DEFAULT, ABEND)
    REPERROR (1062, IGNORE)
    map test.TCUSTMER, target test.TCUSTMER, colmap(usedefaults, region_code="region code");
    map test.TCUSTORD, target test.TCUSTORD;

    MySQL 5.6 (Manager Port: 15106)

    Replicat
    replicat demorp01
    setenv (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.6.19/data/mysql.sock')
    dboptions host oraclelinux6.localdomain, connectionport 3306
    targetdb test, userid root, password mysql
    --assumetargetdefs
    sourcedefs ./dirdat/replicat/democust.def
    discardfile ./dirrpt/demprp01.dsc, purge
    map test.TCUSTMER, target test.TCUSTMER, colmap(usedefaults, "region code"=region_code);
    map test.TCUSTORD, target test.TCUSTORD;

    Extract
    EXTRACT demoex01
    SETENV (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.6.19/data/mysql.sock')
    DBOPTIONS CONNECTIONPORT 3306
    DBOPTIONS HOST oraclelinux6.localdomain
    SOURCEDB test USERID root, PASSWORD mysql
    EXTTRAIL ./dirdat/extract/de
    TRANLOGOPTIONS ALTLOGDEST "/usr/local/mysql56/data/binlog/bin-log.index" FILTERTABLE test.checkpoint_tbl
    TABLE test.TCUSTMER;
    TABLE test.TCUSTORD;

    Pump
    EXTRACT demopm01
    RMTHOST localhost, MGRPORT 15105, COMPRESS, TIMEOUT 30
    RMTTRAIL ./dirdat/replicat/ps
    PASSTHRU
    TABLE test.TCUSTMER;
    TABLE test.TCUSTORD;

    The setup parameters are quite self-explanatory. The key part of the setup is preventing the replicated data from looping back.
    Oracle GoldenGate for MySQL uses the information in the replication checkpoint table to identify the transactions applied by Replicat, and thus avoids having Extract capture those transactions again. The relevant setting from the Extract on the MySQL 5.5 instance is shown below.

    TRANLOGOPTIONS ALTLOGDEST "/home/oracle/software/mysql_5.5.38/data/binlog/bin-log.index" FILTERTABLE test.checkpoint_tbl

    Setting up active-active replication is often more complicated than this and requires additional considerations. I will elaborate on these in follow-up discussions.

    Read the article

  • Stepping outside Visual Studio IDE [Part 2 of 2] with Mono 2.6.4

    - by mbcrump
    Continuing part 2 of my Stepping outside the Visual Studio IDE series, here is the open-source Mono Project. Mono is a software platform designed to allow developers to easily create cross-platform applications. Sponsored by Novell (http://www.novell.com/), Mono is an open source implementation of Microsoft's .NET Framework based on the ECMA standards for C# and the Common Language Runtime. A growing family of solutions and an active and enthusiastic contributing community is helping position Mono to become the leading choice for development of Linux applications. So, to clarify: you can use Mono to develop .NET applications that will run on Linux, Windows or Mac. It's basically an IDE that has its roots in Linux. Let's first look at compatibility.

    Compatibility
    If you already have an application written in .NET, you can scan your application with the Mono Migration Analyzer (MoMA) to determine if your application uses anything not supported by Mono. The current release version of Mono is 2.6 (released December 2009). The easiest way to describe what Mono currently supports is: everything in .NET 3.5 except WPF and WF, with limited WCF. Here is a slightly more detailed view, by .NET framework version:

    Implemented
    - C# 3.0
    - System.Core
    - LINQ
    - ASP.Net 3.5
    - ASP.Net MVC
    - C# 2.0 (generics)
    - Core Libraries 2.0: mscorlib, System, System.Xml
    - ASP.Net 2.0 - except WebParts
    - ADO.Net 2.0
    - Winforms/System.Drawing 2.0 - does not support right-to-left
    - C# 1.0
    - Core Libraries 1.1: mscorlib, System, System.Xml
    - ASP.Net 1.1
    - ADO.Net 1.1
    - Winforms/System.Drawing 1.1

    Partially Implemented
    - LINQ to SQL - mostly done, but a few features missing
    - WCF - Silverlight 2.0 subset completed

    Not Implemented
    - WPF - no plans to implement
    - WF - will implement WF 4 instead in future versions of Mono
    - System.Management - does not map to Linux
    - System.EnterpriseServices - deprecated

    Links to documentation: The Official Mono FAQs
    Links to binaries: Mono IDE, latest version is 2.6.4

    That's it; nothing more is required except to compile and run .NET code in Linux.

    Installation
    After landing on the Mono project home page, you can select which platform you want to download. I typically pick the Virtual PC image since I spend all of my day using Windows 7. Go ahead and pick whatever version is best for you. The Virtual PC image comes with SUSE Linux. Once the image is launched, it's best to start with the "Start Here" icon (I'm not going to go through each option). It will provide you with information on new projects or existing VS projects. After you get Mono installed, it's probably a good idea to run a quick Hello World program to make sure everything is set up properly. This lets you know that Mono is working before you try writing or running a more complex application.

    To write a "Hello World" program, follow these steps:
    1. Start Mono Development Environment.
    2. Create a new project: File->New->Solution.
    3. Select "Console Project" in the category list.
    4. Enter a project name into the Project name field, for example, "HW Project", and click "Forward".
    5. Click "Packaging", then OK. You should have a screen very similar to a VS console app.
    6. Click the "Run" button in the toolbar (Ctrl-F5).
    7. Look in the Application Output and you should see "Hello World!".
    That should do it for a simple console app in Mono.
    To test out an ASP.NET application, simply copy your code to a new directory in /srv/www/htdocs, then visit the following URL: http://localhost/directoryname/page.aspx, where directoryname is the directory where you deployed your application and page.aspx is the initial page for your software.

    Databases
    You can continue to use a SQL Server database, or use MySQL, Postgres, Sybase, Oracle, IBM's DB2 or SQLite.

    Conclusion
    I hope this brief look at the Mono IDE helps someone get acquainted with development outside of VS. As always, I welcome any suggestions or comments.
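    For reference, the console project generated by the steps above boils down to a few lines of C#. A minimal version (the namespace and class names are illustrative, not necessarily what the template emits) compiles and runs unchanged under both Mono and the Microsoft .NET Framework:

    // Minimal "Hello World" console program, the kind of code the MonoDevelop
    // console project template produces and runs with Ctrl-F5.
    using System;

    namespace HWProject
    {
        class MainClass
        {
            public static void Main(string[] args)
            {
                Console.WriteLine("Hello World!");
            }
        }
    }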

    Read the article

< Previous Page | 11 12 13 14 15 16 17  | Next Page >