Search Results

Search found 13859 results on 555 pages for 'non functional'.


  • Dutch for once: op zoek naar een nieuwe uitdaging!

    - by Dennis Vroegop
    Originally posted on: http://geekswithblogs.net/dvroegop/archive/2013/10/11/dutch-for-once-op-zoek-naar-een-nieuwe-uitdaging.aspx I apologize to my non-Dutch speaking readers: this post is about me looking for a new job, and since I am based in the Netherlands I wrote it in Dutch. Next time I will be technical (and thus in English) again! The nice thing about being an interim professional is that every engagement eventually ends. I deliberately chose the freelance life: I want to get to know a great many different people and organizations, and this work is ideally suited to that. After all, with every engagement I not only bring new ideas and knowledge, I also learn an enormous amount myself. I can then use that knowledge on a follow-up engagement, and in that way I spread it among companies in the Netherlands. And there is nothing better than seeing that what I bring lifts an organization to another level! Looking for a different company each time also means leaving an organization each time. The hard part is finding the right moment. Seen from the outside that is difficult to judge: when can I no longer contribute anything new, and when is it time to move on? When is it time to say that the organization knows everything I can teach it? In my current engagement, that moment has now arrived. Over the past eleven months I have watched this company change from a small but enthusiastic group of developers into a professional organization with more than twice as many developers. That change process has been very instructive, and I am glad that I was able, and allowed, to guide it. Growing from three teams of five or six developers each to six teams of seven to eight developers per team means you have to approach your development process quite differently. It also means organizing your teams differently, remodeling the organization itself, and having people interact with one another differently. To pull that off, everyone worked very hard, a number of mistakes were made, a great deal was learned from those mistakes, and in the end an almost new company emerged. It is time for me to leave this company. I am curious where I will end up next: I am looking around at the possibilities. What I do know is this: the company I am looking for is one that is open to change. Change, but with an eye for the individual; after all, people are at the heart of software development! In any case, I am looking forward to it!

    Read the article

  • OPN Oracle ECM 10g R3 Implementation Boot Camp - (12-14/Apr/10)

    - by Claudia Costa
    We are excited to announce the Oracle ECM 10g R3 Implementation boot camp, which will take place on 12-14 April and will cover the topics described below. To help partners develop their competencies, Oracle University and Oracle Alliances & Channel designed this boot camp with condensed content, thereby reducing costs. Price per participant (3 days): 1,250 EUR + VAT.

    Oracle offers the most unified, usable enterprise content management platform in today's market. With centralized control across single or multiple repositories, common core functionality, and easily scalable content management capabilities, Oracle provides content management solutions for many content types and users, wherever they work in the enterprise. The Oracle Enterprise Content Management (ECM) Implementation Boot Camp examines the fundamental concepts, techniques, and architecture of Oracle's ECM technologies. Join this training to learn how you can manage and maintain unstructured content.

    Target Audience: The Oracle ECM Implementation Boot Camp is designed for architects, technical consultants, team/project leaders and functional consultants of our system integrator partners who want to ramp up on ECM technology.

    Contents: The ECM Implementation Boot Camp is a three-day hands-on workshop, designed for Oracle Partners who are new to ECM, and provides implementation instruction on the ECM technology offered by Oracle. The boot camp will:
    • Provide hands-on experience in implementing Oracle's truly unified, open and standards-based ECM technology
    • Provide strategic direction on Oracle's Fusion Middleware/Enterprise 2.0 and its role in composite application development
    • Expose a broad set of Oracle's ECM technologies.

    Objectives: The Oracle ECM Implementation Boot Camp is primarily focused on Oracle's ECM offering to manage and maintain unstructured content, and covers Universal Content Management (UCM), Image and Process Management (IPM), Universal Records Management (URM), and Information Rights Management (IRM).

    Topics Covered:
    • Introduction to Oracle UCM
      - UCM Overview
      - UCM Architecture Overview
    • Content Server and Document Management basics
      - Installation and Administration Skills: user and security admin; configuration (metadata, DCLs, profiles, rules, etc.); workflow admin; system properties and Component Manager; managing subscriptions
      - Contributing Content: browser form, WebDAV folder, Desktop Integration
      - Searching
    • Web Content Management
      - Site Studio
    • Universal Records Management
    • Information Rights Management (IRM)
    • Image & Process Management (IPM)
    • Oracle Document Capture
    • Oracle eMail Archive Service

    Labs:
    • Content Server Installation
    • Use and Administration of Content Server
    • Introduction to Site Studio
    • Use and Administration of Records Manager

    Demo: The R&D Group and the New Patent. Focus: Information Rights Management, Knowledge Management, Accounts Payable Image Automation, Imaging and Process Management.

    Case Study:
    • Use Case 1: Enable City of Xalco to streamline internal processes by empowering city employees to quickly and efficiently manage and publish information on their employee intranet and, eventually, the public Web site.
    • Use Case 2: Help Acme & Co, whose goal is to become "paperless", by managing all of the company's business content in a central, Web-based repository. Acme's business content ranges from policies and procedures to employee listings and marketing materials.

    Agenda:
    Day 1 - ECM Overview & Content Server
    • ECM Overview
    • ECM Architecture and Installation
    • UCM and Digital Asset Management DEMO
    • Lab 1 - Content Server Installation
    • Lab 2 - Use and Administration of Content Server
    Day 2 - Web Content Management
    • Lab 2 - Use and Administration of Content Server (continued)
    • Introduction to Web Content Management
    • Lab 3 - Site Studio
    Day 3 - URM/IRM/IPM
    • Introduction to Universal Records Management
    • Lab 4 - URM
    • Introduction to Information Rights Management
    • Information Rights Management DEMO
    • Introduction to Image and Process Management
    • Image and Process Management Demo
    • Oracle Document Capture
    • Oracle eMail Archive

    Material needed for the boot camp: attendees must bring their own laptops, meeting the following minimum hardware requirements:
    • RAM: 2 GB minimum (1 GB is not enough)
    • HDD: 15 GB of free disk space

    Prerequisites: To ensure a valuable learning experience, participation in this boot camp requires completing the prerequisite courses and successfully passing the prerequisite assessment test that is mapped into the Oracle Enterprise Content Management Implementation Boot Camp guided learning path. At a minimum, participants with equivalent skills and background should review the guided learning path and successfully pass the prerequisite assessment test to ensure they possess the background necessary to benefit from participation in the boot camp.

    For more information or to register, contact Mónica Pires, 21 423 51 44.
    Schedule and location: 9:30-12:30 and 14:00-17:00 (6 hours/day), Oracle, Porto Salvo - Oeiras.

    Read the article

  • Sun Fire X4800 M2 Delivers World Record TPC-C for x86 Systems

    - by Brian
    Oracle's Sun Fire X4800 M2 server, equipped with eight 2.4 GHz Intel Xeon Processor E7-8870 chips, obtained a result of 5,055,888 tpmC on the TPC-C benchmark. This result is a world record for x86 servers. Oracle demonstrated this world-record database performance running Oracle Database 11g Release 2 Enterprise Edition with Partitioning.
    • The Sun Fire X4800 M2 server delivered a new x86 TPC-C world record of 5,055,888 tpmC with a price/performance of $0.89/tpmC using Oracle Database 11g Release 2. This configuration is available 6/26/2012.
    • The Sun Fire X4800 M2 server delivers 3.0x better performance than the next 8-processor result, an IBM System p 570 equipped with POWER6 processors.
    • The Sun Fire X4800 M2 server has 3.1x better price/performance than the 8-processor 4.7 GHz POWER6 IBM System p 570.
    • The Sun Fire X4800 M2 server has 1.6x better performance than the 4-processor IBM x3850 X5 system equipped with Intel Xeon processors.
    • This is the first TPC-C result on any system using eight Intel Xeon Processor E7-8800 series chips.
    • The Sun Fire X4800 M2 server is the first x86 system to exceed 5 million tpmC.
    The Oracle solution used the Oracle Linux operating system and Oracle Database 11g Enterprise Edition Release 2 with Partitioning to produce the x86 world-record TPC-C benchmark performance.

    Performance Landscape
    Select TPC-C results (sorted by tpmC, bigger is better):

    System             p/c/t     tpmC       Price/tpmC  Avail       Database       Memory Size
    Sun Fire X4800 M2  8/80/160  5,055,888  0.89 USD    6/26/2012   Oracle 11g R2  4 TB
    IBM x3850 X5       4/40/80   3,014,684  0.59 USD    7/11/2011   DB2 ESE 9.7    3 TB
    IBM x3850 X5       4/32/64   2,308,099  0.60 USD    5/20/2011   DB2 ESE 9.7    1.5 TB
    IBM System p 570   8/16/32   1,616,162  3.54 USD    11/21/2007  DB2 9.0        2 TB

    p/c/t - processors, cores, threads
    Avail - availability date

    Oracle and IBM TPC-C response times:

    System                  tpmC       New Order 90th% (sec)  New Order Average (sec)
    Sun Fire X4800 M2       5,055,888  0.210                  0.166
    IBM x3850 X5            3,014,684  0.500                  0.272
    Ratios - Oracle Better  1.6x       1.4x                   1.3x

    Oracle uses average New Order response time for comparison between Oracle and IBM. Graphs of Oracle's and IBM's response times for New Order can be found in the full disclosure reports on TPC's website (TPC-C Official Result Page).

    Configuration Summary and Results

    Hardware Configuration:
    Server:
    • Sun Fire X4800 M2 server
    • 8 x 2.4 GHz Intel Xeon Processor E7-8870
    • 4 TB memory
    • 8 x 300 GB 10K RPM SAS internal disks
    • 8 x dual-port 8 Gb/s FC HBA
    Data Storage:
    • 10 x Sun Fire X4270 M2 servers configured as COMSTAR heads, each with:
      - 1 x 3.06 GHz Intel Xeon X5675 processor
      - 8 GB memory
      - 10 x 2 TB 7.2K RPM 3.5" SAS disks
      - 2 x Sun Storage F5100 Flash Array storage (1.92 TB each)
    • 1 x Brocade 5300 switch
    Redo Storage:
    • 2 x Sun Fire X4270 M2 servers configured as COMSTAR heads, each with:
      - 1 x 3.06 GHz Intel Xeon X5675 processor
      - 8 GB memory
      - 11 x 2 TB 7.2K RPM 3.5" SAS disks
    Clients:
    • 8 x Sun Fire X4170 M2 servers, each with:
      - 2 x 3.06 GHz Intel Xeon X5675 processors
      - 48 GB memory
      - 2 x 300 GB 10K RPM SAS disks

    Software Configuration:
    • Oracle Linux (Sun Fire X4800 M2)
    • Oracle Solaris 11 Express (COMSTAR for Sun Fire X4270 M2)
    • Oracle Solaris 10 9/10 (Sun Fire X4170 M2)
    • Oracle Database 11g Release 2 Enterprise Edition with Partitioning
    • Oracle iPlanet Web Server 7.0 U5
    • Tuxedo CFS-R Tier 1

    Results:
    System: Sun Fire X4800 M2
    tpmC: 5,055,888
    Price/tpmC: 0.89 USD
    Available: 6/26/2012
    Database: Oracle Database 11g
    Cluster: no
    New Order Average Response: 0.166 seconds

    Benchmark Description
    TPC-C is an OLTP system benchmark. It simulates a complete environment where a population of terminal operators executes transactions against a database. The benchmark is centered around the principal activities (transactions) of an order-entry environment. These transactions include entering and delivering orders, recording payments, checking the status of orders, and monitoring the level of stock at the warehouses.

    Key Points and Best Practices
    • Oracle Database 11g Release 2 Enterprise Edition with Partitioning scales easily to this high level of performance.
    • COMSTAR (Common Multiprotocol SCSI Target) is the software framework that enables an Oracle Solaris host to serve as a SCSI target platform. COMSTAR uses a modular approach to break the huge task of handling all the different pieces in a SCSI target subsystem into independent functional modules, which are glued together by the SCSI Target Mode Framework (STMF). The modules implementing functionality at the SCSI level (disk, tape, medium changer, etc.) are not required to know about the underlying transport, and the modules implementing the transport protocol (FC, iSCSI, etc.) are not aware of the SCSI-level functionality of the packets they are transporting. The framework hides the details of allocation, providing execution context and cleanup of SCSI commands and associated resources, and simplifies the task of writing the SCSI or transport modules.
    • Oracle iPlanet Web Server middleware is used for the client tier of the benchmark. Each web server instance supports more than a quarter-million users while satisfying the response-time requirement of the TPC-C benchmark.

    See Also
    • Oracle Press Release - Sun Fire X4800 M2 TPC-C
    • Executive Summary (tpc.org)
    • Complete Sun Fire X4800 M2 TPC-C Full Disclosure Report (tpc.org)
    • Transaction Processing Performance Council (TPC) Home Page
    • Ideas International Benchmark Page
    • Sun Fire X4800 M2 Server (oracle.com OTN)
    • Oracle Linux (oracle.com OTN)
    • Oracle Solaris (oracle.com OTN)
    • Oracle Database 11g Release 2 Enterprise Edition (oracle.com OTN)
    • Sun Storage F5100 Flash Array (oracle.com OTN)

    Disclosure Statement
    TPC Benchmark C, tpmC, and TPC-C are trademarks of the Transaction Processing Performance Council (TPC). Sun Fire X4800 M2 (8/80/160) with Oracle Database 11g Release 2 Enterprise Edition with Partitioning, 5,055,888 tpmC, $0.89 USD/tpmC, available 6/26/2012. IBM x3850 X5 (4/40/80) with DB2 ESE 9.7, 3,014,684 tpmC, $0.59 USD/tpmC, available 7/11/2011. IBM x3850 X5 (4/32/64) with DB2 ESE 9.7, 2,308,099 tpmC, $0.60 USD/tpmC, available 5/20/2011. IBM System p 570 (8/16/32) with DB2 9.0, 1,616,162 tpmC, $3.54 USD/tpmC, available 11/21/2007. Source: http://www.tpc.org/tpcc, results as of 7/15/2011.

    Read the article

  • Frequent Disconnects ubuntu desktop 12.10 x64 intel 82579V e1000e

    - by user112055
    I'm having frequent disconnects with my new install of Ubuntu 12.10. I tried updating the kernel driver to the latest Intel release, to no avail, and my own expertise is spent. It happens anywhere between 1 min and 10 min. Any ideas?

    syslog:

    Dec 1 13:51:39 andromeda kernel: [ 972.188809] audit_printk_skb: 6 callbacks suppressed
    Dec 1 13:51:39 andromeda kernel: [ 972.188813] type=1701 audit(1354398699.418:199): auid=4294967295 uid=1000 gid=1000 ses=4294967295 pid=6039 comm="chrome" reason="seccomp" sig=0 syscall=4 compat=0 ip=0x7f26777d9205 code=0x50000
    Dec 1 13:51:39 andromeda kernel: [ 972.188817] type=1701 audit(1354398699.418:200): auid=4294967295 uid=1000 gid=1000 ses=4294967295 pid=6039 comm="chrome" reason="seccomp" sig=0 syscall=4 compat=0 ip=0x7f26777d9205 code=0x50000
    Dec 1 13:51:39 andromeda kernel: [ 972.188820] type=1701 audit(1354398699.418:201): auid=4294967295 uid=1000 gid=1000 ses=4294967295 pid=6039 comm="chrome" reason="seccomp" sig=0 syscall=4 compat=0 ip=0x7f26777d9205 code=0x50000
    Dec 1 13:51:39 andromeda kernel: [ 972.188823] type=1701 audit(1354398699.418:202): auid=4294967295 uid=1000 gid=1000 ses=4294967295 pid=6039 comm="chrome" reason="seccomp" sig=0 syscall=4 compat=0 ip=0x7f26777d9205 code=0x50000
    Dec 1 13:51:39 andromeda kernel: [ 972.188825] type=1701 audit(1354398699.418:203): auid=4294967295 uid=1000 gid=1000 ses=4294967295 pid=6039 comm="chrome" reason="seccomp" sig=0 syscall=4 compat=0 ip=0x7f26777d9205 code=0x50000
    Dec 1 13:51:39 andromeda kernel: [ 972.331419] type=1701 audit(1354398699.558:204): auid=4294967295 uid=1000 gid=1000 ses=4294967295 pid=6039 comm="chrome" reason="seccomp" sig=0 syscall=2 compat=0 ip=0x7f26777d96b0 code=0x50000
    Dec 1 13:53:12 andromeda NetworkManager[1115]: <info> (eth0): carrier now OFF (device state 100, deferring action for 4 seconds)
    Dec 1 13:53:12 andromeda kernel: [ 1064.894387] e1000e: e1000e: eth0 NIC Link is Down
    Dec 1 13:53:16 andromeda NetworkManager[1115]: <info> (eth0): device state change: activated -> unavailable (reason 'carrier-changed') [100 20 40]
    Dec 1 13:53:16 andromeda NetworkManager[1115]: <info> (eth0): deactivating device (reason 'carrier-changed') [40]
    Dec 1 13:53:16 andromeda NetworkManager[1115]: <info> (eth0): canceled DHCP transaction, DHCP client pid 5946
    Dec 1 13:53:16 andromeda avahi-daemon[890]: Withdrawing address record for fe80::ea40:f2ff:fee2:4d86 on eth0.
    Dec 1 13:53:16 andromeda avahi-daemon[890]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::ea40:f2ff:fee2:4d86.
    Dec 1 13:53:16 andromeda avahi-daemon[890]: Interface eth0.IPv6 no longer relevant for mDNS.
    Dec 1 13:53:16 andromeda kernel: [ 1069.025288] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
    Dec 1 13:53:16 andromeda avahi-daemon[890]: Withdrawing address record for 192.168.11.17 on eth0.
    Dec 1 13:53:16 andromeda avahi-daemon[890]: Leaving mDNS multicast group on interface eth0.IPv4 with address 192.168.11.17.
    Dec 1 13:53:16 andromeda avahi-daemon[890]: Interface eth0.IPv4 no longer relevant for mDNS.
    Dec 1 13:53:16 andromeda NetworkManager[1115]: <warn> DNS: plugin dnsmasq update failed
    Dec 1 13:53:16 andromeda NetworkManager[1115]: <info> ((null)): removing resolv.conf from /sbin/resolvconf
    Dec 1 13:53:16 andromeda dnsmasq[1907]: setting upstream servers from DBus
    Dec 1 13:53:16 andromeda dbus[800]: [system] Activating service name='org.freedesktop.nm_dispatcher' (using servicehelper)
    Dec 1 13:53:16 andromeda dbus[800]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> (eth0): carrier now ON (device state 20)
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> (eth0): device state change: unavailable -> disconnected (reason 'carrier-changed') [20 30 40]
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Auto-activating connection '82579V'.
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) starting connection '82579V'
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> (eth0): device state change: disconnected -> prepare (reason 'none') [30 40 0]
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) scheduled...
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) started...
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) scheduled...
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) complete.
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) starting...
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> (eth0): device state change: prepare -> config (reason 'none') [40 50 0]
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) successful.
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) scheduled.
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) complete.
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) started...
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> (eth0): device state change: config -> ip-config (reason 'none') [50 70 0]
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) Beginning DHCPv4 transaction (timeout in 45 seconds)
    Dec 1 13:53:32 andromeda kernel: [ 1084.938042] e1000e: e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: Rx/Tx
    Dec 1 13:53:32 andromeda kernel: [ 1084.938049] e1000e 0000:00:19.0: eth0: 10/100 speed: disabling TSO
    Dec 1 13:53:32 andromeda kernel: [ 1084.938815] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> dhclient started with pid 6080
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) complete.
    Dec 1 13:53:32 andromeda dhclient: Internet Systems Consortium DHCP Client 4.2.4
    Dec 1 13:53:32 andromeda dhclient: Copyright 2004-2012 Internet Systems Consortium.
    Dec 1 13:53:32 andromeda dhclient: All rights reserved.
    Dec 1 13:53:32 andromeda dhclient: For info, please visit https://www.isc.org/software/dhcp/
    Dec 1 13:53:32 andromeda dhclient:
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> (eth0): DHCPv4 state changed nbi -> preinit
    Dec 1 13:53:32 andromeda dhclient: Listening on LPF/eth0/e8:40:f2:e2:4d:86
    Dec 1 13:53:32 andromeda dhclient: Sending on LPF/eth0/e8:40:f2:e2:4d:86
    Dec 1 13:53:32 andromeda dhclient: Sending on Socket/fallback
    Dec 1 13:53:32 andromeda dhclient: DHCPREQUEST of 192.168.11.17 on eth0 to 255.255.255.255 port 67
    Dec 1 13:53:32 andromeda dhclient: DHCPACK of 192.168.11.17 from 192.168.11.1
    Dec 1 13:53:32 andromeda dhclient: bound to 192.168.11.17 -- renewal in 33576 seconds.
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> (eth0): DHCPv4 state changed preinit -> reboot
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> address 192.168.11.17
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> prefix 24 (255.255.255.0)
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> gateway 192.168.11.1
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> hostname 'andromeda'
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> nameserver '192.168.11.1'
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> domain name 'hsd1.ca.comcast.net'
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Configure Commit) scheduled...
    Dec 1 13:53:32 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) started...
    Dec 1 13:53:32 andromeda avahi-daemon[890]: Joining mDNS multicast group on interface eth0.IPv4 with address 192.168.11.17.
    Dec 1 13:53:32 andromeda avahi-daemon[890]: New relevant interface eth0.IPv4 for mDNS.
    Dec 1 13:53:32 andromeda avahi-daemon[890]: Registering new address record for 192.168.11.17 on eth0.IPv4.
    Dec 1 13:53:33 andromeda NetworkManager[1115]: <info> (eth0): device state change: ip-config -> activated (reason 'none') [70 100 0]
    Dec 1 13:53:33 andromeda NetworkManager[1115]: <info> ((null)): writing resolv.conf to /sbin/resolvconf
    Dec 1 13:53:33 andromeda dnsmasq[1907]: setting upstream servers from DBus
    Dec 1 13:53:33 andromeda dnsmasq[1907]: using nameserver 192.168.11.1#53
    Dec 1 13:53:33 andromeda NetworkManager[1115]: <info> Policy set '82579V' (eth0) as default for IPv4 routing and DNS.
    Dec 1 13:53:33 andromeda NetworkManager[1115]: <info> Activation (eth0) successful, device activated.
    Dec 1 13:53:33 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) complete.
    Dec 1 13:53:33 andromeda dbus[800]: [system] Activating service name='org.freedesktop.nm_dispatcher' (using servicehelper)
    Dec 1 13:53:33 andromeda dbus[800]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
    Dec 1 13:53:33 andromeda avahi-daemon[890]: Joining mDNS multicast group on interface eth0.IPv6 with address fe80::ea40:f2ff:fee2:4d86.
    Dec 1 13:53:33 andromeda avahi-daemon[890]: New relevant interface eth0.IPv6 for mDNS.
    Dec 1 13:53:33 andromeda avahi-daemon[890]: Registering new address record for fe80::ea40:f2ff:fee2:4d86 on eth0.*.
    Dec 1 13:53:41 andromeda ntpdate[6154]: adjust time server 91.189.94.4 offset 0.000928 sec
    Dec 1 13:53:50 andromeda NetworkManager[1115]: <info> (eth0): carrier now OFF (device state 100, deferring action for 4 seconds)
    Dec 1 13:53:50 andromeda kernel: [ 1102.980003] e1000e: e1000e: eth0 NIC Link is Down
    Dec 1 13:53:54 andromeda NetworkManager[1115]: <info> (eth0): device state change: activated -> unavailable (reason 'carrier-changed') [100 20 40]
    Dec 1 13:53:54 andromeda NetworkManager[1115]: <info> (eth0): deactivating device (reason 'carrier-changed') [40]
    Dec 1 13:53:54 andromeda NetworkManager[1115]: <info> (eth0): canceled DHCP transaction, DHCP client pid 6080
    Dec 1 13:53:54 andromeda avahi-daemon[890]: Withdrawing address record for fe80::ea40:f2ff:fee2:4d86 on eth0.
    Dec 1 13:53:54 andromeda avahi-daemon[890]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::ea40:f2ff:fee2:4d86.
    Dec 1 13:53:54 andromeda avahi-daemon[890]: Interface eth0.IPv6 no longer relevant for mDNS.
    Dec 1 13:53:54 andromeda avahi-daemon[890]: Withdrawing address record for 192.168.11.17 on eth0.
    Dec 1 13:53:54 andromeda avahi-daemon[890]: Leaving mDNS multicast group on interface eth0.IPv4 with address 192.168.11.17.
    Dec 1 13:53:54 andromeda kernel: [ 1107.025959] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
    Dec 1 13:53:54 andromeda NetworkManager[1115]: <warn> DNS: plugin dnsmasq update failed
    Dec 1 13:53:54 andromeda NetworkManager[1115]: <info> ((null)): removing resolv.conf from /sbin/resolvconf
    Dec 1 13:53:54 andromeda avahi-daemon[890]: Interface eth0.IPv4 no longer relevant for mDNS.
    Dec 1 13:53:54 andromeda dnsmasq[1907]: setting upstream servers from DBus
    Dec 1 13:53:54 andromeda dbus[800]: [system] Activating service name='org.freedesktop.nm_dispatcher' (using servicehelper)
    Dec 1 13:53:54 andromeda dbus[800]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> (eth0): carrier now ON (device state 20)
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> (eth0): device state change: unavailable -> disconnected (reason 'carrier-changed') [20 30 40]
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Auto-activating connection '82579V'.
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) starting connection '82579V'
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> (eth0): device state change: disconnected -> prepare (reason 'none') [30 40 0]
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) scheduled...
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) started...
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) scheduled...
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) complete.
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) starting...
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> (eth0): device state change: prepare -> config (reason 'none') [40 50 0]
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) successful.
    Dec 1 13:54:10 andromeda kernel: [ 1123.167668] e1000e: e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: Rx/Tx
    Dec 1 13:54:10 andromeda kernel: [ 1123.167675] e1000e 0000:00:19.0: eth0: 10/100 speed: disabling TSO
    Dec 1 13:54:10 andromeda kernel: [ 1123.168430] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) scheduled.
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) complete.
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) started...
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> (eth0): device state change: config -> ip-config (reason 'none') [50 70 0]
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) Beginning DHCPv4 transaction (timeout in 45 seconds)
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> dhclient started with pid 6212
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) complete.
    Dec 1 13:54:10 andromeda dhclient: Internet Systems Consortium DHCP Client 4.2.4
    Dec 1 13:54:10 andromeda dhclient: Copyright 2004-2012 Internet Systems Consortium.
    Dec 1 13:54:10 andromeda dhclient: All rights reserved.
    Dec 1 13:54:10 andromeda dhclient: For info, please visit https://www.isc.org/software/dhcp/
    Dec 1 13:54:10 andromeda dhclient:
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> (eth0): DHCPv4 state changed nbi -> preinit
    Dec 1 13:54:10 andromeda dhclient: Listening on LPF/eth0/e8:40:f2:e2:4d:86
    Dec 1 13:54:10 andromeda dhclient: Sending on LPF/eth0/e8:40:f2:e2:4d:86
    Dec 1 13:54:10 andromeda dhclient: Sending on Socket/fallback
    Dec 1 13:54:10 andromeda dhclient: DHCPREQUEST of 192.168.11.17 on eth0 to 255.255.255.255 port 67
    Dec 1 13:54:10 andromeda dhclient: DHCPACK of 192.168.11.17 from 192.168.11.1
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> (eth0): DHCPv4 state changed preinit -> reboot
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> address 192.168.11.17
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> prefix 24 (255.255.255.0)
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> gateway 192.168.11.1
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> hostname 'andromeda'
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> nameserver '192.168.11.1'
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> domain name 'hsd1.ca.comcast.net'
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Configure Commit) scheduled...
    Dec 1 13:54:10 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) started...
    Dec 1 13:54:10 andromeda avahi-daemon[890]: Joining mDNS multicast group on interface eth0.IPv4 with address 192.168.11.17.
    Dec 1 13:54:10 andromeda dhclient: bound to 192.168.11.17 -- renewal in 35416 seconds.
    Dec 1 13:54:10 andromeda avahi-daemon[890]: New relevant interface eth0.IPv4 for mDNS.
    Dec 1 13:54:10 andromeda avahi-daemon[890]: Registering new address record for 192.168.11.17 on eth0.IPv4.
    Dec 1 13:54:11 andromeda NetworkManager[1115]: <info> (eth0): device state change: ip-config -> activated (reason 'none') [70 100 0]
    Dec 1 13:54:11 andromeda NetworkManager[1115]: <info> ((null)): writing resolv.conf to /sbin/resolvconf
    Dec 1 13:54:11 andromeda dnsmasq[1907]: setting upstream servers from DBus
    Dec 1 13:54:11 andromeda dnsmasq[1907]: using nameserver 192.168.11.1#53
    Dec 1 13:54:11 andromeda NetworkManager[1115]: <info> Policy set '82579V' (eth0) as default for IPv4 routing and DNS.
    Dec 1 13:54:11 andromeda NetworkManager[1115]: <info> Activation (eth0) successful, device activated.
    Dec 1 13:54:11 andromeda NetworkManager[1115]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) complete.
    Dec 1 13:54:11 andromeda dbus[800]: [system] Activating service name='org.freedesktop.nm_dispatcher' (using servicehelper)
    Dec 1 13:54:11 andromeda dbus[800]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
    Dec 1 13:54:12 andromeda avahi-daemon[890]: Joining mDNS multicast group on interface eth0.IPv6 with address fe80::ea40:f2ff:fee2:4d86.
    Dec 1 13:54:12 andromeda avahi-daemon[890]: New relevant interface eth0.IPv6 for mDNS.
    Dec 1 13:54:12 andromeda avahi-daemon[890]: Registering new address record for fe80::ea40:f2ff:fee2:4d86 on eth0.*.
    Dec 1 13:54:19 andromeda ntpdate[6286]: adjust time server 91.189.94.4 offset 0.001142 sec

    $ lspci -v
    00:19.0 Ethernet controller: Intel Corporation 82579V Gigabit Network Connection (rev 04)
        Subsystem: Intel Corporation Device 2031
        Flags: bus master, fast devsel, latency 0, IRQ 45
        Memory at f7f00000 (32-bit, non-prefetchable) [size=128K]
        Memory at f7f39000 (32-bit, non-prefetchable) [size=4K]
        I/O ports at f040 [size=32]
        Capabilities: [c8] Power Management version 2
        Capabilities: [d0] MSI: Enable+ Count=1/1 Maskable- 64bit+
        Capabilities: [e0] PCI Advanced Features
        Kernel driver in use: e1000e
        Kernel modules: e1000e

    $ modinfo e1000e
    filename: /lib/modules/3.5.0-19-generic/kernel/drivers/net/e1000e/e1000e.ko
    version: 2.1.4-NAPI
    license: GPL
    description: Intel(R) PRO/1000 Network Driver
    author: Intel Corporation, <[email protected]>
    srcversion: 0809529BE0BBC44883956AF
    alias: pci:v00008086d0000153Bsv*sd*bc*sc*i*
    alias: pci:v00008086d0000153Asv*sd*bc*sc*i*
    alias: pci:v00008086d00001503sv*sd*bc*sc*i*
    alias: pci:v00008086d00001502sv*sd*bc*sc*i*
    alias: pci:v00008086d000010F0sv*sd*bc*sc*i*
    alias: pci:v00008086d000010EFsv*sd*bc*sc*i*
    alias: pci:v00008086d000010EBsv*sd*bc*sc*i*
    alias: pci:v00008086d000010EAsv*sd*bc*sc*i*
    alias: pci:v00008086d00001525sv*sd*bc*sc*i*
    alias: pci:v00008086d000010DFsv*sd*bc*sc*i*
    alias: pci:v00008086d000010DEsv*sd*bc*sc*i*
    alias: pci:v00008086d000010CEsv*sd*bc*sc*i*
    alias: pci:v00008086d000010CDsv*sd*bc*sc*i*
    alias: pci:v00008086d000010CCsv*sd*bc*sc*i*
    alias: pci:v00008086d000010CBsv*sd*bc*sc*i*
    alias: pci:v00008086d000010F5sv*sd*bc*sc*i*
    alias: pci:v00008086d000010BFsv*sd*bc*sc*i*
    alias: pci:v00008086d000010E5sv*sd*bc*sc*i*
    alias: pci:v00008086d0000294Csv*sd*bc*sc*i*
    alias: pci:v00008086d000010BDsv*sd*bc*sc*i*
    alias: pci:v00008086d000010C3sv*sd*bc*sc*i*
    alias: pci:v00008086d000010C2sv*sd*bc*sc*i*
    alias: pci:v00008086d000010C0sv*sd*bc*sc*i*
    alias: pci:v00008086d00001501sv*sd*bc*sc*i*
    alias: pci:v00008086d00001049sv*sd*bc*sc*i*
    alias: pci:v00008086d0000104Dsv*sd*bc*sc*i*
    alias: pci:v00008086d0000104Bsv*sd*bc*sc*i*
    alias: pci:v00008086d0000104Asv*sd*bc*sc*i*
    alias: pci:v00008086d000010C4sv*sd*bc*sc*i*
    alias: pci:v00008086d000010C5sv*sd*bc*sc*i*
    alias: pci:v00008086d0000104Csv*sd*bc*sc*i*
    alias: pci:v00008086d000010BBsv*sd*bc*sc*i*
    alias: pci:v00008086d00001098sv*sd*bc*sc*i*
    alias: pci:v00008086d000010BAsv*sd*bc*sc*i*
    alias: pci:v00008086d00001096sv*sd*bc*sc*i*
    alias: pci:v00008086d0000150Csv*sd*bc*sc*i*
    alias: pci:v00008086d000010F6sv*sd*bc*sc*i*
    alias: pci:v00008086d000010D3sv*sd*bc*sc*i*
    alias: pci:v00008086d0000109Asv*sd*bc*sc*i*
    alias: pci:v00008086d0000108Csv*sd*bc*sc*i*
    alias: pci:v00008086d0000108Bsv*sd*bc*sc*i*
    alias: pci:v00008086d0000107Fsv*sd*bc*sc*i*
    alias: pci:v00008086d0000107Esv*sd*bc*sc*i*
    alias: pci:v00008086d0000107Dsv*sd*bc*sc*i*
    alias: pci:v00008086d000010B9sv*sd*bc*sc*i*
    alias: pci:v00008086d000010D5sv*sd*bc*sc*i*
    alias: pci:v00008086d000010DAsv*sd*bc*sc*i*
    alias: pci:v00008086d000010D9sv*sd*bc*sc*i*
    alias: pci:v00008086d00001060sv*sd*bc*sc*i*
    alias: pci:v00008086d000010A5sv*sd*bc*sc*i*
    alias: pci:v00008086d000010BCsv*sd*bc*sc*i*
    alias: pci:v00008086d000010A4sv*sd*bc*sc*i*
    alias: pci:v00008086d0000105Fsv*sd*bc*sc*i*
    alias: pci:v00008086d0000105Esv*sd*bc*sc*i*
    depends:
    vermagic: 3.5.0-19-generic SMP mod_unload modversions
    parm: copybreak:Maximum size of packet that is copied to a new buffer on receive (uint)
    parm: TxIntDelay:Transmit Interrupt Delay (array of int)
    parm: TxAbsIntDelay:Transmit Absolute Interrupt Delay (array of int)
    parm: RxIntDelay:Receive Interrupt Delay (array of int)
    parm: RxAbsIntDelay:Receive Absolute Interrupt Delay (array of int)
    parm: InterruptThrottleRate:Interrupt Throttling Rate (array of int)
    parm: IntMode:Interrupt Mode (array of int)
    parm: SmartPowerDownEnable:Enable PHY smart power down (array of int)
    parm: KumeranLockLoss:Enable Kumeran lock loss workaround (array of int)
    parm: CrcStripping:Enable CRC Stripping, disable if your BMC needs the CRC (array of int)
    parm: EEE:Enable/disable on parts that support the feature (array of int)
    parm: Node:[ROUTING] Node to allocate memory on, default -1 (array of int)
    parm: debug:Debug level (0=none,...,16=all) (int)
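    One knob worth experimenting with here, based purely on the parm list above and not a confirmed fix: the driver exposes an EEE module parameter, and link power-saving features are a common culprit for flapping links. A minimal sketch, using a hypothetical file name:

    # /etc/modprobe.d/e1000e-eee-off.conf (hypothetical file name)
    # EEE appears in the parm list above; 0 should disable Energy-Efficient Ethernet.
    options e1000e EEE=0

    Then reload the driver (sudo modprobe -r e1000e && sudo modprobe e1000e) or reboot, and watch whether the link still drops.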

    Read the article

  • SQL SERVER – Automation Process Good or Ugly

    - by pinaldave
    This blog post is written in response to T-SQL Tuesday, hosted by SQL Server Insane Asylum. The idea of this post really caught my attention. Automation - something getting itself done after the initial programming - is my understanding of the subject. The very next thought was: is it good or evil? The reality is there is no single right answer. However, let me quickly note a few things, and then I would like to request your help to complete this post. We will start with the positive parts of SQL Server where automation happens.
    The Good
    If I start thinking of SQL Server and automation, the very first thing that comes to my mind is SQL Agent, which runs various jobs. Once I configure a task or job, it runs fine (till something goes wrong!). Automation has its own advantages. We have all used SQL Agent for so many things - backups, various validation jobs, maintenance jobs and numerous other things. What other kinds of automation tasks do you run on your database server?
    The Ugly
    This part is very interesting, because it can get really ugly(!). During my career I have found many bad automated agent jobs:
    • A client had an agent job that dropped the clean buffers every hour.
    • A client used Database Mail to send regular emails instead of only the necessary alert-related emails.
    • The best one - a client used the new Missing Index and Unused Index scripts in a SQL Agent job to follow their suggestions 100%. Believe me, I have never seen such a badly performing and hard-to-optimize database. (I ended up dropping all non-clustered indexes on the development server, ran the production workload against it again, and then configured the optimal indexes.)
    Shrinking a database is a performance killer. It should never be automated. SQL SERVER – Shrinking Database is Bad – Increases Fragmentation – Reduces Performance
    The one I hate the most is auto-shrink on a database. It has given me a hard time in my career quite a few times. SQL SERVER – SHRINKDATABASE For Every Database in the SQL Server
    Automation is necessary, but common sense is a must when creating automation.
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
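    In that spirit, it helps to audit what is already silently automated before adding more. The snippet below is a minimal sketch of my own (an illustration, not from the original post) that lists every database with the auto-shrink option switched on; it assumes System.Data.SqlClient and integrated security against a local default instance:

    using System;
    using System.Data.SqlClient;

    class AutoShrinkAudit
    {
        static void Main()
        {
            // Placeholder connection string - point this at your own server.
            const string connectionString = "Server=localhost;Integrated Security=true";

            // is_auto_shrink_on = 1 flags databases with the AUTO_SHRINK option enabled.
            const string query = "SELECT name FROM sys.databases WHERE is_auto_shrink_on = 1";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(query, connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Each row is a database that will shrink itself behind your back.
                        Console.WriteLine(reader.GetString(0));
                    }
                }
            }
        }
    }

    Turning the option off (ALTER DATABASE ... SET AUTO_SHRINK OFF) then becomes a deliberate, reviewed change rather than another piece of blind automation.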

    Read the article

  • Emoti-phrases

    - by Tony Davis
    Surely the next radical step in the development of user-interface design is for applications to react appropriately to the rising tide of bewilderment, frustration or antipathy of their users. When an application understands that rapport is lost, it should respond accordingly. When we, for example, become confused by an unforgiving interface, shouldn't there be a way of signalling our bewilderment and having the application respond appropriately? There is surprisingly little in the current interface standards that would help. If we're getting frustrated with an unresponsive application, perhaps we could let it know of our increasing irritation by means of an "I'm getting angry and exasperated" slider. Although by 'responding appropriately' I don't mean playing a "we are experiencing unusually heavy traffic: your application usage is important to us" message, accompanied by calming muzak. When confronted with a tide of wizards, 'are you sure?' messages, or page-after-page of tiresome and barely-relevant options, how one yearns for a handy 'JFDI' (Just Flaming Do It) button. One click and the application miraculously desists with its annoying questions and just gets on with the job, using the defaults, or whatever we selected last time. Much more satisfying, and more direct for most developers and DBAs, however, would be the facility to communicate with the application via a twitter-style input field, or via parameters to command-line applications ("I don't want a wide-ranging debate with you; just open the bl**dy PDF!" or "Don't forget which of us has the close button"). Although to avoid too much cultural dependence, perhaps we should take the lead from emoticons, and use a set of standardized emoti-phrases such as 'sez you', 'huh?', 'Pshaw!', or 'meh', which could be used to vent a range of feelings in any given application, whether it be SQL Server stubbornly refusing to give us the result we are expecting, or an online survey getting too personal. Or a 'Lingua Glaswegia' perhaps: 'Atsabelter' ("very good"), 'Atspish' ("must try harder"), 'AnThenYerArsFellAff' ("I don't quite trust these results"), 'BileYerHeid' or 'ShutYerGub' ("please stop these inane questions"). There would, of course, have to be an ANSI standards body to define the phrases that were acceptable. Presumably, there would be a tussle amongst the different international standards organizations. Meanwhile Oracle, Microsoft and Apple would each release non-standard extensions. Time then, surely, to plant emoti-phrases on the lot of them and develop a user-driven consensus. Send us your suggestions! The best one will win an iPod nano! Cheers! Tony.

    Read the article

  • Tron: Legacy, 3D goggles, and embedded UA

    - by Roger Hart
    The 3D edition of Tron: Legacy opens with embedded user assistance. The film starts with an iconic white-on-black command-prompt message exhorting viewers to keep their 3D glasses on throughout. I can't quote it verbatim and, at the time of writing, neither could anybody findable with five minutes of googling. But it was something like: "Although parts of the movie are 2D, it was shot in 3D, and glasses should be worn at all times. This is how it was intended to be viewed." Yeah - "intended". That part is verbatim. Wow. Now, I appreciate that even out of the small sub-set of readers who care a rat's ass for critical theory, few will be quite so gung-ho for the whole "death of the author" shtick as I tend to be. And yes, this is ergonomic rather than interpretive, but really - telling an audience how you expect them to watch a movie? That's up there with Big Steve's "you're holding it wrong". Even if it solves the problem, it's pretty arrogant. If anything, it's worse than RTFM. And if enough people are doing it wrong that you have to include the announcement, then maybe - just maybe - you've got a UX and/or design problem. Plus, wearing current 3D glasses is like sitting in a darkened room cosplaying the lovechild of Spider Jerusalem and Jarvis Cocker. Ok, so that observation was weirder than it was helpful; but seriously, nobody wants to wear the glasses if they don't have to. They ruin the visual experience of the non-3D sections, and personally, I find them pretty disruptive to the suspension of disbelief. This is an old, old problem, and I'm carping on about it because Tron is enjoyable mass-market slush. It's easier for me to say "no, I can't just put some text on it. It's fundamentally broken, redesign it." in the middle of a small-ish, agile software project than it would be for some beleaguered production assistant at the end of editing a $200 million movie. But lots of folks in software don't even get to do that. Way more people are going to see Tron, and be annoyed by this, than will ever read a technical communication blog. So hopefully, after two hours of being mildly annoyed, wanting to turn the brightness up, and slowly getting a headache, they'll realise something very, very important: you just can't document your way out of a shoddy UI.

    Read the article

  • Exadata support for ACFS (and thus, 10gR2) now available!

    - by Robert Freeman
    Really? Exadata, ACFS and 10gR2? If you work with Exadata you are probably aware that ACFS has not been supported - until now! ACFS is now supported on Exadata if you are running Grid Infrastructure version 12.1.0.2 or later. This new support is described in MOS note 1326938.1. Exadata support for ACFS is also mentioned in MOS note 888828.1, which is the king of all Exadata notes on MOS. The upshot is that you can now run Oracle Database 10gR2 on Exadata using ACFS as the storage for the Oracle Database.
    Don't Overreact and Just Throw Everything on ACFS!
    First, let's be clear that ACFS is not an alternative to running your Exadata databases on ASM. If you are running any production or non-production performance-sensitive Oracle databases on 11.2 or 12.1, then you should be running them on ASM disks that are associated with the storage cells. The use case for ACFS is generally limited to the following:
    • Running any Oracle 10gR2 databases on Exadata.
    • Running Oracle 11gR2 development or test databases that require rapid cloning and that do not require the performance benefits of the Exadata storage cells.
    If you are running Oracle Database 12c and you need snapshot/clone kinds of capabilities, then you should be using Oracle Multitenant and the features present in that option (remember, though, that Multitenant is a licensed option).
    The Fine Print
    There are some requirements you will need to meet if you are going to run ACFS on Exadata:
    • You have to use Oracle Linux.
    • You must use GI 12.1.0.2 or later.
    • If you wish to use HCC, then you must apply the fix for bug 19136936 to your system. This bug and its associated patch do not appear on MOS (as of the time that I wrote this), so you will need to open an SR and get support to provide the patch for you.
    The Best Use Case for ACFS
    Even though Oracle Database 10gR2 is at end of life, it remains in use in a large number of places. This has caused problems when choosing to implement Exadata as a consolidation platform, or when choosing it during a hardware refresh process. Now that ACFS is supported, Exadata has become even more flexible and affords customers greater flexibility when migrating to Exadata and Engineered Systems. While all of the features of Exadata might not be available to a 10.2.0.4 database, certainly the improved processing capabilities of Exadata - its fast-as-heck InfiniBand network fabric, additional memory, reduced power requirements and a whole host of other features - justify moving these databases to Exadata now. This will also make it easier to upgrade these databases when the time comes!
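    Before planning such a move, a quick way to confirm that a given cluster meets the Grid Infrastructure requirement (my suggestion, not from the MOS notes above) is to ask Clusterware for its active version from any cluster node:

    $ crsctl query crs activeversion

    The output reports the active Clusterware version; anything at or above 12.1.0.2 clears the ACFS bar described here.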

    Read the article

  • How do I get the Apple Wireless Keyboard Working in 10.10?

    - by Jamie
    So I've gone and bought a Magic Mouse and an Apple Wireless (non-numeric) Keyboard. The Magic Mouse worked out of the box almost perfectly, except for the forward/back gesture, which still isn't functioning, whereas the keyboard didn't. It has constant trouble with the Bluetooth connection. Only the 7, 8 and 9 keys and the volume media keys produce the correct output. Pressing every single key on the keyboard gives this output: 789/=456*123-0.+ When I use Blueman, the keyboard can be set up and shows up in "Devices", but I get a warning when I click "Setup": "Device added successfully, but failed to connect" (although removing the keyboard and setting it up as a new device doesn't produce this error). Using gnome-bluetooth I have encountered no error messages, but it connects properly less often than with Blueman, and I can still only type the aforementioned output. What am I not doing? Where is this going wrong?
    EDIT: I have read http://ubuntuforums.org/showthread.php?t=224673 inside out several times, to no avail. It seems these commands don't work for me with the Apple peripherals:
    sudo hidd --search
    hcitool scan
    Fortunately I have the luxury of a 1 TB hard drive, near-limitless patience and no job. I installed a fresh Ubuntu 10.10 64-bit (albeit a smaller install than mine) and, after updating and restarting for the first time, I set up my devices in exactly the same way as I had learnt on my original install. I succeeded once again with the mouse and, to my joy, with the keyboard also. Though I could not seem to find Alt+F2 and had to reconfigure that and several other keyboard shortcuts, the keyboard is working, and in spectacular fashion. Still, this leaves me with the issue of my original install. I returned to it with some new-found knowledge but failed again. Perhaps I have a missing dependency? I did uninstall Bluetooth after the initial setup and reinstalled it recently for the purpose of these peripherals. Maybe it's because I'm running 64-bit? This is still not solved, but it is easily avoided by not changing too much from the original install. Just hide stuff or turn it off; don't uninstall too much.

    Read the article

  • My thoughts on the future of the web with respect to flash, plugins, etc…

    - by joelvarty
    More than 10 years ago I was coding Java applets. They were great at the time because I could reasonably expect them to run the same way in Netscape and Internet Explorer. I could also reliably do asynchronous networking back to the server. But then Microsoft pulled their native Java runtime from Windows and Internet Explorer, and it got a lot harder to get applets running in people's browsers. So I started writing ActiveX controls for IE and Java applets for Netscape. Then I switched to Flash - not for too long, but it was enough for me to see that it was a capable and curious implementation of animation, multimedia and script. I even wrote a few Silverlight controls, but then I stopped. I stepped back from all of the "richness" and "interactivity" and I thought about things like accessibility and SEO. I wondered how my apps and sites might appear to the greater world. I wondered how the developers I am working with, or who might be inheriting my code down the road, might interact with it. And I thought to myself: what the hell was I thinking? Those embedded controls are not what the web is about, and they run contrary to nearly all of the things that make the web exciting and foster innovation in and around it. Those plugins or controls, or whatever you want to refer to them as, are only stop-gaps that fill a hole in the basic HTML/script/CSS specifications, and that's all they should ever be used for. Full stop. Period. For instance, I still make use of a nifty little Flash control called SWFUpload because it lets me check file size before an upload starts. I can do the same thing from a Silverlight control. But rest assured, if I could do this from native JavaScript, I would in a second. In fact, the only reason I chose SWFUpload over a ton of other alternatives is that it has a great JavaScript API, so I can do (nearly) all of the UI in regular HTML. And I ALWAYS provide a non-Flash alternative for uploading, and for the rest of any website where the designer has insisted on some piece of creativity that requires Flash (usually because the designer is also the Flash developer, but that's an aside…). The web is about openness, and about exposing that openness in such a way that it can be taken advantage of as a small part of a greater whole. Sure we need security and authentication and SSL and all that stuff, but for me it's something more profound. For me, the web is mostly about exposing something that delivers meaning. What meaning can we derive from an <object> tag? more later - joel

    Read the article

  • Requesting Delegation (ActAs) Tokens using WSTrustChannel (as opposed to Configuration Madness)

    - by Your DisplayName here!
    Delegation using the ActAs approach has some interesting security features:
    • A security token service can make authorization and validation checks before issuing the ActAs token.
    • Combined with proof keys, you get non-repudiation features.
    • The ultimate receiver sees the original caller as the direct caller and can optionally traverse the delegation chain.
    • Encryption and audience restriction can be tied down.
    Most samples out there (including the SDK sample) use the CreateChannelActingAs extension method from WIF to request ActAs tokens. This method builds on top of the WCF binding configuration, which may not always be suitable for your situation. You can also use the WSTrustChannel to request ActAs tokens. This allows direct and programmatic control over bindings and configuration, and is my preferred approach. The method below requests an ActAs token based on a bootstrap token. The returned token can then be used directly with the CreateChannelWithIssuedToken extension method.

    private SecurityToken GetActAsToken(SecurityToken bootstrapToken)
    {
        var factory = new WSTrustChannelFactory(
            new UserNameWSTrustBinding(SecurityMode.TransportWithMessageCredential),
            new EndpointAddress(_stsAddress));

        factory.TrustVersion = TrustVersion.WSTrust13;
        factory.Credentials.UserName.UserName = "middletier";
        factory.Credentials.UserName.Password = "abc!123";

        var rst = new RequestSecurityToken
        {
            AppliesTo = new EndpointAddress(_serviceAddress),
            RequestType = RequestTypes.Issue,
            KeyType = KeyTypes.Symmetric,
            ActAs = new SecurityTokenElement(bootstrapToken)
        };

        var channel = factory.CreateChannel();
        var delegationToken = channel.Issue(rst);

        return delegationToken;
    }

    HTH
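    For completeness, here is a minimal sketch of how the returned token might be consumed on the middle tier. The service contract (IOrderService) and its operation are hypothetical placeholders, and this assumes the WIF extension methods ConfigureChannelFactory and CreateChannelWithIssuedToken are available:

    // Hypothetical middle-tier usage; IOrderService and PlaceOrder are placeholders.
    var serviceFactory = new ChannelFactory<IOrderService>(
        new WS2007FederationHttpBinding(),
        new EndpointAddress(_serviceAddress));

    // WIF extension: wires the factory up for issued-token credentials.
    serviceFactory.ConfigureChannelFactory();

    var actAsToken = GetActAsToken(bootstrapToken);

    // The channel authenticates with the ActAs token, so the back end sees the
    // middle tier as the direct caller and the original caller inside ActAs.
    var proxy = serviceFactory.CreateChannelWithIssuedToken(actAsToken);
    proxy.PlaceOrder();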

    Read the article

  • Building InstallShield based Installers using Team Build 2010

    - by jehan
    For the last few weeks, I have been working on application packaging using all the widely used tools: InstallShield, WISE, WiX and Visual Studio Installer. So I thought it would be good to post about how to build the installers developed with these tools using Team Build 2010. This post will focus on how to build InstallShield-generated packages with Team Build 2010. For the release of VS2010, Microsoft partnered with Flexera, the makers of InstallShield, to create InstallShield Limited Edition especially for customers of Visual Studio. Microsoft first planned to release WiX (Windows Installer XML) with VS2010, but later dropped WiX from VS2010 for reasons best known to them and partnered with InstallShield for the Limited Edition. This disappointed a lot of people, because InstallShield Limited Edition provides only a few of InstallShield's features and may not be feasible for building complex installer packages, whereas WiX is open source with no license costs and has proved efficient at building the most complex packages. Only the last three features in the list below are available in InstallShield Limited Edition, out of the full set offered by InstallShield:
    • Standalone Build System - Maintain a clean build machine by using only the part of InstallShield that compiles the installations.
    • InstallShield Best Practices Validation Suite - Avoid common installation issues.
    • Try and Die Functionality - Create a fully functional trial version of your product.
    • InstallShield Repackager - Create Windows Installer setups from any legacy installation.
    • Multilingual Support - Present installation text in up to 35 languages.
    • Microsoft App-V™ Support - Deploy your applications as App-V virtual packages that run without conflict.
    • Industry-Standard InstallScript - Achieve maximum flexibility in your installations.
    • Dialog Editor - Modify the layout of existing end-user dialogs, create new custom dialogs, and more.
    • Patch Creation - Build updates and patches for your products.
    • Setup Prerequisite Editor - Easily control prerequisite restart behavior and source locations.
    • String Editor View - Control the localizable text strings displayed at run time with this spreadsheet-like table.
    • Text File Changes View - Configure search-and-replace actions for content in text files to be modified at run time.
    • Virtual Machine Detection - Block your installations from running on virtual machines.
    • Unicode Support - Improve multi-language installation development.
    • Support for 64-Bit COM Extraction - Extract COM data from a 64-bit COM server.
    • Windows Installer Installation Chaining - Add MSI packages to your main installation and chain them together.
    • XML Support - Save time by quickly testing XML configuration changes to installation projects.
    • Billboard Support for Custom Branding - Display Adobe Flash billboards and other graphic files during the install process.
    • SaaS Support (IIS 7 and SSL Technologies) - Easily deploy Windows-based Web applications.
    • Project Assistant - Jumpstart a project by using a simplified set of views.
    • Support for Digital Signatures - Save time by digitally signing all your files at build time.
    • Easily Run Custom Actions - Schedule a custom action to run at precisely the right moment in your installation.
    • Installation Prerequisites - Check for and install prerequisites before your installation is executed.
    To create an InstallShield project in Visual Studio and build it using Team Build 2010, you first have to add the InstallShield project template to your solution. If you want to use InstallShield Limited Edition you can add it from File → New → Project → Other Project Types → Setup and Deployment → InstallShield LE, and if you are using another version of InstallShield, you add it from File → New → Project → InstallShield Projects. Here, I'm using InstallShield 2011 Premier edition as I already have it installed. I have created a simple package for the TailSpin application which has a feature called Web, a few components and an IIS web site for the TailSpin application. Before starting on this, I thought I might need to build the package by calling the Invoke Process activity in the build process template, or have to create a new custom activity. But it got built without any changes to the build process template. However, the build failed with the below error message:
    C:\Program Files (x86)\MSBuild\InstallShield\2011\InstallShield.targets (68): The "InstallShield.Tasks.InstallShield" task could not be loaded from the assembly C:\Program Files (x86)\MSBuild\InstallShield\2010Limited\InstallShield.Tasks.dll. Could not load file or assembly 'file:///C:\Program Files(x86)\MSBuild\InstallShield\2011\InstallShield.Tasks.dll' or one of its dependencies. An attempt was made to load a program with an incorrect format. Confirm that the <UsingTask> declaration is correct, that the assembly and all its dependencies are available, and that the task contains a public class that implements Microsoft.Build.Framework.ITask.
    This error is due to the 64-bit build machine I am using; the issue is reproducible whenever you queue a build on a 64-bit build machine. To avoid it, ensure that the build definition for your InstallShield project loads the InstallShield.Tasks.dll file (which is a 32-bit file); otherwise, you will encounter this build error informing you that the InstallShield.Tasks.dll file could not be loaded. To select the 32-bit version of MSBuild, click the Process tab of your build definition in Team Explorer. Then, under the Advanced node, find the MSBuild Platform setting, and select x86. Note that if you are using a 32-bit build machine, you can select either Auto or x86 for the MSBuild Platform setting. Once I made the above change, the build succeeded.
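    The same 32-bit requirement applies if you compile the solution from a command prompt rather than Team Build: the 32-bit MSBuild.exe lives under the Framework directory (the 64-bit one is under Framework64). A minimal sketch, assuming a .NET 4.0 build machine; the solution name and configuration here are illustrative only:
    rem Invoke the 32-bit MSBuild explicitly so the 32-bit InstallShield.Tasks.dll can load
    "%windir%\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe" TailSpin.sln /t:Build /p:Configuration=Release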

    Read the article

  • Keep Track of Your Tasks with toDoo

    - by Asian Angel
    A task list can be convenient, but most of the time you cannot include details for those tasks, or you have to have an online account to do so. If you want to keep your task list with you on your computer or laptop and be able to add plenty of detail, then you might want to look at toDoo. Note: Requires Adobe AIR (download link at the bottom of the article). toDoo in Action: Once you have installed toDoo everything is rather straightforward. The first time that you start toDoo there will be temporary “fill-in” text in the Subject and Details areas; simply highlight the temporary text and add your own information. Notice that, if desired, you can easily set a custom date and time for your tasks right below the Details area. Note: toDoo does not minimize to the system tray. Once you have everything set, all that you need to do is click on “add task”. Here was our first new task being viewed in the toDoo Description tab. Time to add a second task; here you can see the drop-down calendar. You can scroll through and select a different month very easily; just click on the desired day and it will be set automatically. Adding our second task… If you need to edit any of the details for a particular task, you can do so in the Edit toDoo tab. This nice little app is convenient and easy to use. Conclusion: toDoo is a simple, straightforward app that lets you keep track of your task list and relevant details without an online account (especially helpful if you are without a wireless connection at a given moment). If you are looking for more of a list approach that runs on your desktop, then check out our article on Doomi here. Links: Download ToDoo at Softpedia; Download ToDoo at Adobe Marketplace; Download Adobe AIR.

    Read the article

  • SQL SERVER – Online Index Rebuilding Index Improvement in SQL Server 2012

    - by pinaldave
    Have you ever faced a situation where you see something working and you feel it should not be working? Well, I had a similar moment a few days ago. I know that SQL Server 2008 supports online indexing. However, I also know that I cannot rebuild an index ONLINE if I have used VARCHAR(MAX), NVARCHAR(MAX) or a few other data types. While I held this belief very strongly, I came across a situation where I had to go online and do a little bit of reading in Books Online. Here is a similar example. First of all, run the following code in SQL Server 2008 or SQL Server 2008 R2:
    USE TempDB
    GO
    CREATE TABLE TestTable (ID INT, FirstCol NVARCHAR(10), SecondCol NVARCHAR(MAX))
    GO
    CREATE CLUSTERED INDEX [IX_TestTable] ON TestTable (ID)
    GO
    CREATE NONCLUSTERED INDEX [IX_TestTable_Cols] ON TestTable (FirstCol) INCLUDE (SecondCol)
    GO
    USE [tempdb]
    GO
    ALTER INDEX [IX_TestTable_Cols] ON [dbo].[TestTable] REBUILD WITH (ONLINE = ON)
    GO
    DROP TABLE TestTable
    GO
    Now run the same code in SQL Server 2012 and observe the difference between the two executions. In SQL Server 2008/R2 it throws the following error: Msg 2725, Level 16, State 2, Line 1 An online operation cannot be performed for index ‘IX_TestTable_Cols’ because the index contains column ‘SecondCol’ of data type text, ntext, image, varchar(max), nvarchar(max), varbinary(max), xml, or large CLR type. For a non-clustered index, the column could be an include column of the index. For a clustered index, the column could be any column of the table. If DROP_EXISTING is used, the column could be part of a new or old index. The operation must be performed offline. In SQL Server 2012 it runs successfully and does not throw any error: Command(s) completed successfully. I always thought it would throw an error if VARCHAR(MAX) or NVARCHAR(MAX) were used in the table schema definition. When I saw this result it was clear to me that this is for sure not a bug but an enhancement in SQL Server 2012. As a matter of fact, I always wanted this feature to be added to the SQL Server engine, as it enables ONLINE index rebuilding for mission-critical tables which need to always be online. I quickly searched online and landed on Jacob Sebastian’s blog where he has blogged about it as well. Well, is there any other new feature in SQL Server 2012 which gave you a good surprise? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
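    If you want to find in advance which indexes on a SQL Server 2008/R2 database would hit this restriction, the standard catalog views are enough. The following is a sketch, not taken from the original post; it lists indexes that carry a LOB-typed column as a key or included column (for clustered indexes, any LOB column in the table also blocks an online rebuild, which this query does not cover):
    -- Indexes containing LOB columns (text, ntext, image, xml, or the MAX types),
    -- which cannot be rebuilt WITH (ONLINE = ON) before SQL Server 2012
    SELECT OBJECT_NAME(i.object_id) AS TableName,
           i.name AS IndexName,
           c.name AS ColumnName,
           t.name AS TypeName
    FROM sys.indexes i
    JOIN sys.index_columns ic
        ON ic.object_id = i.object_id AND ic.index_id = i.index_id
    JOIN sys.columns c
        ON c.object_id = ic.object_id AND c.column_id = ic.column_id
    JOIN sys.types t
        ON t.user_type_id = c.user_type_id
    WHERE t.name IN ('text', 'ntext', 'image', 'xml')
       OR (t.name IN ('varchar', 'nvarchar', 'varbinary') AND c.max_length = -1);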

    Read the article

  • Truly understand the threshold for document set in document library in SharePoint

    - by ybbest
    Recently, I was working on an issue with the list view threshold. The problem was that when the user navigated to a view of the document library, it displayed the error message “list view threshold is exceeded”, even though the view contained no data. The list view threshold limit is 5000 by default for non-admin users. This limit is not the number of items returned by your query; it is the total number of items the database needs to read to calculate the returned result set. So although the view returns no results, the database needs to access more than 5000 items just to calculate that (empty) result. To fix the issue, you need to create an index for the column that you use in the filter for the view. Let’s look at the problem in detail. You can download a solution to replicate this issue here.
    1. Go to Central Admin ==> Web Application Management ==> General Settings ==> Click on Resource Throttling.
    2. Change the list view threshold in the web application from 5000 to 2000, so that I can show the problem without loading more than 5000 items into the list.
    3. Go to the page that displays the approved view of the Loan application document set. It displays the message as shown below, although I do not have any data returned for this view.
    4. To get around this, you need to create an index column. Go to list settings and click on Indexed columns.
    5. Click on the “Create a new index” link.
    6. Select the LoanStatus field, as I use this field as the filter to create the view.
    7. After the index is created I can now access the approved view; as you can see, it does not return any data.
    Notes: List View Threshold: Specify the maximum number of items that a database operation can involve at one time. Operations that exceed this limit are prohibited. References: SharePoint lists V: Techniques for managing large lists Manage large SharePoint lists for better performance http://blogs.technet.com/b/speschka/archive/2009/10/27/working-with-large-lists-in-sharepoint-2010-list-throttling.aspx
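    For reference, the same index can also be created in script rather than through the UI. Below is a minimal sketch using the SharePoint 2010 Management Shell; the site URL and list title are assumptions made for illustration, and only the LoanStatus field name comes from the walkthrough above:
    # Mark the filter column as indexed so the view can be computed within the threshold
    $web = Get-SPWeb "http://sharepoint/sites/loans"   # hypothetical site URL
    $list = $web.Lists["Loan Applications"]            # hypothetical list title
    $field = $list.Fields["LoanStatus"]
    $field.Indexed = $true
    $field.Update()
    $web.Dispose()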

    Read the article

  • Subterranean IL: Filter exception handlers

    - by Simon Cooper
    Filter handlers are the second type of exception handler that aren't accessible from C#. Unlike the other handler types, which have defined conditions for when the handlers execute, filter lets you use custom logic to determine whether the handler should be run. However, similar to a catch block, the filter block does not get run if control flow exits the block without throwing an exception.
    Introducing filter blocks
    An example of a filter block in IL is the following:
    .try {
        // try block
    } filter {
        // filter block
        endfilter
    }{
        // filter handler
    }
    or, in v1 syntax,
    TryStart:
        // try block
    TryEnd:
    FilterStart:
        // filter block
    HandlerStart:
        // filter handler
    HandlerEnd:
    .try TryStart to TryEnd filter FilterStart handler HandlerStart to HandlerEnd
    In the v1 syntax there is no end label specified for the filter block. This is because the filter block must come immediately before the filter handler; the end of the filter block is the start of the filter handler. The filter block indicates to the CLR whether the filter handler should be executed using a boolean value on the stack when the endfilter instruction is run; true/non-zero if it is to be executed, false/zero if it isn't. At the start of the filter block, and the corresponding filter handler, a reference to the exception thrown is pushed onto the stack as a raw object (you have to manually cast to System.Exception). The allowed IL inside a filter block is tightly controlled; you aren't allowed branches outside the block, rethrow instructions, and other exception handling clauses. You can, however, use call and callvirt instructions to call other methods.
    Filter block logic
    To demonstrate filter block logic, in this example I'm filtering on whether there's a particular key in the Data dictionary of the thrown exception:
    .try {
        // try block
    } filter {
        // Filter starts with exception object on stack
        // C# code: ((Exception)e).Data.Contains("MyExceptionDataKey")
        // only execute handler if Contains returns true
        castclass [mscorlib]System.Exception
        callvirt instance class [mscorlib]System.Collections.IDictionary [mscorlib]System.Exception::get_Data()
        ldstr "MyExceptionDataKey"
        callvirt instance bool [mscorlib]System.Collections.IDictionary::Contains(object)
        endfilter
    }{
        // filter handler
        // Also starts off with exception object on stack
        callvirt instance string [mscorlib]System.Object::ToString()
        call void [mscorlib]System.Console::WriteLine(string)
    }
    Conclusion
    Filter exception handlers are another exception handler type that isn't accessible from C#, however, just like fault handlers, the behaviour can be replicated using a normal catch block:
    try {
        // try block
    } catch (Exception e) {
        if (!FilterLogic(e)) throw;
        // handler logic
    }
    So, it's not that great a loss, but it's still annoying that this functionality isn't directly accessible. Well, every feature starts off with minus 100 points, so it's understandable why something like this didn't make it into the C# compiler ahead of a different feature.
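    To make the catch-based equivalent concrete, here is a small, runnable C# sketch that mirrors the IL filter above, handling the exception only when the Data dictionary contains the key. The key name comes from the example; everything else is illustrative:
    using System;

    class FilterEquivalentDemo
    {
        static void Main()
        {
            try
            {
                var ex = new InvalidOperationException("boom");
                ex.Data["MyExceptionDataKey"] = true;   // comment this line out and the exception propagates
                throw ex;
            }
            catch (Exception e)
            {
                // Equivalent of the IL filter: only handle when the key is present, otherwise rethrow
                if (!e.Data.Contains("MyExceptionDataKey")) throw;
                Console.WriteLine(e.ToString());
            }
        }
    }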

    Read the article

  • Dell XPS 15 (L502x) sound problem

    - by lauramolenaar
    I have a problem with my sound on my Dell XPS 15. First, when I had Windows 7, my sound was pretty good (I have a JBL 2.1 speaker system with Waves MaxxAudio), but since I installed Ubuntu 12.04 it sounds very cheap, as if I had put my laptop in a tin can. I've already tried installing alsa-hda-dkms from the alsa-daily ppa (http://ppa.launchpad.net/ubuntu-audio-dev/alsa-daily). This is my audio controller:
    laura@laura-XPS-L502X:~$ lspci -v | grep -A7 -i audio
    00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05)
        Subsystem: Dell Device 04b6
        Flags: bus master, fast devsel, latency 0, IRQ 51
        Memory at f1c00000 (64-bit, non-prefetchable) [size=16K]
        Capabilities: <access denied>
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd-hda-intel
    I hope you can help me, and you can always ask me for more information. Result of aplay -l:
    **** List of PLAYBACK Hardware Devices ****
    card 0: PCH [HDA Intel PCH], device 0: ALC665 Analog [ALC665 Analog]
        Subdevices: 0/1
        Subdevice #0: subdevice #0
    card 0: PCH [HDA Intel PCH], device 1: ALC665 Digital [ALC665 Digital]
        Subdevices: 1/1
        Subdevice #0: subdevice #0
    card 0: PCH [HDA Intel PCH], device 3: HDMI 0 [HDMI 0]
        Subdevices: 1/1
        Subdevice #0: subdevice #0
    Result of aplay -L:
    default
        Playback/recording through the PulseAudio sound server
    sysdefault:CARD=PCH
        HDA Intel PCH, ALC665 Analog
        Default Audio Device
    front:CARD=PCH,DEV=0
        HDA Intel PCH, ALC665 Analog
        Front speakers
    surround40:CARD=PCH,DEV=0
        HDA Intel PCH, ALC665 Analog
        4.0 Surround output to Front and Rear speakers
    surround41:CARD=PCH,DEV=0
        HDA Intel PCH, ALC665 Analog
        4.1 Surround output to Front, Rear and Subwoofer speakers
    surround50:CARD=PCH,DEV=0
        HDA Intel PCH, ALC665 Analog
        5.0 Surround output to Front, Center and Rear speakers
    surround51:CARD=PCH,DEV=0
        HDA Intel PCH, ALC665 Analog
        5.1 Surround output to Front, Center, Rear and Subwoofer speakers
    surround71:CARD=PCH,DEV=0
        HDA Intel PCH, ALC665 Analog
        7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
    iec958:CARD=PCH,DEV=0
        HDA Intel PCH, ALC665 Digital
        IEC958 (S/PDIF) Digital Audio Output
    hdmi:CARD=PCH,DEV=0
        HDA Intel PCH, HDMI 0
        HDMI Audio Output
    dmix:CARD=PCH,DEV=0
        HDA Intel PCH, ALC665 Analog
        Direct sample mixing device
    dmix:CARD=PCH,DEV=1
        HDA Intel PCH, ALC665 Digital
        Direct sample mixing device
    dmix:CARD=PCH,DEV=3
        HDA Intel PCH, HDMI 0
        Direct sample mixing device
    dsnoop:CARD=PCH,DEV=0
        HDA Intel PCH, ALC665 Analog
        Direct sample snooping device
    dsnoop:CARD=PCH,DEV=1
        HDA Intel PCH, ALC665 Digital
        Direct sample snooping device
    dsnoop:CARD=PCH,DEV=3
        HDA Intel PCH, HDMI 0
        Direct sample snooping device
    hw:CARD=PCH,DEV=0
        HDA Intel PCH, ALC665 Analog
        Direct hardware device without any conversions
    hw:CARD=PCH,DEV=1
        HDA Intel PCH, ALC665 Digital
        Direct hardware device without any conversions
    hw:CARD=PCH,DEV=3
        HDA Intel PCH, HDMI 0
        Direct hardware device without any conversions
    plughw:CARD=PCH,DEV=0
        HDA Intel PCH, ALC665 Analog
        Hardware device with all software conversions
    plughw:CARD=PCH,DEV=1
        HDA Intel PCH, ALC665 Digital
        Hardware device with all software conversions
    plughw:CARD=PCH,DEV=3
        HDA Intel PCH, HDMI 0
        Hardware device with all software conversions

    Read the article

  • BPM Suite 11gR1 Released

    - by Manoj Das
    This morning (April 27th, 2010), Oracle BPM Suite 11gR1 became available for download from OTN and eDelivery. If you have been following our plans in this area, you know that this is the release unifying BEA ALBPM product, which became Oracle BPM10gR3, with the Oracle stack. Some of the highlights of this release are: BPMN 2.0 modeling and simulation Web based Process Composer for BPMN and Rules authoring Zero-code environment with full access to Oracle SOA Suite’s rich set of application and other adapters Process Spaces – Out-of-box integration with Web Center Suite Process Analytics – Native process cubes as well as integration with Oracle BAM You can learn more about this release from the documentation. Notes about downloading and installing Please note that Oracle BPM Suite 11gR1 is delivered and installed as part of SOA 11.1.1.3.0, which is a sparse release (only incremental patch). To install: Download and install SOA 11.1.1.2.0, which is a full release (you can find the bits at the above location) Download and install SOA 11.1.1.3.0 During configure step (using the Fusion Middleware configuration wizard), use the Oracle Business Process Management template supplied with the SOA Suite11g (11.1.1.3.0) If you plan to use Process Spaces, also install Web Center 11.1.1.3.0, which also is delivered as a sparse release and needs to be installed on top of Web Center 11.1.1.2.0 Some early feedback We have been receiving very encouraging feedback on this release. Some quotes from partners are included below: “I just attended a preview workshop on BPM Studio, Oracle's BPMN 2.0 tool, held by Clemens Utschig Utschig from Oracle HQ. The usability and ease to get started are impressive. In the business view analysts can intuitively start modeling, then developers refine in their own, more technical view. The BPM Studio sets itself apart from pure play BPMN 2.0 tools by being seamlessly integrated inside a holistic SOA / BPM toolset: BPMN models are placed in SCA-Composites in SOA Suite 11g. This allows to abstract away the complexities of SOA integration aspects from business process aspects. For UIs in BPMN tasks, you have the richness of ADF 11g based Frontends. With BPM Studio we architects have a new modeling and development IDE that gives us interesting design challenges to grasp and elaborate, since many things BPMN 2.0 are different from good ol' BPEL. For example, for simple transformations, you don't use BPEL "assign" any more, but add the transformation directly to the service call. There is much less XPath involved. And, there is no translation from model to BPEL code anymore, so the awkward process model to BPEL roundtrip, which never really worked as well as it looked on marketing slides, is obsolete: With BPMN 2.0 "the model is the code". Now, these are great times to start the journey into BPM! Some tips: Start Projects smoothly, with initial processes being not overly complex and not using the more esoteric areas of BPMN, to manage the learning path and to stay successful with each iteration. Verify non functional requirements by conducting performance and load tests early. As mentioned above, separate all technical integration logic into SOA Suite or Oracle Service Bus. And - share your experience!” Hajo Normann, SOA Architect - Oracle ACE Director - Co-Leader DOAG SIG SOA   "Reuse of components across the Oracle 11G Fusion Middleware stack, like for instance a Database Adapter, is essential. It improves stability and predictability of the solution. 
BPM just is one of the components plugging into the stack and reuses all other components." Mr. Leon Smiers, Oracle Solution Architect, Capgemini   “I had the opportunity to follow a hands-on workshop held by Clemens for Oracle partners and I was really impressed of the overall offering of BPM11g. BPM11g allows the execution of BPMN 2.0 processes, without having to transform/translate them first to BPEL in order to be executable. The fact that BPMN uses the same underlying service infrastructure of SOA Suite 11g has a lot of benefits for us already familiar with SOA Suite 11g. BPMN is just another SCA component within a SCA composite and can (re)use all the existing components like Rules, Human Workflow, Adapters and Mediator. I also like the fact that BPMN runs on the same service engine as BPEL. By that all known best practices for making a BPEL  process reliable are valid for BPMN processes as well. Last but not least, BPMN is integrated into the superior end-to-end tracing of SOA Suite 11g. With BPM11g, Oracle offers a very competitive product which will have a big effect on the IT market. Clemens and Jürgen: Thanks for the great workshop! I’m really looking forward to my first project using Oracle BPM11g!” Guido Schmutz, Technology Manager / Oracle ACE Director for Fusion Middleware and SOA, Company:  Trivadis Some earlier feedback were summarized in this post.

    Read the article

  • Oracle Fusion Middleware Innovation Award Winners 2012: ADF & Fusion Development

    - by Dana Singleterry
    Oracle Fusion Middleware Innovation Awards honor customers for their cutting-edge solutions using Oracle Fusion Middleware. Winners are selected based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture. The awards were presented during Oracle OpenWorld 2012, and the following winners are for the category of ADF & Fusion Development. Micros – an OPN Platinum partner – has been working closely with Oracle product management teams in applying industry best practices in the development of their solutions. Their current application suite for the hospitality industry was built on Oracle Forms and the Oracle database running on MS Windows. The next generation of this suite is being developed and released in modules that are now based on Oracle FMW (including ADF) 11g technologies and Oracle Database 11g, all running on Oracle Linux. The primary driver was that of modernization, and hence the reason Oracle ADF was selected to provide a rich UI for business processes that could be served up through traditional methods or through mobile devices globally. SOA Suite & ADF allowed for loosely-coupled services that could evolve with the needs of the business. Micros's application innovations include the use of business application portlets that have been published from ADF Faces Task Flows generated using WebCenter portlet libraries & Oracle Metadata Services (MDS) with multi-layered customizations using Oracle WebCenter Composer. PCS (Marfin Egnatia Bank of Greece) – PCS Wealth Management is a WM software solution which captures and automates the WM business processes, allowing service providers to allocate enough time and effort to customer service and investment strategies, under advisory or execution-only services. The product is built upon the latest web technologies and ensures best practices covering all functional expectations, meeting local regulatory requirements and discovering successful opportunities for the WM customers' portfolios. The new unified Wealth Management system offers an unparalleled user interface, taking full advantage of the user-friendly ADF Faces components to a great extent, all serving private banking purposes. The application offers a true account officer cockpit with shallow navigation, one-click access to informed decisions and a perfect customer service. ADF Grids and Pivots, the Data Visualization Components, as well as the Calendar and Map components are cleverly used to help the user eliminate the usage of Excel, Outlook and other systems.
    PCS's application is unique in the way it leverages the ADF Faces data visualization components to create a truly attractive and insightful dashboard for their application. PCS Wealth Management Demo. Qualcomm – Qualcomm, a $17B per year company, designs and sells semiconductor products for wireless telecommunications, mobile and computing markets. In addition, Qualcomm companies provide various hardware and software products to facilitate the design, development and deployment of phones and the applications that run on them. Qualcomm’s challenge has been to not only develop and deploy new business system functions to keep pace with customer demand, but also to provide a customer collaboration capability that is sufficiently robust, easy to use, and flexible to meet emerging and future needs. Qualcomm has taken successful steps in building and deploying its customer engagement platform, leveraging various Oracle technologies including Fusion Middleware (ADF, SOA, OBIEE) and their proven ERP foundation of EBS and 11g databases. The new platform delivers a more unified and “seamless” business solution with a consistent, modern “look and feel”, all based on standard business processes which facilitate efficient collaboration with Qualcomm and its customers. The look and feel leverages ADF in innovative ways and includes hover-over navigation, custom pagination components, and skinning. Qualcomm has exposed a services layer that provides significant functionality including order-to-ship, quote-to-order, customer on-boarding and contract validation. Qualcomm's creative designs leverage Oracle's SOA Suite to integrate with Oracle EBS and disparate applications, providing a rich user interface through the use of Oracle ADF Faces Rich Client components and a self-service solution to their customers.

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed the database must be restored, requiring additional and costly storage.  For example, if you want to give each developer a full copy of your production server, you'll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle. If you've read my previous blog posts, you'll be aware that I've been focusing on the database continuous integration theme. In my CI setup I create a "production"-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it's not the exact equivalent to running the upgrade script on a copy of the actual production database. So why shouldn't I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical. 1. My CI environment isn't an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley's "Continuous Delivery" teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you've been allotted. 2. It's not just about the storage requirements, it's also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control I'm just not going to get the feedback quickly enough to react. So what's the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I'm sure the technical implementation details under the hood are quite complex!) Instead of restoring the backup in the conventional sense, SQL Virtual Restore will effectively mount the backup using its HyperBac technology. It creates a data and log file, .vmdf, and .vldf, that becomes the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database as from SQL Server's point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there is no 'duplicate' storage requirements, other than the trivially small virtual log and data files (see illustration below). The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team a full development instance of a large production database. It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly "release test" process triggered by my CI tool. 
    RESTORE DATABASE WidgetProduction_virtual
    FROM DISK=N'C:\WidgetWF\ProdBackup\WidgetProduction.bak'
    WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
    MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
    NORECOVERY, STATS=1, REPLACE
    GO
    RESTORE DATABASE WidgetProduction_virtual WITH RECOVERY
    Note the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the 'virtual' restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly. For illustration, here is my 8Gb production database: And its corresponding backup file: Here are the .vldf and .vmdf files, which represent the only additional used storage for the new database following the virtual restore. The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial. Technorati Tags: SQL Server
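    As a quick sanity check after the virtual restore, you can confirm how little storage the virtual database actually occupies by querying the catalog. A small sketch, not from the original post (the database name matches the restore above; sys.master_files reports sizes in 8 KB pages):
    -- List the physical files backing the virtual database and their sizes in MB
    SELECT name, physical_name, size * 8 / 1024 AS SizeMB
    FROM sys.master_files
    WHERE database_id = DB_ID('WidgetProduction_virtual');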

    Read the article

  • Add the 2D Version of the New Unity Interface to Ubuntu 10.10 and 11.04

    - by Asian Angel
    Is your computer or virtualization software unable to display the new 3D version of the Unity Interface in Ubuntu? Now you can access and enjoy the 2D version with just a little PPA magic added to your system! To add the new PPA open the Ubuntu Software Center, go to the Edit Menu, and select Software Sources. Access the Other Software Tab in the Software Sources Window and add the first of the PPAs shown below (outlined in red). The second PPA will be automatically added to your system. Once you have the new PPAs set up, go back to the Ubuntu Software Center and click on the PPA listing for Unity 2D on the left (highlighted with red in the image). Scroll down until you find the listing for “Unity interface for non-accelerated graphics cards – unity-2d” and click Install. Once that is done you are ready to go to System, Administration, and then select Login Screen in your Ubuntu Menu. Unlock the screen and select Unity 2D as the default session from the drop-down list as shown here. Log out and then back in to start enjoying that Unity 2D goodness! Here is how things will look when you click on the Ubuntu Menu Icon. Select the category that you would like to start with (such as Web) and get ready to have fun. This definitely looks (and works) awesome! Enjoy your new Unity 2D Interface! Unity 2D Packaging PPA [Launchpad]

    Read the article

  • International Radio Operators Alphabet in F# & Silverlight – Part 1

    - by MarkPearl
    So I have been delving into F# more and more, and thought the best way to learn the language is to write something useful. I have been meaning to get some more Silverlight knowledge (up to now I have mainly been doing WPF), so I came up with a really simple project that I can actually use at work. Simply put – I often get support calls from clients wanting new activation codes. One of our main apps was written in VB6 and had its own “security” where it would require about a 45 character sequence for it to be activated. The catch being that each time you reopen the program it would require a different character sequence, which meant that when we activate clients’ systems we have to do it live! This involves us either referring them to a website, or reading the characters to them over the phone, and since nobody in the office knows the IROA off by heart we would come up with some interesting words to represent characters… 9 times out of 10 the client would type in the wrong character and we would have to start all over again… With this app I am hoping to reduce the errors of reading characters over the phone by treating it like a ham radio. My “Silverlight” application will allow the user to input a series of characters and the system will then generate the equivalent IROA words… very basic stuff, e.g. Character Input – abc; Words Generated – Alpha Bravo Charlie. On Dot Net Rocks Show 541, Anders Hejlsberg mentioned that he felt many applications could make use of F#, but on an almost silo basis – meaning that you would write modules that lend themselves to functional programming in F# and then incorporate them into a solution where the front end may be in C# or where you would have some other sort of glue. I buy into this kind of approach, so in this project I will use F# to do my very intensive “Business Logic” and will use Silverlight/C# to do the front end. F# Business Layer: I am no expert at this, so I am sure to get some feedback on ways I could improve my algorithm. My approach was really simple. I would need a function that would convert a single character to a string – i.e. ‘A’ –> “Alpha” – and then I would need a function that would take a string of characters, convert them into a sequence of characters, and then apply my converter to return a sequence of words… make sense? Let’s start with the CharToString function:
    let CharToString (element:char) =
        match element.ToString().ToLower() with
        | "0" -> "0"
        | "1" -> "1"
        | "2" -> "2"
        | "3" -> "3"
        | "4" -> "4"
        | "5" -> "5"
        | "6" -> "6"
        | "7" -> "7"
        | "8" -> "8"
        | "9" -> "9"
        | "a" -> "Alpha"
        | "b" -> "Bravo"
        | "c" -> "Charlie"
        | "d" -> "Delta"
        | "e" -> "Echo"
        | "f" -> "Foxtrot"
        | "g" -> "Golf"
        | "h" -> "Hotel"
        | "i" -> "India"
        | "j" -> "Juliet"
        | "k" -> "Kilo"
        | "l" -> "Lima"
        | "m" -> "Mike"
        | "n" -> "November"
        | "o" -> "Oscar"
        | "p" -> "Papa"
        | "q" -> "Quebec"
        | "r" -> "Romeo"
        | "s" -> "Sierra"
        | "t" -> "Tango"
        | "u" -> "Uniform"
        | "v" -> "Victor"
        | "w" -> "Whiskey"
        | "x" -> "XRay"
        | "y" -> "Yankee"
        | "z" -> "Zulu"
        | _ -> "Unknown"
    Quite simple: an element is passed in, the element is then converted to a lowercase single-character string and then matched up with the equivalent word.
    If by some chance a character is not recognized, “Unknown” will be returned… I now need a function that can parse each character of the string and generate a new sequence with the converted words…
    let ConvertCharsToStrings (s:string) =
        s
        |> Seq.toArray
        |> Seq.map(fun elem -> CharToString(elem))
    Here, Seq.toArray converts the string to a sequence of characters. I then searched for some way to parse through every element in the sequence. Originally I tried Seq.iter, but I think my understanding of what iter does was incorrect. Eventually I found Seq.map, which applies a function to every element in a sequence and then creates a new collection with the processed elements. It turned out to be exactly what I needed… To test that everything worked I created one more function that parses through every element in a sequence and prints it. At this point I realized that Seq.iter would be ideal for this… So my testing code is below:
    let PrintStrings items =
        items |> Seq.iter(fun x -> Console.Write(x.ToString() + " "))
    let newSeq = ConvertCharsToStrings("acdefg123")
    PrintStrings newSeq
    Console.ReadLine()
    Pretty basic stuff I guess… I hope my approach was right? In Part 2 I will look into doing a simple Silverlight front end, referencing the projects together and deploying….
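    As a small extra step for the phone-support scenario described above, the generated words can be joined into one readable string before being shown to the user. The following is a sketch using only FSharp.Core, not part of the original post; the sample input matches the article’s example:
    // "abc" -> "Alpha Bravo Charlie"
    let Spell (s:string) =
        s |> ConvertCharsToStrings |> String.concat " "
    printfn "%s" (Spell "abc")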

    Read the article

  • SQL SERVER – Monday Morning Puzzle – Query Returns Results Sometimes but Not Always

    - by pinaldave
    The amount of email I receive sometimes makes it impossible for me to answer every message. Nonetheless, I try to answer pretty much every email I receive. However, quite often I receive questions in email that I have no answer to, either because the email is incomplete or because it is outside my domain expertise. In recent times I received one email which had only one or two lines but indeed attracted my attention. The question was a bit vague, but it indeed made me think. The answer was not straightforward, so I had to keep on writing the answer as I remembered it. However, after writing the answer I did not feel satisfied. Let me put this question in front of you and see if we all can come up with a comprehensive answer. Question: I am a beginner with SQL Server. I have one query; it sometimes returns a result and sometimes it does not. Where should I start looking for a solution, and what kind of information should I send to you so you can help me solve it? I have no clue, please guide me. Well, if you read the question, it is indeed incomplete and does not contain much information at all. I decided to help him, and here is the answer which I started to compose. Answer: As there is not much information in the original question, I am not confident what will solve your problem. However, here are a few things which you can look at and see if they solve your problem. Check the parameter which is passed to the query. Is the parameter changing between executions? Check the connection string – is there some kind of logic around it? Do you have a non-deterministic component in your query logic? (In other words, is your result based on the current date/time or any other time-based function?) Are you facing a timeout while running your query? Is there any error in the error log? What is the business logic in your query? Do you have all the valid permissions to all the objects used in the query? Are permissions changing, or is the query accessing a different object in various executions? (Add your suggestions here) Meanwhile, have you ever faced this situation? If yes, do share your experience in the comment area. I will send a copy of my book SQL Server Interview Questions and Answers to the author of the most interesting comment. The winner will be announced by next Monday. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Interview Questions and Answers, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
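    To make the non-deterministic item on the checklist above concrete, here is an illustrative T-SQL example of a query that legitimately returns rows on some executions and none on others, purely because of a time-based function (the table and column names are made up for the illustration):
    -- Returns rows only when executed during business hours on a day that has orders
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CAST(OrderDate AS DATE) = CAST(GETDATE() AS DATE)
      AND DATEPART(HOUR, GETDATE()) BETWEEN 9 AND 17;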

    Read the article

  • mysql completely removing

    - by Dmitry Teplyakov
    I broke my MySQL installation and now I want to completely reinstall it. I tried:
    $ sudo apt-get install --reinstall mysql-server
    $ sudo apt-get remove --purge mysql-client mysql-server
    But I always see a popup proposing to change the root password; I change it, and then I get an error that I cannot change it.
    $ sudo apt-get remove --purge mysql-client mysql-server
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Package mysql-client is not installed, so not removed
    Package mysql-server is not installed, so not removed
    The following packages were automatically installed and are no longer required:
      libmygpo-qt1 libqtscript4-network libqtscript4-gui libtag-extras1 libqtscript4-sql libqtscript4-xml amarok-utils amarok-common libqtscript4-uitools liblastfm0 libloudmouth1-0 libqtscript4-core
    Use 'apt-get autoremove' to remove them.
    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    1 not fully installed or removed.
    After this operation, 0 B of additional disk space will be used.
    Setting up mysql-server-5.5 (5.5.28-0ubuntu0.12.04.2) ...
    121114 19:04:03 [Note] Plugin 'FEDERATED' is disabled.
    121114 19:04:03 InnoDB: The InnoDB memory heap is disabled
    121114 19:04:03 InnoDB: Mutexes and rw_locks use GCC atomic builtins
    121114 19:04:03 InnoDB: Compressed tables use zlib 1.2.3.4
    121114 19:04:03 InnoDB: Initializing buffer pool, size = 128.0M
    121114 19:04:03 InnoDB: Completed initialization of buffer pool
    InnoDB: Error: auto-extending data file ./ibdata1 is of a different size
    InnoDB: 0 pages (rounded down to MB) than specified in the .cnf file:
    InnoDB: initial 640 pages, max 0 (relevant if non-zero) pages!
    121114 19:04:03 InnoDB: Could not open or create data files.
    121114 19:04:03 InnoDB: If you tried to add new data files, and it failed here,
    121114 19:04:03 InnoDB: you should now edit innodb_data_file_path in my.cnf back
    121114 19:04:03 InnoDB: to what it was, and remove the new ibdata files InnoDB created
    121114 19:04:03 InnoDB: in this failed attempt. InnoDB only wrote those files full of
    121114 19:04:03 InnoDB: zeros, but did not yet use them in any way. But be careful: do not
    121114 19:04:03 InnoDB: remove old data files which contain your precious data!
    121114 19:04:03 [ERROR] Plugin 'InnoDB' init function returned error.
    121114 19:04:03 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
    121114 19:04:03 [ERROR] Unknown/unsupported storage engine: InnoDB
    121114 19:04:03 [ERROR] Aborting
    121114 19:04:03 [Note] /usr/sbin/mysqld: Shutdown complete
    start: Job failed to start
    invoke-rc.d: initscript mysql, action "start" failed.
    dpkg: error processing mysql-server-5.5 (--configure):
      subprocess installed post-installation script returned error exit status 1
    Errors were encountered while processing:
      mysql-server-5.5
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    Luckily, I do not have any important databases, but..
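    Since there are no important databases to preserve here, a commonly suggested approach is a full purge followed by a clean install. The sketch below is an assumption about the fix rather than part of the original question; the paths are the Ubuntu 12.04 defaults, and the rm step destroys all MySQL data:
    # Stop MySQL if it is running, then purge every MySQL package
    sudo service mysql stop
    sudo apt-get purge mysql-server mysql-server-5.5 mysql-client mysql-common
    sudo apt-get autoremove
    # Remove leftover data and configuration (this deletes ALL databases!)
    sudo rm -rf /var/lib/mysql /etc/mysql
    # Reinstall from scratch
    sudo apt-get install mysql-server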

    Read the article

< Previous Page | 399 400 401 402 403 404 405 406 407 408 409 410  | Next Page >