Search Results

Search found 2680 results on 108 pages for 'channel chief'.


  • ZFS for Database Log Files

    - by user12620111
    I've been troubled by drop outs in CPU usage on my application server, characterized by the CPUs suddenly going from close to 90% CPU busy to almost completely CPU idle for a few seconds. Here is an example of a drop out, shown by a snippet of vmstat data taken while the application server is under a heavy workload:

        # vmstat 1
         kthr      memory            page            disk          faults      cpu
         r b w   swap  free  re  mf pi po fr de sr s3 s4 s5 s6   in   sy   cs us sy id
         1 0 0 130160176 116381952 0 16 0 0 0 0  0  0  0  0  0 207377 117715 203884 70 21  9
        12 0 0 130160160 116381936 0 25 0 0 0 0  0  0  0  0  0 200413 117162 197250 70 20  9
        11 0 0 130160176 116381920 0 16 0 0 0 0  0  0  1  0  0 203150 119365 200249 72 21  7
         8 0 0 130160176 116377808 0 19 0 0 0 0  0  0  0  0  0 169826  96144 165194 56 17 27
         0 0 0 130160176 116377800 0 16 0 0 0 0  0  0  0  0  1  10245   9376   9164  2  1 97
         0 0 0 130160176 116377792 0 16 0 0 0 0  0  0  0  0  2  15742  12401  14784  4  1 95
         0 0 0 130160176 116377776 2 16 0 0 0 0  0  0  1  0  0  19972  17703  19612  6  2 92
        14 0 0 130160176 116377696 0 16 0 0 0 0  0  0  0  0  0 202794 116793 199807 71 21  8
         9 0 0 130160160 116373584 0 30 0 0 0 0  0  0 18  0  0 203123 117857 198825 69 20 11

    This behavior occurred consistently while the application server was processing synthetic transactions: HTTP requests from JMeter running on an external machine. I explored many theories trying to explain the drop outs, including:

    - Unexpected JMeter behavior
    - Network contention
    - Java Garbage Collection
    - Application Server thread pool problems
    - Connection pool problems
    - Database transaction processing
    - Database I/O contention

    Graphing the CPU %idle led to a breakthrough: several of the drop outs were 30 seconds apart. With that insight, I went digging through the data again, looking for other outliers that were 30 seconds apart. In the database server statistics, I found spikes in the iostat "asvc_t" column (average response time of disk transactions, in milliseconds) for the disk drive that was being used for the database log files. Here is an example:

                            extended device statistics
            r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
            0.0 2053.6    0.0  8234.3  0.0  0.2    0.0    0.1   0  24 c3t60080E5...F4F6d0s0
            0.0 2162.2    0.0  8652.8  0.0  0.3    0.0    0.1   0  28 c3t60080E5...F4F6d0s0
            0.0 1102.5    0.0 10012.8  0.0  4.5    0.0    4.1   0  69 c3t60080E5...F4F6d0s0
            0.0   74.0    0.0  7920.6  0.0 10.0    0.0  135.1   0 100 c3t60080E5...F4F6d0s0
            0.0  568.7    0.0  6674.0  0.0  6.4    0.0   11.2   0  90 c3t60080E5...F4F6d0s0
            0.0 1358.0    0.0  5456.0  0.0  0.6    0.0    0.4   0  55 c3t60080E5...F4F6d0s0
            0.0 1314.3    0.0  5285.2  0.0  0.7    0.0    0.5   0  70 c3t60080E5...F4F6d0s0

    Here is a little more information about my database configuration:

    - The database and application server were running on two different SPARC servers.
    - Storage for the database was on a storage array connected via 8 gigabit Fibre Channel.
    - Data storage and log files were on different physical disk drives.
    - Reliable low latency I/O is provided by battery backed NVRAM.
    - Highly available: two Fibre Channel links accessed via MPxIO, and two mirrored cache controllers.
    - The log file physical disks were mirrored in the storage device.
    - Database log files are on a ZFS filesystem with cutting-edge technologies, such as copy-on-write and end-to-end checksumming.

    Why would I be getting service time spikes in my high-end storage?
    First, I wanted to verify that the database log disk service time spikes aligned with the application server CPU drop outs, and they did. At first, I guessed that the disk service time spikes might be related to flushing the write through cache on the storage device, but I was unable to validate that theory. After searching the WWW for a while, I decided to try using a separate log device:

        # zpool add ZFS-db-41 log c3t60080E500017D55C000015C150A9F8A7d0

    The ZFS log device is configured in a similar manner as described above: two physical disks mirrored in the storage array. This change to the database storage configuration eliminated the application server CPU drop outs. Here is the zpool configuration:

        # zpool status ZFS-db-41
          pool: ZFS-db-41
         state: ONLINE
         scan: none requested
        config:
                NAME                     STATE
                ZFS-db-41                ONLINE
                  c3t60080E5...F4F6d0    ONLINE
                logs
                  c3t60080E5...F8A7d0    ONLINE

    Now, the I/O spikes look like this:

                            extended device statistics
            r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
            0.0 1053.5    0.0  4234.1  0.0  0.8    0.0    0.7   0  75 c3t60080E5...F8A7d0s0
            0.0 1131.8    0.0  4555.3  0.0  0.8    0.0    0.7   0  76 c3t60080E5...F8A7d0s0
            0.0 1167.6    0.0  4682.2  0.0  0.7    0.0    0.6   0  74 c3t60080E5...F8A7d0s0
            0.0  162.2    0.0 19153.9  0.0  0.7    0.0    4.2   0  12 c3t60080E5...F4F6d0s0
            0.0 1247.2    0.0  4992.6  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0
            0.0   41.0    0.0    70.0  0.0  0.1    0.0    1.6   0   2 c3t60080E5...F4F6d0s0
            0.0 1241.3    0.0  4989.3  0.0  0.8    0.0    0.6   0  75 c3t60080E5...F8A7d0s0
            0.0 1193.2    0.0  4772.9  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0

    We can see the steady flow of 4k writes to the ZIL device from O_SYNC database log file writes. The spikes are from flushing the transaction group. Like almost all problems that I run into, once I thoroughly understand the problem, I find that other people have documented similar experiences. Thanks to all of you who have documented alternative approaches. Saved for another day: now that the problem is obvious, I should try "zfs:zfs_immediate_write_sz" as recommended in the ZFS Evil Tuning Guide.

    References:

    - The ZFS Intent Log
    - Solaris ZFS, Synchronous Writes and the ZIL Explained
    - ZFS Evil Tuning Guide: Cache Flushes
    - ZFS Evil Tuning Guide: Tuning ZFS for Database Performance
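
    For the record, the deferred tuning step amounts to one line in /etc/system. This is a hedged sketch only: the tunable name comes from the ZFS Evil Tuning Guide, but the value shown is an illustrative assumption, not a tested recommendation for this workload.

        # /etc/system -- sketch, not a tested recommendation.
        # zfs_immediate_write_sz is the size threshold above which synchronous
        # writes bypass the ZIL and go straight to the pool; a reboot is
        # required for /etc/system changes to take effect.
        set zfs:zfs_immediate_write_sz = 0x8000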

    Read the article

  • WebLogic Partner Community Newsletter May 2014

    - by JuergenKress
    Dear WebLogic Partner Community member, registration for the Fusion Middleware Summer Camps 2014 is open – register asap for one of our bootcamps, August 4th–8th, 2014 in Lisbon. Please read the details and prerequisites carefully before you register. We expect that, as in the past, the conference will be booked out soon!

    Thanks to you, our WebLogic Specialized Partners, Oracle is #1 in worldwide market share by total software revenue in the application platform market segment for 2013. Want to know why? Get the new recipes for Oracle WebLogic 12.1.2. Looking for the right server to run WebLogic – try WebLogic on Oracle Database Appliance 2.9. Want to install WebLogic? Play around with the WebLogic Maven Plug-In. Thanks for sharing all the additional WebLogic articles within the community: How to use NodeManager to control WebLogic Servers & Retrieving WebLogic Server Name and Port in ADF Application & Glassfish to WebLogic Migration & Advanced GPIO & Building Robots with Java Embedded & Quick & Dirty How-to Guide: Install GlassFish 4 on Raspberry Pi & New Release: Java Micro Edition (ME) 8.

    In our development tool section, Frank published Development - Performance and Tuning - Overview on the latest ADF Architecture TV channel. Many of our clients run Forms applications; make sure you run them on WebLogic. Thanks for sharing all the additional development tool articles within the community: Using Oracle WebLogic 12c with NetBeans IDE & Consuming SOAP Service & Check Box Support in ADF Query & New release of the ADF EMG Audit Rules & Working with the Array Data Type in a Table & ADF client-side architecture - Select All & Book Review: NetBeans Platform for Beginners. See you in Lisbon! To read the complete newsletter please visit http://tinyurl.com/WebLogicNewsMay2014 (OPN account required). To become a member of the WebLogic Partner Community please register at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: WebLogic Community newsletter,newsletter,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Problem upgrading 11.04

    - by Krazy_Kaos
    I've been trying to upgrade my Ubuntu 11.04 desktop computer, but when I click on the upgrade button I get an error. I've tried to change my repositories, but it changes nothing in the error (on the "setting new software channel" step). Can someone point me in the right direction? This is my sources.list:

        # deb http://ppa.launchpad.net/ailurus/ppa/ubuntu karmic main # disabled on upgrade to karmic
        # deb-src http://ppa.launchpad.net/ailurus/ppa/ubuntu karmic main # disabled on upgrade to karmic
        # deb cdrom:[Ubuntu 9.04 _Jaunty Jackalope_ - Release i386 (20090421.3)]/ jaunty main restricted

        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        deb http://us.archive.ubuntu.com/ubuntu/ natty main restricted multiverse universe

        ## Major bug fix updates produced after the final release of the
        ## distribution.
        deb http://us.archive.ubuntu.com/ubuntu/ natty-updates main restricted multiverse universe

        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team. Also, please note that software in universe WILL NOT receive any
        ## review or updates from the Ubuntu security team.

        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.

        ## Uncomment the following two lines to add software from the 'backports'
        ## repository.
        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the main release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        deb-src http://pt.archive.ubuntu.com/ubuntu/ jaunty-backports main restricted universe multiverse

        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository.
        ## This software is not part of Ubuntu, but is offered by Canonical and the
        ## respective vendors as a service to Ubuntu users.
        deb http://archive.canonical.com/ubuntu natty partner
        deb-src http://archive.canonical.com/ubuntu natty partner

        deb http://us.archive.ubuntu.com/ubuntu/ natty-security main restricted multiverse universe
        deb http://us.archive.ubuntu.com/ubuntu/ natty-proposed restricted main multiverse universe

        # deb http://deb.torproject.org/torproject.org karmic main # disabled on upgrade to maverick
        # deb-src http://deb.torproject.org/torproject.org karmic main # disabled on upgrade to maverick
        deb http://extras.ubuntu.com/ubuntu natty main #Third party developers repository
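
    One commonly suggested first step, as a hedged sketch: the deb-src jaunty-backports line still points at a long-discontinued release, which upgrade tools can choke on. The sed pattern below assumes those stale entries are the culprit, which the error dialog does not confirm:

        # Hedged sketch: comment out remaining entries that point at the
        # discontinued jaunty release, then refresh and retry the upgrade.
        sudo sed -i -E 's|^(deb(-src)? .*jaunty)|# \1|' /etc/apt/sources.list
        sudo apt-get update
        sudo do-release-upgrade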

    Read the article

  • Survey: Your Plans for Adopting New Firefox Releases?

    - by Steven Chan (Oracle Development)
    Mozilla is committing to releasing new Firefox versions every six weeks. Mozilla released Firefox 5 this week. With this release, Mozilla states that Firefox 4 is End-of-Life and will not receive any additional security updates. In a comment thread posted to Mike Kaply's blog article discussing these new Firefox policies, Asa Dotzler from Mozilla stated:

    ... Enterprise has never been (and I'll argue, shouldn't be) a focus of ours. Until we run out of people who don't have sysadmins and enterprise deployment teams looking out for them, I can't imagine why we'd focus at all on the kinds of environments you care so much about.

    In a later comment, he added:

    ... A minute spent making a corporate user happy can better be spent making many regular users happy. I'd much rather Mozilla spending its limited resources looking out for the billions of users that don't have enterprise support systems already taking care of them.

    Asa then confirmed that every new Firefox release will put the previous one into End-of-Life: As for John's concern, "By the time I validate Firefox 5, what guarantee would I have that Firefox 5 won't go EOL when Firefox 6 is released?" He has the opposite of guarantees that won't happen. He has my promise that it will happen. Firefox 6 will be the EOL of Firefox 5. And Firefox 7 will be the EOL for Firefox 6. He added: "You're basically saying you don't care about corporations." Yes, I'm basically saying that I don't care about making Firefox enterprise friendly.

    Kev Needham, Channel Manager at Mozilla, later stated to PC Mag: The Web and Web browsers continue to evolve rapidly. Mozilla's focus is on providing users with the best Web experience possible, and Firefox needs to evolve at the pace the Web's users and developers expect. By releasing small, focused updates more often, we are able to deliver improved security and stability even as we introduce new features, which is better for our users, and for the Web. We recognize that this shift may not be compatible with a large organization's IT policy and understand that it is challenging to organizations that have effort-intensive certification policies. However, our development process is geared toward delivering products that support the Web as it is today, while innovating and building future Web capabilities. Tying Firefox product development to an organizational process we do not control would make it difficult for us to continue to innovate for our users and the betterment of the Web.

    Your feedback is needed for E-Business Suite certifications. Mozilla's new support policy has significant implications for enterprise users of Firefox with Oracle E-Business Suite. We are reviewing the implications for our certification and support policies for Firefox now. It would be very helpful if you could let me know about your organisation's plans for Firefox in light of this new information. Please feel free to drop me a private email, or post a comment here if that's appropriate.

    Read the article

  • SOA Summit - Oracle Session Replay

    - by Bruce Tierney
    If you think you missed the most recent Integration Developer News (IDN) "SOA Summit" 2013... good news, you didn't. At least not the replay of the Oracle session titled "Three Solutions for Simplifying Cloud/On-Premises Integration". As you will see in the replay below, this session introduces three common reasons for integration complexity:

    - Disparate toolkits
    - Lack of API management
    - Rigid, brittle infrastructure

    and then the three solutions to these challenges:

    - Unify cloud and on-premises integration
    - Enable multi-channel development with API management
    - Plan for the unexpected - future readiness

    The last solution, on future readiness, describes how you can transition from being reactive to new trends, such as the Internet of Things (IoT), by modifying your integration strategy to enable business agility, and how to recognize trends ahead of your competition through Fast Data event processing. The implementation by Oracle SOA Suite customer SFpark (San Francisco Municipal Transportation Agency) with API management is covered, as shown in the screenshot to the right. This case study covers the core areas of API management that let partners build their own applications by leveraging parking availability and real-time pricing, as well as mobile enablement of the data integrated by SOA Suite underneath. Download the free SFpark app from the Apple and Android app stores to check it out. When looking into the future, the discussion starts with a historical look to better prepare for what comes next. As shown in the image below, one of the next frontiers after mobile and cloud integration is a deeper level of direct "enterprise to customer" interaction. Much of this relates to the Internet of Things. Examples of IoT from the perspective of SOA and integration are also covered in the session. For example, early adopter Turkcell and its tracking of mobile phone users as they move from point A to B to C is shown in the image to the right. As you look into more "smart services" such as location-based services, how "future ready" is your application infrastructure? Check out the replay by clicking the video image below to learn about these three challenges and solutions, including how to "future ready" your application infrastructure.

    Read the article

  • Oracle GoldenGate 11gR2 Event Marker System

    - by Doug Reid
    Oracle GoldenGate 11gR2 includes a number of refinements to the Event Marker system. Using event markers enables GoldenGate processes to take a defined action based on an event in the data stream. This feature simplifies embedding specific custom processing in the areas of error handling, alerts, and notification. The event marker system effectively allows DML-driven workflows to be created within GoldenGate and enables customers to craft non-standard processing based on special events. There are a number of supported event actions, including trace, log, checkpoint before, suspend, abort, and several others. With 11gR2, events can now be triggered by DDL operations, plus variables can be passed in and out of the system to shell scripts. Some good use cases for this feature are:

    - Automatic switchover to the secondary system during planned outages
    - Better monitoring over source systems' performance and automated switchover to the standby system in case of an outage with the primary system
    - Automatic switchover from initial load to changed data movement
    - Automatic synchronization of any type of batch processing taking place on both the source and target databases for database consistency
    - Automatic stoppage of the Delivery module to allow end-of-day reporting
    - Finding, tracking, and reporting on transactions that are of interest, including the ones that do not have primary keys or transaction record numbers

    If you would like to see a demo, please visit our YouTube channel (http://youtube.com/oraclegoldengate). To learn more about the new features of Oracle GoldenGate 11gR2 and to ask questions of the PM team, please join us on September 12th at 8am or 10am PST for our live webcast. Click here to register.
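
    To make the mechanism concrete, here is a hedged sketch of what an event marker looks like in an Extract parameter file. The schema, table, and column names are hypothetical; the EVENTACTIONS clause is the documented trigger point.

        -- Hedged sketch: log a message and suspend the process when a marker
        -- row lands in a (hypothetical) job-control table.
        TABLE ops.job_control,
          FILTER (@STREQ (job_name, 'END_OF_DAY') = 1),
          EVENTACTIONS (LOG INFO, SUSPEND);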

    Read the article

  • Oracle OpenWorld - Events of Interest

    - by Larry Wake
    I mentioned the "Focus On Oracle Solaris" document the other day, which lists many of the Solaris-related events at Oracle OpenWorld this year; today I thought I'd highlight a few sessions you might find interesting.

    Monday, October 1st: 4:45 PM - Get Proactive: Best Practices for Maintaining and Upgrading Oracle Solaris (Moscone South 252). This session covers best practices for upgrading and patching and how to take advantage of unique technologies in Oracle Solaris 10 and 11. Learn how to get maximum value from My Oracle Support for both reactive and proactive requirements. Understand the benefits of secure remote access and how Oracle Support experts use collaborative shared sessions combined with Oracle Solaris technologies such as DTrace.

    Tuesday, October 2nd: 10:15 AM - How to Increase Performance and Agility with an Open Data Center Fabric (Moscone South 200). If you haven't had a chance to hear about Xsigo Systems, this is a golden opportunity while you're at OpenWorld. Now part of Oracle, Xsigo's network virtualization technology is designed to increase both application performance and management efficiency through a combination of software-defined network technology and the industry's fastest fabric. It allows data centers to converge Ethernet and Fibre Channel connectivity onto a single fabric, reducing complexity by 70 percent and CapEx by 50 percent while providing more I/O bandwidth to your applications.

    Wednesday, October 3rd: 10:15 AM - General Session: Oracle Solaris 11 Strategy, Engineering Insights, and Roadmap (Moscone South 103). Markus Flierl, head of Oracle Solaris Core Engineering, will outline the strategy and roadmap for Oracle Solaris, how Oracle Solaris 11 is being deployed in cloud computing, and the unique optimizations in Oracle Solaris 11 for the Oracle stack. The session also offers a sneak peek at the latest technology under development in Oracle Solaris, and what customers can expect to see in the coming updates.

    Plus, there are several hands-on labs:

    - Monday, October 1st: 10:45 AM - 11:45 AM - Reduce Risk with Oracle Solaris Access Control to Restrain Users and Isolate Applications (Marriott Marquis - Salon 14/15)
    - Monday, October 1st: 4:45 PM - 5:45 PM - Managing Your Data with Built-In Oracle Solaris ZFS Data Services in Release 11 (Marriott Marquis - Salon 14/15)
    - Tuesday, October 2nd: 1:15 PM - 2:15 PM - Virtualizing Your Oracle Solaris 11 Environment (Marriott Marquis - Salon 10/11)
    - Wednesday, October 3rd: 3:30 PM - 4:30 PM - Large-Scale Installation and Deployment of Oracle Solaris 11 (Marriott Marquis - Salon 14/15)

    There's plenty more--see the "Focus On Oracle Solaris" guide. See you next week in San Francisco!

    Read the article

  • 5.1 surround sound

    - by rocker9455
    OK, so I've always had trouble with enabling 5.1 in Ubuntu. Running 'alsamixer', I have: Master, Headphones, PCM, Front, Front Mi, Front Mi, Surround, Center. All are at 100%. Card: HDA Intel; Chip: Realtek ALC888. (This is my onboard sound; it's a Dell Studio with 7.1 integrated sound.) Running "speaker-test -c6 -twav" I only get the front two speakers (right/left) making any noise. The others make no noise at all. I have no other sound card to use, as all my PCI slots are used up. daemon.conf:

        ; daemonize = no
        ; fail = yes
        ; allow-module-loading = yes
        ; allow-exit = yes
        ; use-pid-file = yes
        ; system-instance = no
        ; enable-shm = yes
        ; shm-size-bytes = 0 # setting this 0 will use the system-default, usually 64 MiB
        ; lock-memory = no
        ; cpu-limit = no
        ; high-priority = yes
        ; nice-level = -11
        ; realtime-scheduling = yes
        ; realtime-priority = 5
        ; exit-idle-time = 20
        ; scache-idle-time = 20
        ; dl-search-path = (depends on architecture)
        ; load-default-script-file = yes
        ; default-script-file =
        ; log-target = auto
        ; log-level = notice
        ; log-meta = no
        ; log-time = no
        ; log-backtrace = 0
        resample-method = speex-float-1
        ; enable-remixing = yes
        ; enable-lfe-remixing = no
        flat-volumes = no
        ; rlimit-fsize = -1
        ; rlimit-data = -1
        ; rlimit-stack = -1
        ; rlimit-core = -1
        ; rlimit-as = -1
        ; rlimit-rss = -1
        ; rlimit-nproc = -1
        ; rlimit-nofile = 256
        ; rlimit-memlock = -1
        ; rlimit-locks = -1
        ; rlimit-sigpending = -1
        ; rlimit-msgqueue = -1
        ; rlimit-nice = 31
        ; rlimit-rtprio = 9
        ; rlimit-rttime = 1000000
        ; default-sample-format = s16le
        ; default-sample-rate = 44100
        ; default-sample-channels = 6
        ; default-channel-map = front-left,front-right
        default-fragments = 8
        default-fragment-size-msec = 10
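
    A commonly suggested change for this symptom is to make PulseAudio default to six channels with a matching 5.1 channel map. This is a hedged sketch only; it assumes PulseAudio (not raw ALSA) sits in the playback path:

        ; /etc/pulse/daemon.conf -- sketch: uncomment and set the channel
        ; defaults, then restart PulseAudio with "pulseaudio -k".
        default-sample-channels = 6
        default-channel-map = front-left,front-right,rear-left,rear-right,front-center,lfe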

    Read the article

  • Dynamic Bind9 + DHCP

    - by AcidRod75
    I have been working on setting up a server for my internal network. So far I have a working isc-dhcp-server that can update a chrooted BIND9 (on the same machine). I need to add some static entries to the DNS, so users can resolve the websites that reside in our DMZ. What I had tried already was to modify /etc/bind/named.conf.local with this info:

        //
        // Do any local configuration here
        //
        // Consider adding the 1918 zones here, if they are not used in your
        // organization
        //include "/etc/bind/zones.rfc1918";

        key DHCP_UPDATER {
            algorithm HMAC-MD5.SIG-ALG.REG.INT;
            secret "MySuperSecretHash";  // (this is not the real value, BTW)
        };

        zone "quality.internal" IN {
            type master;
            file "/var/lib/bind/quality.internal.db";
            allow-update { key DHCP_UPDATER; };
        };

        zone "0.10.10.in-addr.arpa" {
            type master;
            file "/var/lib/bind/rev.10.10.0.in-addr.arpa";
            allow-update { key DHCP_UPDATER; };
        };

        logging {
            channel query.log {
                file "/var/log/named/query.log";
                severity debug 3;
            };
            category queries { query.log; };
        };

    Then I added these two entries:

        zone "ourserver.internal" IN {
            type master;
            file "/var/lib/bind/ourserver.internal.db";
        };

        zone "0.16.172.in-addr.arpa" {
            type master;
            file "/var/lib/bind/rev.172.16.0.in-addr.arpa";
        };

    So I created the files ourserver.internal.db and rev.172.16.0.in-addr.arpa, placed them BOTH in /var/lib/bind/, changed the permissions so the bind user can access them, and restarted the service. When I do an nslookup of www.ourserver.internal I get:

        Server:  127.0.0.1
        Address: 127.0.0.1#53

        ** server can't find www.ourserver.internal: NXDOMAIN

    BUT when I do a reverse lookup...

        Server:  127.0.0.1
        Address: 127.0.0.1#53

        5.0.16.172.in-addr.arpa  name = www.ourserver.internal

    I do not understand what's wrong. Some help with this will save me from installing a new DNS server in the DMZ JUST to host internal site names. TY in advance. BTW: the server I'm using has Ubuntu Server 11.10, fully patched.
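
    Since the reverse zone answers but the forward zone returns NXDOMAIN, the forward zone file is the first thing to compare against the reverse one. A hedged sketch of a minimal /var/lib/bind/ourserver.internal.db follows; the SOA/NS names and addresses are illustrative assumptions, and the serial is arbitrary:

        $TTL 86400
        $ORIGIN ourserver.internal.
        @       IN      SOA     ns1.ourserver.internal. admin.ourserver.internal. (
                                2012042401 ; serial
                                3600       ; refresh
                                900        ; retry
                                604800     ; expire
                                86400 )    ; negative-cache TTL
                IN      NS      ns1.ourserver.internal.
        ns1     IN      A       172.16.0.1
        www     IN      A       172.16.0.5

    If the A records are present, "named-checkzone ourserver.internal /var/lib/bind/ourserver.internal.db" and the syslog output after an "rndc reload" usually point at why the zone failed to load.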

    Read the article

  • A little gem from MPN – FREE online course on Architectural Guidance for Migrating Applications to Windows Azure Platform

    - by Eric Nelson
    I know a lot of technical people who work in partners (ISVs, system integrators, etc.). I know that virtually none of them would think of going to the Microsoft Partner Network (MPN) learning portal to find some deep and high-quality technical content. Instead they would head to MSDN, Channel 9, msdev.com, etc. I am one of those people :-) Hence imagine my surprise when I stumbled upon this little gem: Architectural Guidance for Migrating Applications to Windows Azure Platform (your company, and hence your Live ID, need to be a member of MPN – which is free to join). This is first-class stuff – and represents about 4 hours, which is really 8 if you stop and ponder :)

    Course structure: the course is divided into eight modules. Each module explores a different factor that needs to be considered as part of the migration process.

    - Module 1: Introduction: This section provides an introduction to the training course, highlighting the values of the Windows Azure Platform for developers.
    - Module 2: Dynamic Environment: This section goes into detail about the dynamic environment of the Windows Azure Platform. This session will explain the difference between current development states and the Windows Azure Platform environment, detail the functions of roles, and highlight development considerations to be aware of when working with the Windows Azure Platform.
    - Module 3: Local State: This session details the local state of the Windows Azure Platform. This section details the different types of storage within the Windows Azure Platform (Blobs, Tables, Queues, and SQL Azure). The training will provide technical guidance on local storage usage, how to write to blobs, how to effectively use table storage, and other authorization methods.
    - Module 4: Latency and Timeouts: This session goes into detail explaining the considerations surrounding latency and timeouts, and how to assess an IT portfolio.
    - Module 5: Transactions and Bandwidth: This session details the performance metrics surrounding transactions and bandwidth in the Windows Azure Platform environment. This session will detail the transaction and bandwidth costs involved with the Windows Azure Platform and mitigation techniques that can be used to properly manage those costs.
    - Module 6: Authentication and Authorization: This session details authentication and authorization protocols within the Windows Azure Platform. This session will detail information around web methods of authorization, web identification, Access Control benefits, and a walkthrough of Windows Identity Foundation.
    - Module 7: Data Sensitivity: This session details data considerations that users and developers will experience when placing data into the cloud. This section of the training highlights these concerns, and details the strategies that developers can take to increase the security of their data in the cloud.
    - Module 8: Summary: Provides an overall review of the course.

    Read the article

  • Still prompted for a password after adding SSH public key to a server

    - by Nathan Arthur
    I'm attempting to set up a git repository on my Dreamhost web server by following the "Setup: For the Impatient" instructions here. I'm having difficulty setting up public key access to the server. After successfully creating my public key, I ran the following command:

        cat ~/.ssh/[MY KEY].pub | ssh [USER]@[MACHINE] "mkdir ~/.ssh; cat >> ~/.ssh/authorized_keys"

    ...replacing the appropriate placeholders with the correct values. Everything seemed to go through fine. The server asked for my password and, as far as I can tell, executed the command. There is indeed a ~/.ssh/authorized_keys file on the server. The problem: when I try to SSH into the server, it still asks for my password. My understanding is that it shouldn't be asking for my password anymore. What am I missing?

    EDIT: ssh -v log:

        Macbook:~ michaeleckert$ ssh -v [USER]@[SERVER URL]
        OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: /etc/ssh_config line 20: Applying options for *
        debug1: /etc/ssh_config line 53: Applying options for *
        debug1: Connecting to [SERVER URL] [[SERVER IP]] port 22.
        debug1: Connection established.
        debug1: identity file /Users/michaeleckert/.ssh/id_rsa type -1
        debug1: identity file /Users/michaeleckert/.ssh/id_rsa-cert type -1
        debug1: identity file /Users/michaeleckert/.ssh/id_dsa type -1
        debug1: identity file /Users/michaeleckert/.ssh/id_dsa-cert type -1
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_6.2
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.5p1 Debian-6+squeeze3
        debug1: match: OpenSSH_5.5p1 Debian-6+squeeze3 pat OpenSSH_5*
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Server host key: RSA [STRING OF NUMBERS AND LETTERS SEPARATED BY SEMI-COLONS]
        debug1: Host '[SERVER URL]' is known and matches the RSA host key.
        debug1: Found key in /Users/michaeleckert/.ssh/known_hosts:2
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Trying private key: /Users/michaeleckert/.ssh/id_rsa
        debug1: Trying private key: /Users/michaeleckert/.ssh/id_dsa
        debug1: Next authentication method: password
        [USER]@[SERVER URL]'s password:
        debug1: Authentication succeeded (password).
        Authenticated to [SERVER URL] ([[SERVER IP]]:22).
        debug1: channel 0: new [client-session]
        debug1: Requesting no-more-sessions@openssh.com
        debug1: Entering interactive session.
        debug1: Sending environment.
        debug1: Sending env LANG = en_US.UTF-8
        Welcome to [SERVER URL]
        Any malicious and/or unauthorized activity is strictly forbidden. All activity may be logged by DreamHost Web Hosting.
        Last login: Sun Nov 3 12:04:21 2013 from [MY IP]
        [[SERVER NAME]]$
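
    The usual first suspects on the server side are permissions: sshd silently ignores authorized_keys when the home directory or ~/.ssh is group- or world-writable. A hedged sketch of the standard tightening, run on the Dreamhost server:

        # Hedged sketch: sshd ignores authorized_keys if these are too permissive.
        chmod go-w ~                      # home dir must not be group/world-writable
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys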

    Read the article

  • Why are my scene's depth values not being written to my DepthStencilView?

    - by dotminic
    I'm rendering to a depth map in order to use it as a shader resource view, but when I sample the depth map in my shader, the red component has a value of 1 while all other channels have a value of 0. The Texture2D I use to create the DepthStencilView is bound with the D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE flags, the DepthStencilView has the DXGI_FORMAT_D32_FLOAT format, and the ShaderResourceView's dimension is D3D11_SRV_DIMENSION_TEXTURE2D. I set the depth map render target and draw my scene; once that is done, the back buffer render target and depth stencil are set on the output merger, and I use the depth map shader resource view as a texture in my shader, but the depth value in the red channel is constantly 1. I'm not getting any runtime errors from D3D, and no compile-time warnings or anything. I'm not sure what I'm missing here at all. I have the impression the depth value is always being set to 1. I have not set any depth/stencil states, and AFAICT depth writing is enabled by default. The geometry is being rendered correctly, so I'm pretty sure depth writing is enabled. The device is created with the appropriate debug flags:

        #if defined(DEBUG) || defined(_DEBUG)
            deviceFlags |= D3D11_CREATE_DEVICE_DEBUG | D3D11_RLDO_DETAIL;
        #endif

    This is how I create my depth map. I've omitted error checking for the sake of brevity:

        D3D11_TEXTURE2D_DESC td;
        td.Width = width;
        td.Height = height;
        td.MipLevels = 1;
        td.ArraySize = 1;
        td.Format = DXGI_FORMAT_R32_TYPELESS; // typeless, so it can be viewed as both D32_FLOAT and R32_FLOAT
        td.SampleDesc.Count = 1;
        td.SampleDesc.Quality = 0;
        td.Usage = D3D11_USAGE_DEFAULT;
        td.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
        td.CPUAccessFlags = 0;
        td.MiscFlags = 0;
        _device->CreateTexture2D(&td, 0, &this->_depthMap);

        D3D11_DEPTH_STENCIL_VIEW_DESC dsvd;
        ZeroMemory(&dsvd, sizeof(dsvd));
        dsvd.Format = DXGI_FORMAT_D32_FLOAT;
        dsvd.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
        dsvd.Texture2D.MipSlice = 0;
        _device->CreateDepthStencilView(this->_depthMap, &dsvd, &this->_dmapDSV);

        D3D11_SHADER_RESOURCE_VIEW_DESC srvd;
        srvd.Format = DXGI_FORMAT_R32_FLOAT;
        srvd.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
        srvd.Texture2D.MipLevels = td.MipLevels;
        srvd.Texture2D.MostDetailedMip = 0;
        _device->CreateShaderResourceView(this->_depthMap, &srvd, &this->_dmapSRV);

    Read the article

  • JavaOne 2013: (Key) Notes of a conference – State of the Java platform and all the roadmaps by Amis

    - by JuergenKress
    Last week's JavaOne conference provided insights into the roadmap of the Java platform as well as the current state of things in the Java community: the close relationship between Oracle and IBM concerning Java, the (continuing) lack of such a relationship with Google, the support from Microsoft for Java applications on its Azure cloud, and the vibrant developer community - with over 200 different Java User Groups in many countries of the world. There were no major surprises or stunning announcements. Java EE 7 (released in June) was celebrated, the progress of Java 8 SE explained, as well as the progress on Java Embedded and ME. The availability of NetBeans 7.4 RC1 and the JDK 8 Early Adopters release, as well as the open-sourcing of project Avatar, were probably the only real news stories. The convergence of JavaFX and Java SE is almost complete; the upcoming alignment of Java SE Embedded and Java ME is the next big consolidation step, one that will lead to a unified platform where developers can use the same skills, development tools, and APIs in EE, SE, SE Embedded, and ME development. This means that anything that runs on ME will run on SE (Embedded) and EE - not necessarily the reverse, because not all SE APIs are part of the compact profile or the ME environment. However, the trimming down of the SE libraries and the increased capabilities of devices mean that a pretty rich JVM runs on many devices - such as JavaFX 8 on the Raspberry Pi.

    The major theme of the conference was the Internet of Things: a world of things that are smart and connected - devices like sensors, cameras, and equipment, from cars, fridges, and television sets to printers, security gates, and kiosks - that all run Java and are all capable of sending data over local network connections or directly over the internet. The number of devices that have these capabilities is rapidly growing. This means that the number of places where Java programs can help program the behavior of devices is growing too. It also means that the volume of data generated is expanding and that we have to find ways to harvest that data, possibly do local pre-processing (filter, aggregate), and channel the data to back-end systems. Terms typically used are edge devices (small, simple, publishing data), gateways (receiving data from many devices; collecting, consolidating, and pre-processing it; and sending it onwards to the back end - typically using real-time event processing), and enterprise services - receiving the data-turned-information from the gateways to further consolidate, distribute, and act upon. A cheap device like the Raspberry Pi is a perfect way for a Java developer to get started with what embedded (device) programming means and how interaction with physical input and output takes place.

    Roadmaps: the overall progress on Java is visualized in this overview. Read the full article here.

    WebLogic Partner Community: For regular information, become a member of the WebLogic Partner Community: please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: Amis,OOW,Oracle OpenWorld,JavaOne,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Ajax problem after deleting a div, then creating a new one

    - by Matt Nathanson
    I'm building a custom CMS where you can add and delete clients using Ajax and jQuery 1.4.2. My problem lies after I delete a div. The Ajax is used to complete this and refresh automatically. But when I go to create a new div (without a hard refresh), it puts it back in the slot of the div I just deleted. How can I get this to completely forget about the div I just deleted and place the new div in the next database table? Link for reference: http://staging.sneakattackmedia.com/cms/

        //Add New client //
        function AddNewClient() {
            dataToLoad = 'addClient=yes';
            $.ajax({
                type: 'post',
                url: '/clients/controller.php',
                datatype: 'html',
                data: dataToLoad,
                target: ('#clientssidebar'),
                async: false,
                success: function(html){
                    $(this).click(function() { reInitialize() });
                    //$('#clientssidebar').html(html);
                    $('div#' + clientID).slideDown(800);
                    $(this).click(function() { ExpandSidebar() });
                },
                error: function() { alert('An error occured! 222'); }
            });
        };

        //Delete Client //
        function DeleteClient(){
            var yes = confirm("Whoa there chief! Do you really want to DELETE this client?");
            if (yes == 1) {
                dataToLoad = 'clientID=' + clientID + '&deleteClient=yes',
                $.ajax({
                    type: 'post',
                    url: '/clients/controller.php',
                    datatype: 'html',
                    data: dataToLoad,
                    success: function(html) {
                        alert('Client' + clientID + ' should have been deleted from the database.');
                        $(this).click(function() { reInitialize() });
                        $('div#' + clientID).slideUp(800);
                    },
                    error: function() { alert('error'); }
                });
            };
        };

        //Re Initialize //
        function reInitialize() {
            $('#addnew').click(function() { AddNewClient() });
            $('.deletebutton').click(function() { clientID = $(this).parent().attr('id'); DeleteClient() });
            $('.clientblock').click(function() { clientID = $(this).attr('id'); ExpandSidebar() });
        };

        //Document Ready //
        $(document).ready(function(){
            if ($('isCMS')){
                editCMS = 1;
                $('.deletebutton').click(function() { clientID = $(this).parent().attr('id'); DeleteClient() });
                $('#addnew').click(function() { AddNewClient() });
                $('.clientblock').click(function() { clientID = $(this).attr('id'); ExpandSidebar() });
                $('.clientblock').click(function() {
                    if (clickClient == true) {
                        $(this).css('background-image', 'url(/images/highlightclient.png)');
                        $(this).css('margin-left', '30px');
                    };
                    $(this).click(function(){
                        $(this).css('background-image', '');
                    });
                    $('.uploadbutton').click(function(){ UploadThings() });
                });
            } else ($('#clientscontainer')) {
                $('#editbutton').css('display', 'none');
            };
        });

    Please help!!!

    Read the article

  • Methodology for a Rails app

    - by Aaron Vegh
    I'm undertaking a rather large conversion from a legacy database-driven Windows app to a Rails app. Because of the large number of forms and database tables involved, I want to make sure I've got the right methodology before getting too far. My chief concern is minimizing the amount of code I have to write. There are many models that interact together, and I want to make sure I'm using them correctly. Here's a simplified set of models:

        class Patient < ActiveRecord::Base
          has_many :PatientAddresses
          has_many :PatientFileStatuses
        end

        class PatientAddress < ActiveRecord::Base
          belongs_to :Patient
        end

        class PatientFileStatus < ActiveRecord::Base
          belongs_to :Patient
        end

    The controller determines if there's a Patient selected; everything else is based on that. In the view, I will be needing data from each of these models. But it seems like I have to write an instance variable in my controller for every attribute that I want to use. So I start writing code like this:

        @patient = Patient.find(session[:patient])
        @patient_addresses = @patient.PatientAddresses
        @patient_file_statuses = @patient.PatientFileStatuses
        @enrollment_received_when = @patient_file_statuses[0].EnrollmentReceivedWhen
        @consent_received = @patient_file_statuses[0].ConsentReceived
        @consent_received_when = @patient_file_statuses[0].ConsentReceivedWhen

    The first three lines grab the Patient model and its relations. The next three lines are examples of my providing values to the view from one of those relations. The view has a combination of text fields and select fields to show the data above. For example:

        <%= select("patientfilestatus", "ConsentReceived",
                   {"val1"=>"val1", "val2"=>"val2", "Written"=>"Written"},
                   :include_blank => true) %>

        <%= calendar_date_select_tag "patient_file_statuses[EnrollmentReceivedWhen]",
                                     @enrollment_complete_when, :popup => :force %>

    (BTW, the select tag isn't really working; I think I have to use collection_select?) My questions are:

    - Do I have to manually declare the value of every instance variable in the controller, or can/should I do it within the view?
    - What is the proper technique for displaying a select tag for data that's not the primary model?
    - When I go to save changes to this form, will I have to manually pick out the attributes for each model and save them individually? Or is there a way to name the fields such that ActiveRecord does the right thing?

    Thanks in advance, Aaron.

    Read the article

  • What is the most idiomatic way to emulate Perl's Test::More::done_testing?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (Perl 5.8, with $Test::More::VERSION being '0.80'), which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_plan - it's generally a bad idea not to catch when your unit test exits prematurely, say due to some logic not executing when you expected it to. What is the most idiomatic way of running a configurable number of tests, assuming no no_plan or done_testing() is used?

    Details: my unit tests usually take the form of:

        use Test::More;

        my @test_set = (
             [ "Test #1", $param1, $param2, ... ]
            ,[ "Test #2", $param1, $param2, ... ]
            # ,...
        );

        foreach my $test (@test_set) {
            run_test($test);
        }

        sub run_test {
            # $expected_tests += count_tests($test);
            ok(test1($test)) || diag("Test1 failed");
            # ...
        }

    The standard approach of "use Test::More tests => 23;" or "BEGIN { plan tests => 23 }" does not work, since both are obviously executed before @test_set is known. My current approach involves making @test_set global and defining it in the BEGIN {} block as follows:

        use Test::More;

        BEGIN {
            our @test_set = ();   # Same set of tests as above
            my $expected_tests = 0;
            foreach my $test (@test_set) {
                $expected_tests += count_tests($test);
            }
            plan tests => $expected_tests;
        }

        our @test_set;   # Must do!!! Since first "our" was in BEGIN's scope :(

        foreach my $test (@test_set) {   # Same
            run_test($test);
        }

        sub run_test {}   # Same

    I feel this can be done more idiomatically but am not certain how to improve it. Chief among the smells is the duplicate "our @test_set" declaration - in BEGIN {} and after it. Another approach is to emulate done_testing() by calling Test::More->builder->plan(tests => $total_tests_calculated). I'm not sure if either is any better idiomatically.

    Read the article

  • PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/json.so' undefined symbol: ZVAL_DELREF

    - by crmpicco
    I have an issue where I am unable to use JSON, which would appear to be because of the following error. There is another thread on this forum that touches on a similar issue, but it's not quite the same. I am using CentOS 5.6 and have the following pear packages installed:

        [crmpicco@eq-www-php53 ~]$ pear list
        PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/json.so' - /usr/lib64/php/modules/json.so: undefined symbol: ZVAL_DELREF in Unknown on line 0
        Installed packages, channel pear.php.net:
        =========================================
        Package          Version State
        Archive_Tar      1.3.7   stable
        Auth_SASL        1.0.2   stable
        Console_Getopt   1.3.1   stable
        Image_Barcode    1.1.2   stable
        Mail             1.1.14  stable
        Net_SMTP         1.2.10  stable
        Net_Socket       1.0.8   stable
        PEAR             1.9.4   stable
        Structures_Graph 1.0.4   stable
        XML_RPC          1.5.4   stable
        XML_Util         1.2.1   stable
        json             1.2.1   stable

    and have the following PHP packages installed:

        [crmpicco@eq-www-php53 ~]$ yum list installed | grep php
        php.x86_64                  5.3.10-1.w5        installed
        php-cli.x86_64              5.3.10-1.w5        installed
        php-common.x86_64           5.3.10-1.w5        installed
        php-devel.x86_64            5.3.10-1.w5        installed
        php-gd.x86_64               5.3.10-1.w5        installed
        php-ldap.x86_64             5.3.10-1.w5        installed
        php-mcrypt.x86_64           5.3.10-1.w5        installed
        php-mysql.x86_64            5.3.10-1.w5        installed
        php-pdo.x86_64              5.3.10-1.w5        installed
        php-pear.noarch             1:1.9.4-1.w5       installed
        php-pear-Net-Socket.noarch  1.0.8-1.el5.centos installed
        php-soap.x86_64             5.3.10-1.w5        installed
        php-xml.x86_64              5.3.10-1.w5        installed

    The error:

        [crmpicco@eq-www-php53 ~]$ php -v
        PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/json.so' - /usr/lib64/php/modules/json.so: undefined symbol: ZVAL_DELREF in Unknown on line 0
        PHP 5.3.10 (cli) (built: Feb  2 2012 23:23:12)
        Copyright (c) 1997-2012 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2012 Zend Technologies

    My repolist reads as:

        [crmpicco@eq-www-php53 ~]$ yum repolist
        Loaded plugins: changelog, fastestmirror
        Excluding Packages in global exclude list
        Finished
        repo id   repo name                                        status
        base      CentOS-5 - Base                                  3,548+43
        epel      Extra Packages for Enterprise Linux 5 - x86_64   6,815+156
        extras    CentOS-5 - Extras                                245+23
        rpmforge  Red Hat Enterprise 5 - RPMforge.net - dag        11,016+67
        updates   CentOS-5 - Updates                               233
        webtatic  Webtatic Repository 5 - x86_64                   211+183
        repolist: 22,068

    I am getting HTTP 500 errors everywhere that I use JSON, so my application is non-functional right now.
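
    The json 1.2.1 package in the pear list stands out: PHP 5.3 bundles its own json extension, so a stale json.so built against an older PHP would fail with exactly this kind of undefined-symbol error. A hedged cleanup sketch; the paths are the stock CentOS defaults, the commands assume the extension was installed via pecl, and backing up first is prudent:

        # Hedged sketch: remove the stale pecl json extension so PHP 5.3's
        # bundled json extension is used instead.
        sudo pecl uninstall json
        sudo rm -f /usr/lib64/php/modules/json.so /etc/php.d/json.ini
        php -m | grep json   # json should still be listed, now from the bundled extension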

    Read the article

  • ssh + tinyproxy: poor performance

    - by Paul
    I am currently in China and I would like to still visit some blocked websites (facebook, youtube). I have VPS in the USA and I have installed tinyproxy on it. I log in on my VPS with SSH port-forwarding and I have configured my browser appropriately. Everything works more or less: I can surf to those websites but everything is inusually slow and sometimes data transfer stops abruptly. This probably has to do with the fact that I see some errors in my shell on the VPS like : channel 6: open failed: connect failed: Also in the log-file of tinyproxy I see some bad things: ERROR Sep 06 14:52:14 [28150]: getpeer_information: getpeername() error: Transport endpoint is not connected ERROR Sep 06 14:52:15 [28153]: writebuff: write() error "Connection reset by peer" on file descriptor 7 ERROR Sep 06 14:52:15 [28168]: readbuff: recv() error "Connection reset by peer" on file descriptor 7 ERROR Sep 06 14:52:15 [28151]: readbuff: recv() error "Connection reset by peer" on file descriptor 7 ERROR Sep 06 14:52:15 [28143]: readbuff: recv() error "Connection reset by peer" on file descriptor 7 ERROR Sep 06 14:52:17 [28147]: writebuff: write() error "Connection reset by peer" on file descriptor 7 ERROR Sep 06 14:52:23 [28137]: writebuff: write() error "Connection reset by peer" on file descriptor 7 ERROR Sep 06 14:52:26 [28168]: getpeer_information: getpeername() error: Transport endpoint is not connected ERROR Sep 06 14:52:27 [28186]: read_request_line: Client (file descriptor: 7) closed socket before read. ERROR Sep 06 14:52:31 [28160]: getpeer_information: getpeername() error: Transport endpoint is not connected

    Read the article

  • BIOS interrupts, privilege levels and paging

    - by Jack
    Hi, I was learning about Intel 8086-80486 CPUs and their interactions with HW. But I still don´t understand it quite well. Please, help me fill blank spots. First, I know that CPU communicates with HW using BIOS interrupts. But, what really happens in PC, when I call some INT instruction? I know that according the interrupt table some instructions begin to execute, but how by executing some instructions can BIOS recognize what I want to do? Becouse as far as I know, CPU has no extra communication channel with BIOS, it can only adress memory and receive data. So how can I instruct BIOS to do something, when I can only address RAM? Next thing I don't understand is about privilege levels. I know about ring model, and access rights, but how does the CPU know which privilege level has executed an instruction? I think that these privileges apply only when intruction is trying to address memory, but how does an application get its privilege level? I mean I know its level 3, but how is it set? And last thing, I know that paging is address scheme that is used to support aplication-transparent virtual memory, or swapping, but I could not find any information about how paging is tied with protected mode. Like if paging is like next mode independent of protected mode, or its somehow implemented within protected mode. And if it is implemented in protected mode, isn´t it too slow, to first address application space, then offset, and then paging folder, page and offset once again?

    Read the article

  • Connecting Adium to Google Talk with a 2-factor authentication account isn’t working

    - by Robin
    Anyone else having this problem? After turning on 2-factor authentication on my Google account I stopped being able to log in through Adium (a Mac IM client that uses Pidgin's libpurple for IM). Obviously you need to generate an application-specific password, but these won't let me log in. Application-specific passwords work with other applications (e.g. Reeder for feeds and calendaring on my phone). Google specifically mentions Adium in their examples of setting up an application password for Google Talk, so I doubt it's a generic Adium problem. I can still access Google Talk for this account if I use a Talk widget on a Google website (Plus, or iGoogle for example). My bug report to Adium, including a connection log file, is up on their Trac: http://trac.adium.im/ticket/15310 . No activity there though. I also asked around in their IRC channel, but no one else could replicate the problem. If I had to guess, I'd think it was a consequence of me not having a GMail account associated with my Google account. I don't see exactly why that would cause it, but it seems like a fairly unusual setup that might not have been tested for.

    Read the article

  • Blu-ray archiving in VMware ESXi 4

    - by spacecadet77
    Hi, I need some advice about using a Blu-ray writer for archiving data on VMware ESXi 4. At the office we have an IBM System x3400 tower server with the ESXi 4 hypervisor and openSUSE and CentOS GNU/Linux systems as guests. Will a Blu-ray writer work in this setup, and if it will, is there any particular model you can suggest? Best regards.

    IBM System x3400 tower server specification:

    - 1x Intel Quad-Core Xeon E5410 2.33GHz / 12MB / 1333MHz (2x CPU max), Intel 5000P chipset
    - 2x 1GB PC2-5300 DDR2 667MHz SDRAM ECC Chipkill (32GB max); 2x4GB (2x2GB) PC2-5300 CL5 ECC DDR2 FBDIMM (x3400, x3550, x3650)
    - SAS/SATA hot-swap open bay (0x HDD std, 4x HDD max, 8x HDD optional)
    - ServeRAID 8K dual-channel SAS/SATA controller (RAID 0, 1, 1E, 10, 5, 6; 256MB; battery backup)
    - Graphics: ATI RN50 (ES1000) 16MB DDR; CD-RW/DVD combo; no FDD; Gigabit Ethernet; tower with 835W power supply (optionally redundant)
    - Slot 1: half-length, PCI-Express x8 (x4 electrical); Slot 2: full, PCI-Express x8; Slot 3: full, PCI-Express x8; Slot 4: full, 64-bit 133MHz 3.3V PCI-X; Slot 5: full, 64-bit 133MHz 3.3V PCI-X; Slot 6: half-length, 32-bit 33MHz 5.0V PCI
    - Ports: 4x USB (version 2.0), 2x PS/2, parallel, 2x serial (9-pin), VGA, RJ-45 (Ethernet), RJ-45 (system management)
    - HDD: 4 x TB 7200rpm / Serial ATA II 3.0Gb/s / 16MB, RoHS

    Read the article

  • Ubuntu 9.10 RSA authentication: ssh fails, filezilla runs fine

    - by MariusPontmercy
    This is quite a mystery for me. I usually use passwordless RSA authentication to log in to my remote *nix servers with ssh and sftp. I never had any problem until now. I cannot connect to an Ubuntu 9.10 machine:

        user@myclient$ ssh -i .ssh/Ganymede_key [email protected]
        [...]
        debug1: Host 'ganymede.server.com' is known and matches the RSA host key.
        debug1: Found key in /home/user/.ssh/known_hosts:14
        debug2: bits set: 494/1024
        debug1: ssh_rsa_verify: signature correct
        debug2: kex_derive_keys
        debug2: set_newkeys: mode 1
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug2: set_newkeys: mode 0
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug2: service_accept: ssh-userauth
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: .ssh/Ganymede_key (0xb96a0ef8)
        debug2: key: .ssh/Ganymede_key ((nil))
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug1: Next authentication method: publickey
        debug1: Offering public key: .ssh/Ganymede_key
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug1: Trying private key: .ssh/Ganymede_key
        debug1: read PEM private key done: type RSA
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug2: we did not send a packet, disable method
        debug1: Next authentication method: keyboard-interactive
        debug2: userauth_kbdint
        debug2: we sent a keyboard-interactive packet, wait for reply
        debug2: input_userauth_info_req
        debug2: input_userauth_info_req: num_prompts 1

    Then it falls back to password authentication. If I disable password authentication on the remote machine, my connection attempt just fails with a "Permission denied (publickey)." state. Same thing for sftp from the command line. The "funny" thing is that the exact same RSA key works like a charm with a Filezilla sftp session instead:

        12:08:00 Trace: Offered public key from "/home/user/.filezilla/keys/Ganymede_key"
        12:08:00 Trace: Offer of public key accepted, trying to authenticate using it.
        12:08:01 Trace: Access granted
        12:08:01 Trace: Opened channel for session
        12:08:01 Trace: Started a shell/command
        12:08:01 Status: Connected to ganymede.server.com
        12:08:02 Trace: CSftpControlSocket::ConnectParseResponse()
        12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
        12:08:02 Trace: CControlSocket::ResetOperation(0)
        12:08:02 Status: Retrieving directory listing...
        12:08:02 Trace: CSftpControlSocket::SendNextCommand()
        12:08:02 Trace: CSftpControlSocket::ChangeDirSend()
        12:08:02 Command: pwd
        12:08:02 Response: Current directory is: "/root"
        12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
        12:08:02 Trace: CControlSocket::ResetOperation(0)
        12:08:02 Trace: CSftpControlSocket::ParseSubcommandResult(0)
        12:08:02 Trace: CSftpControlSocket::ListSubcommandResult()
        12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
        12:08:02 Trace: CControlSocket::ResetOperation(0)
        12:08:02 Status: Directory listing successful

    Any thoughts? M

    Read the article

  • Problems configuring an SSH tunnel to a NexentaStor appliance for use with headless CrashPlan

    - by Rob Smallshire
    Problem: I am attempting to configure an SSH tunnel to a NexentaStor appliance from either a Windows or Linux computer, so that I can connect a CrashPlan desktop GUI to a headless CrashPlan server running on the Nexenta box, according to these instructions on the CrashPlan support site: Connect to a Headless CrashPlan Desktop. So far, I've failed to get a working SSH tunnel from either a Windows client (using PuTTY) or a Linux client (using command-line SSH). I'm fairly sure the problem is at the receiving end with NexentaStor. A blog article - CrashPlan for Backup on Nexenta - indicates that it could be made to work only "after enabling TCP forwarding in Nexenta in /etc/ssh/sshd_config" - although I'm not sure how to go about that or specifically what I need to do.

    Things I have tried:

    Ensuring the CrashPlan server on the Nexenta box is listening on port 4243:

        $ netstat -na | grep LISTEN | grep 42
        127.0.0.1.4243       *.*     0      0 131072      0 LISTEN
        *.4242               *.*     0      0  65928      0 LISTEN

    Establishing a tunnel from a Linux host:

        $ ssh -L 4200:localhost:4243 admin@10.0.0.56

    and then, from another terminal on the Linux host, using telnet to verify the tunnel:

        $ telnet localhost 4200
        Trying ::1...
        Connected to localhost.
        Escape character is '^]'.

    with nothing more, although the CrashPlan server should respond with something. From Windows, using PuTTY, I have followed the instructions on the CrashPlan support site to establish an equivalent tunnel, but then telnet on Windows gives me no response at all and the CrashPlan GUI can't connect either. The PuTTY log for the tunnelled connection shows reasonable output:

        ...
        2011-11-18 21:09:57 Opened channel for session
        2011-11-18 21:09:57 Local port 4200 forwarding to localhost:4243
        2011-11-18 21:09:57 Allocated pty (ospeed 38400bps, ispeed 38400bps)
        2011-11-18 21:09:57 Started a shell/command
        2011-11-18 21:10:09 Opening forwarded connection to localhost:4243

    but the "telnet localhost 4200" command from Windows does nothing at all - it just waits with a blank terminal. On the NexentaStor server I've examined the /etc/ssh/sshd_config file and everything seems 'normal' - and I've commented out the ListenAddress entries to ensure that I'm listening on all interfaces. How can I establish a tunnel, and how can I verify that it is working?
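
    Per the blog post's hint, the server-side switch to check first is TCP forwarding in sshd. A hedged sketch for the NexentaStor box; the SMF service name follows stock Solaris-derived conventions:

        # In /etc/ssh/sshd_config on the NexentaStor appliance:
        AllowTcpForwarding yes

        # Then restart the SSH service (SMF name per Solaris convention):
        # svcadm restart svc:/network/ssh:default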

    Read the article

  • How do I make my internal DNS forward requests to a given server

    - by ankimal
    We have a DNS server internally that looks up IP addresses for all internal hosts and connects to the root DNS servers for all other domains (the rest of the internet). Here is my config:

        options {
                listen-on port 53 { 127.0.0.1; any; };
                listen-on-v6 port 53 { ::1; };
                directory "/var/named";
                dump-file "/var/named/data/cache_dump.db";
                statistics-file "/var/named/data/named_stats.txt";
                memstatistics-file "/var/named/data/named_mem_stats.txt";
                allow-query { 192.168.1.0/24; 127.0.0.1; };
                recursion yes;
        };

        logging {
                channel default_debug {
                        file "data/named.run";
                        severity dynamic;
                };
        };

        view "internal" {
                // What the home network will see
                match-clients { 127.0.0.1; any; };
                match-destinations { 127.0.0.1; any; };
                recursion yes;
                zone "." IN {
                        type hint;
                        file "named.ca";
                };
                include "internal_zones.conf";
        };

    We need to tweak this to go to our ISP's DNS, x.y.z.w, instead of the root DNS servers if a host cannot be resolved internally. Config: Fedora 10 / BIND 9.5.2.
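
    A hedged sketch of the usual change: add forwarders to the options block (x.y.z.w standing in for the ISP resolver, as in the question), so anything not answered from the internal zones is forwarded rather than resolved from the root hints:

        options {
                // ... existing options ...
                recursion yes;
                forwarders { x.y.z.w; };
                forward only;   // or "forward first" to fall back to the root servers
        };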

    Read the article
