Search Results

Search found 3308 results on 133 pages for 'thomas smart'.

Page 121/133

  • Your Day-by-Day Guide to Agile PLM at Oracle OpenWorld 2012

    - by Kerrie Foy
    This year’s Oracle OpenWorld conference is nearly here, and we’re all excited about what we have planned! With five days of activities and customer presenters from market leaders and top innovators like The Coca-Cola Company, Starbucks, JDSU, Facebook, GlobalFoundries, and more, this is an event you don't want to miss. I've compiled this day-by-day guide to help anyone keep track of all the “Product Lifecycle Management and Product Value Chain” sessions and activities at OpenWorld 2012, September 30 – October 4 in San Francisco, California.

    Monday, October 1
    There are great networking activities on Sunday, September 30, but PLM-specific sessions start after the general conference keynotes on Monday, October 1 at 10:45 a.m. at the InterContinental Hotel in room Telegraph Hill. In fact, most of our sessions this year will be held in this room, which is still close to the conference keynotes in Moscone, but just far enough away to allow some focused networking and discussions. This first session, 10:45 – 11:45 a.m., is a joint session with the Agile and AutoVue teams, entitled “Streamline PLM Design-to-Manufacturing Processes with AutoVue Visualization Solutions”, featuring presenters from Oracle as well as joint AutoVue and Agile PLM customer GlobalFoundries. In the following 12:15 – 1:15 p.m. slot, there are two sessions to choose from, so if you have a team of representatives attending OpenWorld, you may consider splitting up to catch both of these: a) Our General Session will be held in the InterContinental Hotel Ballroom C, and will cover our complete enterprise PLM strategy, product updates, and roadmaps. It’s our pleasure to feature a customer keynote presentation from Chris Bedi, CIO, and Rajeev Sethi, Director of IT Business Engagement, of JDSU. b) A focused session on integrating PLM with Engineering and Supply Chain Systems will be held on the second floor of Moscone West (next to the InterContinental) in room 2022. Join to discover how these types of integrations help companies manage common and integrated design information across all MCAD, ECAD, and software components. After a lunch break and perhaps a visit to the Demogrounds in Moscone West, select from two product roadmap sessions in the next time slot (3:15 – 4:15 p.m.): an Agile 9.3.x session located in the InterContinental’s Ballroom C, and an Agile PLM for Process session located back in the InterContinental’s Telegraph Room. Both sessions will have strong content around each product line’s latest releases, vision, and customer examples. We are very pleased to feature Daniel Soosai of Facebook in the A9 session and Vinnie D’Agostino of The Coca-Cola Company in the PLM for Process session. Afterwards, hang in there for one last session of the day, from 4:45 – 5:45 p.m.; it’s an insightful discussion on leveraging Agile PLM as the foundation for Enterprise Quality Management, and it’s sure to be one of the best. Held in the Telegraph Room, this session will feature Oracle experts, partner co-presenter David Bartlett from CPG Solutions, and customer co-presenter Thomas Crowe, CIO of PL Developments. Hear about their experience implementing collaborative, integrated solutions to ensure effective knowledge transfer throughout an organization, and how to perform analysis in real time to resolve product quality issues swiftly and efficiently.
    On Monday evening there will be plenty of industry, product, and partner dinners, so take advantage of all the networking opportunities and catch some great tunes at the five-day Oracle OpenWorld Music Festival!

    Tuesday, October 2
    Tuesday starts early with a special PLM Networking Brunch, sponsored by several partners, from 8:30 a.m. – 10:30 a.m. at the B Restaurant that sits atop Yerba Buena Gardens. You’ll have the unique opportunity to meet with like-minded industry peers and a PLM partner to discuss a topic of your choosing while enjoying a delicious meal. Registration is required, so to inquire about attending this brunch, please email Terri.Hiskey-AT-oracle.com. After wrapping up your conversations over brunch, head over to the Marriott Marquis in the Nob Hill CD room for a chance to experience the Oracle Product Lifecycle Analytics solution in a Hands-On Lab, open from 10:15 a.m. – 12:45 p.m. Experts will be there to answer your questions. Back in the InterContinental Hotel’s Telegraph room, the session on “Ideation and Requirements Management: Capturing the Voice of the Customer” runs from 11:45 a.m. – 12:45 p.m. This may be the session for you if you’re struggling with challenges like too many repositories of customer needs, requests, and ideas; limited visibility into which ideas are being advanced by customers and field resources; or if you’re unable to leverage internal expertise to expose effort and potential risks. This session will discuss how Agile PLM can help you overcome ideation challenges to deliver the right products to your targeted markets and fulfill customer desires. Next, from 1:15 – 2:15 p.m., join us for a session on Managing Profitable Innovation with Oracle Product Lifecycle Analytics. If you missed the Hands-On Lab, have more questions, or simply want to be inspired by the product’s forward-thinking vision and capabilities, this is a great opportunity to meet the progressive-minded executives behind the application. After this session, it may be a good opportunity to swing by the Demogrounds in Moscone West and visit the Agile PLM demos at exhibit booths #81 for Agile PLM for Discrete Manufacturing, #70 for Agile PLM for Process, and #82 for AutoVue and Agile PLM Enterprise Visualization. Check out the related Supply Chain Management booths close by if you’re interested - here's the map. There’s always lots to see and do around the exhibit area. But don’t forget the last session of the day, from 5:00 p.m. – 6:00 p.m. in Telegraph Hill, on Managing Product Innovation and Compliance in Life Science Companies, a “must-see” if you’re in this industry. Launching innovative products quickly is already a high-stakes challenge, but companies in the life sciences industry face uniquely severe consequences when new products don’t perform or comply as required. In recent years, more and more regulations have become mandatory, and new ones, such as REACH, are currently going into effect for several companies. Customer presenters from pharmaceutical leader Eli Lilly will share how they’ve leveraged Agile PLM to deliver high-quality, innovative products in a fast-paced, heavily regulated market environment. On Tuesday evening, unwind at the Supply Chain Management Reception from 6:00 – 8:00 p.m. at the premier boutique Roe Nightclub and Lounge, which is located about three blocks down on Howard Street (on the other side of Moscone from the InterContinental Hotel). Registration is required. Click here for the details.
    Wednesday, October 3
    We have another full line-up on Wednesday, so be ready for an action-packed day. We start at 10:15 – 11:15 a.m. in the Telegraph Room with a session on “PLM for Consumer Products: Building an Engine for Quality and Innovation”, with featured presenters from Starbucks and partner Kalypso. This is a rare opportunity to learn directly from Starbucks how they instill quality and innovation throughout their organization, products, and processes, leveraging PLM disciplines with strong support from their partner. If you’re not in the consumer products industry, we recommend attending another session at 10:15 – 11:15 a.m. in Moscone West room 3005: “Eco-Enterprise Innovation Awards and the Business Case for Sustainability”, featuring Jeff Henley, Oracle’s Chairman of the Board, and Jon Chorley, Chief Sustainability Officer. Oracle will honor select customers with Oracle’s Eco-Enterprise Innovation award, which recognizes customers and their respective partners who rely on Oracle products to support their green business practices, reducing their environmental impact while improving business efficiencies and reducing costs. The awards presentation is followed by a panel discussion with customers and Oracle executives, who describe how these award-winning organizations are embracing environmental initiatives as a central part of their business strategy and how information technology plays a pivotal role. Next, at 11:45 a.m. – 12:45 p.m. in Telegraph Hill, attend our session devoted to exploring Product Lifecycle Management’s role in Software Lifecycle Management. This is a thought-leadership session with Oracle experts in the field on the importance of change management, and we’ll discuss how Oracle has for years leveraged Agile PLM to develop Agile PLM. If software lifecycle management doesn’t apply to your business, or you’d rather engage in some lively one-on-one discussions, we also have a “Supply Chain Meet the Experts” session in Moscone West Room 2001A. Product experts, thought leaders, and executives will be on hand to discuss your questions/topics, so come prepared. This session tends to fill up fast, so try to get in early. At 1:15 – 2:15 p.m., join us back in Telegraph Hill for a session focused on leveraging the Agile Product Portfolio Management application as the Product Development Master Schedule to improve efficiencies, optimize resources, and gain visibility across projects enterprise-wide to improve portfolio profitability. Customer presenters from Broadcom will explain how they’ve leveraged the product to enable a master schedule with enterprise-level, phase-gate program and project collaboration and resource optimization. Again in Telegraph Hill, from 3:30 – 4:30 p.m., we have an interesting session with leading semiconductor customer LSI and partner Kalypso on how LSI leveraged Agile PLM to advance from homegrown applications to complete Product Value Chain Management. That type of transition can be challenging, and LSI details how they were able to achieve their goals and the value they gained along the journey – a fascinating account for any company interested in leveraging best practices to innovate their business processes and even end products. Lastly, we’ll wrap up in Telegraph Hill from 5:00 – 6:00 p.m.
    with a session on “Ensuring New Product Success by Achieving Excellence in New Product Introduction.” This is a cross-industry session, guaranteed to deliver insight into the often elusive practice of creating winning products, and one we’re very excited about. According to IDC Manufacturing Insights analyst Joe Barkai, “Product failures are not necessarily a result of bad ideas…they are a result of suboptimal decisions.” We’ll show you how to wire your business processes to enhance decision-making and maximize product potential. Now, quickly hit your hotel room to freshen up and then catch one of the many complimentary shuttles to the much-anticipated Oracle Customer Appreciation Event on Treasure Island. We have a very exciting show planned – check out what’s in store here.

    Thursday, October 4
    PLM has a light schedule on Thursday this year with just one session, but this again is one of our best sessions on managing the Product Value Chain: at 11:15 a.m. – 12:15 p.m. in Telegraph Hill, it’s a customer- and partner-driven session with Sonoco Products and Deloitte telling their story about how to achieve integrated change control by interfacing Agile PLM with Oracle E-Business Suite. Sonoco Products, a global manufacturer of consumer and industrial packaging materials, with its systems integrator, Deloitte, is doing this by implementing prebuilt integration (Oracle Design-to-Release Integration Pack for Agile Product Lifecycle Management for Process and Oracle Process) to integrate Agile with Oracle Product Hub/Oracle Product Information Management and Oracle E-Business Suite. This session presents a case study of how Sonoco is leveraging this solution to improve data quality and build a framework for stronger master data governance. Even though that ends our PLM line-up at OpenWorld, there will still be many sessions and activities at the conference, so visit the Oracle OpenWorld website to review agendas and build your schedule. And of course, download and bring this guide and the latest version of the Agile PLM Focus-On Document (available soon!). San Francisco is a wonderful city to explore, and we’re glad you’re considering joining the Agile PLM team at Oracle OpenWorld! I hope to see you there! Follow me before the conference and on site for real-time updates about #OOW12 on Twitter @Kerrie_Foy or @AgilePLM.

    Read the article

  • CodePlex Daily Summary for Monday, October 01, 2012

    CodePlex Daily Summary for Monday, October 01, 2012

    Popular Releases

    - D3 Loot Tracker: 1.4.1: This version will automatically save a recording session on application exit if the user didn't stop the current session.
    - CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor (1.2.1001.1): What's new: ability to hide the currently selected tab; the password in the connection file is now encrypted; Esc and Enter keys are now supported in all dialogs; extended width of the Prefix and Id fields in the New Button and New Group dialogs; minor UI enhancements for the Hide Button button and for the View XML button and dialog; file/assembly versioning for the executable file. Notes: existing plain-text password will be automatically encrypt...
    - Untangler: Untangler: First version of the Untangler application available now.
    - SubExtractor: Release 1029: Feature: added option to make i and ¡ characters movie-specific for improved OCR on Spanish subs (Special Characters tab in Options). Feature: allow switching to the Word Spacing dialog directly from the Spell Check dialog. Fix: added more default word spacings for accented characters. Fix: changed the Word Spacing dialog to show all OCR'd characters in the current sub. Fix: removed application focus grab during OCR. Fix: tightened HD subs fuzzy logic to reduce false matches in small characters. Fix: improved Arrow k...
    - MCEBuddy 2.x: MCEBuddy 2.2.18: Recommended download. Changelog for 2.2.18 (32bit and 64bit): 1. Added support for checking if Showanalyzer has hung and cancelling it. 2. New version of comskip, 0.81.48. 3. Speeding up comskip. 4. Fixed a build bug in 64bit 2.2.17. 5. Added a new comskip.ini with better, less aggressive commercial detection for international channels; the old one has been retained as comskip_old.ini. 6. Added support for Audio Offset on the Conversion Task page in the GUI (this overrides the profile's AudioDelay when specified).
    - Mugen Injection: Mugen Injection 3.0: Added a generic version of the fluent syntax. Big changes in the fluent syntax. Added support to resolve parameters: Factory: AnyCustomFunc{T}, AnyCustomFunc{IEnumerable<IInjectionParameters>,T}, AnyCustomFunc{IDictionary<string, object>,T}, AnyCustomFunc{IEnumerable<IInjectionParameters>,IDictionary<string, object>,T}; Lazy: AnyCustomLazy<T>; Binding metadata: ISetting. Changed binding builders, now you can configure them through the string settings. Increased performance. F...
    - Aggravation: Version 1.0: This version 1.0 release is pretty stable. You need the Silverlight 4 runtime, developer tools, and Expression Blend 4 installed.
    - Tube++: Tube++ 4.0.0.43: Fix parser for downloading and add new default slate gray style.
    - Readable Passphrase Generator: KeePass Plugin 0.7.1: See the KeePass Plugin Step By Step Guide for instructions on how to install the plugin. Changes: built against KeePass 2.20.
    - Windows 8 Toolkit - Charts and More: Beta 1.0: The first compiled version of my library.
    - CatchThatException: Release 1.0: This is the first release for the CatchThatException library, at 29-9-2012.
    - CardPlay: a Solitaire Framework for .Net: 0.3.0: Added 5 games: Blockade, Chessboard, Colorado, Sly Fox, Twenty.
    - UltraFluid Modeling Suite: Beta 1: This release is experimental but contains a lot of features like XML serialization, multiple selection, and clipboard copy/cut/paste on the context menu. There are debug and release (recommended default) versions.
    - PDF.NET: PDF.NET.Ver4.5-OpenSourceCode: PDF.NET Ver4.5 open source code, including a sample Web application project. (The Chinese portion of the description is garbled in the source feed.)
    - Visual Studio Icon Patcher: Version 1.5.2: This version contains no new images from v1.5.1. Contains the following improvements: better support for detecting the installed languages; the extract & inject commands won't run if Visual Studio is running; you may now run in extract or inject mode; the p/invoke code was cleaned up based on Code Analysis recommendations; when a p/invoke method fails, the Win32 error message is now displayed; error messages use red text; status messages use green text.
    - ZXing.Net: ZXing.Net 0.9.0.0: On the way to a release 1.0, the API should be stable now with this version. Sync with rev. 2393 of the Java version; improved API; better Unity support; Windows RT binaries; Windows CE binaries; new Windows Service demo; new WPF demo. WindowsCE hotfix: fixes an error with ISO8859-1 encoding and scanning of QR codes. The hotfix is only needed for the WindowsCE platform.
    - C.B.R. : Comic Book Reader: CBR 0.7: Synthesis since 0.6: ePUB: complete refactoring; added a new dedicated feed viewer for OPDS streams; PDF conversion: improved with image merge; made all backstage panels scrollable; integrated the new AvalonDock 2 library; support for multi-document; Library explorer and Table of content are now toolboxes; designer for dynamic books is now MVVM and much better; new BrowserForControl; customized XPS viewer to suppress toolbars and bind it to cbr commands; added quick start manual and button ...
    - sosoft: sosoft source: sosoft source, includes alarm clock.
    - Rawr: Rawr 5.0.0: This is the downloadable WPF version of Rawr! For the web-based version see http://elitistjerks.com/rawr.php. You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes. Rawr Addon (NOT UPDATED YET FOR MOP): we now have a Rawr official addon for in-game exporting and importing of character data, hosted on Curse. The addon does not perform calculations like Rawr; it simply shows your exported Rawr data in WoW tooltips and lets you export your character to Rawr (including ba...
    - Coevery - Free CRM: Coevery 1.0.0.26: The zh-CN issue has been solved. We also added a project management module.

    New Projects

    - AmazonGlacierGUI - GUI Client for Amazon Glacier | WPF: a GUI client for Amazon Glacier, written in .NET 4, C#, WPF, and MVVM, with RavenDB for storage
    - Athene: UML editor.
    - Basic Helpdesk Application: Develop a simple application to be used in a network environment to aid support staff while working remotely with users to resolve problems on their local PC.
    - CAPTCHA Solver: CSolver is a CAPTCHA solver for simple CAPTCHAs.
    - ComicExt: ComicExt works with CBZ files: decompress them into a folder of the same name and convert them into zip files, and vice versa, from a single folder.
    - Copy Your Table Storage: Tool to permit copying table storage
    - dotCMS: a content management system based on BlogEngine.NET and Ext.NET
    - ihongma??: (Chinese description garbled in the source feed)
    - Liam F: Basic mathematical functions implemented using AVX/AVX2
    - MVC Solution: MVC Solution, including many MVC best practices
    - opengltest: OpenGL test
    - sample project: Testing...
    - Sériethèque: A small TV-show organizer with a viewed flag, a link to addic7ed.com, file renaming, etc.
    - SHOIC (SUPER HIGH ORBIT ION CANION): SHOIC (Super High Orbit Ion Canion) is an application derived from LOIC (Low Orbit Ion Canion)
    - SignalRServerTry: SignalR server try
    - Silverlight Datagrid Plus: Silverlight Datagrid Plus adds basic features to the Silverlight toolkit datagrid such as grouping, filtering, new records, and formatting of the datagrid.
    - Sitecore Modules: This is a home for all the Sitecore Shared Source modules distributed by 5 Limes. Over time we expect this solution to contain several projects.
    - Spiritual Chanting Utility: A quick media loop utility
    - SQL Web Studio Pad: A simple web-based SQL management IDE for managing database objects
    - Stockmarket Simulation: Stockmarket simulation
    - test_proj: ok test
    - TISSEAN.NET: This project is based on TISEAN 3.0.1 Nonlinear Time Series Analysis, by Rainer Hegger, Holger Kantz, and Thomas Schreiber, with major contributions by Eckehard Olbrich
    - Untangler: Class hierarchy viewer for .Net assemblies
    - Valarmathi Hospital: Patient management.
    - Windows 8 App For SharePoint Online: Windows 8 app for SharePoint 2013 Online
    - WPF Anti-DataBinding Anti-DataTemplate Extensions: Offers a pure C# code-behind (no XAML) approach to working with WPF controls, starting with the WPF ListBox.
    - Yandex Positions Parser: (Russian description garbled in the source feed)
    - YunCMS: A simple CMS for building a portal site.

    Read the article

  • Data Integration 12c Raising the Big Data Roof at Oracle OpenWorld

    - by Tanu Sood
    Author: Dain Hansen, Director, Oracle

    It was an exciting OpenWorld 2013 for us in the Data Integration track. Our theme this year was all about ‘being future ready’ - previewing one of our biggest releases this year: Oracle Data Integration 12c. Just this week we followed up on this preview by announcing the general availability of the 12c release for Oracle’s key data integration products: Oracle Data Integrator 12c and Oracle GoldenGate 12c. The new release delivers extreme performance, increases IT productivity, and simplifies deployment, while helping IT organizations keep pace with new data-oriented technology trends including cloud computing, big data analytics, and real-time business intelligence.

    Mark Hurd's keynote on day one set the tone for the Data Integration sessions. Mark focused on big data analytics and changing consumer expectations. Real-time insight, especially, is a key theme for Oracle overall and for the data integration products. In Mark Hurd's keynote we heard from key customers, such as Airbus and Thomson Reuters, how real-time analysis of operational data, including machine data, creates value - in some cases it even saves lives. Thomas Kurian gave a deeper look into Oracle's big data and fast data solutions. In the lead Data Integration track session, Brad Adelberg, VP of Development, presented Oracle’s Data Integration 12c product strategy based on key trends from the opening OpenWorld keynotes. Brad talked about how Oracle's data integration products address the new data integration requirements that evolved with cloud computing, big data, and changing consumer expectations, and how they set the key themes in our products’ road map. Brad explained why and how fast time to value, high performance, and future-ready solutions are the top focus areas for product development. If you were not able to attend OpenWorld or this session, I recommend reading the white paper: Five New Data Integration Requirements and How to Meet them with Oracle Data Integration, which provides an in-depth look into how Oracle addresses the new trends in the DI market. Following Brad’s session, Nick Wagner provided an in-depth review of Oracle GoldenGate’s latest features and roadmap. Nick discussed how Oracle GoldenGate’s tight integration with Oracle Database sets the product apart from the competition. We also heard that heterogeneity is still a major focus for GoldenGate’s development, and there will be more news on that front when there is a major release.
    After GoldenGate’s product strategy session, Denis Gray from the PM team presented Oracle Data Integrator’s product strategy session, talking about the latest and greatest on ODI. Another good session was delivered by long-time GoldenGate users, Comcast. Jason Hurd and Amit Patel of Comcast talked about the various use cases in which they deploy Oracle GoldenGate throughout their enterprise, from database upgrades and feeding reporting systems to active-active database synchronization. The Comcast team shared many good tips on how to use GoldenGate for both zero-downtime upgrades and active-active replication with conflict management requirements. One of our other important goals this year for the Data Integration track at OpenWorld was hearing from our customers. We ended day 1 on just that, with a wonderful award ceremony for the Oracle Excellence Awards for Oracle Fusion Middleware Innovation. The ceremony was held in the Yerba Buena Center for the Arts. Congratulations to Royal Bank of Scotland and Yalumba Wine Company, the winners in the Data Integration category. You can find more information on the award and the winners in our previous blog post: 2013 Oracle Excellence Awards for Fusion Middleware Innovation… Selected for their innovative use of Oracle’s Data Integration products, the winners for the Data Integration category are Royal Bank of Scotland and The Yalumba Wine Company. Congratulations!!! Royal Bank of Scotland’s Market and International Banking division provides clients across the globe with seamless trading and competitive pricing, underpinned by a deep knowledge of risk management across the full spectrum of financial products. They handle millions of transactions daily to keep the lifeblood of their clients’ businesses flowing – whether through payment management solutions or through bespoke trade finance solutions. Royal Bank of Scotland is leveraging Oracle GoldenGate and Oracle Data Integrator along with Oracle Business Intelligence Enterprise Edition and the Oracle Database for a variety of solutions. Mainly, Oracle GoldenGate and Oracle Data Integrator are used to feed their data warehouse – providing a real-time data integration solution that feeds transactional data to their analytics system in minutes, enabling improved decision making with timely, accurate data for their business users. Oracle Data Integrator’s in-database transformation capabilities and its ability to integrate with Oracle GoldenGate for real-time data capture are the foundation of this implementation. This solution makes it such that changes happening in the operational system are available in the analytics system the same day they are deployed, with 100% data quality guaranteed. Additionally, the solution has helped to reduce their operational database size from 150GB to 10GB. Impressive! Now what if I told you this solution was built in 3 months and had a less than 6 month return on investment? That’s outstanding! The Yalumba Wine Company is situated in the Barossa Valley of Australia.
    It is the oldest family-owned winery in Australia, with a unique way of aging their wines in specially crafted 100-liter barrels. Did you know that “Yalumba” is Aboriginal for “all the land around”? The Yalumba Wine Company is growing rapidly, and was in need of introducing a more modern standard to the existing manufacturing processes to meet globalization demands, overall time-to-market, and better operational efficiency objectives of product development. The Yalumba Wine Company worked with a partner, Bristlecone, to develop a unique solution whereby Oracle Data Integrator is leveraged to pull data from Salesforce.com and JD Edwards, in addition to their other pre-existing source systems, for consumption into their data warehouse. They have emphasized the overall ease of developing integration workflows with Oracle Data Integrator. The solution has brought better visibility for the business users, shorter data loading and transformation times for their data warehouse with rapid incorporation of new data sources, and a solid future-proof foundation for their organization. Moving forward, they plan on leveraging more from Oracle’s Data Integration portfolio. Terrific! In addition to these two customers, on Tuesday we featured many other important Oracle Data Integrator and Oracle GoldenGate customers. On Tuesday the GoldenGate panel included Land O’Lakes, Smuckers, and Veolia Water. Besides giving us yummy nutrition and healthy water, these companies have another aspect in common: they all use GoldenGate to boost their ERP applications. Please read the recap by Irem Radzik. On Wednesday, the ODI panel included Barry Ralston and Ryan Weber of Infinity Insurance, Paul Stracke of Paychex Inc., and Ian Wall of Vertex Pharmaceuticals, for a session filled with interesting projects, use cases, and approaches to leveraging Oracle Data Integrator. Please read the recap by Sandrine Riley for more. Thanks to everyone who joined us, and we hope to stay connected! To hear more about our Data Integration 12c products, join us in an upcoming webcast. Follow us at www.twitter.com/ORCLGoldenGate or go to our website at www.oracle.com/goto/dataintegration

    Read the article

  • Why does an error appear every time I try to open the Ubuntu Software Center? [duplicate]

    - by askubuntu7639
    This question already has an answer here: How do I remove a broken software source? 3 answers
    There is a glitch with the Ubuntu Software Center: whenever I open it, an error appears, and it keeps loading and never opens. Why does this happen? I have installed Ubuntu 13.04 on a disk and partitioned it. Please help me, and ask for any extra information if you need it. If you know of any duplicates please show me them!! This is the output someone asked me for:
    SystemError: E:Type '<!DOCTYPE' is not known on line 1 in source list /etc/apt/sources.list.d/medibuntu.list
    This next output is the output of cat /etc/apt/sources.list.d/medibuntu.list
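    The SystemError above names the culprit: apt is choking on /etc/apt/sources.list.d/medibuntu.list, which evidently begins with an HTML '<!DOCTYPE' tag instead of valid source lines. A minimal sketch of the usual fix from the linked duplicate - removing the broken source file - assuming you no longer need the defunct Medibuntu repository:

        # inspect the offending file first
        cat /etc/apt/sources.list.d/medibuntu.list
        # remove the broken source list, then refresh the package indexes
        sudo rm /etc/apt/sources.list.d/medibuntu.list
        sudo apt-get update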

    Read the article

  • Developing a SQL Server Function in a Test-Harness.

    - by Phil Factor
    /* Many times, it is a lot quicker to take some pain up-front and make a proper development/test harness for a routine (function or procedure) rather than think 'I'm feeling lucky today!'. Then, you keep code and harness together from then on. Every time you run the build script, it runs the test harness too. The advantage is that, if the test harness persists, then it is much less likely that someone, probably 'you-in-the-future', unintentionally breaks the code. If you store the actual code for the procedure as well as the test harness, then it is likely that any bugs in functionality will break the build rather than introduce subtle bugs later on that could even slip through testing and get into production.

    This is just an example of what I mean. Imagine we had a database that was storing addresses with embedded UK postcodes. We really wouldn't want that. Instead, we might want the postcode in one column and the address in another. In effect, we'd want to extract the entire postcode string and place it in another column. This might be part of a table refactoring, or it could easily be part of a process of importing addresses from another system. We could easily decide to do this with a function that takes in a table as its parameter and produces a table as its output. This is all very well, but we'd need to work on it, and test it whenever we make an alteration. By its very nature, a routine like this either works very well or horribly, but there is every chance that you might introduce subtle errors by fiddling with it, and if young Thomas, the rather cocky developer who has just joined, touches it, it is bound to break.

    Right: we drop the function we're developing and re-create it. This is so we avoid the problem of having to change CREATE to ALTER when working on it. */
    IF EXISTS(SELECT * FROM sys.objects WHERE name LIKE 'ExtractPostcode'
                                         AND schema_name(schema_ID)='dbo')
        DROP FUNCTION dbo.ExtractPostcode
    GO

    /* we drop the user-defined table type and recreate it */
    IF EXISTS(SELECT * FROM sys.types WHERE name LIKE 'AddressesWithPostCodes'
                                       AND schema_name(schema_ID)='dbo')
      DROP TYPE dbo.AddressesWithPostCodes
    GO
    /* we drop the other user-defined table type and recreate it */
    IF EXISTS(SELECT * FROM sys.types WHERE name LIKE 'OutputFormat'
                                       AND schema_name(schema_ID)='dbo')
      DROP TYPE dbo.OutputFormat
    GO

    /* and now create the table type that we can use to pass the addresses to the function */
    CREATE TYPE AddressesWithPostCodes AS TABLE
    (
      AddressWithPostcode_ID INT IDENTITY PRIMARY KEY, --because they work better that way!
      Address_ID INT NOT NULL, --the address we are fixing
      TheAddress VARCHAR(100) NOT NULL --The actual address
    )
    GO
    CREATE TYPE OutputFormat AS TABLE
    (
      Address_ID INT PRIMARY KEY, --the address we are fixing
      TheAddress VARCHAR(1000) NULL, --The actual address
      ThePostCode VARCHAR(105) NOT NULL -- The Postcode
    )
    GO
    CREATE FUNCTION ExtractPostcode(@AddressesWithPostCodes AddressesWithPostCodes READONLY)
    /**
    summary: >
      This table-valued function takes a table type as a parameter, containing a table of addresses along with their integer IDs. Each address has an embedded postcode somewhere in it, but not consistently in a particular place. The routine takes out the postcode and puts it in its own column, passing back a table where the integer key is accompanied by the address without the (first) postcode, and the postcode. If no postcode is found, then the address is returned unchanged and the postcode will be a blank string.
    Author: Phil Factor
    Revision: 1.3
    date: 20 May 2014
    example:
      - code:
    returns: >
      Table of Address_ID, TheAddress and ThePostCode.
    **/
    RETURNS @FixedAddresses TABLE
      (
      Address_ID INT, --the address we are fixing
      TheAddress VARCHAR(1000) NULL, --The actual address
      ThePostCode VARCHAR(105) NOT NULL -- The Postcode
      )
    AS -- body of the function
    BEGIN
    DECLARE @BlankRange VARCHAR(10)
    SELECT @BlankRange = CHAR(0)+'- '+CHAR(160)
    INSERT INTO @FixedAddresses(Address_ID, TheAddress, ThePostCode)
    SELECT Address_ID,
           CASE WHEN start>0
                THEN REPLACE(STUFF([Theaddress],start,matchlength,''),'  ',' ')
                ELSE TheAddress END AS TheAddress,
           CASE WHEN Start>0
                THEN SUBSTRING([Theaddress],start,matchlength-1)
                ELSE '' END AS ThePostCode
    FROM (--we have a derived table with the results we need for the chopping
      SELECT MAX(PATINDEX([matched],' '+[Theaddress] COLLATE SQL_Latin1_General_CP850_Bin)) AS start,
             MAX(CASE WHEN PATINDEX([matched],' '+[Theaddress] COLLATE SQL_Latin1_General_CP850_Bin)>0
                      THEN TheLength ELSE 0 END) AS matchlength,
             MAX(TheAddress) AS TheAddress,
             Address_ID
      FROM (SELECT --first the match, then the length. There are three possible valid matches
              '%['+@BlankRange+'][A-Z][0-9] [0-9][A-Z][A-Z]%', 7 --seven-character postcode
            UNION ALL SELECT '%['+@BlankRange+'][A-Z][A-Z0-9][A-Z0-9] [0-9][A-Z][A-Z]%', 8
            UNION ALL SELECT '%['+@BlankRange+'][A-Z][A-Z][A-Z0-9][A-Z0-9] [0-9][A-Z][A-Z]%', 9)
           AS f(Matched,TheLength)
      CROSS JOIN @AddressesWithPostCodes
      GROUP BY [address_ID]
      ) WORK;
    RETURN
    END
    GO
    ---------------------------- end of the function ----------------------------

    IF NOT EXISTS (SELECT * FROM sys.objects WHERE name LIKE 'ExtractPostcode')
      BEGIN
      RAISERROR ('There was an error creating the function.',16,1)
      RETURN
      END

    /* now the job is only half done, because we need to make sure that it works. So we now load our sample data, making sure that for each sample we have what we actually think the output should be. */
    DECLARE @InputTable AddressesWithPostCodes
    INSERT INTO @InputTable(Address_ID,TheAddress)
    VALUES(1,'14 Mason mews, Awkward Hill, Bibury, Cirencester, GL7 5NH'),
    (2,'5 Binney St      Abbey Ward    Buckinghamshire      HP11 2AX UK'),
    (3,'BH6 3BE 8 Moor street, East Southbourne and Tuckton W     Bournemouth UK'),
    (4,'505 Exeter Rd,   DN36 5RP Hawerby cum BeesbyLincolnshire UK'),
    (5,''),
    (6,'9472 Lind St,    Desborough    Northamptonshire NN14 2GH  NN14 3GH UK'),
    (7,'7457 Cowl St, #70      Bargate Ward  Southampton   SO14 3TY UK'),
    (8,'''The Pippins'', 20 Gloucester Pl, Chirton Ward,   Tyne & Wear   NE29 7AD UK'),
    (9,'929 Augustine lane,    Staple Hill Ward     South Gloucestershire      BS16 4LL UK'),
    (10,'45 Bradfield road, Parwich   Derbyshire    DE6 1QN UK'),
    (11,'63A Northampton St,   Wilmington    Kent   DA2 7PP UK'),
    (12,'5 Hygeia avenue,      Loundsley Green WardDerbyshire    S40 4LY UK'),
    (13,'2150 Morley St,Dee Ward      Dumfries and Galloway      DG8 7DE UK'),
    (14,'24 Bolton St,   Broxburn, Uphall and Winchburg    West Lothian  EH52 5TL UK'),
    (15,'4 Forrest St,   Weston-Super-Mare    North Somerset       BS23 3HG UK'),
    (16,'89 Noon St,     Carbrooke     Norfolk       IP25 6JQ UK'),
    (17,'99 Guthrie St,  New Milton    Hampshire     BH25 5DF UK'),
    (18,'7 Richmond St,  Parkham       Devon  EX39 5DJ UK'),
    (19,'9165 laburnum St,     Darnall Ward  Yorkshire, South     S4 7WN UK')

    DECLARE @OutputTable OutputFormat --the table of what we think the correct results should be
    DECLARE @IncorrectRows OutputFormat --done for error reporting

    --here is the table of what we think the output should be, along with a few edge cases.
    INSERT INTO @OutputTable(Address_ID,TheAddress, ThePostcode)
        VALUES
            (1, '14 Mason mews, Awkward Hill, Bibury, Cirencester, ','GL7 5NH'),
            (2, '5 Binney St   Abbey Ward    Buckinghamshire      UK','HP11 2AX'),
            (3, '8 Moor street, East Southbourne and Tuckton W    Bournemouth UK','BH6 3BE'),
            (4, '505 Exeter Rd,Hawerby cum Beesby   Lincolnshire UK','DN36 5RP'),
            (5, '',''),
            (6, '9472 Lind St,Desborough    Northamptonshire NN14 3GH UK','NN14 2GH'),
            (7, '7457 Cowl St, #70    Bargate Ward  Southampton   UK','SO14 3TY'),
            (8, '''The Pippins'', 20 Gloucester Pl, Chirton Ward,Tyne & Wear   UK','NE29 7AD'),
            (9, '929 Augustine lane,  Staple Hill Ward     South Gloucestershire      UK','BS16 4LL'),
            (10, '45 Bradfield road, ParwichDerbyshire    UK','DE6 1QN'),
            (11, '63A Northampton St,Wilmington    Kent   UK','DA2 7PP'),
            (12, '5 Hygeia avenue,    Loundsley Green WardDerbyshire    UK','S40 4LY'),
            (13, '2150 Morley St,     Dee Ward      Dumfries and Galloway      UK','DG8 7DE'),
            (14, '24 Bolton St,Broxburn, Uphall and Winchburg    West Lothian  UK','EH52 5TL'),
            (15, '4 Forrest St,Weston-Super-Mare    North Somerset       UK','BS23 3HG'),
            (16, '89 Noon St,  Carbrooke     Norfolk       UK','IP25 6JQ'),
            (17, '99 Guthrie St,      New Milton    Hampshire     UK','BH25 5DF'),
            (18, '7 Richmond St,      Parkham       Devon  UK','EX39 5DJ'),
            (19, '9165 laburnum St,   Darnall Ward  Yorkshire, South     UK','S4 7WN')

    INSERT INTO @IncorrectRows(Address_ID,TheAddress, ThePostcode)
        SELECT Address_ID,TheAddress,ThePostCode FROM dbo.ExtractPostcode(@InputTable)
        EXCEPT
        SELECT Address_ID,TheAddress,ThePostCode FROM @OutputTable;
    IF @@RowCount>0
        BEGIN
        PRINT 'The following rows gave unexpected results:';
        SELECT Address_ID,TheAddress,ThePostCode FROM @IncorrectRows
        RAISERROR ('These rows gave unexpected results.',16,1);
        END

    /* For tear-down, we drop the user-defined table type */
    IF EXISTS(SELECT * FROM sys.types WHERE name LIKE 'OutputFormat'
                                       AND schema_name(schema_ID)='dbo')
      DROP TYPE dbo.OutputFormat
    GO
    /* Once this is working, the development work turns from a chore into a delight, and one ends up hitting Execute so much more often to catch mistakes as soon as possible. It also prevents a wildly-broken routine getting into a build! */

    Read the article

  • 3Ware 9650SE RAID-6, two degraded drives, one ECC, rebuild stuck

    - by cswingle
    This morning I came in the office to discover that two of the drives on a RAID-6, 3ware 9650SE controller were marked as degraded and it was rebuilding the array. After getting to about 4%, it got ECC errors on a third drive (this may have happened when I attempted to access the filesystem on this RAID and got I/O errors from the controller). Now I'm in this state:

    > /c2/u1 show
    Unit     UnitType  Status       %RCmpl  %V/I/M  Port  Stripe  Size(GB)
    ------------------------------------------------------------------------
    u1       RAID-6    REBUILDING   4%(A)   -       -     64K     7450.5
    u1-0     DISK      OK           -       -       p5    -       931.312
    u1-1     DISK      OK           -       -       p2    -       931.312
    u1-2     DISK      OK           -       -       p1    -       931.312
    u1-3     DISK      OK           -       -       p4    -       931.312
    u1-4     DISK      OK           -       -       p11   -       931.312
    u1-5     DISK      DEGRADED     -       -       p6    -       931.312
    u1-6     DISK      OK           -       -       p7    -       931.312
    u1-7     DISK      DEGRADED     -       -       p3    -       931.312
    u1-8     DISK      WARNING      -       -       p9    -       931.312
    u1-9     DISK      OK           -       -       p10   -       931.312
    u1/v0    Volume    -            -       -       -     -       7450.5

    Examining the SMART data on the three drives in question, the two that are DEGRADED are in good shape (PASSED without any Current_Pending_Sector or Offline_Uncorrectable errors), but the drive listed as WARNING has 24 uncorrectable sectors. And, the "rebuild" has been stuck at 4% for ten hours now.

    So: How do I get it to start actually rebuilding? This particular controller doesn't appear to support /c2/u1 resume rebuild, and the only rebuild command that appears to be an option is one that wants to know what disk to add (/c2/u1 start rebuild disk=<p:-p...> [ignoreECC] according to the help). I have two hot spares in the server, and I'm happy to engage them, but I don't understand what it would do with that information in the current state it's in.

    Can I pull out the drive that is demonstrably failing (the WARNING drive), when I have two DEGRADED drives in a RAID-6? It seems to me that the best scenario would be for me to pull the WARNING drive and tell it to use one of my hot spares in the rebuild. But won't I kill the thing by pulling a "good" drive in a RAID-6 with two DEGRADED drives?

    Finally, I've seen reference in other posts to a bad bug in this controller that causes good drives to be marked as bad and that upgrading the firmware may help. Is flashing the firmware a risky operation given the situation? Is it likely to help or hurt with regard to the rebuilding-but-stuck-at-4% RAID? Am I experiencing this bug in action? Advice outside the spiritual would be much appreciated. Thanks.
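    For reference, the per-drive SMART data mentioned above can be read through the 3ware controller with smartctl. A quick sketch, assuming the controller appears as /dev/twa0 (the device node may differ on your system); the number after "3ware," is the port from the tw_cli output:

        # SMART data for the WARNING drive on port p9
        smartctl -a -d 3ware,9 /dev/twa0
        # compare with one of the DEGRADED drives, e.g. port p6
        smartctl -a -d 3ware,6 /dev/twa0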

    Read the article

  • An alternative to Google Talk, AIM, MSN, et al [closed]

    - by mkaito
    I'm not entirely sure whether this part of Stack Exchange is the most adequate for my question, but it would seem to me that people sharing this kind of concern would converge either here, or possibly on a more Unix-specific sub-site. Either way, here goes.

    Background
    Feel free to skip to The Question, below. This should, however, help those interested understand where I'm coming from, and where I expect to get, messaging-wise. My online talking place-to-go has been IRC for the last fifteen years. I think it's a great protocol, and the clients out there are very good. I still use, and will always continue to use, IRC for most of my chat needs. But then, there is private instant messaging. While IRC can solve this with queries and DCC chats, the protocol just isn't meant to work too well on intermittent connections, such as a mobile device, where you can often walk around places with low signal. I used MSN for a while, but didn't like it. The concept was awesome, but I think Microsoft didn't get the implementation quite right. When they started adding all that eye candy, and my buddies started flooding me with custom icons and buzzing my screen to its knees, I shut my account and told folks that missed me to just email or call me. Much whining happened, I got called many weird things for not using MSN, but folks eventually got over it. Next, Google Talk came along, and seemed to be a lot better than MSN ever was. The protocol was open, so I could use whatever client I felt a fancy for. With the advent of smart phones, I just got myself a gtalk client on the phone, and have had a really decent, integrated, mostly-universal IM solution. Over the last few months, all Google services have been feeling flaky. IMs will often arrive anywhere between twenty minutes and one hour after being sent, clients will randomly disconnect, client priorities seem to work only sometimes, and sometimes just a random device of those connected will get an IM. I think the time has come to look for greener grass.

    The Question
    It's rather hard to put what I'm looking for into precise words. I guess I just want something that is kind of like MSN/Gtalk, but that doesn't let me down when I need it. IRC is pretty much perfect, but the protocol just isn't designed to work well on mobile devices. Really, at this point I'm considering sticking to IRC for desktop messaging, and SMS/email on the phone, but I hope that in this day and age there is something better out there.

    Read the article

  • In search of a network file system with extended caching to speed up file access

    - by Brecht Machiels
    I'm running a small home server that stores my documents. The disks in this server are in a RAID 1 configuration (using Linux md) and it's also periodically backed up to an external hard drive to make sure I don't lose them. However, I'm always accessing the files from other computers on the home network using an SMB share, and this results in a considerable speed penalty (especially when connected over WLAN). This is quite annoying when editing large files, such as digital camera RAWs, for example. I've been looking for a solution to this problem. It would have to offer some kind of local caching to speed up the file access. The client would preferably not keep a copy of all data on the server, as it consists of a very large collection of photographs, most of which I will not access frequently. Instead, it should only cache the accessed files and sync the changes back in the background. Ideally, it would also do some smart read-ahead (cache the files that are in the same directory as the currently opened file, for example), but I suppose that's asking a bit much. Synchronization should be automatic (on file change). Conflicting file changes (at the same time on different clients) are unlikely to happen in my use case, but I would prefer if they are handled properly (notification to the user). I've come across the following options so far:

    - Something similar to Dropbox. iFolder seems to be the only thing that comes close, but its reputation (stability) and requirements put me off.
    - A distributed file system such as OpenAFS. I'm not sure this will speed up file access, and it is probably overkill for what I need.
    - Maybe NFS or even Samba offer these possibilities. I read a bit about Windows' Offline Files, but its operation seems limited (at least on Windows XP).

    As this is just for personal use, I'm not willing to spend a lot of money. A free solution would be preferred. Also, the server needs to run on Linux, and I need a client for at least Windows.
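    If a Linux client were acceptable for the heavy editing work, the NFS route mentioned above can get part of the way there today: the kernel's FS-Cache facility, fronted by the cachefilesd daemon, keeps a persistent local cache of only the files actually accessed (no full replica, though also no read-ahead of sibling files and no offline sync). A minimal sketch, where the export path, host name, and mount point (server:/export/photos, /mnt/photos) are made-up placeholders, assuming cachefilesd is installed and running on the client:

        # mount the share with the 'fsc' option to enable FS-Cache
        sudo mount -t nfs -o fsc server:/export/photos /mnt/photos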

    Read the article

  • Block Google requests to 16k using pf firewall

    - by atmosx
    I'd like to block access to Google search using PF after a threshold of 17500 requests (connections established) in 24h, from a host running FreeBSD 9. What I came up with, after reading the pf-faq, is this rule:

    pass out on $net proto tcp from any to 'www.google.com' port www flags S/SA keep state (max-src-conn 200, max-src-conn-rate 17500/86400)

    NOTE: 86400 is 24h in seconds. The rule should work, but PF is smart enough to know that www.google.com resolves to 5 different IPs. So my pfctl -sr output gives me this:

    pass out on vte0 inet proto tcp from any to 173.194.44.81 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
    pass out on vte0 inet proto tcp from any to 173.194.44.82 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
    pass out on vte0 inet proto tcp from any to 173.194.44.83 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
    pass out on vte0 inet proto tcp from any to 173.194.44.80 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
    pass out on vte0 inet proto tcp from any to 173.194.44.84 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)

    PF creates 5 different rules, one for each IP that Google resolves to. However I have the sense - without being 100% sure, since I didn't have the chance to test it - that the 17500/86400 limit applies to each IP separately. If that's the case - please confirm - then it's not what I want. In pf-faq there's another option called source-track:

    source-track: This option enables the tracking of number of states created per source IP address. This option has two formats:
    + source-track rule - The maximum number of states created by this rule is limited by the rule's max-src-nodes and max-src-states options. Only state entries created by this particular rule count toward the rule's limits.
    + source-track global - The number of states created by all rules that use this option is limited. Each rule can specify different max-src-nodes and max-src-states options, however state entries created by any participating rule count towards each individual rule's limits. The total number of source IP addresses tracked globally can be controlled via the src-nodes runtime option.

    I tried to apply source-track global in the above rule without success. How can I use this option in order to achieve my goal? Any thoughts or comments are more than welcome, since I'm an amateur and don't fully understand PF yet. Thanks
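    One way to sidestep the per-IP rule expansion entirely is to put the hostname in a table: pf resolves table hostnames once at ruleset load, and a single rule (with a single set of limits) then covers all of the resolved addresses. A hedged sketch along those lines - the table name is made up, $net is the existing macro from the question, and note that the resolved addresses can go stale as Google rotates IPs, so the ruleset would need periodic reloading:

        table <google> { www.google.com }
        pass out on $net proto tcp from any to <google> port www \
            flags S/SA keep state (source-track global, max-src-conn 200, max-src-conn-rate 17500/86400)

    With source-track global in place of the default source-track rule, state counts are shared across all rules that carry the option, which is the behavior the pf-faq excerpt above describes.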

    Read the article

  • Ubuntu Newbie Needs Assistance!!

    - by Steve Greene
    New Ubuntu User Needs Help! - version 9.10 does not communicate with laptop Hello folks, Several days ago, I installed Ubuntu 9.10 onto my Acer Aspire 3100 laptop, running it alongside Windows Vista as a dual-bootable system. Creation of the Ubuntu boot CD went fine, and the installation onto my hard drive was flawless. Ubuntu opens and behaves as I would expect, except for one little problem. For reasons unknown to me, Ubuntu is not communicating with my laptop's networking hardware, and I have no internet connectivity, even when sitting directly under the wireless router at the local library (literally), which puts out a wickedly-fast signal that my Windows Vista OS auto-detects and immediately connects to. Up in the right side of the Ubuntu desktop, I click on the network icon and it does not show a wireless connection at all, even though I am only a few feet from the router. At home, where I use a dialup modem, I also see no means of getting online. My modem is an HDAUDIO Soft Data Fax Modem with Smart CP, manufactured by CXT (Conexant Systems Inc., file version 4.0.13.0, and the driver version is 7.58.0.0). I desperately wish to convert to Ubuntu. I used Mac for ten years, and then Windows for ten years. Now, after 20 years, I want to live out my days as an open-source Ubuntu fanatic. I am ready to give the old status quo the boot! I am an advanced computer user, but I am not a programmer. I seek a solution that is user-friendly for normal people, something equivalent to a driver that I can easily install or activate that will allow Ubuntu to see my hardware and get me connected. Can anyone help me over this hopefully-little glitch so that I can move on in total Ubuntu bliss? My processor is a Mobile AMD Sempron Processor 3500+ at 1.80 GHz, 1.50 GB RAM, and a 32-bit Operating System. I am running Windows Vista Home Basic, Service Pack 2. My current email is [email protected] if you have a workable solution that does not require programmer status to implement. Surely this must be a simple fix that I simply am overlooking, but being the new guy on the block, I have yet to be enlightened. Thanks for your help in coming up to speed!! Steve Wannabe Ubuntu Fanatic "If you're not living on the edge, you're taking up too much space."
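    A reasonable first step is identifying the wireless chipset, since Acer laptops of that era commonly shipped Broadcom chips whose restricted firmware is not enabled out of the box (an assumption to verify, not a diagnosis). From a terminal:

        lspci | grep -i -E 'network|wireless'   # identify the chipset
        sudo lshw -C network                    # shows whether a driver is bound
        rfkill list                             # kill-switch state, if rfkill is installed
        # if a Broadcom chip appears, try System > Administration > Hardware Drivers
        # to enable the restricted driver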

    Read the article

  • Troubleshooting major performance issue: Is culprit Intel RST, Hard drive, or something else?

    - by Sean Killeen
    The Setup I have the following components that come into play in this situation: ASUS P8Z68 V/PRO motherboard a RAID1 configuration (1x 1TB drive, 1 x 2TB drive -- I explain below), accelerated with an SSD using Intel's RST software, and 1 TB drive standing by as a spare. Core i7 2600k 32 GB RAM Windows 8.1 This box was designed to be a beast, and until just recently, was very good at being just that. What's Happening The system has slowed to a crawl whenever it touches the disk. Things appear to work at normal speed when dealing with memory. For example, typing this is fine, but saving it to disk from Notepad gave me a 5-7 second pause when clicking Save. The disks appear to be at 100% all the time (e.g. the disk-access light on the PC is solidly on -- not even any flashing). In ProcExp it appears that the disk is barely being utilized at all: Intel RST reports that everything is fine: Other Details Prior to this happening, RST had reported that my drives were failing (one went bad, one was throwing SMART events). This made sense; they were at the tail end of their warranty and the PC is on almost all the time. I RMA'd the drives via Seagate. In the meantime, I'd purchased a 2TB drive because I didn't realize that the 1TB drives were under warranty. I figured I'd replace the other 1 TB drive with another 2 TB when it died, but then discovered the warranty. AFAIK, I haven't done any major updates since 8.1, and it worked fine after those. Question(s) How can I troubleshoot this? What is the best way to try to figure out why the disks are being maxed out despite the OS reporting barely any disk usage and that everything is OK? Given the failures, etc. that I describe above, is it possible that the problem could be the I/O on the motherboard itself? If so, how would I even be able to diagnose it? I'm betting the drives that Seagate gave me are refurbished (didn't think to look; that's dumb). Is it possible that the same model drive, refurbished, could somehow cause this? In terms of how RAID1 works, is it possible that one drive is "falling behind" somehow, and that the RAID1 is constantly trying to fix the mirroring? If so, it seems like something Intel RST would report on, but I wanted to consider it as an option.
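    Windows 8.1 ships storage cmdlets that read latency and health counters independently of RST's own status page, which gives a second opinion before blaming the motherboard. A generic sketch (RAID volumes managed by RST may hide the member disks from these cmdlets, so results can be partial):

        # PowerShell, run as Administrator
        Get-PhysicalDisk | Format-Table FriendlyName, HealthStatus, OperationalStatus
        Get-PhysicalDisk | Get-StorageReliabilityCounter |
            Format-Table DeviceId, ReadLatencyMax, WriteLatencyMax, Temperature
        # built-in synthetic disk benchmark
        winsat disk -drive c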

    Read the article

  • Small Business Server services will not start, and remote desktop and UAC are broken

    - by Stephen Jennings
    Yesterday I began setting up a server with Windows Small Business Server 2008. All I am configuring it for right now is to be a domain controller and Exchange server. I completed the initial setup of SBS then started looking through different connection options (allowing VPN versus using a TS Gateway). After I rebooted one time, I started having three not-obviously-related issues: First, I could no longer remote desktop into the computer. I ran TCPView and saw that it was no longer listening on port 3389. I checked everything in Terminal Service Configuration but everything shows the computer ought to be allowing connections. Also, when I tried to use anything that required user account control elevation, the UAC dialog never popped up and the program that was waiting just froze. If I try to run "regedit" from the Run box, for example, it never appears. When I run in safe mode which does not run with UAC, I was able to access everything. I didn't want to deal with it, so I turned off UAC and rebooted. Finally, in the Windows SBS Console, there are status indicators for Security, Updates, Backup, and Other Alerts. The first three get stuck saying "Querying". Looking in the computer alerts, I have events showing the following services stopped: Background Intelligent Transfer Service KtmRm for Distributed Transaction Coordinator Distributed Transaction Coordinator Microsoft Exchange Information Store Microsoft Exchange System Attendant Microsoft Exchange Transport Windows Remote Management Update Services Windows Update I figured I must have configured something wrong accidentally and I couldn't find anything using Google explaining what might be the case, so I just decided to format the hard drive and reinstall SBS from scratch. I did this and everything was working last night, but I just turned the machine back on and it is doing the same thing again! On my second install, I did not configure anything except the following (all from SBS Console): Connect to the Internet (set IP and router address) Turn off customer feedback. Set up internet address. Decline to use a Smart Host for email. Added one standard user account. Since this happened again and I was very careful the second time not to configure anything outside of the SBS Console, I feel like there's something else going on. Right now the machine is on an isolated network that does have internet access. My desktop is the only other machine plugged into this network. Any and all help is appreciated (before I tear my hair out!)

    Read the article

  • Ubuntu box static routing problem

    - by Rafael
    Hello, I'm trying to configure an Ubuntu server to be a router. This is my interface configuration (eth2 connects to my WAN, eth0 to my LAN): auto eth2 iface eth2 inet static address 192.168.0.249 netmask 255.255.255.0 gateway 192.168.0.1 broadcast 192.168.0.255 auto eth0 iface eth0 inet static address 192.168.100.1 netmask 255.255.255.0 This is the router's kernel routing table: Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 192.168.100.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth2 0.0.0.0 192.168.0.1 0.0.0.0 UG 100 0 0 eth2 And this is the DHCP configuration: subnet 192.168.100.0 netmask 255.255.255.0 { range 192.168.100.101 192.168.100.254; option domain-name-servers 201.70.86.133; option routers 192.168.100.1; authoritative; } I'm then connecting a Mac OS X machine by cable on eth0. This is the en0 interface configuration: en0: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500 ether 00:26:bb:5d:82:b0 inet6 fe80::226:bbff:fe5d:82b0%en0 prefixlen 64 scopeid 0x4 inet 192.168.100.101 netmask 0xffffff00 broadcast 192.168.100.255 media: autoselect (100baseTX <full-duplex>) status: active And this is the Mac's routing table: Internet: Destination Gateway Flags Refs Use Netif Expire default 192.168.100.1 UGSc 139 32 en0 10.37.129/24 link#8 UC 2 0 vnic1 10.37.129.2 0:1c:42:0:0:9 UHLWI 0 839 lo0 10.37.129.255 ff:ff:ff:ff:ff:ff UHLWbI 0 4 vnic1 10.211.55/24 link#7 UC 2 0 vnic0 10.211.55.2 0:1c:42:0:0:8 UHLWI 0 840 lo0 10.211.55.255 ff:ff:ff:ff:ff:ff UHLWbI 0 4 vnic0 127 127.0.0.1 UCS 0 0 lo0 127.0.0.1 127.0.0.1 UH 3 507924 lo0 169.254 link#4 UCS 0 0 en0 172.16.42/24 link#10 UC 2 0 vmnet8 172.16.42.1 0:50:56:c0:0:8 UHLWI 0 839 lo0 172.16.42.255 link#10 UHLWbI 1 24 vmnet8 192.168.100 link#4 UC 2 0 en0 192.168.100.1 0:e0:7c:7e:f:99 UHLWI 139 0 en0 777 192.168.100.101 127.0.0.1 UHS 0 0 lo0 192.168.100.255 ff:ff:ff:ff:ff:ff UHLWbI 0 4 en0 192.168.116 link#9 UC 2 0 vmnet1 192.168.116.1 0:50:56:c0:0:1 UHLWI 0 839 lo0 192.168.116.255 ff:ff:ff:ff:ff:ff UHLWbI 0 4 vmnet1 When I ping 192.168.100.1, it works. When I ping 192.168.0.249, it also works. However, when I try to ping 192.168.0.1 it does not. Does anyone have any idea how to solve this? Is there a way to debug it? Thanks
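    Given that both of the router's own addresses answer but nothing past it does, the likely missing pieces are IP forwarding and NAT on the Ubuntu box (or, alternatively, a static route for 192.168.100.0/24 on the upstream 192.168.0.1 router). A minimal sketch, assuming eth2 is the WAN side as described:

        # enable forwarding now and persistently
        sysctl -w net.ipv4.ip_forward=1
        echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
        # masquerade LAN traffic leaving via eth2
        iptables -t nat -A POSTROUTING -o eth2 -j MASQUERADE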

    Read the article

  • How do I combine static and dynamic DHCP leases on a Cisco router?

    - by Brad
    Basically, what I need is super similar to the unanswered Cisco forum question below: https://supportforums.cisco.com/message/3139749#3139749 I have a Cisco 850 Series router. I have configured a DHCP pool for the 10.0.0.0/24 network. I have excluded 10.0.0.1 - 10.0.0.99 from the DHCP pool. I want to add a static DHCP pool for certain devices, and I want DHCP to assign them the addresses of my choice below 100. Actually, I don't care what addresses I statically assign. They can be anything in the pool for all I care, I just want it to work. Why are you doing this? Just statically assign the IPs on the devices! I don't want to do this because I have some laptop users. They could obviously only use that static IP here. That wouldn't be a problem if they could be bothered to change a location setting or the like. They can't. So it HAS to be DHCP. It also has to be static IPs because I need to forward ports to them. I know, I know, this is weird, but it's an apartment LAN/WLAN, so this isn't exactly a typical use case. Relevant sections of config below: ip dhcp excluded-address 10.0.0.1 10.0.0.99 ! ip dhcp pool Internal-net import all network 10.0.0.0 255.255.255.0 default-router 10.0.0.1 domain-name 1770.local lease 7 ! ip dhcp pool static-pool import all origin file flash://staticmap default-router 10.0.0.1 domain-name 1770.local Contents of staticmap: *time* Aug 5 2010 09:00 AM *version* 2 !IP address Type Hardware address Lease expiration 10.0.0.100/24 1 001f.5b3e.d50a Infinite *end* You can see here I was trying addresses outside the excluded-address range to see if that would make any difference. My testing machine's MAC: mainframe:~ brad$ ifconfig en1 en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 00:1f:5b:3e:d5:0a What shows up in the DHCP binding table: basestar#show ip dhcp binding Bindings from all pools not associated with VRF: IP address Client-ID/ Lease expiration Type Hardware address/ User name 10.0.0.112 0100.1f5b.3ed5.0a Aug 12 2010 10:06 AM Automatic What's up with the funny-looking MAC in the DHCP binding table?? Is what I'm trying to accomplish basically impossible? Am I going about this the wrong way? All I want is to be able to forward some ports to specific devices. The way I would do this with a consumer router is to do what I'm trying to do here: assign static DHCP addresses to those devices, then configure PAT for ports on those addresses.
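    The funny-looking MAC is actually the key: IOS stores the client ID as the hardware address prefixed with 01 (the hardware type for Ethernet), and that prefixed form is what a manual binding matches on. The usual IOS pattern is one single-host pool per reservation rather than a file-imported static pool. A hedged sketch using an arbitrary pool name and an address from the excluded range:

        ip dhcp pool static-mainframe
           host 10.0.0.50 255.255.255.0
           client-identifier 0100.1f5b.3ed5.0a
           default-router 10.0.0.1
           domain-name 1770.local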

    Read the article

  • How might I stop BACKSCATTER using Qmail?

    - by alecb
    New to ServerFault, so please pardon me if my details are too much. A Linux box is acting as a virtual host for domain hosting. It runs CentOS and Parallels Plesk 9.x. Regardless of the following, the SPAM keeps flowing in at 1-3 / second. An explanation of the problem: "xinetd listens for SMTP connections and forwards to qmail-smtpd. The qmail service only processes the queue, but does not control messages coming into the queue; that's why stopping it has no effect. If you stop xinetd AND qmail, then kill any open qmail-smtpd processes, all mail flow comes to a stop SOMETIMES. The problem is, qmail-smtpd is not smart enough to check for valid mailboxes on the localhost before accepting the mail. So, it accepts bad mail with a forged replyto address, which gets processed in the queue by qmail. Qmail cannot deliver locally and bounces to the forged replyto address." We believe the fix is to patch qmail-smtpd to give it the intelligence to check for the existence of local mailboxes BEFORE accepting the message. The problem is that when we try to compile the chkuser patch, we run into failures due to the Plesk Control Panel. Is anyone aware of something we could do differently or better? Other things that have NOT worked thus far: -Turning off any and all mail processes (to check, as an indicator, whether an individual account has been compromised. This has been verified as NOT the case.) -Turning off mail AND http server processes (in the case of a compromised formmail) -Running EXIM in lieu of Qmail (easy/quick install, but xinetd forces exim to close and restarts qmail on its own) -Turned on SPF protection via the Plesk GUI. Does not help. -Turned on greylisting via the Plesk GUI. Does not help. -Disabled bounce notifications via the command line. Things that MIGHT work but have complications: -Using POSTFIX instead of QMAIL (no knowledge of POSTFIX; don't want to bother with it unless anyone knows it has the potential to handle backscatter WELL before investing the time) -As mentioned above, compiling the chkuser patch into qmail, which we believe will STOP this problem (because of Plesk in the mix, the compile fails every time, and Parallels Plesk support is unresponsive unless I cough up MONEY). If I don't clear out the SPAM from the outgoing mail queue nightly, it clogs up with millions of SPAMs and will bring down the OUTGOING email services. Any and all help welcome and appreciated!
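    On the Postfix question: it handles this case out of the box, because it checks each recipient against local_recipient_maps during the SMTP dialogue and rejects unknown users with a 550 instead of accepting and bouncing, which is exactly what kills backscatter. A hedged main.cf sketch (these values are close to the stock defaults; how cleanly it coexists with Plesk 9.x is the part to verify):

        # reject unknown local recipients at SMTP time instead of bouncing later
        local_recipient_maps = proxy:unix:passwd.byname $alias_maps
        unknown_local_recipient_reject_code = 550
        smtpd_recipient_restrictions =
            permit_mynetworks,
            reject_unauth_destination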

    Read the article

  • Passenger 2.2.4, nginx 0.7.61 and SSL

    - by boompa
    Has anyone had any luck configuring Passenger and nginx with SSL? I've spent hours trying to get this configuration working as I'd like, using what few resources there are out there on the net, and I can't get any of the supposedly forwarded headers to show up in the Rails controller. For example, with a conf file of the following (and multiple variations thereof): server { listen 3000; server_name .example.com; root /Users/website/public; passenger_enabled on; rails_env development; } server { listen 3443; root /Users/website/public; rails_env development; passenger_enabled on; ssl on; #ssl_verify_client on; ssl_certificate /Users/website/ssl/server.crt; ssl_certificate_key /Users/website/ssl/server.key; #ssl_client_certificate /Users/website/ssl/CA.crt; ssl_session_timeout 5m; ssl_protocols SSLv3 TLSv1; ssl_ciphers ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X_FORWARDED_PROTO https; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; #proxy_set_header X-SSL-Subject $ssl_client_s_dn; #proxy_set_header X-SSL-Issuer $ssl_client_i_dn; proxy_redirect off; proxy_max_temp_file_size 0; } and Rails code in the controller like this: request.headers.each { |k, v| RAILS_DEFAULT_LOGGER.error "Header #{k} Val #{v}" } other headers appear, but not those set in nginx, e.g.: Header rack.multithread Val false Header REQUEST_URI Val /login/new Header REMOTE_PORT Val 64021 Header rack.multiprocess Val true Header PASSENGER_USE_GLOBAL_QUEUE Val false Header PASSENGER_APP_TYPE Val rails Header SCGI Val 1 Header SERVER_PORT Val 3443 Header HTTP_ACCEPT_CHARSET Val ISO-8859-1,utf-8;q=0.7,*;q=0.7 Header rack.request.query_hash Val Header DOCUMENT_ROOT Val /Users/website/public I've even gone so far as to modify Passenger's abstract_request_handler's main_loop method, i.e., headers, input = parse_request(client) if headers if headers[REQUEST_METHOD] == PING process_ping(headers, input, client) else headers.each { |h,v| log.unknown "abstract_request_handler: #{h} = #{v}" } process_request(headers, input, client) end end only to find that the supposedly added headers do not exist there either: abstract_request_handler: HTTP_KEEP_ALIVE = 300 abstract_request_handler: HTTP_USER_AGENT = Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1) Gecko/20090624 Firefox/3.5 abstract_request_handler: PASSENGER_SPAWN_METHOD = smart-lv2 abstract_request_handler: CONTENT_LENGTH = 0 abstract_request_handler: HTTP_IF_NONE_MATCH = "b6e8b9afbc1110ee3bf0c87e119252ad" abstract_request_handler: HTTP_ACCEPT_LANGUAGE = en-us,en;q=0.5 abstract_request_handler: SERVER_PROTOCOL = HTTP/1.1 abstract_request_handler: HTTPS = on abstract_request_handler: REMOTE_ADDR = 127.0.0.1 abstract_request_handler: SERVER_SOFTWARE = nginx/0.7.61 abstract_request_handler: SERVER_ADDR = 127.0.0.1 abstract_request_handler: SCRIPT_NAME = abstract_request_handler: PASSENGER_ENVIRONMENT = development abstract_request_handler: REMOTE_PORT = 64021 abstract_request_handler: REQUEST_URI = /login/new abstract_request_handler: HTTP_ACCEPT_CHARSET = ISO-8859-1,utf-8;q=0.7,*;q=0.7 abstract_request_handler: SERVER_PORT = 3443 abstract_request_handler: SCGI = 1 abstract_request_handler: PASSENGER_APP_TYPE = rails abstract_request_handler: PASSENGER_USE_GLOBAL_QUEUE = false I'm tired of banging my head against the wall, so I'd truly appreciate any help I can get!
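    One explanation that fits: proxy_set_header only affects requests nginx actually proxies, and with passenger_enabled on, requests are handed to Passenger in-process rather than through the proxy module, so those directives never touch the request. Passenger's nginx module injects values into the CGI/Rack environment with its own directive instead. This is a sketch to verify against the 2.2.4 build, since the directive is documented for the nginx module but I can't vouch for that exact version (note your logs already show HTTPS = on, so only the proto entry is added):

        server {
            listen 3443;
            root /Users/website/public;
            rails_env development;
            passenger_enabled on;
            ssl on;
            ssl_certificate     /Users/website/ssl/server.crt;
            ssl_certificate_key /Users/website/ssl/server.key;
            # inject the scheme into the request environment directly
            passenger_set_cgi_param HTTP_X_FORWARDED_PROTO https;
        }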

    Read the article

  • External USB HD issues with a twist (works on Windows7 but not XP)

    - by Eruditass
    I have this older external USB HD, 160 GB. I was using it to copy my Steam games to another computer. On the source computer, Windows 7 64-bit, everything worked fine. Drive reported no errors, had no hiccups, etc. Plugging it into the Windows XP 32-bit computer, it worked fine for looking through the files, moving files around on it (no real reading/writing, just modifying the filesystem table). However, when copying files from it to my internal HD, after a couple seconds to tens of minutes (seemingly random times), the USB device becomes unrecognized and it reports a delayed write error. Events in system log go like this, chronologically: (number times displayed)xSource (Event ID): "message" 2xdisk (51): An error was detected on device \Device\Harddisk1\D during a paging operation. 1xftdisk (57): The system failed to flush data to the transaction log. Corruption may occur. 1xApplication popup (26): Windows - Delayed Write Failed : Windows was unable to save all the data for the file E:\$Mft. The data has been lost. This error may be caused by a failure of your computer hardware or network connection. Please try to save this file elsewhere. 1xntfs (50): {Delayed Write Failed} Windows was unable to save all the data for the file . The data has been lost. This error may be caused by a failure of your computer hardware or network connection. Please try to save this file elsewhere. These repeat for a while, then there is 10+ disk messages or ftdisk messages. Other notes: This occurs on random files at random times. This problem cannot be replicated on the Windows 7 source machine when copying from the HD to a different location on its local disk chkdsk /f was run and found no errors. chkdsk /f/r has the delayed write issue. drive was set to quick removal. Setting to performance in device manager yielded same result I am not writing anything to the USB external drive, so I am not sure why there is even a delayed write error (writing file access times?) local Windows XP was chkdsk'd without problems Windows XP machine has no problems with other USB HD's Various USB ports were attempted Rebooting did not help Occurs with SyncToy as well as windows explorer SMART status is good on both local drive and the external one Lack of gaming is making me cranky

    Read the article

  • High I/O latency with software RAID, LUKS encrypted and LVM partitioned KVM setup

    - by aef
    I found out that the performance problems with a Mumble server, which I described in a previous question, are caused by an I/O latency problem of unknown origin. As I have no idea what is causing this or how to further debug it, I'm asking for your ideas on the topic. I'm running a Hetzner EX4S root server as KVM hypervisor. The server is running Debian Wheezy Beta 4 and KVM virtualisation is utilized through LibVirt. The server has two different 3TB hard drives, as one of the hard drives was replaced after S.M.A.R.T. errors were reported. The first hard disk is a Seagate Barracuda XT ST33000651AS (512 bytes logical, 4096 bytes physical sector size), the other one a Seagate Barracuda 7200.14 (AF) ST3000DM001-9YN166 (512 bytes logical and physical sector size). There are two Linux software RAID1 devices. One for the unencrypted boot partition and one as a container for the encrypted rest, using both hard drives. Inside the latter RAID device lies an AES-encrypted LUKS container. Inside the LUKS container there is a LVM physical volume. The hypervisor's VFS is split across three logical volumes on the described LVM physical volume: one for /, one for /home and one for swap. Here is a diagram of the block device configuration stack: sda (Physical HDD) - md0 (RAID1) - md1 (RAID1) sdb (Physical HDD) - md0 (RAID1) - md1 (RAID1) md0 (Boot RAID) - ext4 (/boot) md1 (Data RAID) - LUKS container - LVM Physical volume - LVM volume hypervisor-root - LVM volume hypervisor-home - LVM volume hypervisor-swap - … (Virtual machine volumes) The guest systems (virtual machines) are mostly running Debian Wheezy Beta 4 too. We have one additional Ubuntu Precise instance. They get their block devices from the LVM physical volume, too. The volumes are accessed through Virtio drivers in native writethrough mode. The IO scheduler (elevator) on both the hypervisor and the guest system is set to deadline instead of the default cfs, as that happened to be the most performant setup according to our bonnie++ test series. The I/O latency problem is experienced not only inside the guest systems but is also affecting services running on the hypervisor system itself. The setup seems complex, but I'm sure that the basic structure is not what causes the latency problems, as my previous server ran for four years with almost the same basic setup, without any of the performance problems. On the old setup the following things were different: Debian Lenny was the OS for both hypervisor and almost all guests Xen software virtualisation (therefore no Virtio, also) no LibVirt management Different hard drives, each 1.5TB in size (one of them was a Seagate Barracuda 7200.11 ST31500341AS, the other one I can't tell anymore) We had no IPv6 connectivity Neither in the hypervisor nor in the guests did we have noticeable I/O latency problems According to the datasheets, the current hard drives and those of the old machine have an average latency of 4.12ms.
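    Two cheap checks that fit this stack: partition alignment (one drive has 4096-byte physical sectors, and a misaligned partition under RAID1 + LUKS + LVM multiplies every write) and per-layer latency, to see whether the slowness originates at the physical disks or higher up. A sketch, assuming Debian package names:

        # partition start sectors should be divisible by 8 on the 4K-sector drive
        parted /dev/sda unit s print
        parted /dev/sdb unit s print
        # watch latency (await) and utilisation per device: sda/sdb vs md1 vs dm-*
        apt-get install sysstat
        iostat -x 1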

    Read the article

  • Best way to handle PHP sessions across Apache vhost wildcard domains

    - by joshholat
    I'm currently running a site that allows users to use custom domains (i.e. so instead of mysite.com/myaccount, they could have myaccount.com). They just change the A record of their domain and we then use a wildcard vhost on Apache to catch the requests from the custom domains. The setup is basically as seen below. The first vhost catches the mysite.com/myaccount requests and the second would be used for myaccount.com. As you can see, they have the exact same path and PHP cookie_domain. I've noticed some weird behavior surrounding the line below "The line below me". When active, the custom domains get a new session_id every page load (that isn't the same as the non-custom domain session). However, when I comment that line out, the user keeps the same session_id on each page load, but that session_id is not the same as the one they'd see on a non-custom domain site either, despite being completely on the same server. There is a sort of "hack" workaround involving redirecting the user to mysite.com/myaccount, getting the session ID, redirecting back to myaccount.com, and then using that ID on myaccount.com. But that can get kind of messy (i.e. if the user logs out of mysite.com/myaccount, how does myaccount.com know?). For what it's worth, I'm using a database to manage the sessions (i.e. so there are no issues with being on different servers, etc., but that's irrelevant since we only use one server to handle all requests currently anyway). I'm fairly certain it is related to some sort of CSRF browser protection thing, but shouldn't it be smart enough to know it's on the same server? Note: These are not subdomains; they're entirely separate domains (but on the same server). <VirtualHost *:80> DocumentRoot "/opt/local/www/mysite.com" ServerName mysite.local ErrorLog "/opt/local/apache2/logs/mysite.com-error.log" CustomLog "/opt/local/apache2/logs/mysite.com-access.log" common <Directory "/opt/local/www/mysite.com"> AllowOverride All #php_value session.save_path "/opt/local/www/mysite.com/sessions" php_value session.cookie_domain "mysite.local" php_value auto_prepend_file "/opt/local/www/mysite.com/core.php" </Directory> </VirtualHost> #Wildcard (custom domain) vhost <VirtualHost *:80> DocumentRoot "/opt/local/www/mysite.com" ServerName default ServerAlias * ErrorLog "/opt/local/apache2/logs/mysite.com-error.log" CustomLog "/opt/local/apache2/logs/mysite.com-access.log" common <Directory "/opt/local/www/mysite.com"> AllowOverride All #php_value session.save_path "/opt/local/www/mysite.com/sessions" # The line below me php_value session.cookie_domain "mysite.local" php_value auto_prepend_file "/opt/local/www/mysite.com/core.php" </Directory> </VirtualHost>
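    A hedged explanation: both vhosts pin session.cookie_domain to mysite.local, and a browser visiting myaccount.com will never send back a cookie scoped to mysite.local, so PHP mints a new session on every request; commenting the line out merely falls back to whatever php.ini sets. One sketch is to scope the cookie to the request host in core.php (which is already auto-prepended), accepting that logout state still won't cross domains without the redirect handshake:

        <?php
        // core.php sketch: scope the session cookie to the host that served the request
        $host = preg_replace('/:\d+$/', '', $_SERVER['HTTP_HOST']); // strip any port
        session_set_cookie_params(0, '/', $host);  // must run before session_start()
        session_start();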

    Read the article

  • server 2008 r2 - wbadmin systemstatebackup - system writer not found in the backup

    - by TWood
    I am trying to manually run a systemstatebackup command on my Server 2008 R2 box, and I am getting error code '2155347997' when I view the backup event log details. The command line tells me that I have log files written to the c:\windows\logs\windowsserverbackup\ path, but I have no files of the .log type there. My command window tells me "System Writer is not found in the backup". However, when I run vssadmin list writers, I find System Writer in the list and it shows normal status with no last errors stored. I am running this from an elevated command prompt, as well as from a logged-on administrator account. My backup target path gives Network Service full control, and it has plenty of free space. Looking in the event log, I have two VSS error 8194 events that happen immediately before the Backup error 517, which has error code 2155347997 listed. All three of these errors are a result of trying to run the command for the systemstatebackup. It's my belief that some VSS-related permission is failing and exiting the backup process before it ever gets started. Because of this, the initial code that creates the log files must not be running, and this is why I have no files. When running the systemstatebackup command from the command prompt and watching the windowsserverbackup directory, I do see that I get a Wbadmin.0.etl file, which is created but then deleted when the backup errors out and stops. I have looked online and there are numerous opinions as to the cause of this error. These are the things I have corrected to try and fix this issue before posting here: The machine runs an HP 1410i Smart Array controller but at one time also used an LSI SCSI card. I used networkadminkb.com's kb# a467 to find one LSI_SCSI entry in HKLM\SYSTEM\CurrentControlSet\Services whose Start value was set to 0x0, which I modified to 0x3. No changes. In HKLM\SYSTEM\CurrentControlSet\Services\VSS\Diag I gave Network Service full control, where it previously only had "Special Permission". No changes. I followed KB2009272 to manually try to fix the System Writer. These are all of the things I have tried. What else should I look at to resolve this issue? It may be important to note that I run Mozy Pro on this server; it was known in the past to use VSS for copying operations, and it occasionally threw an error.
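    Before re-running the backup, it can help to see what the backup engine's own requester reports, since wbadmin reads writer metadata through a different path than vssadmin. diskshadow ships with Server 2008 R2; a small hedged sketch (C:\writers.txt is an arbitrary output name):

        rem from an elevated command prompt
        diskshadow /l C:\writers.txt
        DISKSHADOW> list writers detailed
        DISKSHADOW> exit
        rem then search C:\writers.txt for "System Writer" and any non-zero failure state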

    Read the article

  • Intel RST accidentally selected wrong drive as system drive -- how to fix?

    - by Sean Killeen
    Question / TL;DR If Intel RST has marked a drive other than my RAID set as the system drive, how can I get the RAID set seen as the system drive again, and catch it up to the data on my current drive? What Happened NOTE: Some perhaps unwise decisions are ahead. This is, as best as I can recall, the order of things. I had a 2x1TB RAID1 config. I bought the drives around the same time, and they started to die around the same time. I replaced the 1st drive with a 2 TB drive before the other one's SMART errors got more serious. I waited for the RAID to replicate, then replaced the 2nd drive with a manufacturer's replacement. I got a second manufacturer's drive replacement and used it as a spare. So now I had a 1TB/2TB pair in a RAID1 and another 1TB as a spare. The 1TB drive in the replacement set was bad from the manufacturer. Rather than mess with their refurbished stuff, I bought another 2 TB drive and upped the config to a 2x2TB RAID1 with the other, functioning manufacturer's drive as a spare. I made the mistake of trying to bring the other drive online to clean it out, and the signature clash killed my machine. When the machine rebooted, that drive was marked as the system drive. So, I have a 2x2TB RAID1 that is apparently offline, and 1 spare 1 TB refurbished drive that everything is being run from. Not a great idea. Options I'm considering Bring the 2x2TB drive back online, and then unplug the spare until I can format it in another system. This would involve some data loss, but the more I think about it, I actually think I haven't modified any data that isn't backed up or synced somewhere (go me!). Anything that isn't is likely trivial, enough that I'm willing to take the risk. One downside here is that if the 2 TB doesn't have data on it for some reason, I could be screwed trying to put the other drive back in, no? Try to somehow get the RAID1 updated with the data from the current system drive. Option 3?

    Read the article

  • file corruption on read/write 2.6.32-22-server (happens across many kernels)

    - by Jonathan
    Hi guys, I'm having an issue where, after the server has been up for a period of time (~a week / a few days), it will start reading corrupt data. For instance, when I run sha1sum on a file after a fresh boot, the result stays the same across runs. However, after a while I will start to get segfaults, and from then on, whenever I read this file I get a different sha1sum. I've checked S.M.A.R.T. with long tests and I've run an extended memtest86+ (12 passes). My lspci is as follows: 00:00.0 Host bridge: Advanced Micro Devices [AMD] RS780 Host Bridge 00:01.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (int gfx) 00:06.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 2) 00:07.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 3) 00:11.0 SATA controller: ATI Technologies Inc SB700/SB800 SATA Controller [AHCI mode] 00:12.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller 00:12.1 USB Controller: ATI Technologies Inc SB700 USB OHCI1 Controller 00:12.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller 00:13.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller 00:13.1 USB Controller: ATI Technologies Inc SB700 USB OHCI1 Controller 00:13.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller 00:14.0 SMBus: ATI Technologies Inc SBx00 SMBus Controller (rev 3c) 00:14.1 IDE interface: ATI Technologies Inc SB700/SB800 IDE Controller 00:14.3 ISA bridge: ATI Technologies Inc SB700/SB800 LPC host controller 00:14.4 PCI bridge: ATI Technologies Inc SBx00 PCI to PCI Bridge 00:14.5 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI2 Controller 00:18.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration 00:18.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map 00:18.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller 00:18.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control 00:18.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control 01:05.0 VGA compatible controller: ATI Technologies Inc Radeon HD 3300 Graphics 01:05.1 Audio device: ATI Technologies Inc RS780 Azalia controller 02:00.0 Ethernet controller: Atheros Communications Atheros AR8121/AR8113/AR8114 PCI-E Ethernet Controller (rev b0) 03:00.0 FireWire (IEEE 1394): VIA Technologies, Inc. Device 3403 I could really use some help on this; do you have any idea what could cause it? It's really frustrating me, as it seems to trigger entirely randomly and will not go away until I reboot. I also use KVM for virtualization as well as md software RAID on this server, and the processor is a Phenom II X4 965. I don't believe it's the software RAID, however, as this also affects files hosted on non-RAID partitions.
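    Given that a file's sha1sum is stable after a fresh boot and drifts later, one cheap way to narrow this down is to separate the page cache from the disk: if a fresh-from-disk read is stable while cached reads drift, the RAM/chipset path is suspect even though memtest86+ passed (memtest does not load the system the way normal operation does). A hedged sketch, with /path/to/file standing in for any file that currently checksums inconsistently:

        sha1sum /path/to/file                      # likely served from the page cache
        sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
        sha1sum /path/to/file                      # forced to re-read from disk
        # a direct-I/O read that bypasses the cache entirely (GNU dd)
        sudo dd if=/path/to/file iflag=direct bs=1M 2>/dev/null | sha1sum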

    Read the article

  • apache sendmail: trying to change user "from" address from apache to domain account

    - by Wes
    I apologize if I am asking a question already answered, but my problem isn't really that I haven't found an answer. I have, in fact, found a half-dozen different "solutions" to my problem, tried them all, in various combinations, and have been consistently unsuccessful. The goal All I want to do is change the envelope "from" address for all email sent from [email protected] to [email protected], always. What I've already done I am running Apache, PHP, and sendmail on CentOS 5.5, [email protected]. We have an SMTP server at 192.168.0.4. The domain's email accounts are all at @domain.org. I have successfully set up "smart host" using this line in the sendmail.mc file: define(`SMART_HOST', `192.168.0.4')dnl Then I set up masquerading, and was hopeful this would solve it. I have this in the .mc file: FEATURE(`masquerade_entire_domain')dnl FEATURE(`masquerade_envelope')dnl FEATURE(`allmasquerade')dnl MASQUERADE_AS(`domain.org')dnl MASQUERADE_DOMAIN(`domain.org.')dnl MASQUERADE_DOMAIN(`localhost.localdomain.')dnl This rewrites "to" addresses, but not "from" addresses. Testing from the command line: sendmail -v [email protected] The message always shows as coming from the local user (in this case root, or my local user account). I had read that the "sendmail" command sometimes bypasses masquerading. Nevertheless, using the "mail" command has the same result. After that, I have explored several "solutions", including: mailertable virtusertable FEATURE(`accept_unresolvable_domains')dnl LOCAL_DOMAIN(`localhost.localdomain')dnl FEATURE(`genericstable')dnl /etc/mail/access file /etc/mail/local-host-names file /etc/mail/trusted-users file All to no effect. The last thing I've tried So, I decided to go in a different direction, and try to set the envelope "from" address via PHP, using either the configuration in /etc/php.ini, or adding the -f parameter to the mail() function or to the sendmail command. If I run this command: sendmail -v -f [email protected] [email protected] I get this error in /var/log/maillog: Mar 30 08:56:16 localhost sendmail[24022]: p2UCuE8w024022: [email protected], size=5, class=0, nrcpts=1, msgid=<[email protected]>, relay=user@localhost Mar 30 08:56:19 localhost sendmail[24022]: p2UCuE8w024022: [email protected], [email protected] (500/502), delay=00:00:05, xdelay=00:00:03, mailer=relay, pri=30005, relay=[192.168.0.4] [192.168.0.4], dsn=5.1.1, stat=User unknown Mar 30 08:56:19 localhost sendmail[24022]: p2UCuE8w024022: p2UCuE8x024022: DSN: User unknown Mar 30 08:56:23 localhost sendmail[24022]: p2UCuE8x024022: [email protected], delay=00:00:04, xdelay=00:00:04, mailer=relay, pri=31029, relay=[192.168.0.4] [192.168.0.4], dsn=2.0.0, stat=Sent (Ok: queued as B5E2E40E0A2) Which is basically a "User unknown" 550 error. Help Please help. What do I need to change? Should I just start over in the sendmail.mc file? It has a ton of config options stuffed in it, over days of trying things. Why is changing the envelope "from" address via the command line generating a "User unknown" error?
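    One hedged direction: masquerading rewrites headers, but rewriting the envelope sender of local system users is what the genericstable feature is for, and it only fires when the sender's domain is listed via GENERICS_DOMAIN and the map has actually been compiled. A sketch follows; the hostname is an assumption (use the box's real FQDN), and note that the 550 in the maillog above is the smart host at 192.168.0.4 rejecting an address in the test message, so the addresses must also exist on that relay:

        dnl --- sendmail.mc additions (sketch) ---
        FEATURE(`genericstable', `hash -o /etc/mail/genericstable')dnl
        GENERICS_DOMAIN(`localhost.localdomain')dnl
        GENERICS_DOMAIN(`server.domain.org')dnl

        # /etc/mail/genericstable
        apache    user@domain.org

        # compile the map and rebuild the config
        makemap hash /etc/mail/genericstable < /etc/mail/genericstable
        make -C /etc/mail && service sendmail restart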

    Read the article

  • filtering itunes library items by file location

    - by Cawas
    3 answers and unfortunately no solution yet. The Problem I've got way more than 1000 duplicated items in my iTunes Library pointing to a non-existant place (the "where" under "get info" window), along with other duplicated items and other MIAs (Missing In Action). Is there any simple way to just delete all of them and only them? From the library, of course. By that I mean some MIAs are pointing to /Volumes while some are pointing to .../music/Music/... or just .../music/.... I want to delete all pointing to /Volumes as to later I'll recover the rest. Check the image below. Some Background I tried searching for a specific key word on the path and creating smart play list, but with no result. Being able to just sort all library by path would be a perfect solution! I believe old iTunes could do that. PowerTunes can do it (sort by path) but I can't do anything with its list. I would also welcome any program able to handle this, then import and properly export back the iTunes library. Since this seems to just not be clear enough... AppleScript doesn't work That's because AppleScript just can't gather the missing info anywhere in iTunes Library. Maybe we could use AppleScript by opening the XML file, but that's a whole nother issue. Here's a quote from my conversation with Doug the man himself Adams last december: I don't think you do understand. There is no way to get the path to the file of a dead track because iTunes has "forgotten" it. That is, by definition, what a dead track is. Doug On Dec 21, 2010, at 7:08 AM, Caue Rego wrote: yes I understand that and have seem the script. but I'm not looking for the file. just the old broken path reference to it. Sent from my iPhone On 21/12/2010, at 10:00, Doug Adams wrote: You cannot locate missing files of dead tracks because, by definition, a dead track is one that doesn't have any file information. If you look at "Super Remove Dead Tracks", you will notice it looks for tracks that have "missing value" for the location property.
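    Since a dead track keeps no location AppleScript can reach, but the library XML still stores the last-known Location for every entry, filtering the XML offline is one workaround. A hedged Python sketch; the XML path and username are assumptions (iTunes of that era writes iTunes Music Library.xml inside the iTunes folder):

        #!/usr/bin/env python3
        # list tracks whose last-known file path was under /Volumes
        import plistlib
        from urllib.parse import unquote, urlparse

        with open('/Users/you/Music/iTunes/iTunes Music Library.xml', 'rb') as fp:
            lib = plistlib.load(fp)
        for track in lib['Tracks'].values():
            loc = track.get('Location', '')
            path = unquote(urlparse(loc).path)
            if path.startswith('/Volumes'):
                print(track.get('Artist', '?'), '-', track.get('Name', '?'), '->', path)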

    Read the article

  • Are random packets normal?

    - by TheLQ
    About a month ago, on one of my servers, I started receiving random packets from IPs all over the world. So I did the smart thing and stopped putting off installing an IDS. This IDS is a ClearOS Gateway, which comes with Snort and SnortSam. I enabled it and checked the configuration. There are a total of 4 ports open, two of which forward to the server I'm talking about. These ports are 3724 and 8085, so they aren't going to be easily detected in a port scan. However, checking some logs on this server, I found that the activity is resuming. I found this ... Accepting connection from '75.166.155.122' [Auth] got unknown packet from '75.166.155.122' Accepting connection from '98.164.154.93' [Auth] got unknown packet from '98.164.154.93' Ping MySQL to keep connection alive Accepting connection from '70.241.195.129' [Auth] got unknown packet from '70.241.195.129' Accepting connection from '67.182.229.169' [Auth] got unknown packet from '67.182.229.169' Accepting connection from '69.137.140.38' [Auth] got unknown packet from '69.137.140.38' Accepting connection from '76.31.72.55' [Auth] got unknown packet from '76.31.72.55' Accepting connection from '97.88.139.39' [Auth] got unknown packet from '97.88.139.39' Accepting connection from '173.35.62.112' [Auth] got unknown packet from '173.35.62.112' Accepting connection from '187.15.10.73' [Auth] got unknown packet from '187.15.10.73' Accepting connection from '66.66.94.124' [Auth] got unknown packet from '66.66.94.124' Accepting connection from '75.159.219.124' [Auth] got unknown packet from '75.159.219.124' Accepting connection from '99.102.100.82' [Auth] got unknown packet from '99.102.100.82' Accepting connection from '24.128.240.45' [Auth] got unknown packet from '24.128.240.45' Accepting connection from '99.231.7.39' [Auth] got unknown packet from '99.231.7.39' Accepting connection from '206.255.79.56' [Auth] got unknown packet from '206.255.79.56' Accepting connection from '68.97.106.235' [Auth] got unknown packet from '68.97.106.235' Accepting connection from '69.134.67.251' [Auth] got unknown packet from '69.134.67.251' Accepting connection from '63.228.138.186' [Auth] got unknown packet from '63.228.138.186' Accepting connection from '184.39.146.193' [Auth] got unknown packet from '184.39.146.193' Accepting connection from '69.171.161.102' [Auth] got unknown packet from '69.171.161.102' Accepting connection from '76.0.47.228' [Auth] got unknown packet from '76.0.47.228' Ping MySQL to keep connection alive Accepting connection from '126.112.201.14' [Auth] got unknown packet from '126.112.201.14' Ping MySQL to keep connection alive Now that scares me. Why isn't Snort detecting this? How were they able to find this specific port? More importantly, what would these packets normally contain? Is this something I should be worried about? How can I stop this?
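    For context: 3724 is the classic World of Warcraft authentication port, so if a game authentication service (or an IP that once hosted one) sits behind that forward, stray game clients from all over the world will look exactly like this; Snort stays quiet because nothing matches an attack signature, and the "unknown packet" entries are just login attempts the server can't parse. One hedged mitigation is rate-limiting new connections at the gateway (ClearOS manages its own firewall rules, so translate this raw iptables sketch into its configuration):

        # drop a source that opens more than ~10 new connections to 3724 per minute
        iptables -A INPUT -p tcp --dport 3724 -m state --state NEW \
                 -m recent --update --seconds 60 --hitcount 10 --name wowauth -j DROP
        # otherwise record the attempt and let the normal policy decide
        iptables -A INPUT -p tcp --dport 3724 -m state --state NEW \
                 -m recent --set --name wowauth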

    Read the article

< Previous Page | 117 118 119 120 121 122 123 124 125 126 127 128  | Next Page >