Search Results

Search found 24253 results on 971 pages for 'multiple monitor'.


  • CodePlex Daily Summary for Saturday, August 16, 2014

    CodePlex Daily Summary for Saturday, August 16, 2014

    Popular Releases:
    - TEBookConverter 1.5: Added: Turkish and French translations. Added: a few interface changes. Removed: Skin.
    - Dynamulet v0.1: DynamoDB Transaction Server v0.1.
    - Console parallel nunit tests runner, ConsoleUnitTestsRunner 1.03: bug fixing.
    - Fluentx v1.5.3: Added a few more extension methods.
    - fastBinaryJSON v1.4.2: bug fix for circular references.
    - fastJSON v2.1.2: bug fix for circular references.
    - JPush.NET, JPush Server SDK 1.2.1 (for JPush V3): Assembly 1.2.1.24728; JPush REST API version v3; JPush documentation reference; .NET Framework v4.0 or above; sample class JPushClientV3. August 15th, 2014.
    - SEToolbox 01.043.008 Release 1: Changed ship/station names to use the new DisplayName instead of Beacon/Antenna. Fixed issue with updated SE binaries 01.043.018 using new Voxel Material definitions.
    - Google .Net API, Drive.Sample: Instructions for the Google .NET Client API – Drive.Sample. Browse the source at http://code.google.com/p/google-api-dotnet-client/source/browse/?repo=samples#hg%2FDrive.Sample, or the main file Program.cs at http://code.google.com/p/google-api-dotnet-client/source/browse/Drive.Sample/Program.cs?repo=samples. 1. Checkout instructions: Prerequisites: install Visual Studio and Mercurial (http://mercurial.selenic.com/). ...
    - FineUI - jQuery / ExtJS based ASP.NET Controls, FineUI v4.1.1.
    - DNN CMS Platform 07.03.02: Major highlights: fixed backwards compatibility issue with 3rd party control panels; fixed issue in the drag and drop functionality of the File Uploader in IE 11 and Safari; fixed issue where users were able to create pages with the same name; fixed issue that affected older versions of DNN that do not include the maxAllowedContentLength during upgrade; fixed issue that stopped some skins from being upgraded to newer versions; fixed issue that randomly showed an unexpected error during us...
    - WordMat, WordMat for Mac: WordMat for Mac has a few limitations compared to the Windows version - Graph is not supported (Gnuplot, GeoGebra and Excel work) - Units are not supported yet (coming up). The Mac version is not yet as well tested as the Windows version.
    - HP OneView PowerShell Library 1.10.1193: [NOTE] The installer has been updated, only to allow the user to display What's New at the completion of the install. The contents are the same as the original installer. Branch to HP OneView 1.10 Release. NOTE: This library version does not support older appliance versions. Fixed New-HPOVProfile to check for Firmware and BIOS management for supported platforms; it would erroneously error when neither -firmware nor -bios were passed. Fixed Remove-HPOV* cmdlets which did not handle -forc...
    - MFCMAPI, August 2014 Release: Build 15.0.0.1042. Full release notes at SGriffin's blog. If you just want to run MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64 bit builds will only work on a machine with Outlook 2010/2013 64 bit installed. All other machines should use the 32 bit builds, regardless of the operating system.
    - EWSEditor, EwsEditor 1.10 Release: Export and import of items as a full fidelity stream works - without proxy classes! - I used raw EWS POSTs. Turned off word wrap for the EWS request field in EWS POST windows. Several windows with scrolling text boxes were limiting content to 32k - I removed this restriction. Split server timezone info off to a separate menu item from the timezone info window so that the timezone info window can be used without logging into a mailbox. Lots of updates to the TimeZone window. UserAgen...
    - Python Tools for Visual Studio 2.1 RC: We're pleased to announce the release candidate for Python Tools for Visual Studio 2.1. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, editing, IntelliSense, interactive debugging, profiling, Microsoft Azure, IPython, and cross-platform debugging support. PTVS 2.1 RC is available for: Visual Studio Expre...
    - Sense/Net ECM - Enterprise CMS, SenseNet 6.3.1 Community Edition: Sense/Net 6.3.1 is an important step toward a more modular infrastructure, robustness and maintainability. With this release we finally introduce a packaging and a task management framework, and the Image Editor that will surely make the job of content editors more fun. Please review the changes and new features since Sense/Net 6.3 and give feedback on our forum! Main new features: SnAdmin (packaging framework), Task Management, Image Editor, OData REST A...
    - Touchmote 1.0 beta 13: Changes: less GPU usage; works together with other Xbox 360 controls; bug fixes.
    - Modern UI for WPF, Modern UI 1.0.6: The ModernUI assembly including a demo app demonstrating the various features of Modern UI for WPF. BREAKING CHANGE: LinkGroup.GroupName renamed to GroupKey. NEW FEATURES: improved rendering on high DPI screens, including support for per-monitor DPI awareness available in Windows 8.1 (see also Per-monitor DPI awareness); new ModernProgressRing control with 8 built-in styles; new LinkCommands.NavigateLink routed command; new Visual Studio project templates 'Modern UI WPF App' and 'Modern UI W...
    - ClosedXML - The easy way to OpenXML, ClosedXML 0.74.0: Multiple thread safe improvements including AdjustToContents, XLHelper, XLColor_Static, IntergerExtensions.ToStringLookup. Exception now thrown when saving a workbook with no sheets, instead of creating a corrupt workbook. Fix for hyperlinks with non-ASCII characters. Added basic workbook protection. Fix for error thrown when a spreadsheet contained comments and images. Fix to Trim function. Fix Invalid Operation Exception thrown when the formula functions MAX, MIN, and AVG referenc...

    New Projects:
    - 2113110030: Name: Pham Van Long; code: 2113110030; subject: OOP
    - 2113110033: Name: Nguyen Hoang Minh; class: CCQ1311LA; subject: OOP
    - 2113110284: Name: Vuong Thành Ðô; id: 2113110284; class: CCQ1311LA
    - 2113110286_OOP_kiemtra: Subject: OOP; Name: Lê Th? Ng?c Huy?n
    - CRM Queue Monitor: A small tool to monitor queues in Microsoft Dynamics CRM 2011 and following versions. It displays the number of items in the queues and the latest item.
    - Dice Bag: A D20 Role Playing Game Dice Bag - a selection of dice for use in the D20 RPG System that can be rolled in any quantity with an applied modifier.
    - DM.Dual-coloredBall: DM.Dual-coloredBall
    - FB Account Data Miner by Bipul Raman: Software which can be used to extract basic metadata of a Facebook profile without logging in to Facebook.
    - huynhtanphat-2113170373: Subject: OOP; Name: Huynh Tan Phat
    - kieuquanghuy_OOP: Subject: OOP; Name: kieuquanghuy; class: CCQ1311LA
    - MySale: A simple home point-of-sale application, designed for garage sales and lemonade stalls alike.
    - nguyennhubaongan_OOP: Subject: OOP; Name: NGUY?N-NHU-B?O-NGÂN
    - OPP-2113110288: Subject: OOP; Name: Bui Dinh Hoai Nam
    - Pequeño RAE: A Windows application for using the online web services of the Diccionario de la Real Academia Española.
    - phungthiphuonglien_OOP: Subject: OOP; Name: Phung Thi Phuong Lien; student ID: 2113110287
    - sharpFlipWall: A simple executive toy for Unity that leverages Kinect v2 and shaders to generate a wall of blocks that move based on player information.
    - TranThanhDanh-2113110282: Subject: OOP; Name: Tran Thanh Danh
    - WsSequence: Run a number of WS in a sequence.

    Read the article

  • How do I set up live audio streams to a DLNA compliant device?

    - by Takkat
    Is there a way to stream the live output of the soundcard from our 12.04.1 LTS amd64 desktop to a DLNA-compliant external device on our network? Selecting media content in shared directories using Rygel, miniDLNA, and uShare always works fine - but so far we have completely failed to get a live audio stream to a client via DLNA. PulseAudio claims to have a DLNA/UPnP media server that, together with Rygel, is supposed to do just this, but we were unable to get it running. We followed the steps outlined on live.gnome.org, in this answer here, and also in another similar guide. As soon as we select the local audio device, or our GST-Launch stream, in the DLNA client, Rygel displays the following message and the client states it has reached the end of the playlist:
        (rygel:7380): Rygel-WARNING **: rygel-http-request.vala:97: Invalid seek request
    This is how we configured GST-Launch in rygel.conf:
        [GstLaunch]
        enabled=true
        launch-items=mypulseaudiosink
        mypulseaudiosink-title=Audio on @HOSTNAME@
        mypulseaudiosink-mime=audio/x-wav
        mypulseaudiosink-launch=pulsesrc device=<device> ! wavpackenc
    For <device> we tried the default sink name, the same name with .monitor appended, and in addition the upnp-sink and upnp.monitor that were created when we selected the DLNA media server in paprefs. We also tried encoding with lamemp3enc, with no luck. These are our PulseAudio modules: http://paste.ubuntu.com/1202913/ These are our sinks: http://paste.ubuntu.com/1202916/ Did we miss any other configuration needed to get this running? Are there any other alternatives for sending the audio of our soundcard as a live stream to a DLNA client?
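    Not part of the original question, but a hedged sketch of how the monitor source name for <device> can be looked up with pactl; the device name in the example output is made up for illustration:

        # List PulseAudio sources; the soundcard's monitor source typically ends in ".monitor"
        pactl list short sources | grep '\.monitor'
        # Example output (device name is illustrative):
        #   1  alsa_output.pci-0000_00_1b.0.analog-stereo.monitor  module-alsa-card.c  s16le 2ch 44100Hz  IDLE

        # The launch line in rygel.conf would then read (one line, same illustrative name):
        #   mypulseaudiosink-launch=pulsesrc device=alsa_output.pci-0000_00_1b.0.analog-stereo.monitor ! wavpackenc

    One further hedged observation: wavpackenc produces WavPack output rather than plain WAV, so it may not match the declared audio/x-wav MIME type; wavenc could be a closer fit, though this is an assumption and not a confirmed fix for the "Invalid seek request" warning.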

    Read the article

  • Bash script throws, "syntax error near unexpected token `}'" when ran

    - by Tab00
    I am trying to write a script to monitor some battery statuses on a laptop running as a server. To accomplish this, I have already started to write this code: #! /bin/bash # A script to monitor battery statuses and send out email notifications #take care of looping the script for (( ; ; )) do #First, we check to see if the battery is present... if(cat /proc/acpi/battery/BAT0/state | grep 'present: *' == present: yes) { #Code to execute if battery IS present #No script needed for our application #you may add scripts to run } else { #if the battery IS NOT present, run this code sendemail -f [email protected] -t 214*******@txt.att.net -u NTA TV Alert -m "The battery from the computer is either missing, or removed. Please check ASAP." -s smtp.gmail.com -o tls=yes -xu [email protected] -xp *********** } #Second, we check into the current state of the battery if(cat /proc/acpi/battery/BAT0/state | grep 'charging state: *' == 'charging state: charging') { #Code to execute if battery is charging sendemail -f [email protected] -t 214*******@txt.att.net -u NTA TV Alert -m "The battery from the computer is charging. This MIGHT mean that something just happened" -s smtp.gmail.com -o tls=yes -xu [email protected] -xp *********** } #If it isn't charging, is it discharging? else if(cat /proc/acpi/battery/BAT0/state | grep 'charging state: *' == 'charging state: discharging') { #Code to run if the battery is discharging sendemail -f [email protected] -t 214*******@txt.att.net -u NTA TV Alert -m "The battery from the computer is discharging. This shouldn't be happening. Please check ASAP." -s smtp.gmail.com -o tls=yes -xu [email protected] -xp *********** } #If it isn't charging or discharging, is it charged? else if(cat /proc/acpi/battery/BAT0/state | grep 'charging state: *' == 'charging state: charged') { #Code to run if battery is charged } done I'm pretty sure that most of the other stuff works correctly, but I haven't been able to try it because it will not run. whenever I try and run the script, this is the error that I get: ./BatMon.sh: line 15: syntax error near unexpected token `}' ./BatMon.sh: ` }' is the error something super simple like a forgotten semicolon? Thanks -Tab00
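    Not an edit to the question's script, but a hedged sketch of the same checks written with valid bash syntax: bash uses if/then/elif/fi rather than braces, and grep -q is used for the string tests. The sendemail arguments (including the redacted addresses and password) are carried over from the question; the quoting of the -u and -m values and the 5-minute sleep are assumptions:

        #!/bin/bash
        # Battery monitor sketch: poll /proc/acpi/battery/BAT0/state and send alerts.
        STATE_FILE=/proc/acpi/battery/BAT0/state

        while true; do
            # Alert if the battery is not present
            if ! grep -q 'present: *yes' "$STATE_FILE"; then
                sendemail -f [email protected] -t "214*******@txt.att.net" \
                    -u "NTA TV Alert" \
                    -m "The battery from the computer is either missing, or removed. Please check ASAP." \
                    -s smtp.gmail.com -o tls=yes -xu [email protected] -xp "***********"
            fi

            # Check the charging state
            if grep -q 'charging state: *charging' "$STATE_FILE"; then
                sendemail -f [email protected] -t "214*******@txt.att.net" \
                    -u "NTA TV Alert" \
                    -m "The battery from the computer is charging. This MIGHT mean that something just happened" \
                    -s smtp.gmail.com -o tls=yes -xu [email protected] -xp "***********"
            elif grep -q 'charging state: *discharging' "$STATE_FILE"; then
                sendemail -f [email protected] -t "214*******@txt.att.net" \
                    -u "NTA TV Alert" \
                    -m "The battery from the computer is discharging. This shouldn't be happening. Please check ASAP." \
                    -s smtp.gmail.com -o tls=yes -xu [email protected] -xp "***********"
            elif grep -q 'charging state: *charged' "$STATE_FILE"; then
                : # battery is charged - nothing to do
            fi

            sleep 300   # poll every 5 minutes instead of busy-looping (interval is an assumption)
        done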

    Read the article

  • Get the Latest on MySQL Enterprise Edition

    - by monica.kumar
    Oracle just announced MySQL 5.5 Enterprise Edition. MySQL Enterprise Edition is a comprehensive subscription that includes:
    - MySQL Database
    - MySQL Enterprise Backup
    - MySQL Enterprise Monitor
    - MySQL Workbench
    - Oracle Premier Support; 24x7, Worldwide
    New in this release is the addition of MySQL Enterprise Backup and MySQL Workbench, along with enhancements in MySQL Enterprise Monitor. Recent integration with My Oracle Support allows MySQL customers to access the same support infrastructure used for Oracle Database customers. Joint MySQL and Oracle customers can experience faster problem resolution by using a common technical support interface. Supporting multiple operating systems, including Linux and Windows, MySQL Enterprise Edition can enable customers to achieve up to 90 percent TCO savings over Microsoft SQL Server. See what Booking.com is saying: “With more than 50 million unique monthly visitors, performance and uptime are our first priorities,” said Bert Lindner, Senior Systems Architect, Booking.com. “The MySQL Enterprise Monitor is an essential tool to monitor, tune and manage our many MySQL instances. It allows us to zoom in quickly on the right areas, so we can spend our time and resources where it matters.” Read the press release for details on technology enhancements.

    Read the article

  • Combination of Operating Mode and Commit Strategy

    - by Kevin Yang
    If you want to populate a source into multiple targets, you may also want to ensure that every row from the source affects all targets uniformly (or separately). Let’s consider the Example Mapping below. Suppose a row from SOURCE causes different changes in multiple targets (TARGET_1, TARGET_2 and TARGET_3): for example, it can be successfully inserted into TARGET_1 and TARGET_3 but fails to be inserted into TARGET_2, and the current Mapping Property TLO (target load order) is “TARGET_1 -> TARGET_2 -> TARGET_3”. What should Oracle Warehouse Builder do in order to commit the appropriate data to all affected targets at the same time? If it doesn’t behave as you intended, the data could become inaccurate and possibly unusable.
    Example Mapping (figure)
    In OWB, we can use Mapping Configuration Commit Strategies and Operating Modes together to achieve this kind of requirement. Below we will explore the combination of these two features and how they affect the results in the target tables. Before going to the example, let’s review some of the terms we will be using (details can be found in the white paper Oracle® Warehouse Builder Data Modeling, ETL, and Data Quality Guide 11g Release 2):
    Operating Modes:
    - Set-Based Mode: Warehouse Builder generates a single SQL statement that processes all data and performs all operations.
    - Row-Based Mode: Warehouse Builder generates statements that process data row by row. The select statement is in a SQL cursor. All subsequent statements are PL/SQL.
    - Row-Based (Target Only) Mode: Warehouse Builder generates a cursor select statement and attempts to include as many operations as possible in the cursor. For each target, Warehouse Builder inserts each row into the target separately.
    Commit Strategies:
    - Automatic: Warehouse Builder loads and then automatically commits data based on the mapping design. If the mapping has multiple targets, Warehouse Builder commits and rolls back each target separately and independently of the other targets. Use the automatic commit when the consequences of multiple targets being loaded unequally are not great or are irrelevant.
    - Automatic correlated: A specialized type of automatic commit that applies to PL/SQL mappings with multiple targets only. Warehouse Builder considers all targets collectively and commits or rolls back data uniformly across all targets. Use the correlated commit when it is important to ensure that every row in the source affects all affected targets uniformly.
    - Manual: Select manual commit control for PL/SQL mappings when you want to interject complex business logic, perform validations, or run other mappings before committing data.
    Combination of the commit strategy and operating mode: To understand the effects of each combination of operating mode and commit strategy, I’ll illustrate using the following example Mapping. First we insert 100 rows into the SOURCE table and make sure that the 99th row and the 100th row have the same ID value. Then we create a unique key constraint on the ID column of the TARGET_2 table. So while running the example mapping, OWB tries to load all 100 rows into each of the targets, but the mapping should fail to load the 100th row into TARGET_2, because it would violate the unique key constraint of table TARGET_2.
    With different combinations of Commit Strategy and Operating Mode, here are the results.
    Set-based / Correlated Commit (configuration and result screenshots): A single error anywhere in the mapping triggers the rollback of all data. When OWB encounters the error inserting into Target_2, it reports an error for the table and does not load the row. OWB rolls back all the rows inserted into Target_1 and does not attempt to load rows to Target_3. No rows are added to any of the target tables.
    Row-based / Correlated Commit (configuration and result screenshots): OWB evaluates each row separately and loads it to all three targets. Loading continues in this way until OWB encounters an error loading the 100th row to Target_2. OWB reports the error and does not load the row. It rolls back the 100th row previously inserted into Target_1 and does not attempt to load row 100 to Target_3. Then, if there are remaining rows, OWB will continue loading them, resuming with loading rows to Target_1. The mapping completes with 99 rows inserted into each target.
    Set-based / Automatic Commit (configuration and result screenshots): When OWB encounters the error inserting into Target_2, it does not load any rows and reports an error for the table. It does, however, continue to insert rows into Target_3 and does not roll back the rows previously inserted into Target_1. The mapping completes with one error message for Target_2, no rows inserted into Target_2, and 100 rows inserted into each of Target_1 and Target_3.
    Row-based / Automatic Commit (configuration and result screenshots): OWB evaluates each row separately for loading into the targets. Loading continues in this way until OWB encounters an error loading row 100 to Target_2 and reports the error. OWB does not roll back row 100 from Target_1, and it does insert that row into Target_3. If there are remaining rows, it will continue to load them. The mapping completes with 99 rows inserted into Target_2 and 100 rows inserted into each of the other targets.
    Note: Automatic Correlated commit is not applicable for row-based (target only) mode. If you design a mapping with the row-based (target only) and correlated commit combination, OWB runs the mapping but does not perform the correlated commit. In set-based mode, correlated commit may impact the size of your rollback segments; space for rollback segments may be a concern when you merge data (insert/update or update/insert). Correlated commit operates transparently with PL/SQL bulk processing code. The correlated commit strategy is not available for mappings run in any mode that are configured for Partition Exchange Loading or that include a Queue, Match Merge, or Table Function operator.
    If you want to practice in your own environment, you can follow these steps:
    1. Import the MDL file: commit_operating_mode.mdl
    2. Fix the location for the Oracle module ORCL and deploy all tables under it.
    3. Insert sample records into the SOURCE table, using the PL/SQL code below:
       begin
           for i in 1..99
           loop
               insert into source values(i, 'col_'||i);
           end loop;
           insert into source values(99, 'col_99');
       end;
    4. Configure MAPPING_1 to any combination of operating mode and commit strategy you want to test, and make sure the TLO feature of the mapping is enabled.
    5. Deploy Mapping “MAPPING_1”.
    6. Run the mapping and check the result.

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #048

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below it. Let me know which one of the following is your favorite article from memory lane. 2007 Order of Result Set of SELECT Statement on Clustered Indexed Table When ORDER BY is Not Used Above theory is true in most of the cases. However SQL Server does not use that logic when returning the resultset. SQL Server always returns the resultset which it can return fastest.In most of the cases the resultset which can be returned fastest is the resultset which is returned using clustered index. Effect of TRANSACTION on Local Variable – After ROLLBACK and After COMMIT One of the Jr. Developer asked me this question (What will be the Effect of TRANSACTION on Local Variable – After ROLLBACK and After COMMIT?) while I was rushing to an important meeting. I was getting late so I asked him to talk with his Application Tech Lead. When I came back from meeting both of them were looking for me. They said they are confused. I quickly wrote down following example for them. 2008 SQL SERVER – Guidelines and Coding Standards Complete List Download Coding standards and guidelines are very important for any developer on the path of a successful career. A coding standard is a set of guidelines, rules and regulations on how to write code. Coding standards should be flexible enough or should take care of the situation where they should not prevent best practices for coding. They are basically the guidelines that one should follow for better understanding. Download Guidelines and Coding Standards complete List Download Get Answer in Float When Dividing of Two Integer Many times we have requirements of some calculations amongst different fields in Tables. One of the software developers here was trying to calculate some fields having integer values and divide it which gave incorrect results in integer where accurate results including decimals was expected. Puzzle – Computed Columns Datatype Explanation SQL Server automatically does a cast to the data type having the highest precedence. So the result of INT and INT will be INT, but INT and FLOAT will be FLOAT because FLOAT has a higher precedence. If you want a different data type, you need to do an EXPLICIT cast. Renaming SP is Not Good Idea – Renaming Stored Procedure Does Not Update sys.procedures I have written many articles about renaming a tables, columns and procedures SQL SERVER – How to Rename a Column Name or Table Name, here I found something interesting about renaming the stored procedures and felt like sharing it with you all. The interesting fact is that when we rename a stored procedure using SP_Rename command, the Stored Procedure is successfully renamed. But when we try to test the procedure using sp_helptext, the procedure will be having the old name instead of new names. 2009 Insert Values of Stored Procedure in Table – Use Table Valued Function It is clear from the result set that , where I have converted stored procedure logic into the table valued function, is much better in terms of logic as it saves a large number of operations. However, this option should be used carefully. The performance of the stored procedure is “usually” better than that of functions. Interesting Observation – Index on Index View Used in Similar Query Recently, I was working on an optimization project for one of the largest organizations. 
While working on one of the queries, we came across a very interesting observation. We found that there was a query on the base table and when the query was run, it used the index, which did not exist in the base table. On careful examination, we found that the query was using the index that was on another view. This was very interesting as I have personally never experienced a scenario like this. In simple words, “Query on the base table can use the index created on the indexed view of the same base table.” Interesting Observation – Execution Plan and Results of Aggregate Concatenation Queries Working with SQL Server has never seemed to be monotonous – no matter how long one has worked with it. Quite often, I come across some excellent comments that I feel like acknowledging them as blog posts. Recently, I wrote an article on SQL SERVER – Execution Plan and Results of Aggregate Concatenation Queries Depend Upon Expression Location, which is well received in the community. 2010 I encourage all of you to go through complete series and write your own on the subject. If you write an article and send it to me, I will publish it on this blog with due credit to you. If you write on your own blog, I will update this blog post pointing to your blog post. SQL SERVER – ORDER BY Does Not Work – Limitation of the View 1 SQL SERVER – Adding Column is Expensive by Joining Table Outside View – Limitation of the View 2 SQL SERVER – Index Created on View not Used Often – Limitation of the View 3 SQL SERVER – SELECT * and Adding Column Issue in View – Limitation of the View 4 SQL SERVER – COUNT(*) Not Allowed but COUNT_BIG(*) Allowed – Limitation of the View 5 SQL SERVER – UNION Not Allowed but OR Allowed in Index View – Limitation of the View 6 SQL SERVER – Cross Database Queries Not Allowed in Indexed View – Limitation of the View 7 SQL SERVER – Outer Join Not Allowed in Indexed Views – Limitation of the View 8 SQL SERVER – SELF JOIN Not Allowed in Indexed View – Limitation of the View 9 SQL SERVER – Keywords View Definition Must Not Contain for Indexed View – Limitation of the View 10 SQL SERVER – View Over the View Not Possible with Index View – Limitations of the View 11 2011 Startup Parameters Easy to Configure If you are a regular reader of this blog, you must be aware that I have written about SQL Server Denali recently. Here is the quickest way to reach into the screen where we can change the startup parameters. Go to SQL Server Configuration Manager >> SQL Server Services >> Right Click on the Server >> Properties >> Startup Parameters 2012 Validating Unique Columnname Across Whole Database I sometimes come across very strange requirements and often I do not receive a proper explanation of the same. Here is the one of those examples. For example “Our business requirement is when we add new column we want it unique across current database.” Read the solution to this strange request in this blog post. Excel Losing Decimal Values When Value Pasted from SSMS ResultSet It is very common when users are coping the resultset to Excel, the floating point or decimals are missed. The solution is very much simple and it requires a small adjustment in the Excel. By default Excel is very smart and when it detects the value which is getting pasted is numeric it changes the column format to accommodate that. Basic Calculation and PEMDAS Order of Operation Read this interesting blog post for fantastic conversation about the subject. 
Copy Column Headers from Resultset – SQL in Sixty Seconds #027 – Video http://www.youtube.com/watch?v=x_-3tLqTRv0 Delete From Multiple Table – Update Multiple Table in Single Statement There are two questions which I get every single day multiple times. In my gmail, I have created standard canned reply for them. Let us see the questions here. I want to delete from multiple table in a single statement how will I do it? I want to update multiple table in a single statement how will I do it? Read the answer in the blog post. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • One eye on my dinner and one eye on SQL server

    - by fatherjack
    LiveJournal Tags: RedGate, Work Life Balance, Tips and Tricks, SQL Server
    This is somewhere between a Tweet and a proper blog article - would that be a Bleet? Anyway, I was at a local restaurant yesterday and after placing my order I was thinking about having to get home and log in to check some SQL Servers, and then the thought came to me that as we were near civilisation there was likely to be a 3G signal that might actually make using the web browser on my phone bearable. It was surprisingly fast on my HTC Desire; it was almost as good as Wi-Fi. RedGate SQL Monitor works fine on the default HTC browser and here is the proof: me checking the servers while I am waiting for the meal to arrive. Everything checked out OK, so I had the evening free from SQL Server. You can get a free 14-day full trial of SQL Monitor from RedGate here, or find out more about it at The Future of Monitoring. Disclosure: I am a friend of RedGate and as such regularly make positive comments about their products. I don't get paid for it, but I do get free licenses for testing and reviewing purposes.

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #032

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below it. Let me know which one of the following is your favorite article from memory lane. 2007 Complete Series of Database Coding Standards and Guidelines SQL SERVER Database Coding Standards and Guidelines – Introduction SQL SERVER – Database Coding Standards and Guidelines – Part 1 SQL SERVER – Database Coding Standards and Guidelines – Part 2 SQL SERVER Database Coding Standards and Guidelines Complete List Download Explanation and Example – SELF JOIN When all of the data you require is contained within a single table, but data needed to extract is related to each other in the table itself. Examples of this type of data relate to Employee information, where the table may have both an Employee’s ID number for each record and also a field that displays the ID number of an Employee’s supervisor or manager. To retrieve the data tables are required to relate/join to itself. Insert Multiple Records Using One Insert Statement – Use of UNION ALL This is very interesting question I have received from new developer. How can I insert multiple values in table using only one insert? Now this is interesting question. When there are multiple records are to be inserted in the table following is the common way using T-SQL. Function to Display Current Week Date and Day – Weekly Calendar Straight blog post with script to find current week date and day based on the parameters passed in the function.  2008 In my beginning years, I have almost same confusion as many of the developer had in their earlier years. Here are two of the interesting question which I have attempted to answer in my early year. Even if you are experienced developer may be you will still like to read following two questions: Order Of Column In Index Order of Conditions in WHERE Clauses Example of DISTINCT in Aggregate Functions Have you ever used DISTINCT with the Aggregation Function? Here is a simple example about how users can do it. Create a Comma Delimited List Using SELECT Clause From Table Column Straight to script example where I explained how to do something easy and quickly. Compound Assignment Operators SQL SERVER 2008 has introduced new concept of Compound Assignment Operators. Compound Assignment Operators are available in many other programming languages for quite some time. Compound Assignment Operators is operator where variables are operated upon and assigned on the same line. PIVOT and UNPIVOT Table Examples Here is a very interesting question – the answer to the question can be YES or NO both. “If we PIVOT any table and UNPIVOT that table do we get our original table?” Read the blog post to get the explanation of the question above. 2009 What is Interim Table – Simple Definition of Interim Table The interim table is a table that is generated by joining two tables and not the final result table. In other words, when two tables are joined they create an interim table as resultset but the resultset is not final yet. It may be possible that more tables are about to join on the interim table, and more operations are still to be applied on that table (e.g. Order By, Having etc). Besides, it may be possible that there is no interim table; sometimes final table is what is generated when the query is run. 
2010 Stored Procedure and Transactions If Stored Procedure is transactional then, it should roll back complete transactions when it encounters any errors. Well, that does not happen in this case, which proves that Stored Procedure does not only provide just the transactional feature to a batch of T-SQL. Generate Database Script for SQL Azure When talking about SQL Azure the most common complaint I hear is that the script generated from stand-along SQL Server database is not compatible with SQL Azure. This was true for some time for sure but not any more. If you have SQL Server 2008 R2 installed you can follow the guideline below to generate a script which is compatible with SQL Azure. Convert IN to EXISTS – Performance Talk It is NOT necessary that every time when IN is replaced by EXISTS it gives better performance. However, in our case listed above it does for sure give better performance. You can read about this subject in the associated blog post. Subquery or Join – Various Options – SQL Server Engine Knows the Best Every single time whenever there is a performance tuning exercise, I hear the conversation from developer where some prefer subquery and some prefer join. In this two part blog post, I explain the same in the detail with examples. Part 1 | Part 2 Merge Operations – Insert, Update, Delete in Single Execution MERGE is a new feature that provides an efficient way to do multiple DML operations. In earlier versions of SQL Server, we had to write separate statements to INSERT, UPDATE, or DELETE data based on certain conditions; however, at present, by using the MERGE statement, we can include the logic of such data changes in one statement that even checks when the data is matched and then just update it, and similarly, when the data is unmatched, it is inserted. 2011 Puzzle – Statistics are not updated but are Created Once Here is the quick scenario about my setup. Create Table Insert 1000 Records Check the Statistics Now insert 10 times more 10,000 indexes Check the Statistics – it will be NOT updated – WHY? Question to You – When to use Function and When to use Stored Procedure Personally, I believe that they are both different things - they cannot be compared. I can say, it will be like comparing apples and oranges. Each has its own unique use. However, they can be used interchangeably at many times and in real life (i.e., production environment). I have personally seen both of these being used interchangeably many times. This is the precise reason for asking this question. 2012 In year 2012 I had two interesting series ran on the blog. If there is no fun in learning, the learning becomes a burden. For the same reason, I had decided to build a three part quiz around SEQUENCE. The quiz was to identify the next value of the sequence. I encourage all of you to take part in this fun quiz. Guess the Next Value – Puzzle 1 Guess the Next Value – Puzzle 2 Guess the Next Value – Puzzle 3 Guess the Next Value – Puzzle 4 Simple Example to Configure Resource Governor – Introduction to Resource Governor Resource Governor is a feature which can manage SQL Server Workload and System Resource Consumption. We can limit the amount of CPU and memory consumption by limiting /governing /throttling on the SQL Server. If there are different workloads running on SQL Server and each of the workload needs different resources or when workloads are competing for resources with each other and affecting the performance of the whole server resource governor is a very important task. 
Tricks to Replace SELECT * with Column Names – SQL in Sixty Seconds #017 – Video  Retrieves unnecessary columns and increases network traffic When a new columns are added views needs to be refreshed manually Leads to usage of sub-optimal execution plan Uses clustered index in most of the cases instead of using optimal index It is difficult to debug SQL SERVER – Load Generator – Free Tool From CodePlex The best part of this SQL Server Load Generator is that users can run multiple simultaneous queries again SQL Server using different login account and different application name. The interface of the tool is extremely easy to use and very intuitive as well. A Puzzle – Swap Value of Column Without Case Statement Let us assume there is a single column in the table called Gender. The challenge is to write a single update statement which will flip or swap the value in the column. For example if the value in the gender column is ‘male’ swap it with ‘female’ and if the value is ‘female’ swap it with ‘male’. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Resolution stuck after playing OpenGL game

    - by kit.yang
    I used to start the game Frozen Throne (using Wine) with the "-opengl" option. When I entered the game the resolution would change, and it would be restored after exiting the game. But this time a problem occurred: the resolution cannot be restored even though I have restarted my computer several times. Both the Ubuntu panel and the login screen are affected. nvidia-settings also detects the resolution as "1024 x 768", but using this tool seemed to have no effect. Screenshot: NVIDIA X Server Settings.
    The output of xrandr:
        Screen 0: minimum 320 x 240, current 1024 x 768, maximum 1024 x 768
        default connected 1024x768+0+0 0mm x 0mm
           1024x768       50.0*
           800x600        51.0     52.0     53.0
           680x384        54.0     55.0
           640x480        56.0
           576x432        57.0
           512x384        58.0
           400x300        59.0     60.0     61.0
           320x240        62.0
    The contents of /etc/X11/xorg.conf:
        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 1.0 (buildd@yellow) Fri Apr 9 11:51:21 UTC 2010
        Section "ServerLayout"
            Identifier "Layout0"
            Screen 0 "Screen0" 0 0
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0" "CorePointer"
            Option "Xinerama" "0"
        EndSection
        Section "Files"
        EndSection
        Section "InputDevice"
            # generated from default
            Identifier "Mouse0"
            Driver "mouse"
            Option "Protocol" "auto"
            Option "Device" "/dev/psaux"
            Option "Emulate3Buttons" "no"
            Option "ZAxisMapping" "4 5"
        EndSection
        Section "InputDevice"
            # generated from default
            Identifier "Keyboard0"
            Driver "kbd"
        EndSection
        Section "Monitor"
            # HorizSync source: builtin, VertRefresh source: builtin
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "CRT-0"
            HorizSync 28.0 - 55.0
            VertRefresh 43.0 - 72.0
            Option "DPMS"
        EndSection
        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "Entry Graphics"
        EndSection
        Section "Screen"
            # Removed Option "metamodes" "1024x768 +0+0"
            Identifier "Screen0"
            Device "Device0"
            Monitor "Monitor0"
            DefaultDepth 24
            Option "TwinView" "0"
            Option "TwinViewXineramaInfoOrder" "CRT-0"
            Option "metamodes" "1024x768_60 +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection
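    As a follow-up not in the original question: a hedged sketch of one way to try adding a larger mode back from the command line with xrandr. The post never states the monitor's native resolution, so 1280x1024 at 60 Hz is an assumption, and the output name "default" is taken from the xrandr listing above:

        # Generate a modeline for the assumed native resolution and register it with X
        MODELINE=$(cvt 1280 1024 60 | sed -n 's/^Modeline //p' | tr -d '"')
        xrandr --newmode $MODELINE          # word splitting of $MODELINE is intended here
        # Attach the new mode to the output reported by xrandr ("default" above) and switch to it
        xrandr --addmode default "1280x1024_60.00"
        xrandr --output default --mode "1280x1024_60.00"

    If the NVIDIA driver is in use, removing the stale "metamodes" entry from /etc/X11/xorg.conf (or regenerating the file with nvidia-settings or nvidia-xconfig) and restarting X may also be needed; that too is a guess based on the configuration shown above.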

    Read the article

  • Partner Blog Series: PwC Perspectives - "Is It Time for an Upgrade?"

    - by Tanu Sood
    Is your organization debating their next step with regard to Identity Management? While all the stakeholders are well aware that the one-size-fits-all doesn’t apply to identity management, just as true is the fact that no two identity management implementations are alike. Oracle’s recent release of Identity Governance Suite 11g Release 2 has innovative features such as a customizable user interface, shopping cart style request catalog and more. However, only a close look at the use cases can help you determine if and when an upgrade to the latest R2 release makes sense for your organization. This post will describe a few of the situations that PwC has helped our clients work through. “Should I be considering an upgrade?” If your organization has an existing identity management implementation, the questions below are a good start to assessing your current solution to see if you need to begin planning for an upgrade: Does the current solution scale and meet your projected identity management needs? Does the current solution have a customer-friendly user interface? Are you completely meeting your compliance objectives? Are you still using spreadsheets? Does the current solution have the features you need? Is your total cost of ownership in line with well-performing similar sized companies in your industry? Can your organization support your existing Identity solution? Is your current product based solution well positioned to support your organization's tactical and strategic direction? Existing Oracle IDM Customers: Several existing Oracle clients are looking to move to R2 in 2013. If your organization is on Sun Identity Manager (SIM) or Oracle Identity Manager (OIM) and if your current assessment suggests that you need to upgrade, you should strongly consider OIM 11gR2. Oracle provides upgrade paths to Oracle Identity Manager 11gR2 from SIM 7.x / 8.x as well as Oracle Identity Manager 10g / 11gR1. The following are some of the considerations for migration: Check the end of product support (for Sun or legacy OIM) schedule There are several new features available in R2 (including common Helpdesk scenarios, profiling of disconnected applications, increased scalability, custom connectors, browser-based UI configurations, portability of configurations during future upgrades, etc) Cost of ownership (for SIM customers)\ Customizations that need to be maintained during the upgrade Time/Cost to migrate now vs. waiting for next version If you are already on an older version of Oracle Identity Manager and actively maintaining your support contract with Oracle, you might be eligible for a free upgrade to OIM 11gR2. Check with your Oracle sales rep for more details. Existing IDM infrastructure in place: In the past year and half, we have seen a surge in IDM upgrades from non-Oracle infrastructure to Oracle. If your organization is looking to improve the end-user experience related to identity management functions, the shopping cart style access request model and browser based personalization features may come in handy. Additionally, organizations that have a large number of applications that include ecommerce, LDAP stores, databases, UNIX systems, mainframes as well as a high frequency of user identity changes and access requests will value the high scalability of the OIM reconciliation and provisioning engine. Furthermore, we have seen our clients like OIM's out of the box (OOB) support for multiple authoritative sources. 
For organizations looking to integrate applications that do not have an exposed API, the Generic Technology Connector framework supported by OIM will be helpful in quickly generating custom connector using OOB wizard. Similarly, organizations in need of not only flexible on-boarding of disconnected applications but also strict access management to these applications using approval flows will find the flexible disconnected application profiling feature an extremely useful tool that provides a high degree of time savings. Organizations looking to develop custom connectors for home grown or industry specific applications will likewise find that the Identity Connector Framework support in OIM allows them to build and test a custom connector independently before integrating it with OIM. Lastly, most of our clients considering an upgrade to OIM 11gR2 have also expressed interest in the browser based configuration feature that allows an administrator to quickly customize the user interface without adding any custom code. Better yet, code customizations, if any, made to the product are portable across the future upgrades which, is viewed as a big time and money saver by most of our clients. Below are some upgrade methodologies we adopt based on client priorities and the scale of implementation. For illustration purposes, we have assumed that the client is currently on Oracle Waveset (formerly Sun Identity Manager).   Integrated Deployment: The integrated deployment is typically where a client wants to split the implementation to where their current IDM is continuing to handle the front end workflows and OIM takes over the back office operations incrementally. Once all the back office operations are moved completely to OIM, the front end workflows are migrated to OIM. Parallel Deployment: This deployment is typically done where there can be a distinct line drawn between which functionality the platforms are supporting. For example the current IDM implementation is handling the password reset functionality while OIM takes over the access provisioning and RBAC functions. Cutover Deployment: A cutover deployment is typically recommended where a client has smaller less complex implementations and it makes sense to leverage the migration tools to move them over immediately. What does this mean for YOU? There are many variables to consider when making upgrade decisions. For most customers, there is no ‘easy’ button. Organizations looking to upgrade or considering a new vendor should start by doing a mapping of their requirements with product features. The recommended approach is to take stock of both the short term and long term objectives, understand product features, future roadmap, maturity and level of commitment from the R&D and build the implementation plan accordingly. As we said, in the beginning, there is no one-size-fits-all with Identity Management. So, arm yourself with the knowledge, engage in industry discussions, bring in business stakeholders and start building your implementation roadmap. In the next post we will discuss the best practices on R2 implementations. We will be covering the Do's and Don't's and share our thoughts on making implementations successful. Meet the Writers: Dharma Padala is a Director in the Advisory Security practice within PwC.  He has been implementing medium to large scale Identity Management solutions across multiple industries including utility, health care, entertainment, retail and financial sectors.   
Dharma has 14 years of experience in delivering IT solutions, of which the past 8 years have been spent implementing Identity Management solutions. Scott MacDonald is a Director in the Advisory Security practice within PwC.  He has consulted for several clients across multiple industries including financial services, health care, automotive and retail.   Scott has 10 years of experience in delivering Identity Management solutions. John Misczak is a member of the Advisory Security practice within PwC.  He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Execution Language (BPEL). Praveen Krishna is a Manager in the Advisory Security practice within PwC.  Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries.  His experience includes delivering security across diverse topics like network, infrastructure, application and data, where he brings a holistic point of view to problem solving. Jenny (Xiao) Zhang is a member of the Advisory Security practice within PwC.  She has consulted across multiple industries including financial services, entertainment and retail. Jenny has three years of experience in delivering IT solutions, of which the past one and a half years have been spent implementing Identity Management solutions.

    Read the article

  • Desktop Fun: Need for Speed Wallpaper Collection

    - by Asian Angel
    Are you a passionate fan of the Need for Speed series or racing games in general? Then start your engines, turn up the radio, and get ready to race with our Need for Speed Wallpaper collection. Note: Click on the picture to see the full-size image—these wallpapers vary in size so you may need to crop, stretch, or place them on a colored background in order to best match them to your screen’s resolution. Note: At 6236*2268 pixels this last wallpaper will need to be decreased in size before being placed on an appropriately sized white background matching your monitor’s resolution. For more wallpapers be certain to see our great collections in the Desktop Fun section.

    Read the article

  • J2EE Applications, SPARC T4, Solaris Containers, and Resource Pools

    - by user12620111
    I've obtained a substantial performance improvement on a SPARC T4-2 Server running a J2EE Application Server Cluster by deploying the cluster members into Oracle Solaris Containers and binding those containers to cores of the SPARC T4 Processor. This is not a surprising result; in fact, it is consistent with other results that are available on the Internet. See the "references", below, for some examples. Nonetheless, here is a summary of my configuration and results. (1.0) Before deploying a J2EE Application Server Cluster into a virtualized environment, many decisions need to be made. I'm not claiming that all of the decisions that I have made will work well for every environment. In fact, I'm not even claiming that all of the decisions are the best possible for my environment. I'm only claiming that of the small sample of configurations that I've tested, this is the one that is working best for me. Here are some of the decisions that needed to be made: (1.1) Which virtualization option? There are several virtualization options and isolation levels that are available. Options include: Hard partitions:  Dynamic Domains on Sun SPARC Enterprise M-Series Servers Hypervisor-based virtualization such as Oracle VM Server for SPARC (LDOMs) on SPARC T-Series Servers OS Virtualization using Oracle Solaris Containers Resource management tools in the Oracle Solaris OS to control the amount of resources an application receives, such as CPU cycles, physical memory, and network bandwidth. Oracle Solaris Containers provide the right level of isolation and flexibility for my environment. To borrow some words from my friends in marketing, "The SPARC T4 processor leverages the unique, no-cost virtualization capabilities of Oracle Solaris Zones"  (1.2) How to associate Oracle Solaris Containers with resources? There are several options available to associate containers with resources, including (a) resource pool association, (b) dedicated-cpu resources and (c) capped-cpu resources. I chose to create resource pools and associate them with the containers because I wanted explicit control over the cores and virtual processors.  (1.3) Cluster Topology? Is it best to deploy (a) multiple application servers on one node, (b) one application server on multiple nodes, or (c) multiple application servers on multiple nodes? After a few quick tests, it appears that one application server per Oracle Solaris Container is a good solution. (1.4) Number of cluster members to deploy? I chose to deploy four big 64-bit application servers. I would like to go back and test many 32-bit application servers, but that is left for another day. (2.0) Configuration tested. (2.1) I was using a SPARC T4-2 Server which has 2 CPUs and 128 virtual processors. 
To understand the physical layout of the hardware on Solaris 10, I used the OpenSolaris psrinfo perl script available at http://hub.opensolaris.org/bin/download/Community+Group+performance/files/psrinfo.pl: test# ./psrinfo.pl -pv The physical processor has 8 cores and 64 virtual processors (0-63) The core has 8 virtual processors (0-7)   The core has 8 virtual processors (8-15)   The core has 8 virtual processors (16-23)   The core has 8 virtual processors (24-31)   The core has 8 virtual processors (32-39)   The core has 8 virtual processors (40-47)   The core has 8 virtual processors (48-55)   The core has 8 virtual processors (56-63)     SPARC-T4 (chipid 0, clock 2848 MHz) The physical processor has 8 cores and 64 virtual processors (64-127)   The core has 8 virtual processors (64-71)   The core has 8 virtual processors (72-79)   The core has 8 virtual processors (80-87)   The core has 8 virtual processors (88-95)   The core has 8 virtual processors (96-103)   The core has 8 virtual processors (104-111)   The core has 8 virtual processors (112-119)   The core has 8 virtual processors (120-127)     SPARC-T4 (chipid 1, clock 2848 MHz) (2.2) The "before" test: without processor binding. I started with a 4-member cluster deployed into 4 Oracle Solaris Containers. Each container used a unique gigabit Ethernet port for HTTP traffic. The containers shared a 10 gigabit Ethernet port for JDBC traffic. (2.3) The "after" test: with processor binding. I ran one application server in the Global Zone and another application server in each of the three non-global zones (NGZ):  (3.0) Configuration steps. The following steps need to be repeated for all three Oracle Solaris Containers. (3.1) Stop AppServers from the BUI. (3.2) Stop the NGZ: test# ssh test-z2 init 5 (3.3) Enable resource pools: test# svcadm enable pools (3.4) Create the resource pool: test# poolcfg -dc 'create pool pool-test-z2' (3.5) Create the processor set: test# poolcfg -dc 'create pset pset-test-z2' (3.6) Specify the maximum number of CPUs that may be added to the processor set: test# poolcfg -dc 'modify pset pset-test-z2 (uint pset.max=32)' (3.7) bash syntax to add virtual CPUs to the processor set: test# (( i = 64 )); while (( i < 96 )); do poolcfg -dc "transfer to pset pset-test-z2 (cpu $i)"; (( i = i + 1 )) ; done (3.8) Associate the resource pool with the processor set: test# poolcfg -dc 'associate pool pool-test-z2 (pset pset-test-z2)' (3.9) Tell the zone to use the resource pool that has been created: test# zonecfg -z test-z2 set pool=pool-test-z2 (3.10) Boot the Oracle Solaris Container: test# zoneadm -z test-z2 boot (3.11) Save the configuration to /etc/pooladm.conf: test# pooladm -s (4.0) Results. Using the resource pools improves both throughput and response time: (5.0) References: System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones Capitalizing on large numbers of processors with WebSphere Portal on Solaris WebSphere Application Server and T5440 (Dileep Kumar's Weblog)  http://www.brendangregg.com/zones.html Reuters Market Data System, RMDS 6 Multiple Instances (Consolidated), Performance Test Results in Solaris, Containers/Zones Environment on Sun Blade X6270 by Amjad Khan, 2009.
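For convenience, steps (3.2) through (3.11) can be collected into one script and run per zone. Below is a minimal sketch, assuming the same example names (test-z2, pool-test-z2, pset-test-z2) and the virtual-processor range 64-95; adjust these per zone.

    #!/usr/bin/bash
    # Sketch: bind one non-global zone to a dedicated processor set (Oracle Solaris).
    # Assumes the pools facility is available and the zone already exists.
    ZONE=test-z2                 # zone name from the example above
    POOL=pool-$ZONE
    PSET=pset-$ZONE
    FIRST_CPU=64                 # first virtual processor to dedicate
    LAST_CPU=95                  # last virtual processor to dedicate

    zoneadm -z $ZONE halt                                  # stop the zone (the post uses 'ssh test-z2 init 5')
    svcadm enable pools                                    # enable the resource pools facility
    poolcfg -dc "create pool $POOL"
    poolcfg -dc "create pset $PSET"
    poolcfg -dc "modify pset $PSET (uint pset.max=32)"
    i=$FIRST_CPU
    while (( i <= LAST_CPU )); do                          # move virtual CPUs into the processor set
        poolcfg -dc "transfer to pset $PSET (cpu $i)"
        (( i = i + 1 ))
    done
    poolcfg -dc "associate pool $POOL (pset $PSET)"        # tie the pool to the processor set
    zonecfg -z $ZONE set pool=$POOL                        # tell the zone to use the pool
    zoneadm -z $ZONE boot
    pooladm -s                                             # persist the configuration to /etc/pooladm.conf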

    Read the article

  • Java ME SDK 3.0.5 is released!

    - by SungmoonCho
      Java ME SDK 3.0.5 went live! For many months, we have been working hard to fix bugs from the previous version and add a lot of new features demanded by the Java ME community. You can download the new version from this link. Please see below for more information. NetBeans Integration - All Java ME tools are implemented as NetBeans plugins. Device Manager - Java ME SDK now supports multiple device managers. You can switch between different versions of device managers. LWUIT 1.5 Support - The Resource Editor is available from the Java ME menu to help you design and organize resources for LWUIT applications. For a description of LWUIT 1.5 features, visit the LWUIT download page. Network Monitor - Integrated with NetBeans profiling tools, the Network Monitor now supports WMA, SIP, Bluetooth and OBEX, SATSA APDU and JCRMI, and server sockets. CPU Profiler - Now uses standard NetBeans profiling facilities to view snapshots. Profiling of VM classes can also be toggled on or off. WURFL Device Database - The database has been updated with more than 1000 new devices. Tracing - New tracing functionality now includes CLDC VM events, and monitors events such as exceptions, class loading, garbage collection, and method invocation. New or updated JSR support - Includes support for JSR 234 (Advanced Multimedia Supplements), JSR 253 (Mobile Telephony API), JSR 257 (Contactless Communication API), JSR 258 (Mobile User Interface Customization API), and JSR 293 (XML API for Java ME).

    Read the article

  • Short Look at Frends Helium 2.0 Beta

    - by mipsen
    Pekka from Frends gave me the opportunity to have a look at the beta version of their Helium 2.0. For all of you who don't know the tool: Helium is a web application that collects management data from BizTalk which you usually have to tediously collect yourself, like performance data (throttling, throughput (like completed Orchestrations/hour), other performance counters) and data about the state of BTS applications, and presents the data in clearly structured diagrams and overviews which (often) even allow drill-down.  Installing Helium 2 was quite easy. It comes as an msi-file which creates the web application on IIS. Additionally, a Windows service is deployed which acts as an agent for sending alert e-mails and collecting data. What I missed during installation was a link to the created web app at the end, but the link can be found under Program Files/Frends... On the start page Helium shows two sections: An overview of the BTS applications (Running?, suspended messages?) Basic performance data You can drill down into the BTS applications further, to see ReceiveLocations, Orchestrations and SendPorts. And then a very nice feature can be activated: You can set a monitor on each of the ports and/or orchestrations and have an e-mail sent when a threshold of executions/day or hour is not met. I think this is a great idea. The following screenshot shows the configuration of this option. Conclusion: Helium is a useful monitoring tool for BTS operations that might save a lot of time otherwise spent collecting data, writing a tool yourself, or documenting for the operations staff where to find the data. Pros: Simple installation Most important data for BTS operations in one place Monitor for alerts, if throughput is not met Nice Web-UI Reasonable price Cons: Additional performance counters cannot be added I am not sure when the final version is to be shipped, but you can see that on Frends' homepage soon, I guess... A trial version is available here

    Read the article

  • My sound stopped working today, how can I fix it?

    - by Oli
    This seems to be a problem with pulseaudio. I was logged in over VNC on my phone and started playing a video this caused X to crash (as sometimes happens). I restarted and suddenly the sound doesn't work. I have a Intel HDA/Realtek ALC889 00:1b.0 Audio device: Intel Corporation 82801JI (ICH10 Family) HD Audio Controller alsamixer is detecting this just fine. PulseAudio doesn't detect this alsa device so is using auto_null as the default sink (logs below). When I properly kill PulseAudio (tell it not to auto-start) direct ALSA communication with the sound card works just fine. speaker-test, for example, works. So the hardware and ALSA layers are fine IMO. In the logs, it seems that the card might be "busy" but I really don't know how or why it would be now (and never before). Is there an ALSA lock file somewhere that it still there because of my crash? I just ran sudo fuser /dev/snd/* and saw this: oli@bert:~$ sudo fuser /dev/snd/* /dev/snd/controlC0: 1884 /dev/snd/pcmC0D0c: 1884m /dev/snd/timer: 1884 A look at the process list (ps aux | grep 1884) tells me process 1884 is arecord -c 1 -f S16_LE -r 8000 -t raw. No idea what this is or why it's running. When I try and kill arecord (as root), it just respawns and rebinds on the hardware. I'm in a very annoying situation where I don't know what is going on and don't know how to find out. I'm open to all suggestions to get this working again. Fire away. And here's what I get when I stop PA auto-loading, kill it and then start it with -vvvv. oli@bert:~$ pulseaudio -vvvvv I: main.c: setrlimit(RLIMIT_NICE, (31, 31)) failed: Operation not permitted I: main.c: setrlimit(RLIMIT_RTPRIO, (9, 9)) failed: Operation not permitted D: core-rtclock.c: Timer slack is set to 50 us. D: core-util.c: RealtimeKit worked. I: core-util.c: Successfully gained nice level -11. I: main.c: This is PulseAudio 0.9.21-63-gd3efa-dirty D: main.c: Compilation host: x86_64-pc-linux-gnu D: main.c: Compilation CFLAGS: -g -O2 -g -Wall -O3 -Wall -W -Wextra -pipe -Wno-long-long -Winline -Wvla -Wno-overlength-strings -Wunsafe-loop-optimizations -Wundef -Wformat=2 -Wlogical-op -Wsign-compare -Wformat-security -Wmissing-include-dirs -Wformat-nonliteral -Wold-style-definition -Wpointer-arith -Winit-self -Wdeclaration-after-statement -Wfloat-equal -Wmissing-prototypes -Wstrict-prototypes -Wredundant-decls -Wmissing-declarations -Wmissing-noreturn -Wshadow -Wendif-labels -Wcast-align -Wstrict-aliasing=2 -Wwrite-strings -Wno-unused-parameter -ffast-math -Wp,-D_FORTIFY_SOURCE=2 -fno-common -fdiagnostics-show-option D: main.c: Running on host: Linux x86_64 2.6.38-rc3 #1 SMP Tue Feb 1 10:53:04 GMT 2011 D: main.c: Found 8 CPUs. I: main.c: Page size is 4096 bytes D: main.c: Compiled with Valgrind support: no D: main.c: Running in valgrind mode: no D: main.c: Running in VM: no D: main.c: Optimised build: yes D: main.c: All asserts enabled. I: main.c: Machine ID is 8310740c4729ef474fe5ecec4bbf5a6b. I: main.c: Session ID is 8310740c4729ef474fe5ecec4bbf5a6b-1297338553.571075-1050119523. I: main.c: Using runtime directory /home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-runtime. I: main.c: Using state directory /home/oli/.pulse. I: main.c: Using modules directory /usr/lib/pulse-0.9.21/modules. I: main.c: Running in system mode: no I: main.c: Fresh high-resolution timers available! Enjoy ol' chap! I: cpu-x86.c: CPU flags: CMOV MMX SSE SSE2 SSE3 SSSE3 SSE4_1 SSE4_2 I: svolume_mmx.c: Initialising MMX optimized functions. I: remap_mmx.c: Initialising MMX optimized remappers. 
I: svolume_sse.c: Initialising SSE2 optimized functions. I: remap_sse.c: Initialising SSE2 optimized remappers. I: sconv_sse.c: Initialising SSE2 optimized conversions. D: memblock.c: Using shared memory pool with 1024 slots of size 64.0 KiB each, total size is 64.0 MiB, maximum usable slot size is 65472 D: database-tdb.c: Opened TDB database '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-device-volumes.tdb' I: module-device-restore.c: Sucessfully opened database file '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-device-volumes'. I: module.c: Loaded "module-device-restore" (index: #0; argument: ""). D: database-tdb.c: Opened TDB database '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-stream-volumes.tdb' I: module-stream-restore.c: Sucessfully opened database file '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-stream-volumes'. I: module.c: Loaded "module-stream-restore" (index: #1; argument: ""). D: database-tdb.c: Opened TDB database '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-card-database.tdb' I: module-card-restore.c: Sucessfully opened database file '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-card-database'. I: module.c: Loaded "module-card-restore" (index: #2; argument: ""). I: module.c: Loaded "module-augment-properties" (index: #3; argument: ""). D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-udev-detect.so': success D: module-udev-detect.c: /dev/snd/controlC0 is accessible: yes D: module-udev-detect.c: /devices/pci0000:00/0000:00:1b.0/sound/card0 is busy: yes I: module-udev-detect.c: Found 1 cards. I: module.c: Loaded "module-udev-detect" (index: #4; argument: ""). D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-bluetooth-discover.so': success D: dbus-util.c: Successfully connected to D-Bus system bus ba7c9a1f90b3d49d930bca2100000015 as :1.62 D: bluetooth-util.c: dbus: interface=org.freedesktop.DBus, path=/org/freedesktop/DBus, member=NameAcquired D: bluetooth-util.c: Bluetooth daemon is apparently not available. I: module.c: Loaded "module-bluetooth-discover" (index: #5; argument: ""). D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-esound-protocol-unix.so': success I: module.c: Loaded "module-esound-protocol-unix" (index: #6; argument: ""). I: module.c: Loaded "module-native-protocol-unix" (index: #7; argument: ""). D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-gconf.so': success I: module.c: Loaded "module-gconf" (index: #8; argument: ""). I: module-default-device-restore.c: Saved default sink 'auto_null' not existant, not restoring default sink setting. I: module-default-device-restore.c: Saved default source 'auto_null.monitor' not existant, not restoring default source setting. I: module.c: Loaded "module-default-device-restore" (index: #9; argument: ""). I: module.c: Loaded "module-rescue-streams" (index: #10; argument: ""). D: module-always-sink.c: Autoloading null-sink as no other sinks detected. I: sink.c: Created sink 0 "auto_null" with sample spec s16le 6ch 44100Hz and channel map front-left,front-left-of-center,front-center,front-right,front-right-of-center,rear-center I: sink.c: device.description = "Dummy Output" I: sink.c: device.class = "abstract" I: sink.c: device.icon_name = "audio-card" D: core-subscribe.c: Dropped redundant event due to change event. 
I: source.c: Created source 0 "auto_null.monitor" with sample spec s16le 6ch 44100Hz and channel map front-left,front-left-of-center,front-center,front-right,front-right-of-center,rear-center I: source.c: device.description = "Monitor of Dummy Output" I: source.c: device.class = "monitor" I: source.c: device.icon_name = "audio-input-microphone" D: module-null-sink.c: Thread starting up I: module.c: Loaded "module-null-sink" (index: #11; argument: "sink_name=auto_null sink_properties='device.description="Dummy Output"'"). I: module.c: Loaded "module-always-sink" (index: #12; argument: ""). I: module.c: Loaded "module-intended-roles" (index: #13; argument: ""). D: module-suspend-on-idle.c: Sink auto_null becomes idle, timeout in 5 seconds. I: module.c: Loaded "module-suspend-on-idle" (index: #14; argument: ""). I: client.c: Created 0 "ConsoleKit Session /org/freedesktop/ConsoleKit/Session1" D: module-console-kit.c: Added new session /org/freedesktop/ConsoleKit/Session1 I: module.c: Loaded "module-console-kit" (index: #15; argument: ""). I: module.c: Loaded "module-position-event-sounds" (index: #16; argument: ""). D: dbus-util.c: Successfully connected to D-Bus session bus efbffc6788fad56cfd64d40c00000018 as :1.182 D: main.c: Got org.pulseaudio.Server! I: main.c: Daemon startup complete. I: client.c: Created 1 "Native client (UNIX socket client)" I: client.c: Created 2 "Native client (UNIX socket client)" D: protocol-native.c: Protocol version: remote 16, local 16 I: protocol-native.c: Got credentials: uid=1000 gid=1000 success=1 D: protocol-native.c: SHM possible: yes D: protocol-native.c: Negotiated SHM: yes D: protocol-native.c: Protocol version: remote 16, local 16 I: protocol-native.c: Got credentials: uid=1000 gid=1000 success=1 D: protocol-native.c: SHM possible: yes D: protocol-native.c: Negotiated SHM: yes D: module-augment-properties.c: Looking for .desktop file for gnome-volume-control-applet D: module-augment-properties.c: Looking for .desktop file for gnome-settings-daemon D: core-subscribe.c: Dropped redundant event due to change event. I: module-suspend-on-idle.c: Sink auto_null idle for too long, suspending ... D: sink.c: Suspend cause of sink auto_null is 0x0004, suspending Note the one section that seems to find the hardware but says it's busy (no idea if this is relevant). D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-udev-detect.so': success D: module-udev-detect.c: /dev/snd/controlC0 is accessible: yes D: module-udev-detect.c: /devices/pci0000:00/0000:00:1b.0/sound/card0 is busy: yes I: module-udev-detect.c: Found 1 cards.
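A minimal sketch of how one might chase the respawning arecord process, assuming PID 1884 as reported above (substitute whatever fuser reports on your own system):

    # Sketch: identify what is holding /dev/snd/* and what keeps respawning it.
    sudo fuser -v /dev/snd/*                              # verbose: PID, user and access mode per device
    ps -o pid,ppid,user,cmd -p 1884                       # inspect the offending process (1884 from the output above)
    ps -o pid,ppid,user,cmd -p "$(ps -o ppid= -p 1884)"   # and its parent, which is likely what respawns it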

    Read the article

  • How to reset display settings in XFCE \ Ubuntu 12.04 and also flgrx drivers

    - by Agent24
    I recently upgraded to Ubuntu 12.04 and since I hate Unity I installed the Xubuntu package and am using XFCE instead. Since I have a Radeon HD5770 I also installed the fglrx drivers. This all went fine (aside from the fact that the post-release update fglrx drivers have an error on installation and Ubuntu thinks they're not installed when they actually are). I configured my display settings (dual monitors, a 17" CRT on VGA and a 17" LCD on DVI) in the amdcccle program and everything was perfect. THEN, 2 days ago, I accidentally clicked on the "Display" settings in the XFCE "settings" manager. After that, everything got screwed. Now, I normally run the CRT at 1152x864 and the LCD at 1280x1024 with the CRT as my primary monitor (with panel) and the LCD without panels etc just to display other windows when I want to drag them over there. The problem is now that if I set my CRT to 1152x864, it stays at 1280x1024 virtually and half the stuff falls off the screen. It also puts the LCD at 1280x1024 BUT then overlays the CRT's display on top with different wallpaper in an L shape down the right-hand and bottom edges. In short, nothing makes sense and everything is FUBAR. I tried uninstalling fglrx through Synaptic, and renaming xorg.conf and also the XFCE XML file that has monitor settings, but it still won't make sense. Unity on the other hand can currently set everything normally, so the problem appears to be only with XFCE. In any case, I can't even get the fglrx drivers back; when I re-installed them, I can't run amdcccle anymore as it says the driver isn't installed!! Can someone help me reset my XFCE settings so the monitors aren't screwed with some incorrect virtual desktop size, and also so I can get the fglrx drivers back and working? I really don't want to have to format and reinstall and go through all the hassle but it looks like I may have to :(
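    A minimal sketch of the usual reset steps, assuming the default Ubuntu 12.04 package names (fglrx, fglrx-amdcccle) and the standard XFCE configuration path; back up anything before removing it:

        # Sketch: clear XFCE's stored display layout and regenerate the fglrx configuration.
        mv ~/.config/xfce4/xfconf/xfce-perchannel-xml/displays.xml{,.bak}    # XFCE's per-monitor settings
        sudo mv /etc/X11/xorg.conf /etc/X11/xorg.conf.bak 2>/dev/null        # let X autodetect on next start
        sudo apt-get install --reinstall fglrx fglrx-amdcccle                # reinstall the Catalyst driver and amdcccle
        sudo aticonfig --initial -f                                          # regenerate xorg.conf for fglrx
        # log out and back in (or reboot), then redo the dual-monitor setup in amdcccle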

    Read the article

  • Does the hdd run more in ubuntu?

    - by starcorn
    Hello, this is something that's been bothering me, and I would like to know if it's a known issue. OK, I have monitored the hdd temperature for a couple of days when running in Ubuntu and Windows 7. I have both OSes installed on the same laptop, and I'm using SpeedFan to monitor the hdd temp in Windows 7, and hddtemp to monitor it on Ubuntu. When running on Windows 7 the hdd usually stays around 37-39. This is under the load of just web browsing, watching movies, and programming. When I do the same thing on Ubuntu the hdd will go to 40-42. Most of the time, however, it stays at 41-42 degrees. Btw, even when just idling in Ubuntu the hdd will go over 40 degrees. This isn't a really big issue, maybe, since I read that hard drives can handle temperatures of at least 60 degrees. However, since the hdd is located just where I put my right palm, it is quite disturbing at times. Is this temperature the same for you guys who are running Ubuntu 10.10 on a laptop?
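    A minimal sketch for comparing the two systems more methodically, assuming the drive is /dev/sda and that hddtemp/hdparm are available:

        # Sketch: sample the drive temperature and check its power-management level.
        sudo apt-get install hddtemp hdparm
        sudo hddtemp /dev/sda                      # one-off temperature reading
        sudo hdparm -B /dev/sda                    # current APM level (lower values are more aggressive)
        watch -n 60 'sudo hddtemp /dev/sda'        # sample once a minute during normal use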

    Read the article

  • My processor is not detected intel core 2 duo

    - by walid
    My processor, an Intel Core 2 Duo, is not detected. When I type $ uname -m -p I get this: i686 unknown. I have Ubuntu 10.10 Netbook Remix, but cat /proc/cpuinfo gives the right identification of two processors, as below: processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 15 model name : Intel(R) Core(TM)2 CPU T5600 @ 1.83GHz stepping : 6 cpu MHz : 1826.000 cache size : 2048 KB physical id : 0 siblings : 2 core id : 0 cpu cores : 2 apicid : 0 initial apicid : 0 fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm lahf_lm dts tpr_shadow bogomips : 3657.99 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: processor : 1 vendor_id : GenuineIntel cpu family : 6 model : 15 model name : Intel(R) Core(TM)2 CPU T5600 @ 1.83GHz stepping : 6 cpu MHz : 1826.000 cache size : 2048 KB physical id : 0 siblings : 2 core id : 1 cpu cores : 2 apicid : 1 initial apicid : 1 fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm lahf_lm dts tpr_shadow bogomips : 3657.53 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: The problem is with programs that use more than one core, like VirtualBox and bitcoin, which refuse to use more than one core. Is there anything wrong, or anything that I can do? My installation is from a live ISO on a USB stick.
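    A minimal sketch for cross-checking what the kernel actually sees, assuming the stock coreutils and util-linux tools in Ubuntu 10.10; if both cores show up here, the limitation is more likely in each application's own settings (for example, the CPU count assigned to a VirtualBox VM):

        # Sketch: confirm how many logical CPUs the kernel has brought online.
        nproc                                   # number of processing units available
        grep -c '^processor' /proc/cpuinfo      # logical CPUs listed in /proc/cpuinfo
        lscpu                                   # sockets / cores per socket / threads summary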

    Read the article

  • Blank screen after Switch User or Resume

    - by matt wilkie
    About half the time when I switch users or resume from standby, the screen goes blank (black). If I work the cursor keys I can hear the system bell when it gets to the end of the user list. I can also successfully log in, going from memory, but the screen stays black. Sometimes closing and re-opening the lid will light up the screen again. Pressing the special Function key to enable/disable the external monitor connection has no effect [Fn]-[F5],[Fn]-[F6]. If none of the previous work, I need to put the computer into hibernation or full power off to restore screen function. If I watch closely when switching users I think I can see the screen initially start to light up and then quickly fade to black. The computer is an Acer Aspire 3500, model ZL6, running Ubuntu 10.10 installed 2 days ago. No proprietary drivers are in use. I'll provide a list of hardware details as soon as I can figure out how to generate that (didn't there used to be an entry for hardware details under the System menu?). Possibly related questions: No resume after Hibernate or Standby When I resume from suspension - the screen is blank Switch user fails to complete successfully For what it's worth, blank after resume also used to happen occasionally when the laptop was running XP-Home, but nowhere near as often, perhaps 6 or 8 times a year. UPDATE: I found System > Administration > System Testing and ran the Monitor test. It went very very dark, but the window elements could be discerned, and the whole screen flashed (from very very dark to black). On the third repeat of that same test the screen went to full black and stayed there. Moving the mouse, via touchpad, or touch keys did not wake it up again. I had to close the lid and put the computer into hibernate, and press the power button to restore it. UPDATE2: output of lshw: http://pastebin.com/q7n8676r, lspci: http://pastebin.com/6ujzVK4r UPDATE3: sometimes I can restore the screen by flipping to console 1 with ctrl-alt-F1 and then back to graphical with ctrl-alt-F7.

    Read the article

  • Blank Screen at boot Ubuntu 12.04 - nvidia-current - Macbook Air 3,2

    - by soulnafein
    I've installed nvidia-current using the Additional Drivers application in Ubuntu 12.04. I need those drivers so I can use accelerated WebGL. After installing the drivers, and rebooting X fails to start and I have a frozen system/dark screen. Below is the content of Xorg.0.log How can I fix this problem? [ 4.666] X.Org X Server 1.11.3 Release Date: 2011-12-16 [ 4.666] X Protocol Version 11, Revision 0 [ 4.666] Build Operating System: Linux 2.6.42-23-generic x86_64 Ubuntu [ 4.666] Current Operating System: Linux david-macbook-air 3.2.0-34-generic #53-Ubuntu SMP Thu Nov 15 10:48:16 UTC 2012 x86_64 [ 4.666] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-34-generic root=UUID=b3d5ae2a-72af-4ef9-b775-0d40b5f80f9b ro quiet splash vt.handoff=7 [ 4.666] Build Date: 29 August 2012 12:12:33AM [ 4.666] xorg-server 2:1.11.4-0ubuntu10.8 (For technical support please see http://www.ubuntu.com/support) [ 4.666] Current version of pixman: 0.24.4 [ 4.666] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. [ 4.666] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. [ 4.666] (==) Log file: "/var/log/Xorg.0.log", Time: Thu Dec 13 10:18:02 2012 [ 4.668] (==) Using system config directory "/usr/share/X11/xorg.conf.d" [ 4.668] (==) No Layout section. Using the first Screen section. [ 4.668] (==) No screen section available. Using defaults. [ 4.668] (**) |-->Screen "Default Screen Section" (0) [ 4.668] (**) | |-->Monitor "<default monitor>" [ 4.668] (==) No monitor specified for screen "Default Screen Section". Using a default monitor configuration. [ 4.668] (==) Automatically adding devices [ 4.668] (==) Automatically enabling devices [ 4.668] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist. [ 4.668] Entry deleted from font path. [ 4.668] (WW) The directory "/usr/share/fonts/X11/100dpi/" does not exist. [ 4.668] Entry deleted from font path. [ 4.669] (WW) The directory "/usr/share/fonts/X11/75dpi/" does not exist. [ 4.669] Entry deleted from font path. [ 4.669] (WW) The directory "/usr/share/fonts/X11/100dpi" does not exist. [ 4.669] Entry deleted from font path. [ 4.669] (WW) The directory "/usr/share/fonts/X11/75dpi" does not exist. [ 4.669] Entry deleted from font path. [ 4.669] (WW) The directory "/var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType" does not exist. [ 4.669] Entry deleted from font path. [ 4.669] (==) FontPath set to: /usr/share/fonts/X11/misc, /usr/share/fonts/X11/Type1, built-ins [ 4.669] (==) ModulePath set to "/usr/lib/x86_64-linux-gnu/xorg/extra-modules,/usr/lib/xorg/extra-modules,/usr/lib/xorg/modules" [ 4.669] (II) The server relies on udev to provide the list of input devices. If no devices become available, reconfigure udev or disable AutoAddDevices. 
[ 4.669] (II) Loader magic: 0x7f6222467b00 [ 4.669] (II) Module ABI versions: [ 4.669] X.Org ANSI C Emulation: 0.4 [ 4.669] X.Org Video Driver: 11.0 [ 4.669] X.Org XInput driver : 16.0 [ 4.669] X.Org Server Extension : 6.0 [ 4.670] (--) PCI:*(0:2:0:0) 10de:08a3:106b:00d3 rev 162, Mem @ 0x92000000/16777216, 0x80000000/268435456, 0x90000000/33554432, I/O @ 0x00001000/128, BIOS @ 0x????????/131072 [ 4.670] (II) Open ACPI successful (/var/run/acpid.socket) [ 4.670] (II) LoadModule: "extmod" [ 4.671] (II) Loading /usr/lib/xorg/modules/extensions/libextmod.so [ 4.671] (II) Module extmod: vendor="X.Org Foundation" [ 4.671] compiled for 1.11.3, module version = 1.0.0 [ 4.671] Module class: X.Org Server Extension [ 4.671] ABI class: X.Org Server Extension, version 6.0 [ 4.671] (II) Loading extension MIT-SCREEN-SAVER [ 4.671] (II) Loading extension XFree86-VidModeExtension [ 4.671] (II) Loading extension XFree86-DGA [ 4.671] (II) Loading extension DPMS [ 4.671] (II) Loading extension XVideo [ 4.671] (II) Loading extension XVideo-MotionCompensation [ 4.671] (II) Loading extension X-Resource [ 4.671] (II) LoadModule: "dbe" [ 4.671] (II) Loading /usr/lib/xorg/modules/extensions/libdbe.so [ 4.671] (II) Module dbe: vendor="X.Org Foundation" [ 4.671] compiled for 1.11.3, module version = 1.0.0 [ 4.671] Module class: X.Org Server Extension [ 4.671] ABI class: X.Org Server Extension, version 6.0 [ 4.671] (II) Loading extension DOUBLE-BUFFER [ 4.671] (II) LoadModule: "glx" [ 4.671] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/libglx.so [ 4.869] (II) Module glx: vendor="NVIDIA Corporation" [ 4.869] compiled for 4.0.2, module version = 1.0.0 [ 4.869] Module class: X.Org Server Extension [ 4.869] (II) NVIDIA GLX Module 295.40 Thu Apr 5 21:57:38 PDT 2012 [ 4.869] (II) Loading extension GLX [ 4.869] (II) LoadModule: "record" [ 4.870] (II) Loading /usr/lib/xorg/modules/extensions/librecord.so [ 4.870] (II) Module record: vendor="X.Org Foundation" [ 4.870] compiled for 1.11.3, module version = 1.13.0 [ 4.870] Module class: X.Org Server Extension [ 4.870] ABI class: X.Org Server Extension, version 6.0 [ 4.870] (II) Loading extension RECORD [ 4.870] (II) LoadModule: "dri" [ 4.870] (II) Loading /usr/lib/xorg/modules/extensions/libdri.so [ 4.870] (II) Module dri: vendor="X.Org Foundation" [ 4.870] compiled for 1.11.3, module version = 1.0.0 [ 4.870] ABI class: X.Org Server Extension, version 6.0 [ 4.870] (II) Loading extension XFree86-DRI [ 4.870] (II) LoadModule: "dri2" [ 4.871] (II) Loading /usr/lib/xorg/modules/extensions/libdri2.so [ 4.871] (II) Module dri2: vendor="X.Org Foundation" [ 4.871] compiled for 1.11.3, module version = 1.2.0 [ 4.871] ABI class: X.Org Server Extension, version 6.0 [ 4.871] (II) Loading extension DRI2 [ 4.871] (==) Matched nvidia as autoconfigured driver 0 [ 4.871] (==) Matched nouveau as autoconfigured driver 1 [ 4.871] (==) Matched nv as autoconfigured driver 2 [ 4.871] (==) Matched vesa as autoconfigured driver 3 [ 4.871] (==) Matched fbdev as autoconfigured driver 4 [ 4.871] (==) Assigned the driver to the xf86ConfigLayout [ 4.871] (II) LoadModule: "nvidia" [ 4.871] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so [ 4.887] (II) Module nvidia: vendor="NVIDIA Corporation" [ 4.887] compiled for 4.0.2, module version = 1.0.0 [ 4.887] Module class: X.Org Video Driver [ 4.892] (II) LoadModule: "nouveau" [ 4.894] (II) Loading /usr/lib/xorg/modules/drivers/nouveau_drv.so [ 4.894] (II) Module nouveau: vendor="X.Org Foundation" [ 4.894] compiled for 1.11.3, 
module version = 1.0.2 [ 4.894] Module class: X.Org Video Driver [ 4.894] ABI class: X.Org Video Driver, version 11.0 [ 4.894] (II) LoadModule: "nv" [ 4.895] (WW) Warning, couldn't open module nv [ 4.895] (II) UnloadModule: "nv" [ 4.895] (II) Unloading nv [ 4.895] (EE) Failed to load module "nv" (module does not exist, 0) [ 4.895] (II) LoadModule: "vesa" [ 4.895] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so [ 4.896] (II) Module vesa: vendor="X.Org Foundation" [ 4.896] compiled for 1.11.3, module version = 2.3.0 [ 4.896] Module class: X.Org Video Driver [ 4.896] ABI class: X.Org Video Driver, version 11.0 [ 4.896] (II) LoadModule: "fbdev" [ 4.896] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so [ 4.896] (II) Module fbdev: vendor="X.Org Foundation" [ 4.896] compiled for 1.11.3, module version = 0.4.2 [ 4.896] ABI class: X.Org Video Driver, version 11.0 [ 4.896] (==) Matched nvidia as autoconfigured driver 0 [ 4.896] (==) Matched nouveau as autoconfigured driver 1 [ 4.896] (==) Matched nv as autoconfigured driver 2 [ 4.896] (==) Matched vesa as autoconfigured driver 3 [ 4.896] (==) Matched fbdev as autoconfigured driver 4 [ 4.896] (==) Assigned the driver to the xf86ConfigLayout [ 4.896] (II) LoadModule: "nvidia" [ 4.896] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so [ 4.896] (II) Module nvidia: vendor="NVIDIA Corporation" [ 4.896] compiled for 4.0.2, module version = 1.0.0 [ 4.896] Module class: X.Org Video Driver [ 4.896] (II) UnloadModule: "nvidia" [ 4.896] (II) Unloading nvidia [ 4.896] (II) Failed to load module "nvidia" (already loaded, 32610) [ 4.896] (II) LoadModule: "nouveau" [ 4.897] (II) Loading /usr/lib/xorg/modules/drivers/nouveau_drv.so [ 4.897] (II) Module nouveau: vendor="X.Org Foundation" [ 4.897] compiled for 1.11.3, module version = 1.0.2 [ 4.897] Module class: X.Org Video Driver [ 4.897] ABI class: X.Org Video Driver, version 11.0 [ 4.897] (II) UnloadModule: "nouveau" [ 4.897] (II) Unloading nouveau [ 4.897] (II) Failed to load module "nouveau" (already loaded, 32610) [ 4.897] (II) LoadModule: "nv" [ 4.897] (WW) Warning, couldn't open module nv [ 4.897] (II) UnloadModule: "nv" [ 4.897] (II) Unloading nv [ 4.897] (EE) Failed to load module "nv" (module does not exist, 0) [ 4.897] (II) LoadModule: "vesa" [ 4.898] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so [ 4.898] (II) Module vesa: vendor="X.Org Foundation" [ 4.898] compiled for 1.11.3, module version = 2.3.0 [ 4.898] Module class: X.Org Video Driver [ 4.898] ABI class: X.Org Video Driver, version 11.0 [ 4.898] (II) UnloadModule: "vesa" [ 4.898] (II) Unloading vesa [ 4.898] (II) Failed to load module "vesa" (already loaded, 0) [ 4.898] (II) LoadModule: "fbdev" [ 4.898] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so [ 4.898] (II) Module fbdev: vendor="X.Org Foundation" [ 4.898] compiled for 1.11.3, module version = 0.4.2 [ 4.898] ABI class: X.Org Video Driver, version 11.0 [ 4.898] (II) UnloadModule: "fbdev" [ 4.898] (II) Unloading fbdev [ 4.899] (II) Failed to load module "fbdev" (already loaded, 0) [ 4.899] (II) NVIDIA dlloader X Driver 295.40 Thu Apr 5 21:38:35 PDT 2012 [ 4.899] (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs [ 4.899] (II) NOUVEAU driver Date: Wed Sep 12 13:42:43 2012 +0200 [ 4.899] (II) NOUVEAU driver for NVIDIA chipset families : [ 4.899] RIVA TNT (NV04) [ 4.899] RIVA TNT2 (NV05) [ 4.899] GeForce 256 (NV10) [ 4.899] GeForce 2 (NV11, NV15) [ 4.899] GeForce 4MX (NV17, NV18) [ 4.899] GeForce 3 (NV20) [ 4.900] GeForce 4Ti (NV25, NV28) 
[ 4.900] GeForce FX (NV3x) [ 4.900] GeForce 6 (NV4x) [ 4.900] GeForce 7 (G7x) [ 4.900] GeForce 8 (G8x) [ 4.900] GeForce GTX 200 (NVA0) [ 4.900] GeForce GTX 400 (NVC0) [ 4.900] (II) VESA: driver for VESA chipsets: vesa [ 4.900] (II) FBDEV: driver for framebuffer: fbdev [ 4.900] (++) using VT number 7 [ 4.902] (II) Loading sub module "fb" [ 4.902] (II) LoadModule: "fb" [ 4.902] (II) Loading /usr/lib/xorg/modules/libfb.so [ 4.902] (II) Module fb: vendor="X.Org Foundation" [ 4.902] compiled for 1.11.3, module version = 1.0.0 [ 4.902] ABI class: X.Org ANSI C Emulation, version 0.4 [ 4.902] (II) Loading sub module "wfb" [ 4.902] (II) LoadModule: "wfb" [ 4.903] (II) Loading /usr/lib/xorg/modules/libwfb.so [ 4.905] (II) Module wfb: vendor="X.Org Foundation" [ 4.905] compiled for 1.11.3, module version = 1.0.0 [ 4.905] ABI class: X.Org ANSI C Emulation, version 0.4 [ 4.905] (II) Loading sub module "ramdac" [ 4.905] (II) LoadModule: "ramdac" [ 4.905] (II) Module "ramdac" already built-in [ 4.907] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so [ 4.907] (II) Loading /usr/lib/xorg/modules/libwfb.so [ 4.907] (II) Loading /usr/lib/xorg/modules/libfb.so [ 4.912] (WW) Falling back to old probe method for vesa [ 4.912] (WW) Falling back to old probe method for fbdev [ 4.912] (II) Loading sub module "fbdevhw" [ 4.912] (II) LoadModule: "fbdevhw" [ 4.912] (II) Loading /usr/lib/xorg/modules/libfbdevhw.so [ 4.912] (II) Module fbdevhw: vendor="X.Org Foundation" [ 4.912] compiled for 1.11.3, module version = 0.0.2 [ 4.912] ABI class: X.Org Video Driver, version 11.0 [ 4.912] (II) NVIDIA(0): Creating default Display subsection in Screen section "Default Screen Section" for depth/fbbpp 24/32 [ 4.912] (==) NVIDIA(0): Depth 24, (==) framebuffer bpp 32 [ 4.912] (==) NVIDIA(0): RGB weight 888 [ 4.912] (==) NVIDIA(0): Default visual is TrueColor [ 4.912] (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0) [ 4.912] (**) NVIDIA(0): Enabling 2D acceleration [ 5.442] (EE) NVIDIA(0): Failed to initialize the display subsystem for the NVIDIA [ 5.442] (EE) NVIDIA(0): graphics device! [ 5.442] (EE) NVIDIA(0): Failed to get supported display device(s) [ 5.442] (EE) NVIDIA(0): Failed to initialize dac HAL [ 5.442] (II) UnloadModule: "nvidia" [ 5.442] (II) Unloading nvidia [ 5.442] (II) UnloadModule: "wfb" [ 5.442] (II) Unloading wfb [ 5.442] (II) UnloadModule: "fb" [ 5.443] (II) Unloading fb [ 5.443] (EE) Screen(s) found, but none have a usable configuration. [ 5.443] Fatal server error: [ 5.443] no screens found [ 5.443] Please consult the The X.Org Foundation support at http://wiki.x.org for help. [ 5.443] Please also check the log file at "/var/log/Xorg.0.log" for additional information. [ 5.443] [ 5.447] ddxSigGiveUp: Closing log [ 5.447] Server terminated with error (1). Closing log file.

    Read the article

  • Ubuntu 12.04 LTS 32bit does not detect 4Gb ram

    - by David
    I have recently installed 4Gb of RAM in an existing 12.04 32-bit Ubuntu system. It's not all being recognised; only 3.2Gb is showing. See: administrator@Root2:~$ free total used free shared buffers cached Mem: 3355256 1251112 2104144 0 48664 391972 -/+ buffers/cache: 810476 2544780 The system is PAE-capable. See: administrator@Root2:~$ grep --color=always -i PAE /proc/cpuinfo flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm lahf_lm dts flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm lahf_lm dts The system is fully patched, and I tried to run a manual PAE upgrade. See: administrator@Root2:~$ sudo apt-get install linux-generic-pae linux-headers-generic-pae [sudo] password for administrator: Reading package lists... Done Building dependency tree Reading state information... Done linux-generic-pae is already the newest version. linux-headers-generic-pae is already the newest version. The following packages were automatically installed and are no longer required: language-pack-zh-hans language-pack-kde-en language-pack-kde-zh-hans language-pack-kde-en-base kde-l10n-engb kde-l10n-zhcn language-pack-zh-hans-base firefox-locale-zh-hans language-pack-kde-zh-hans-base Use 'apt-get autoremove' to remove them. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. I am not sure what else to try to get the full installed physical memory recognised, other than loading 64-bit. Any thoughts? Thanks! Output of uname -r: administrator@Root2:~$ uname -r 3.2.0-24-generic-pae
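    A minimal sketch of the usual follow-up checks, assuming dmidecode is available: confirm the PAE kernel is the one actually booted and compare what the firmware reports with what the kernel sees; if the BIOS reserves part of the 32-bit address space for devices and offers no memory-remapping option, some of the 4Gb will stay invisible regardless of the kernel.

        # Sketch: check the running kernel and the memory reported by the firmware.
        uname -r                                   # should end in -generic-pae (it does, per the output above)
        dpkg -l 'linux-image*' | grep '^ii'        # installed kernel images
        free -m                                    # memory visible to the kernel
        sudo dmidecode -t memory | grep -i size    # DIMM sizes as reported by the BIOS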

    Read the article

  • Why does my system use so much cache?

    - by Dave M G
    Previously, on my desktop computer running Ubuntu 14.04, I had 4GB RAM, which I thought should be plenty. However, after being on for a while, my computer would seem to get slow. I have a system resource monitor app in my Gnome panel, which I assume represents the available RAM (?). It shows a dark green area as being "Memory", and a light green area as "Cache". The "Cache" would slowly grow until it filled the whole graph, and then programs would get slow to load, or it would take a while to switch programs. I could alleviate the problem somewhat with this command, but eventually the computer cache fills up again, so it's only a bandaid: sudo sh -c "sync; echo 3 > /proc/sys/vm/drop_caches" So, I figured I'd get more RAM, so I replaced one 2GB stick with an 8GB stick, and now I have 10 GB ram. And my "cache" still slowly maxes out and my computer slows as a result. Also, sometimes the computer starts out with "cache" maxed when I first boot and log in. Not always though, I don't know if there's a pattern that determines why it happens. Why is Ubuntu using up so much cache? Is 10GB not enough for Ubuntu? Here's what my system monitor looks like in my Gnome panel. The middle square shows RAM usage. The light green area is the "cache": This is my memory and swap history, which doesn't seem to include any information about "cache". I realize at this point I'm not totally clear on the difference between "cache" and "swap":
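    For reference, the light green "cache" area is the kernel page cache, which is reclaimed automatically when applications need the memory. A minimal sketch for reading the underlying figures, assuming a stock /proc layout:

        free -m                                                    # the '-/+ buffers/cache' row shows what applications actually use / can still get
        grep -E 'MemTotal|MemFree|Buffers|^Cached' /proc/meminfo   # raw counters behind the graph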

    Read the article

  • How should developers handle subpar working conditions? [closed]

    - by ivar
    I have been working in my current job for less than a year and at the beginning didn't have the courage to say anything about the things that bothered me. Now I'm a bit fed up and need things to get better. The first problem is not random but I'll mention it anyway. We are running out of space, so every new employee gets a smaller table. We are promised that the space problem will be fixed soon. Almost every employee has a different keyboard, mouse and headphones (if any). Mine are a $10 keyboard, some random cheap mouse and some random crappy headphones with a mic. All of these were used and dirty when I got them. The number of monitors is 1-3, and they come in different sizes. I have 2 nice monitors and can't complain, but some people are given 1 small monitor. When it's their first job they don't have the guts to ask for 2, even if most others have 2. Nobody seems to care either. The project manager asked if it was OK? He obviously said he can handle the 1 small one. Then the manager said you can go ask for 1 more. I'm watching this and think: go and ask where? The company is trying to hire more people but is not doing much after the person has signed the contract. We are put in one room that is open to the hallway and it's super noisy. Almost like a zoo at times. Even if nobody is talking, the crappy keyboards make too much noise. Is this normal? Am I too negative and should I just do my job with what I was given? Should I demand better things? Should the company have some system so that everybody gets equipment in some price range?

    Read the article

  • DTracing TCP congestion control

    - by user12820842
    In a previous post, I showed how we can use DTrace to probe TCP receive and send window events. TCP receive and send windows are in effect both about flow-controlling how much data can be received - the receive window reflects how much data the local TCP is prepared to receive, while the send window simply reflects the size of the receive window of the peer TCP. Both then represent flow control as imposed by the receiver. However, consider that without the sender imposing flow control, and a slow link to a peer, TCP will simply fill up its window with sent segments. Dealing with multiple TCP implementations filling their peer TCP's receive windows in this manner, busy intermediate routers may drop some of these segments, leading to timeout and retransmission, which may again lead to drops. This is termed congestion, and TCP has multiple congestion control strategies. We can see that in this example, we need to have some way of adjusting how much data we send depending on how quickly we receive acknowledgement - if we get ACKs quickly, we can safely send more segments, but if acknowledgements come slowly, we should proceed with more caution. More generally, we need to implement flow control on the send side also. Slow Start and Congestion Avoidance From RFC2581, let's examine the relevant variables: "The congestion window (cwnd) is a sender-side limit on the amount of data the sender can transmit into the network before receiving an acknowledgment (ACK). Another state variable, the slow start threshold (ssthresh), is used to determine whether the slow start or congestion avoidance algorithm is used to control data transmission" Slow start is used to probe the network's ability to handle transmission bursts both when a connection is first created and when retransmission timers fire. The latter case is important, as the fact that we have effectively lost TCP data acts as a motivator for re-probing how much data the network can handle from the sending TCP. The congestion window (cwnd) is initialized to a relatively small value, generally a low multiple of the sending maximum segment size. When slow start kicks in, we will only send that number of bytes before waiting for acknowledgement. When acknowledgements are received, the congestion window is increased in size until cwnd reaches the slow start threshold ssthresh value. For most congestion control algorithms the window increases exponentially under slow start, assuming we receive acknowledgements. We send 1 segment, receive an ACK, increase the cwnd by 1 MSS to 2*MSS, send 2 segments, receive 2 ACKs, increase the cwnd by 2*MSS to 4*MSS, send 4 segments etc. When the congestion window exceeds the slow start threshold, congestion avoidance is used instead of slow start. During congestion avoidance, the congestion window is generally updated by one MSS for each round-trip-time as opposed to each ACK, and so cwnd growth is linear instead of exponential (we may receive multiple ACKs within a single RTT). This continues until congestion is detected. If a retransmit timer fires, congestion is assumed and the ssthresh value is reset. It is reset to a fraction of the number of bytes outstanding (unacknowledged) in the network. At the same time the congestion window is reset to a single max segment size. Thus, we initiate slow start until we start receiving acknowledgements again, at which point we can eventually flip over to congestion avoidance when cwnd > ssthresh. 
Congestion control algorithms differ most in how they handle the other indication of congestion - duplicate ACKs. A duplicate ACK is a strong indication that data has been lost, since they often come from a receiver explicitly asking for a retransmission. In some cases, a duplicate ACK may be generated at the receiver as a result of packets arriving out-of-order, so it is sensible to wait for multiple duplicate ACKs before assuming packet loss rather than out-of-order delivery. This is termed fast retransmit (i.e. retransmit without waiting for the retransmission timer to expire). Note that on Oracle Solaris 11, the congestion control method used can be customized. See here for more details. In general, 3 or more duplicate ACKs indicate packet loss and should trigger fast retransmit. It's best not to revert to slow start in this case, as the fact that the receiver knew it was missing data suggests it has received data with a higher sequence number, so we know traffic is still flowing. Falling back to slow start would be excessive therefore, so fast recovery is used instead. Observing slow start and congestion avoidance The following script counts TCP segments sent under slow start (cwnd < ssthresh) and under congestion avoidance (cwnd >= ssthresh). #!/usr/sbin/dtrace -s #pragma D option quiet tcp:::connect-request / start[args[1]->cs_cid] == 0 / { start[args[1]->cs_cid] = 1; } tcp:::send / start[args[1]->cs_cid] == 1 && args[3]->tcps_cwnd < args[3]->tcps_cwnd_ssthresh / { @c["Slow start", args[2]->ip_daddr, args[4]->tcp_dport] = count(); } tcp:::send / start[args[1]->cs_cid] == 1 && args[3]->tcps_cwnd >= args[3]->tcps_cwnd_ssthresh / { @c["Congestion avoidance", args[2]->ip_daddr, args[4]->tcp_dport] = count(); } As we can see, the script only works on connections initiated after it is started (using the start[] associative array, indexed by connection ID, to mark a new connection with start[cid] = 1). From there we simply differentiate send events where cwnd < ssthresh (slow start) from those where cwnd >= ssthresh (congestion avoidance). Here's the output taken when I accessed a YouTube video (where rport is 80) and from an FTP session where I put a large file onto a remote system. # dtrace -s tcp_slow_start.d ^C ALGORITHM RADDR RPORT #SEG Slow start 10.153.125.222 20 6 Slow start 138.3.237.7 80 14 Slow start 10.153.125.222 21 18 Congestion avoidance 10.153.125.222 20 1164 We see that in the case of the YouTube video, slow start was exclusively used. Most of the segments we sent in that case were likely ACKs. Compare this case - where 14 segments were sent using slow start - to the FTP case, where only 6 segments were sent before we switched to congestion avoidance for 1164 segments. In the case of the FTP session, the FTP data on port 20 was predominantly sent with congestion avoidance in operation, while the FTP control session on port 21 relied exclusively on slow start. For the default congestion control algorithm - "newreno" - on Solaris 11, slow start will increase the cwnd by 1 MSS for every acknowledgement received, and by 1 MSS for each RTT in congestion avoidance mode. Different pluggable congestion control algorithms operate slightly differently. For example "highspeed" will update the slow start cwnd by the number of bytes ACKed rather than the MSS. And to finish, here's a neat oneliner to visually display the distribution of congestion window values for all TCP connections to a given remote port using a quantization. In this example, only port 80 is in use and we see the majority of cwnd values for that port are in the 4096-8191 range. 
# dtrace -n 'tcp:::send { @q[args[4]->tcp_dport] = quantize(args[3]->tcps_cwnd); }' dtrace: description 'tcp:::send ' matched 10 probes ^C 80 value ------------- Distribution ------------- count -1 | 0 0 |@@@@@@ 5 1 | 0 2 | 0 4 | 0 8 | 0 16 | 0 32 | 0 64 | 0 128 | 0 256 | 0 512 | 0 1024 | 0 2048 |@@@@@@@@@ 8 4096 |@@@@@@@@@@@@@@@@@@@@@@@@@@ 23 8192 | 0
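The same idea extends to other tcpsinfo_t fields. A minimal sketch, assuming the same Solaris 11 tcp provider, that quantizes the peer-imposed send window for comparison with the cwnd distribution above:

    # Sketch: distribution of the send window (the peer's advertised receive window) per remote port.
    sudo dtrace -n 'tcp:::send { @q[args[4]->tcp_dport] = quantize(args[3]->tcps_swnd); }'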

    Read the article

  • Master-slave vs. peer-to-peer archictecture: benefits and problems

    - by Ashok_Ora
    Almost two decades ago, I was a member of a database development team that introduced adaptive locking. Locking, the most popular concurrency control technique in database systems, is pessimistic. Locking ensures that two or more conflicting operations on the same data item don’t “trample” on each other’s toes, resulting in data corruption. In a nutshell, here’s the issue we were trying to address. In everyday life, traffic lights serve the same purpose. They ensure that traffic flows smoothly and when everyone follows the rules, there are no accidents at intersections. As I mentioned earlier, the problem with typical locking protocols is that they are pessimistic. Regardless of whether there is another conflicting operation in the system or not, you have to hold a lock! Acquiring and releasing locks can be quite expensive, depending on how many objects the transaction touches. Every transaction has to pay this penalty. To use the earlier traffic light analogy, if you have ever waited at a red light in the middle of nowhere with no one on the road, wondering why you need to wait when there’s clearly no danger of a collision, you know what I mean. The adaptive locking scheme that we invented was able to minimize the number of locks that a transaction held, by detecting whether there were one or more transactions that needed conflicting access; if there were none, you could get by without holding any lock at all. In many “well-behaved” workloads, there are few conflicts, so this optimization is a huge win. If, on the other hand, there are many concurrent, conflicting requests, the algorithm gracefully degrades to the “normal” behavior with minimal cost. We were able to reduce the number of lock requests per TPC-B transaction from 178 requests down to 2! Wow! This is a dramatic improvement in concurrency as well as transaction latency. The lesson from this exercise was that if you can identify the common scenario and optimize for that case so that only the uncommon scenarios are more expensive, you can make dramatic improvements in performance without sacrificing correctness. So how does this relate to the architecture and design of some of the modern NoSQL systems? NoSQL systems can be broadly classified as master-slave sharded or peer-to-peer sharded systems. NoSQL systems with a peer-to-peer architecture have an interesting way of handling changes. Whenever an item is changed, the client (or an intermediary) propagates the changes synchronously or asynchronously to multiple copies (for availability) of the data. Since the change can be propagated asynchronously, during some interval in time, it will be the case that some copies have received the update, and others haven’t. What happens if someone tries to read the item during this interval? The client in a peer-to-peer system will fetch the same item from multiple copies and compare them to each other. If they’re all the same, then every copy that was queried has the same (and up-to-date) value of the data item, so all’s good. If not, then the system provides a mechanism to reconcile the discrepancy and to update stale copies. So what’s the problem with this? There are two major issues: First, IT’S HORRIBLY PESSIMISTIC because, in the common case, it is unlikely that the same data item will be updated and read from different locations at around the same time! For every read operation, you have to read from multiple copies. 
    The second problem is that if the copies are not all the same, the application has to reconcile the differences and propagate the correct value to the outdated copies. This means that the application program has to handle discrepancies in the different versions of the data item and resolve the issue, which can further add to cost and operation latency. Resolving discrepancies is only one part of the problem. What if the same data item was updated independently on two different nodes (copies)? In that case, due to the asynchronous nature of change propagation, you might end up with different versions of the data item in different copies. The application program then also has to resolve conflicts and propagate the correct value to the copies that are outdated or have incorrect versions. This can get really complicated. My hunch is that there are many peer-to-peer-based applications that don’t handle this correctly and, worse, don’t even know it.

    Imagine having hundreds of millions of records in your database: how can you tell whether a particular data item is incorrect or out of date? And what price are you willing to pay to ensure that the data can be trusted? Multiple network messages per read request? Discrepancy and conflict resolution logic in the application, and potentially additional messages? All this overhead, when all you were trying to do was read a data item. Wouldn’t it be simpler to avoid this problem in the first place?

    Master-slave architectures like the Oracle NoSQL Database handle this very elegantly. A change to a data item is always sent to the master copy, so the master copy always has the most current and authoritative version of the data item. The master is also responsible for propagating the change to the other copies (for availability and read scalability). Client drivers are aware of master copies and replicas, and they are also aware of the “currency” of a replica; in other words, each NoSQL Database client knows how stale a replica is. This vastly simplifies the job of the application developer. If the application needs the most current version of the data item, the client driver will automatically route the request to the master copy. If the application is willing to tolerate some staleness (e.g. a version that is no more than 1 second out of date), the client can easily determine which replica (or set of replicas) can satisfy the request and route it to the most efficient copy. This results in a dramatic simplification in application logic and also minimizes network requests: the driver sends the request to exactly the right replica, not many. A sketch of this kind of staleness-based routing appears below.

    So, back to my original point. A well-designed and well-architected system minimizes or eliminates unnecessary overhead and avoids pessimistic algorithms wherever possible, in order to deliver a highly efficient, high-performance system. If you’ve ever programmed an Oracle NoSQL Database application, you’ll know the difference!
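    For comparison, here is a hypothetical sketch, again in Java, of the staleness-based routing described above. The Node and MasterSlaveRouter types are assumptions made for illustration (they are not the Oracle NoSQL Database client API): writes always go to the master, and a read goes to the master only when the caller demands the latest value, otherwise to any replica whose known lag fits within the caller's staleness budget.

        import java.time.Duration;
        import java.util.List;

        // A storage node together with how far it is known to lag behind the master.
        final class Node {
            final String name;
            final Duration lagBehindMaster;   // Duration.ZERO for the master itself
            Node(String name, Duration lagBehindMaster) {
                this.name = name;
                this.lagBehindMaster = lagBehindMaster;
            }
        }

        final class MasterSlaveRouter {
            private final Node master;
            private final List<Node> replicas;

            MasterSlaveRouter(Node master, List<Node> replicas) {
                this.master = master;
                this.replicas = replicas;
            }

            // All writes go to the master, the single authoritative copy.
            Node routeWrite() {
                return master;
            }

            // Reads go to any node that satisfies the caller's staleness budget.
            Node routeRead(Duration permissibleStaleness) {
                if (permissibleStaleness.isZero()) {
                    return master;                 // absolute currency: only the master will do
                }
                return replicas.stream()
                               .filter(r -> r.lagBehindMaster.compareTo(permissibleStaleness) <= 0)
                               .findFirst()        // a real driver would also weigh load and latency
                               .orElse(master);    // no replica is fresh enough, so fall back to the master
            }
        }

    For example, routeRead(Duration.ZERO) always returns the master, while routeRead(Duration.ofSeconds(1)) accepts any replica that is at most one second behind and falls back to the master only when none qualifies: one request to one node instead of one request per copy.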

    Read the article
