Search Results

Search found 29820 results on 1193 pages for 'default implementation'.


  • SQL SERVER – guest User and MSDB Database – Enable guest User on MSDB Database

    - by pinaldave
    I have written a few articles recently on the subject of the guest account. Here's a quick list of these articles:

    - SQL SERVER – Disable Guest Account – Serious Security Issue
    - SQL SERVER – Force Removing User from Database – Fix: Error: Could not drop login 'test' as the user is currently logged in.
    - SQL SERVER – Detecting guest User Permissions – guest User Access Status

    One piece of advice I gave in all three blog posts was: disable the guest user in user-created databases. Additionally, I mentioned that the guest account should be left enabled in the MSDB database. I received many questions asking whether there is a specific reason to keep it enabled, for example, "Why does the MSDB database need the guest user?" Honestly, I did not expect the concept of the guest user to create so much interest in the readers. So now let's turn this blog post into a question-and-answer format.

    Q: What will happen if the guest user is disabled in the MSDB database?
    A: Lots of bad things will happen, starting with Error 916 – "Logins can connect to this instance of SQL Server but they do not have specific permissions in a database to receive the permissions of the guest user."

    Q: How can I determine whether the guest user is enabled or disabled for a specific database?
    A: There are many ways to do this. Make sure that you run each of these methods in the context of the database in question. For example, for the msdb database you can run the following code:

        USE msdb;
        SELECT dp.name, sp.permission_name, sp.state_desc
        FROM sys.database_principals dp
        INNER JOIN sys.database_permissions sp
            ON dp.principal_id = sp.grantee_principal_id
        WHERE dp.name = 'guest' AND sp.permission_name = 'CONNECT';

    There are many other methods to detect the guest user status. Read about them here: Detecting guest User Permissions – guest User Access Status.

    Q: What is the default status of the guest user account in each database?
    A: Enabled in master, tempdb, and msdb; disabled in the model database.

    Q: Why is the guest user disabled by default in the model database?
    A: Because model is the template for newly created user databases, and it is not recommended to enable the guest user in a user database: it can introduce a serious security threat and can seriously compromise the database if configured incorrectly. Read more here: Disable Guest Account – Serious Security Issue.

    Q: How do I disable the guest user?
    A: REVOKE CONNECT FROM guest;

    Q: How do I enable the guest user?
    A: GRANT CONNECT TO guest;

    Did I miss any critical question in the list? Please leave your question as a comment and I will add it to this list.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology
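    For readers who prefer to run this check from application code rather than Management Studio, here is a minimal C# sketch of the same detection query using ADO.NET. This is not from the original post; the connection string is a placeholder and must be adjusted for your server and authentication method.

        using System;
        using System.Data.SqlClient;

        class GuestUserCheck
        {
            static void Main()
            {
                // Placeholder connection string - adjust server name and credentials for your environment.
                const string connectionString = "Server=localhost;Database=msdb;Integrated Security=true";

                // Same detection query as above: guest is enabled when a CONNECT grant exists.
                const string query = @"
                    SELECT dp.name, sp.permission_name, sp.state_desc
                    FROM sys.database_principals dp
                    INNER JOIN sys.database_permissions sp
                        ON dp.principal_id = sp.grantee_principal_id
                    WHERE dp.name = 'guest' AND sp.permission_name = 'CONNECT';";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(query, connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        if (!reader.HasRows)
                        {
                            // No CONNECT row at all means the permission was revoked.
                            Console.WriteLine("guest cannot connect to msdb (disabled).");
                        }
                        while (reader.Read())
                        {
                            Console.WriteLine("guest CONNECT is {0}", reader["state_desc"]);
                        }
                    }
                }
            }
        }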

    Read the article

  • Android SDK PATH Error: doesn't find home directory

    - by THeo Oliveira
    I use Ubuntu 12.10 and I tried to follow this guide: Install android sdk and eclipse in Ubuntu 12.04. But I keep getting this error:

    Error: The AVD manager normally uses the user's profile directory to store AVD files. However it failed to find the default profile directory. To fix this, please set the environment variable ANDROID_SDK_HOME to a valid path such as "~".

    Any ideas on how I can fix this?

    Read the article

  • JRockit R28/JRockit Mission Control 4.0 is out!

    - by Marcus Hirt
    The next major release of JRockit is finally out! Here are some highlights:

    - Includes the all-new JRockit Flight Recorder, which supersedes the old JRockit Runtime Analyser. The new flight recorder is inspired by the "black box" in airplanes. It uses a highly efficient recording engine and thread-local buffers to capture data about the runtime and the application running in the JVM. It can be configured to always be on, so that whenever anything "interesting" happens, data can be dumped for some time back. Think of it as your own personal profiling time machine.
    - Automatic shortest-path calculation in Memleak – no longer any need for running around in circles when trying to find your way back to a thread root from an instance.
    - Memleak can now show class loader related information and split graphs on a per class loader basis.
    - More easily configured JMX agent – the default port for both the RMI Registry and the RMI Server can be configured, and is by default the same, allowing easier configuration of firewalls.
    - Up to 64 GB (was 4 GB) compressed references.
    - Per-thread allocation profiling in the Management Console.
    - Native Memory Tracking – it is now possible to track native memory allocations with very high resolution. The information can be accessed either using JRCMD or via the dedicated Native Memory Tracking experimental plug-in for the Management Console (alas, only available for the upcoming 4.0.1 release).
    - JRockit can now produce heap dumps in HPROF format.
    - Cooperative suspension – JRockit no longer uses system signals for stopping threads, which could lead to hangs if signals were lost or blocked (for example on bad NFS shares). Now threads check periodically to see if they are suspended.
    - VPAT/Section 508 compliant JRMC – greatly improved keyboard navigation and screen reader support.

    See New and Noteworthy for more information. JRockit Mission Control 4.0.0 can be downloaded from here: http://www.oracle.com/technology/software/products/jrockit/index.html

    <shameless ad> There is even a book to go with JRMC 4.0.0/JRockit R28! http://www.packtpub.com/oracle-jrockit-the-definitive-guide/book/ </shameless ad>

    Read the article

  • Administering Team Foundation Server 2010 Class resource links

    - by John Alexander
    Here are the resource links for the Administering Team Foundation Server 2010 Class from last week in Minneapolis.

    - Microsoft® Visual Studio® 2010 and Team Foundation Server® 2010 RTM virtual machine for Microsoft® Virtual PC 2007 SP1: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=5e13b15a-fd74-4cd7-b53e-bdf9456855bd
    - Microsoft® Visual Studio® 2010 and Team Foundation Server® 2010 RTM virtual machine for Windows Virtual PC: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=509c3ba1-4efc-42b5-b6d8-0232b2cbb26e
    - Microsoft® Visual Studio® 2010 and Team Foundation Server® 2010 RTM virtual machine for Windows Server 2008 Hyper-V: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=e0198b64-4acb-4709-b07f-359fb4d523bc
    - Customizable process guidance: http://blogs.msdn.com/b/allclark/archive/2010/08/12/customizable-process-guidance.aspx
    - The 5 most read Visual Studio ALM help topics on MSDN: http://blogs.msdn.com/b/allclark/archive/2010/11/12/the-5-most-read-visual-studio-alm-help-topics-on-msdn.aspx
    - Inside TFS: http://visualstudiomagazine.com/Articles/List/Inside-TFS.aspx
    - Testing Topics: http://msdn.microsoft.com/en-us/library/dd286594.aspx
    - Blogs: http://community.accentient.com and http://geekswithblogs.net
    - Branching Guide: http://tfsbranchingguideiii.codeplex.com/
    - Great VSTS blog: http://geekswithblogs.net/hinshelm/Default.aspx
    - My Blog :D http://geekswithblogs.net/jalexander/Default.aspx
    - Visual Studio Forums: http://bit.ly/fE16u3
    - TFS Migration and Integration Solutions: http://bit.ly/cLaBnT
    - TFS Migration and Integration Tools (VS ALM Rangers): http://bit.ly/9tHWdG
    - TFS Migration and Integration Platform (CodePlex): http://tfsintegration.codeplex.com
    - Team Foundation Server SDK: http://code.msdn.microsoft.com/TfsSdk
    - Migrate and Integration Forum: http://bit.ly/f4Lnps
    - Team Foundation Server Widgets: http://www.tfswidgets.com
    - TFS Sdk: http://code.msdn.microsoft.com/TfsSdk
    - TFS Migration and Integration Solutions: http://bit.ly/cLaBnT
    - TFS Integration Tools Forum: http://bit.ly/f4Lnps
    - TFS Integration Tools: http://bit.ly/9tHWdG
    - TFS Integration Platform: http://tfsintegration.codeplex.com
    - VS Upgrade Guide: http://vs2010upgradeguide.codeplex.com
    - Updating an Upgraded Team Project to Access New Features: http://bit.ly/9cCcMP
    - Team Foundation Power Tools: http://bit.ly/dfNVQk
    - Team Foundation Administration Tool: http://tfsadmin.codeplex.com
    - Using Team Foundation Server Command-Line Tools: http://bit.ly/hCyozJ
    - Changing Groups and Permissions with TFSSecurity: http://bit.ly/esIjgw
    - Unofficial Prep guide for TFS 2010 Administration Exam (70-512): http://geekswithblogs.net/enriquelima/archive/2010/07/21/unofficial-prep-guide-for-tfs-2010-administration-exam-70-512.aspx
    - Another Prep Guide: http://bit.ly/bpO30R
    - Professional Application Lifecycle Management with VS 2010 Book: http://bit.ly/9rCIRj
    - Search CodePlex for TFS related apps: http://www.codeplex.com/site/search
    - Visual Studio Gallery: http://visualstudiogallery.com
    - TFS Widgets: http://tfswidgets.com
    - Migrate from Visual SourceSafe: http://bit.ly/8XPSRh
    - Team Foundation Server MSSCCI Provider 2010: http://bit.ly/dst1OQ
    - Attrice TFS Sidekicks: www.attrice.info/cm/tfs
    - Hosted TFS: http://bit.ly/cMZdvp
    - Manually Processing the Team Foundation Server 2010 Data Warehouse and Analysis Services Database: http://bit.ly/aG5oEh
    - TFS 2005, 2008 and 2010 Compatibility: http://shrinkster.com/1dhj

    Read the article

  • Daily tech links for .net and related technologies - Apr 8-10, 2010

    - by SanjeevAgarwal
    Daily tech links for .net and related technologies - Apr 8-10, 2010

    Web Development

    - Using RIA DomainServices with ASP.NET and MVC 2 - geekswithblogs
    - Using AntiXss As The Default Encoder For ASP.NET - Phil Haack
    - New Syntax for HTML Encoding Output in ASP.NET 4 (and ASP.NET MVC 2) - Scott Gu
    - Multi-Step Processing in ASP.NET - Dave M. Bush
    - MvcContrib - Portable Area – Visual Studio project template - erichexter
    - Encoding/Decoding URIs and HTML in the .NET 4 Client Profile - Pete Brown
    - Jon Takes Five...(read more)

    Read the article

  • Add an Opera Style Status Bar to Firefox

    - by Asian Angel
    Anyone who has used Opera will be familiar with the information it presents about the webpage currently loading in the browser (e.g. the number of images loaded). If you would like to have that same functionality in Firefox, then join us as we look at the Extended Statusbar extension.

    Before

    Here is the default setup for Firefox... not a lot of information available to indicate exactly how much of the webpage has already loaded versus what has not. For some people this is enough, but what if you would like more details?

    Extended Statusbar in Action

    You may be curious about the information that the Extended Statusbar extension provides. It includes:

    - Percentage of the webpage loaded
    - The number of images loaded
    - Bytes downloaded
    - Average download speed
    - The load time

    After emptying the cache we once again reloaded the HTG homepage. The default style/mode is "Classic Style", and the webpage load information is displayed within your Status Bar as shown here. The information available after the webpage finished loading in "Classic Style". If you prefer "Slim Mode", this is how your Status Bar will look afterwards... very condensed. For those preferring the "New Style", a temporary addition appears above your regular Status Bar and disappears just a few seconds after the webpage has fully loaded (unless changed in the Settings).

    Settings

    The Settings are set up in two different ways. For those who prefer to use the "Classic Style & Slim Mode", these are the options available to you. If you prefer the "New Style", then you will have a whole different set of options available. Notice that you can exclude certain webpages and set a custom style if desired.

    Conclusion

    If you have been wanting to add Opera-style webpage loading information to your Status Bar, then you should definitely give this extension a try.

    Links

    Download the Extended Statusbar extension (Mozilla Add-ons)

    Read the article

  • LLBLGen Pro feature highlights: grouping model elements

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system)

    When working with an entity model which has more than a few entities, it's often convenient to be able to group entities together if they belong to a semantic sub-model. For example, if your entity model has several entities which are about 'security', it would be practical to group them together under the 'security' moniker. This way, you can easily find them, yet they can be left inside the complete entity model altogether so their relationships with entities outside the group are kept. In other situations your domain consists of semi-separate entity models which all target tables/views located in the same database. It then might be convenient to have a single project to manage the complete target database, yet have the entity models separate from each other and have them result in separate code bases.

    LLBLGen Pro can do both for you. This blog post will illustrate both situations. The feature is called group usage and is controllable through the project settings. This setting is supported on all supported O/R mapper frameworks.

    Situation one: grouping entities in a single model

    This situation is common for entity models which are dense, i.e. many relationships exist between all sub-models: you can't split them up easily into separate models (nor do you likely want to), however it's convenient to have them grouped together into groups inside the entity model at the project level. A typical example for this is the AdventureWorks example database for SQL Server. This database, which is a single catalog, has a schema for each sub-group, however most of these schemas are tightly connected with each other: adding all schemas together will give a model with entities which indirectly are related to all other entities.

    LLBLGen Pro's default setting for group usage is AsVisualGroupingMechanism, which is what this situation is all about: we group the elements for visual purposes; it has no real meaning for the model nor for the generated code. Let's reverse engineer AdventureWorks to an entity model. By default, LLBLGen Pro uses the target schema of the element being reverse engineered as the group it will be placed in. This is convenient if you have already categorized tables/views in schemas, as is the case in AdventureWorks. Of course this can be switched off, or corrected on the fly.

    When reverse engineering, we'll walk through a wizard which will guide us through the selection of the elements whose relational model data should be retrieved, which we can later use to reverse engineer to an entity model. The first step after specifying which database server to connect to is to select these elements. Below we can see the AdventureWorks catalog as well as the different schemas it contains. We'll include all of them.

    After the wizard completes, we have all relational model data nicely in our catalog data, with schemas. So let's reverse engineer entities from the tables in these schemas. In the catalog explorer we select the schemas 'HumanResources', 'Person', 'Production', 'Purchasing' and 'Sales', then right-click one of them and from the context menu select Reverse engineer Tables to Entity Definitions.... This will bring up the dialog below. We check all checkboxes in one go by checking the checkbox at the top to mark them all to be added to the project.
    As you can see, LLBLGen Pro has already filled in the group name based on the schema name, as this is the default and we didn't change the setting. If you want, you can select multiple rows at once and set the group name to something else using the controls on the dialog. We're fine with the group names chosen, so we'll simply click Add to Project. This gives the following result:

    (I collapsed the other groups to keep the picture small ;)). As you can see, the entities are now grouped. Just to see how dense this model is, I've expanded the relationships of Employee. As you can see, it has relationships with entities from three groups other than HumanResources. It's not doable to cut up this project into sub-models without duplicating the Employee entity in all those groups, so this model is better suited to be used as a single model resulting in a single code base; however, it benefits greatly from having its entities grouped into separate groups at the project level, to make work done on the model easier.

    Now let's look at another situation, namely one where we work with a single database while we want to have multiple models and, for each model, a separate code base.

    Situation two: grouping entities in separate models within the same project

    To get rid of the entities and see the second situation in action, simply undo the reverse engineering action in the project. We still have the AdventureWorks relational model data in the catalog. To switch LLBLGen Pro to see each group in the project as a separate project, open the Project Settings, navigate to General and set Group usage to AsSeparateProjects. In the catalog explorer, select Person and Production, right-click them and select Reverse engineer Tables to Entities... again. Again check the checkbox at the top to mark all entities to be added and click Add to Project. We get two groups, as expected, however this time the groups are seen as separate projects. This means that the validation logic inside LLBLGen Pro will see it as an error if there's e.g. a relationship or an inheritance edge linking two groups together, as that would lead to a cyclic reference in the code bases.

    To see this variant of the grouping feature (seeing the groups as separate projects) in action, we'll generate code from the project with the two groups we just created: select Project -> Generate Source-code... from the main menu (or press F7 ;)). In the dialog popping up, select the target .NET framework you want to use and the template preset, fill in a destination folder and click Start Generator (normal). This will start the code generator process. As expected, the code generator has simply generated two code bases, one for Person and one for Production.

    The group name is used inside the namespace of the different elements. This allows you to add both code bases to a single solution and use them together in a different project without problems. Below is a snippet from the code file of a generated entity class.

        //...
        using System.Xml.Serialization;
        using AdventureWorks.Person;
        using AdventureWorks.Person.HelperClasses;
        using AdventureWorks.Person.FactoryClasses;
        using AdventureWorks.Person.RelationClasses;
        using SD.LLBLGen.Pro.ORMSupportClasses;

        namespace AdventureWorks.Person.EntityClasses
        {
            //...
            /// <summary>Entity class which represents the entity 'Address'.<br/><br/></summary>
            [Serializable]
            public partial class AddressEntity : CommonEntityBase
            //...
    The advantage of this is that you can have two code bases and work with them separately, yet have a single target database and maintain everything in a single location. If you decide to move to a single code base, you can do so with a change of one setting. It's also useful if you want to keep the groups as separate models (and code bases) yet want to add relationships to elements from another group using a copy of the entity: you can simply reverse engineer the target table to a new entity in a different group, effectively making a copy of the entity. As there's a single target database, changes made to that database are reflected in both models, which makes maintenance easier than when you'd have a separate project for each group, each with its own relational model data.

    Conclusion

    LLBLGen Pro offers a flexible way to work with entities in sub-models and control how the sub-models end up in the generated code.
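    To illustrate what "separate code bases from one project" means for consuming code, here is a small, hypothetical C# sketch (not part of the article) that references both generated code bases in one solution. The AddressEntity class follows the generated snippet shown above; the Production-side entity name is an assumption made for illustration only.

        using PersonEntities = AdventureWorks.Person.EntityClasses;
        using ProductionEntities = AdventureWorks.Production.EntityClasses;

        class GroupedCodeBasesDemo
        {
            static void Main()
            {
                // Entity from the 'Person' group/code base (shown in the snippet above).
                var address = new PersonEntities.AddressEntity();

                // Entity from the 'Production' group/code base; the class name is an
                // assumption based on the AdventureWorks Production.Product table.
                var product = new ProductionEntities.ProductEntity();

                // Because each group is generated into its own namespace, both code bases
                // can live side by side in a single solution without name clashes.
                System.Console.WriteLine(address.GetType().FullName);
                System.Console.WriteLine(product.GetType().FullName);
            }
        }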

    Read the article

  • Now Available – Windows Azure SDK 1.6

    - by Shaun
    Microsoft has just announced the Windows Azure SDK 1.6 and the Windows Azure Tools for Visual Studio 1.6. People can now download the latest product through the Web Platform Installer (WebPI). After you have downloaded and installed the SDK you will find that SDK 1.6 can stay side by side with SDK 1.5, which means you can still use the 1.5 assemblies; the Visual Studio Tools, however, will be upgraded to 1.6. Unlike the previous SDK, this version includes four components: Windows Azure Authoring Tools, Windows Azure Emulators, Windows Azure Libraries for .NET 1.6 and the Windows Azure Tools for Microsoft Visual Studio 2010.

    There are some significant upgrades in this version:

    - Publishing enhancement: connect to Windows Azure more easily when publishing your application by retrieving a publish settings file. It lets you configure some settings of the deployment without going back to the developer portal.
    - Multi-profiles: the publish settings, cloud configuration files, etc. are stored in one or more MSBuild files. It is much easier to switch settings between various build environments.
    - MSBuild command-line build support.
    - In-place upgrade support.

    Publishing Enhancement

    So let's have a look at the new publishing features. Just create a new Windows Azure project in Visual Studio 2010 with an MVC 3 Web Role, then right-click the Windows Azure project node in Solution Explorer and select Publish; we will find the new publish dialog. In this version the first thing we need to do is to connect to our Windows Azure subscription. Click the "Sign in to download credentials" link and we will be navigated to the login page to provide our Live ID. The Visual Studio Tool will generate a certificate and deploy it to the subscriptions that belong to us as the Management Certificate; the tool will use this certificate to connect to the subscription in the next step. We then download a PUBLISHSETTINGS file, which contains the credentials and subscription information.

    In the next step, go back to Visual Studio (the publish dialog should still be open), click the Import button and select the PUBLISHSETTINGS file we just downloaded. All our subscriptions will then be shown in the dropdown list. Select the subscription to which we want the application to be published and press the Next button; then we can select the hosted service, environment, build configuration and service configuration shown in the dialog.

    In this version we can create a new hosted service directly here rather than going back to the developer portal. Just select the <Create New …> item in the hosted service list. All we need to do is provide the hosted service name and the location. Once we click OK, the hosted service will be established after several seconds. If we go to the developer portal we will find the new hosted service in our subscription. Two things to note:

    a) Currently we cannot select the Affinity Group when creating a new hosted service through the Visual Studio publish dialog.
    b) Although we can specify the hosted service name and DNS prefix separately through the developer portal, we cannot do so from the VS Tool, which means the DNS prefix will be the same as the hosted service name. For example, we specified our hosted service name as "Sdk16Demo", so the public URL would be http://sdk16demo.cloudapp.net/.
    After creating a new hosted service we can select the cloud environment (production or staging), the build configuration (release or debug), and the service configuration (cloud or local). We can also enable Remote Desktop by checking the related checkbox. One thing to note is that in this version, when we configure the Remote Desktop settings we don't need to specify a certificate by default, because Visual Studio will generate a new certificate for us. We can still specify an existing certificate for RDC by clicking the "More Options" button. The Visual Studio Tool will create a separate certificate for the Remote Desktop connection; it will NOT use the certificate that manages the subscription. We can also select the "Advanced Settings" page to specify the deployment label, storage account, IntelliTrace and .NET profiling information, etc.

    Press the Next button; the dialog will display all the settings we specified and save them as a new profile. The last step is to click the Publish button. Since we enabled the Remote Desktop feature, the first step of publishing is uploading the certificate. The tool then verifies the storage account we specified, uploads the package, and finally creates the website in Windows Azure.

    Multi-Profiles

    After publishing, if we go back to Visual Studio we can find an AZUREPUBXML file under the Profiles folder in the Azure project. It includes all the settings we specified before. If we publish this project again, we can just use the current settings (hosted service, environment, RDC, etc.) from this profile without entering them again. This is very useful when we have more than one deployment configuration. For example, we could have one AZUREPUBXML profile for deploying to a testing environment (debug build, fewer roles, with RDC and IntelliTrace) and one for production (release build, more roles, but without IntelliTrace).

    In-Place Upgrade Support

    Let's change some code in the MVC pages and click the Publish menu from the Azure project node. There is no need to specify any settings; we can reuse the previous settings by loading the Azure profile file (AZUREPUBXML). After clicking the Publish button the VS Tool shows a dialog indicating that there is already a deployment in the hosted service environment, and asks whether to REPLACE it or not. Notice that in this version the dialog says "replace" rather than "delete", which means that by default the VS Tool will use an In-Place Upgrade when we deploy to a hosted service that already has a deployment. After clicking Yes, the VS Tool will upload the package and perform the In-Place Upgrade. If we go back to the developer portal we can see that the status of the hosted service has turned to "Updating…". In the previous SDK, it would delete the whole deployment and publish a new one.

    Summary

    When Microsoft announced the feature that allows changing the VM size via In-Place Upgrade, they also mentioned that in the next few versions the user experience of publishing Azure applications would be improved. The goal is to accomplish the whole publish experience in Visual Studio, with no need to touch the developer portal any more. With the new publish dialog in SDK 1.6, a developer can carry out the whole process, including creating the hosted service and specifying the environment, configuration, remote desktop, etc., without going back to the developer portal.
    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Control Your Favorite Music Player from Firefox

    - by Asian Angel
    Do you love listening to music while you browse? Now you can access and control your favorite music player directly from Firefox with the FoxyTunes extension.

    FoxyTunes in Action

    Once you have installed the extension and restarted Firefox, you will see the FoxyTunes Toolbar located in the Status Bar. The default media app is Windows Media Player, but this can be easily changed. Here are the buttons/items available with the default settings: Search, FoxyTunes Main Menu, Show Player, Select Player, Previous Track, Play, Next Track, Mute On/Off, Volume, Play File, Twitty Tunes, FoxyTunes Search/Explore, Open FoxyTunes Planet, and Toggle Visibility/Drag and drop to move.

    Note: You can hide or show individual buttons/items using the FoxyTunes menus.

    Curious about the media players that FoxyTunes works with? Here is a complete listing... it definitely looks terrific! Notice that the currently selected media app is bold and blue. For our example we chose Spotify, which we have previously covered. Keep in mind that you may or may not need to have your favorite media app open prior to starting FoxyTunes up (i.e. pressing the Play button).

    Here is a good look at the FoxyTunes Main Menu and Controls sub-menu. The Extras menu... if you click on Skins you will be taken to the FoxyTunes Skins webpage. Here is a closer look into the Configurations menu and one of its sub-menus. You do not need to look for options in the Add-ons Manager window... everything you need is contained in these menus.

    If you do not like having FoxyTunes in the Status Bar, you can easily drag and drop it to another toolbar. You can also condense the appearance of FoxyTunes using the small triangle buttons located in different spots throughout the FoxyTunes Toolbar. With just a click or two you can greatly reduce its impact on your UI.

    Conclusion

    If you love listening to music while browsing, then the FoxyTunes extension will let you take care of everything right from your browser.

    Links

    Download the FoxyTunes extension (Mozilla Add-ons)
    Download the FoxyTunes extension (Extension Homepage)

    Note: FoxyTunes add-ins for Internet Explorer and Yahoo! Messenger are available here.

    Read the article

  • Currency Conversion in Oracle BI applications

    - by Saurabh Verma
    Authored by Vijay Aggarwal and Hichem Sellami

    A typical data warehouse contains Star and/or Snowflake schemas, made up of Dimensions and Facts. The facts store various numerical information, including amounts, for example Order Amount, Invoice Amount etc. With the truly global nature of business nowadays, end users want to view reports in their own currency or in a global/common currency as defined by their business. This presents a unique opportunity in BI to provide the amounts in converted rates, either by pre-storing them or by doing on-the-fly conversions while displaying the reports to the users.

    Source Systems

    OBIA caters to various source systems like EBS, PSFT, Sebl, JDE, Fusion etc. Each source has its own unique and intricate ways of defining and storing currency data, doing currency conversions and presenting them to the OLTP users. For example, EBS stores conversion rates between currencies which can be classified by conversion rate types, like Corporate rate, Spot rate, Period rate etc. Siebel stores exchange rates by conversion rates like Daily. EBS/Fusion stores the conversion rates for each day, whereas PSFT/Siebel store them for a range of days. PSFT has a Rate Multiplication Factor and a Rate Division Factor and we need to calculate the Rate based on them, whereas other source systems store the Currency Exchange Rate directly.

    OBIA Design

    The data consolidation from various disparate source systems poses the challenge of conforming various currencies, rate types, exchange rates etc., and of designing the best way to present the amounts to the users without affecting performance. When consolidating the data for reporting in OBIA, we have designed the mechanisms in the Common Dimension to allow users to report based on their required currencies.

    OBIA Facts store amounts in various currencies:

    - Document Currency: This is the currency of the actual transaction. For a multinational company, this can be in various currencies.
    - Local Currency: This is the base currency in which the accounting entries are recorded by the business. This is generally defined in the Ledger of the company.
    - Global Currencies: OBIA provides five Global Currencies. Three are used across all modules; the last two are for CRM only. A Global currency is very useful when creating reports where the data is viewed enterprise-wide. For example, a US based multinational would want to see the reports in USD, so the company will choose USD as one of the global currencies. OBIA allows users to define up to five global currencies during the initial implementation.

    The term Currency Preference is used to designate the set of values Document Currency, Local Currency, Global Currency 1, Global Currency 2 and Global Currency 3, which are shared among all modules. There are four more currency preferences, specific to certain modules: Global Currency 4 (aka CRM Currency) and Global Currency 5, which are used in CRM; and Project Currency and Contract Currency, used in Project Analytics. When choosing Local Currency for the Currency preference, the data will show in the currency of the Ledger (or Business Unit) in the prompt, so it is important to select one Ledger or Business Unit when viewing data in Local Currency. More on this can be found in the section: Toggling Currency Preferences in the Dashboard.

    Design Logic

    When extracting the fact data, the OOTB mappings extract and load the document amount and the local amount into the target tables. They also load the exchange rates required to convert the document amount into the corresponding global amounts.
If the source system only provides the document amount in the transaction, the extract mapping does a lookup to get the Local currency code, and the Local exchange rate. The Load mapping then uses the local currency code and rate to derive the local amount. The load mapping also fetches the Global Currencies and looks up the corresponding exchange rates. The lookup of exchange rates is done via the Exchange Rate Dimension provided as a Common/Conforming Dimension in OBIA. The Exchange Rate Dimension stores the exchange rates between various currencies for a date range and Rate Type. Two physical tables W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G are used to provide the lookups and conversions between currencies. The data is loaded from the source system’s Ledger tables. W_EXCH_RATE_G stores the exchange rates between currencies with a date range. On the other hand, W_GLOBAL_EXCH_RATE_G stores the currency conversions between the document currency and the pre-defined five Global Currencies for each day. Based on the requirements, the fact mappings can decide and use one or both tables to do the conversion. Currency design in OBIA also taps into the MLS and Domain architecture, thus allowing the users to map the currencies to a universal Domain during the implementation time. This is especially important for companies deploying and using OBIA with multiple source adapters. Some Gotchas to Look for It is necessary to think through the currencies during the initial implementation. 1) Identify various types of currencies that are used by your business. Understand what will be your Local (or Base) and Documentation currency. Identify various global currencies that your users will want to look at the reports. This will be based on the global nature of your business. Changes to these currencies later in the project, while permitted, but may cause Full data loads and hence lost time. 2) If the user has a multi source system make sure that the Global Currencies and Global Rate Types chosen in Configuration Manager do have the corresponding source specific counterparts. In other words, make sure for every DW specific value chosen for Currency Code or Rate Type, there is a source Domain mapping already done. Technical Section This section will briefly mention the technical scenarios employed in the OBIA adaptors to extract data from each source system. In OBIA, we have two main tables which store the Currency Rate information as explained in previous sections. W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G are the two tables. W_EXCH_RATE_G stores all the Currency Conversions present in the source system. It captures data for a Date Range. W_GLOBAL_EXCH_RATE_G has Global Currency Conversions stored at a Daily level. However the challenge here is to store all the 5 Global Currency Exchange Rates in a single record for each From Currency. Let’s voyage further into the Source System Extraction logic for each of these tables and understand the flow briefly. EBS: In EBS, we have Currency Data stored in GL_DAILY_RATES table. As the name indicates GL_DAILY_RATES EBS table has data at a daily level. However in our warehouse we store the data with a Date Range and insert a new range record only when the Exchange Rate changes for a particular From Currency, To Currency and Rate Type. Below are the main logical steps that we employ in this process. (Incremental Flow only) – Cleanup the data in W_EXCH_RATE_G. Delete the records which have Start Date > minimum conversion date Update the End Date of the existing records. 
Compress the daily data from GL_DAILY_RATES table into Range Records. Incremental map uses $$XRATE_UPD_NUM_DAY as an extra parameter. Generate Previous Rate, Previous Date and Next Date for each of the Daily record from the OLTP. Filter out the records which have Conversion Rate same as Previous Rates or if the Conversion Date lies within a single day range. Mark the records as ‘Keep’ and ‘Filter’ and also get the final End Date for the single Range record (Unique Combination of From Date, To Date, Rate and Conversion Date). Filter the records marked as ‘Filter’ in the INFA map. The above steps will load W_EXCH_RATE_GS. Step 0 updates/deletes W_EXCH_RATE_G directly. SIL map will then insert/update the GS data into W_EXCH_RATE_G. These steps convert the daily records in GL_DAILY_RATES to Range records in W_EXCH_RATE_G. We do not need such special logic for loading W_GLOBAL_EXCH_RATE_G. This is a table where we store data at a Daily Granular Level. However we need to pivot the data because the data present in multiple rows in source tables needs to be stored in different columns of the same row in DW. We use GROUP BY and CASE logic to achieve this. Fusion: Fusion has extraction logic very similar to EBS. The only difference is that the Cleanup logic that was mentioned in step 0 above does not use $$XRATE_UPD_NUM_DAY parameter. In Fusion we bring all the Exchange Rates in Incremental as well and do the cleanup. The SIL then takes care of Insert/Updates accordingly. PeopleSoft:PeopleSoft does not have From Date and To Date explicitly in the Source tables. Let’s look at an example. Please note that this is achieved from PS1 onwards only. 1 Jan 2010 – USD to INR – 45 31 Jan 2010 – USD to INR – 46 PSFT stores records in above fashion. This means that Exchange Rate of 45 for USD to INR is applicable for 1 Jan 2010 to 30 Jan 2010. We need to store data in this fashion in DW. Also PSFT has Exchange Rate stored as RATE_MULT and RATE_DIV. We need to do a RATE_MULT/RATE_DIV to get the correct Exchange Rate. We generate From Date and To Date while extracting data from source and this has certain assumptions: If a record gets updated/inserted in the source, it will be extracted in incremental. Also if this updated/inserted record is between other dates, then we also extract the preceding and succeeding records (based on dates) of this record. This is required because we need to generate a range record and we have 3 records whose ranges have changed. Taking the same example as above, if there is a new record which gets inserted on 15 Jan 2010; the new ranges are 1 Jan to 14 Jan, 15 Jan to 30 Jan and 31 Jan to Next available date. Even though 1 Jan record and 31 Jan have not changed, we will still extract them because the range is affected. Similar logic is used for Global Exchange Rate Extraction. We create the Range records and get it into a Temporary table. Then we join to Day Dimension, create individual records and pivot the data to get the 5 Global Exchange Rates for each From Currency, Date and Rate Type. Siebel: Siebel Facts are dependent on Global Exchange Rates heavily and almost none of them really use individual Exchange Rates. In other words, W_GLOBAL_EXCH_RATE_G is the main table used in Siebel from PS1 release onwards. As of January 2002, the Euro Triangulation method for converting between currencies belonging to EMU members is not needed for present and future currency exchanges. 
However, the method is still available in Siebel applications, as are the old currencies, so that historical data can be maintained accurately. The following description applies only to historical data needing conversion prior to the 2002 switch to the Euro for the EMU member countries. If a country is a member of the European Monetary Union (EMU), you should convert its currency to other currencies through the Euro. This is called triangulation, and it is used whenever either currency being converted has EMU Triangulation checked. Due to this, there are multiple extraction flows in SEBL ie. EUR to EMU, EUR to NonEMU, EUR to DMC and so on. We load W_EXCH_RATE_G through multiple flows with these data. This has been kept same as previous versions of OBIA. W_GLOBAL_EXCH_RATE_G being a new table does not have such needs. However SEBL does not have From Date and To Date columns in the Source tables similar to PSFT. We use similar extraction logic as explained in PSFT section for SEBL as well. What if all 5 Global Currencies configured are same? As mentioned in previous sections, from PS1 onwards we store Global Exchange Rates in W_GLOBAL_EXCH_RATE_G table. The extraction logic for this table involves Pivoting data from multiple rows into a single row with 5 Global Exchange Rates in 5 columns. As mentioned in previous sections, we use CASE and GROUP BY functions to achieve this. This approach poses a unique problem when all the 5 Global Currencies Chosen are same. For example – If the user configures all 5 Global Currencies as ‘USD’ then the extract logic will not be able to generate a record for From Currency=USD. This is because, not all Source Systems will have a USD->USD conversion record. We have _Generated mappings to take care of this case. We generate a record with Conversion Rate=1 for such cases. Reusable Lookups Before PS1, we had a Mapplet for Currency Conversions. In PS1, we only have reusable Lookups- LKP_W_EXCH_RATE_G and LKP_W_GLOBAL_EXCH_RATE_G. These lookups have another layer of logic so that all the lookup conditions are met when they are used in various Fact Mappings. Any user who would want to do a LKP on W_EXCH_RATE_G or W_GLOBAL_EXCH_RATE_G should and must use these Lookups. A direct join or Lookup on the tables might lead to wrong data being returned. Changing Currency preferences in the Dashboard: In the 796x series, all amount metrics in OBIA were showing the Global1 amount. The customer needed to change the metric definitions to show them in another Currency preference. Project Analytics started supporting currency preferences since 7.9.6 release though, and it published a Tech note for other module customers to add toggling between currency preferences to the solution. List of Currency Preferences Starting from 11.1.1.x release, the BI Platform added a new feature to support multiple currencies. The new session variable (PREFERRED_CURRENCY) is populated through a newly introduced currency prompt. This prompt can take its values from the xml file: userpref_currencies_OBIA.xml, which is hosted in the BI Server installation folder, under :< home>\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\userpref_currencies.xml This file contains the list of currency preferences, like“Local Currency”, “Global Currency 1”,…which customers can also rename to give them more meaningful business names. There are two options for showing the list of currency preferences to the user in the dashboard: Static and Dynamic. 
    In Static mode, all users will see the full list as in the user preference currencies file. In Dynamic mode, the list shown in the currency prompt drop-down is the result of a dynamic query specified in the same file. Customers can build some security into the RPD, so the list of currency preferences will be based on the user roles. BI Applications built a subject area, "Dynamic Currency Preference", to run this query and give every user only the list of currency preferences required by his application roles.

    Adding Currency to an Amount Field

    When the user selects one of the items from the currency prompt, all the amounts on that page will show in the currency corresponding to that preference. For example, if the user selects "Global Currency 1" from the prompt, all data will be shown in Global Currency 1 as specified in the Configuration Manager. If the user selects "Local Currency", all amount fields will show in the currency of the Business Unit selected in the BU filter of the same page. If there is no particular Business Unit selected in that filter, and the data selected by the query contains amounts in more than one currency (for example one BU has USD as its functional currency and another has EUR), then subtotals will not be available (you cannot add USD and EUR amounts in one field) and, depending on the setup (see next paragraph), the user may receive an error.

    There are two ways to add the Currency field to an amount metric:

    - In the form of a currency code, like USD, EUR… For this, the user needs to add the field "Apps Common Currency Code" to the report. This field is in every subject area, usually under the table "Currency Tag" or "Currency Code".
    - In the form of a currency symbol ($ for USD, € for EUR, …). For this, the user needs to format the amount metrics in the report as a currency column, by specifying the currency tag column in the Column Properties option in the Column Actions drop-down list. Typically this column should be the "BI Common Currency Code" available in every subject area.

    To do so: select the Column Properties option in the Edit list of a metric. In the Data Format tab, select Custom as Treat Number As. Enter the following syntax under Custom Number Format:

        [$:currencyTagColumn=Subjectarea.table.column]

    where the column is the "BI Common Currency Code" defined to take the currency code value based on the currency preference chosen by the user in the Currency preference prompt.

    Read the article

  • Setup and Use SpecFlow BDD with DevExpress XAF

    - by Patrick Liekhus
    Let's get started with using the SpecFlow BDD syntax for writing tests with the DevExpress XAF EasyTest scripting syntax. In order for this to work you will need to download and install the prerequisites listed below. Once they are installed, follow the steps outlined below and enjoy.

    Prerequisites

    Install the following items:

    - DevExpress eXpress Application Framework (XAF), found here
    - SpecFlow, found here
    - Liekhus BDD/XAF Testing library, found here

    Assumptions

    I am going to assume at this point that you have created your XAF application and have your Module, Win.Module and Win projects ready for use. You should also have set any attributes and/or settings as you see fit.

    Setup

    So where to start?

    1. Create a new testing project within your solution. I typically name it with a naming convention similar to the one used by XAF: my project name plus .FunctionalTests (i.e. AlbumManager.FunctionalTests).
    2. Add the following references to your project. It should look like the reference list below.
       - DevExpress.Data.v11.x
       - DevExpress.Persistent.Base.v11.x
       - DevExpress.Persistent.BaseImpl.v11.x
       - DevExpress.Xpo.v11.2
       - Liekhus.Testing.BDD.Core
       - Liekhus.Testing.BDD.DevExpress
       - TechTalk.SpecFlow
       - TestExecutor.v11.x (found in %Program Files%\DevExpress 2011.x\eXpressApp Framework\Tools\EasyTest)
    3. Right-click the TestExecutor reference and set the "Copy Local" setting to True. This forces the TestExecutor executable to be available in the bin directory, which is where the EasyTest script will be executed further down in the process.
    4. Add an Application Configuration File (app.config) to your test project. You will need to make a few modifications to have SpecFlow generate Microsoft-style unit tests. First add the section handler for SpecFlow and then set your choice of testing framework. I prefer MS Test for my projects.
    5. Add the EasyTest configuration file to your project: add a new XML file and call it Config.xml.
    6. Open the properties window for the Config.xml file and set "Copy to Output Directory" to "Copy Always".
    7. Set up the Config file according to the specifications of the EasyTest library by mapping it to your executable and other settings. You can find the details for the configuration of EasyTest here. My file looks like this.
    8. Create a new folder in your test project called "StepDefinitions". Add a new SpecFlow Step Definition file item under the StepDefinitions folder. I typically call this class StepDefinition.cs.
    9. Have your step definition inherit from the Liekhus.Testing.BDD.DevExpress.StepDefinition class. This will give you the default behaviors for your test in the next section.

    OK. Now that we have done this series of steps, we will work on simplifying it. This is an early preview of this new project and it is not fully ready for consumption. If you would like to experiment with it, please feel free. Our goal is to make this an installable project on its own, with its own project templates and default settings. This will be coming in later versions. Currently this project is in Alpha release.

    Let's write our first test

    1. Remove the basic test that is created for you. We will not use the default test but rather create our own SpecFlow "Feature" files.
    2. Add a new item to your project and select the SpecFlow Feature file under C#.
    3. Name your feature file as you do your class files, after the test it is performing.

    Writing a feature file uses the Cucumber syntax of Given… When… Then. Think of it in these terms: Givens are the pre-conditions for the test, Whens are the actual steps of the test being performed, and Thens are the verification steps that confirm whether your test passed or failed. All of these steps are generated into an EasyTest format and executed against your XAF project. You can find more on the Cucumber syntax by using the Secret Ninja Cucumber Scrolls. That document has several good styles of tests, plus you can get your fill of Chuck Norris vs Ninjas. A pretty humorous document, but full of great content.

    My first test is going to test the entry of a new Album into the application and is outlined below. The Feature section at the top is more for your documentation purposes. Try to be descriptive of the test so that it makes sense to the next person behind you. The Scenario Outline is described in the Ninja Scrolls, but think of it as a test template: you can write one test outline and have multiple data sets (Scenarios) executed against that test. Here are the steps of my test and their descriptions:

    - Given I am starting a new test – tells our test to create a new EasyTest file
    - And (Given) the application is open – tells EasyTest to open our application defined in the Config.xml
    - When I am at the "Albums" screen – tells XAF to navigate to the Albums list view
    - And (When) I click the "New:Album" button – tells XAF to click the New Album button on the ribbon
    - And (When) I enter the following information – tells XAF to find each field on the screen and put the value in that field
    - And (When) I click the "Save and Close" button – tells XAF to click the "Save and Close" button on the detail window
    - Then I verify results as "user" – tells the testing framework to execute the EasyTest as your configured user

    Once you compile and prepare your tests you should see the following in your Test View. For each of the CreateNewAlbum lines in your scenarios, you will see a new test ready to execute. From here you will use your testing framework of choice to execute the test. This in turn will execute the EasyTest framework to call back into your XAF application and test your business application.

    Again, please remember that this is an early preview and we are still working out the details. Please let us know if you have any comments/questions/concerns. Thanks and happy testing.
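    To make the binding between the Gherkin steps and the EasyTest commands more concrete, here is a small, hypothetical C# sketch of a SpecFlow step definition class. It is not taken from the Liekhus library; the base class mentioned in step 9 above already supplies the default bindings, so treat this only as an illustration of how a custom step could be declared with the standard TechTalk.SpecFlow attributes. The class name and the comments about EasyTest emission are assumptions.

        using TechTalk.SpecFlow;

        namespace AlbumManager.FunctionalTests.StepDefinitions
        {
            [Binding]
            public class AlbumSteps // hypothetical custom steps; the Liekhus StepDefinition base class covers the defaults
            {
                [Given(@"I am starting a new test")]
                public void GivenIAmStartingANewTest()
                {
                    // In the real library this would begin a new EasyTest script file.
                }

                [When(@"I am at the ""(.*)"" screen")]
                public void WhenIAmAtTheScreen(string screenName)
                {
                    // Would translate into an EasyTest navigation command for the given list view.
                }

                [When(@"I enter the following information")]
                public void WhenIEnterTheFollowingInformation(Table fields)
                {
                    // Each table row would become an EasyTest action filling one field/value pair.
                }

                [Then(@"I verify results as ""(.*)""")]
                public void ThenIVerifyResultsAs(string userName)
                {
                    // Would hand the generated script to TestExecutor and assert on the outcome.
                }
            }
        }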

    Read the article

  • Oracle Enterprise Manager 12c Integration With Oracle Enterprise Manager Ops Center 11g

    - by Scott Elvington
    In a blog entry earlier this year, we announced the availability of the Ops Center 11g plug-in for Enterprise Manager 12c. In this article I will walk you through the process of deploying the plug-in on your existing Enterprise Manager agents and show you some of the capabilities the plug-in provides. We'll also look at the integration from the Ops Center perspective: I will show you how to set up the connection to Enterprise Manager and give an overview of the information that is available.

    Installing and Configuring the Ops Center Plug-in

    The plug-in is available for download from the Self Update page (Setup > Extensibility > Self Update). The plug-in name is “Ops Center Infrastructure stack”. Once you have downloaded the plug-in you can navigate to the Plug-in management page (Setup > Extensibility > Plug-ins) to begin deployment. The plug-in must first be deployed on the Management Server; you will need to provide the repository password of the SYS user in order to do so.

    There are a few prerequisites that need to be completed on the Ops Center side before the plug-in can be deployed and configured on the desired Enterprise Manager agents. Any servers, whether physical or virtual, for which you wish to see metrics and alerts need to be managed by Ops Center. This means that, as a minimum, the operating system needs to have an Ops Center management agent installed. The plug-in can provide even more value when Ops Center is also managing the other “layers of the stack”, for example the service processor, the blade chassis or the XSCF of an M-Series server. The more information Ops Center has about the stack, the more information will be visible within Enterprise Manager via the plug-in.

    In order to access the information within Ops Center, the plug-in requires a user to connect as. This user does not require any particular Ops Center permissions or roles; it simply needs to exist. You can create a specific “EMPlugin” user within Ops Center or use an existing user, although Oracle recommends creating a specific, non-privileged user account for this purpose. From the Ops Center Administration section, select Enterprise Controller, click the Users tab and finally click the Add User icon to create the desired user account. For the purpose of this article I have discovered and managed the OS and service processor of the server where my Enterprise Manager 12c installation is hosted.

    With the plug-in deployed to the Management Server and the setup done within Ops Center, we're now ready to deploy the plug-in to the agents and configure the targets to communicate with the Ops Center Enterprise Controller. From the Setup menu select Add Targets, then Add Targets Manually. Select the bottom radio button “Add Targets Manually by Specifying Target Monitoring Properties”, select Infrastructure Stack from the Target Type dropdown and finally select the Monitoring Agent where you wish to deploy the plug-in. Click the Add Manually... button and fill in the details for the new target, using the appropriate hostname for your Enterprise Controller and the user and password details for the plug-in access user. After the target has been added to the agent you will need to allow a few minutes for the initial data collection to complete; once it has completed you can see the new target in the All Targets list.

    All metric collections are enabled by default except one. To enable Infrastructure Stack Alarms collection, navigate to the newly added target and then to Target > Monitoring > Metric and Collection Settings. There you can click the “Disabled” link under Collection Schedule to enable collection and set your desired collection frequency. By default, a Warning level alert in Ops Center will equate to a Warning level event in Enterprise Manager and a Critical alert will equate to a Critical event; this mapping can also be altered in the Metric and Collection Settings. The default incident rules in Enterprise Manager only create incidents from Critical events, so keep this in mind in case you want to see incidents generated for Warning or Info level alerts from Ops Center. Also, because Enterprise Manager already monitors the OS through its Host target type, the plug-in does not pull OS alerts from Ops Center, so as to prevent duplication.

    In addition to alert propagation, the plug-in also provides data for several reports detailing the topology and configuration of the stack as well as any hardware sensor data that is available. These are available from the Information Publisher Reports; navigate there from the Enterprise > Reports menu or directly from the Infrastructure Stack target of interest. As an example, here is a sample of the Hardware Sensors report showing some of the available sensor data. The report can also be exported to a CSV file format if desired.

    Connecting Ops Center to Enterprise Manager Repository

    For an Enterprise Manager user, the plug-in provides deeper visibility into the state of the infrastructure underlying the databases and middleware. On the Ops Center side, there is also greater visibility into the targets running on that infrastructure. To set up the Ops Center data collection, navigate to the Administration section and select the Grid Control link. Select the Configure/Connect action from the right-hand menu and complete the wizard forms to enable the connection to the Enterprise Manager repository and UI. Be sure to use the sysman account when configuring the database connection. Once the job completes and the initial data synchronization is done you will see new Target tabs on your OS assets. The new tab lists all the Enterprise Manager targets and any alerts, availability and performance data specific to the selected target. It is also possible to use the GoTo icon to launch the Enterprise Manager BUI in the context of the specific target or alert to drill into more detail.

    Hopefully this brief overview of the integration between Enterprise Manager and Ops Center has provided a jumpstart to getting a more complete view of the full stack of your enterprise systems.
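    For scripted or repeatable setups, the same plug-in deployment and target creation can normally also be driven from the Enterprise Manager command line with emcli. The sketch below is only a hedged illustration: the emcli verbs are standard, but the plug-in identifier, target type name and property keys are placeholders and assumptions, so confirm the exact arguments against the plug-in documentation for your release before relying on them.

        # Hedged sketch only: plug-in id, target type and property names are assumptions.
        emcli login -username=sysman
        emcli deploy_plugin_on_server -plugin="<ops_center_stack_plugin_id>"
        emcli deploy_plugin_on_agent -agent_names="emhost.example.com:3872" -plugin="<ops_center_stack_plugin_id>"
        emcli add_target -name="OpsCenter Infrastructure Stack" \
              -type="<infrastructure_stack_target_type>" \
              -host="emhost.example.com" \
              -properties="<EnterpriseControllerHost>:opscenter.example.com;<PluginUser>:EMPlugin"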

    Read the article

  • Acer Aspire blank screen

    - by Gerep
    I'm using Ubuntu 11.04 Natty. When I installed it I had a problem with a blank screen on startup and found a solution, but now I have the same problem when the machine sits idle: once the screen goes blank it won't come back, so I have to restart. Any ideas? Thanks in advance for any help. This is what solved my first problem: edit /etc/rc.local and add setpci -s 00:02.0 F4.B=00 before the exit 0 line, then edit /etc/default/grub and change the GRUB_CMDLINE_LINUX_DEFAULT line to GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi=Linux", and finally update grub with sudo update-grub2 (the edits are consolidated in the sketch below).
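    Pulling the steps from that earlier fix together, the edits look roughly like this; the paths are the standard Ubuntu ones, and the PCI address 00:02.0 comes from the original answer, so it may differ on other hardware:

        # /etc/rc.local -- add this line before the final "exit 0"
        setpci -s 00:02.0 F4.B=00

        # /etc/default/grub -- edit the existing line (no spaces around "=")
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi=Linux"

        # then regenerate the grub configuration
        sudo update-grub2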

    Read the article

  • The Inkremental Architect's Napkin - #4 - Make increments tangible

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/12/the-inkremental-architectacutes-napkin---4---make-increments-tangible.aspx

    The driver of software development is increments - small increments, tiny increments - with an increment being a slice of the overall requirement scope thin enough to implement and get feedback on from a product owner within 2 days max. Such an increment might concern Functionality or Quality.[1]

    To make such high-frequency delivery of increments possible, the transition from talking to coding needs to be as easy as possible. A user story or some other documentation of what's supposed to get implemented by tomorrow evening at the latest is one side of the coin. The other is where to put the logic in all of the code base.

    To implement an increment, only logic statements are needed. Functionality and Quality alike are just about expressions and control flow statements. Think of Assembler code without the CALL/RET instructions. That's all that is needed. Forget about functions, forget about classes. To make a user happy none of that is really needed. It's just about the right expressions and conditional execution paths plus some memory allocation. Automatic function inlining by compilers makes it clear how unimportant functions are for delivering value to users at runtime.

    But why then are there functions? Because they were invented for optimization purposes. We need them for better Evolvability and Production Efficiency. Nothing more, nothing less. No software has become faster, more secure, more scalable, more functional because we gathered logic under the roof of a function or two or a thousand. Functions make logic easier to understand. Functions make us faster in producing logic. Functions make it easier to keep logic consistent. Functions help to conserve memory.

    That said, functions are important. They are even the pivotal element of software development. We can't code without them - whether you write a function yourself or not. Because there's always at least one function in play: the Entry Point of a program. In Ruby the simplest program looks like this:

        puts "Hello, world!"

    In C# more is necessary:

        class Program {
            public static void Main () {
                System.Console.Write("Hello, world!");
            }
        }

    C# makes the Entry Point function explicit, Ruby does not. But still it's there. So you can think of logic as always running in some function.

    Which brings me back to increments: in order to make the transition from talking to code as easy as possible, it has to be crystal clear into which function you should put the logic. Product owners might be content once there is a sticky note - a user story - on the Scrum or Kanban board. But developers need an idea of what that sticky note means in terms of functions. Because with a function in hand, with a signature to run tests against, they have something to focus on. All's well once there is a function behind whose signature logic can be piled up. Then testing frameworks can be used to check if the logic is correct. Then practices like TDD can help to drive the implementation. That's why most code katas define exactly what the API of a solution should look like. It's a function, maybe two or three, not more. A requirement like "Write a function f which takes this as parameters and produces such and such output by doing x" makes a developer comfortable. Yes, there are all kinds of details to think about, like which algorithm or technology to use, or what kind of state and side effects to consider. Even a single function must deliver not only on Functionality, but also on Quality and Evolvability. Nevertheless, once it's clear which function to put logic in, you have a tangible starting point.

    So, yes, what I'm suggesting is to find a single function to put all the logic in that's necessary to deliver on the requirements of an increment. Or to put it the other way around: slice requirements in a way that each increment's logic can be located under the roof of a single function.

    Entry points

    Of course, the logic of a piece of software will always be spread across many, many functions. But there's always an Entry Point. That's the most important function for each increment, because that's the root to put integration or even acceptance tests on. A batch program like the above hello-world application only has a single Entry Point; all logic is reached from there, regardless of how deep it's nested in classes. But a program with a user interface - like the one pictured in the original post - has at least two Entry Points: one is the main function called upon startup, the other is the button click event handler for "Show my score". But maybe there are even more, like another Entry Point being a handler for the event fired when one of the choices gets selected, because then some logic could check if the button should be enabled once all questions have been answered. Or another Entry Point for the logic to be executed when the program is closed, because then the choices made should be persisted.

    You see, an Entry Point to me is a function which gets triggered by the user of the software. With batch programs that's the main function. With GUI programs on the desktop that's event handlers. With web programs that's handlers for URL routes. And my basic suggestion to help you with slicing requirements for Spinning is: slice them in a way so that each increment is related to only one Entry Point function.[2]

    Entry Points are the "outer functions" of a program. That's where the environment triggers behavior. That's where hardware meets software. Entry Points always get called because something happened to hardware state, e.g. a key was pressed, a mouse button clicked, the system timer ticked, data arrived over a wire.[3] Viewed from the outside, software is just a collection of Entry Point functions made accessible via buttons to press, menu items to click, gestures, URLs to open, keys to enter.

    Collections of batch processors

    I'd thus say we haven't moved forward since the early days of software development. We're still writing batch programs. Forget about "event-driven programming" with its fancy GUI applications: software is just a collection of batch processors. Earlier it was just one per program; today it's hundreds we bundle up into applications. Each batch processor is represented by an Entry Point as its root and works on a number of resources from which it reads data to process and to which it writes results. These resources can be the keyboard or main memory or a hard disk or a communication line or a display. Together many batch processors - large and small - form applications the user perceives as a single whole. Software development that way becomes quite simple: just implement one batch processor after another. Well, at least in principle ;-)

    Features

    Each batch processor entered through an Entry Point delivers value to the user. It's an increment. Sometimes its logic is trivial, sometimes it's very complex. Regardless, each Entry Point represents an increment. An Entry Point implemented thus is a step forward in terms of Agility. At the same time it's a tangible unit for developers. Therefore, identifying the more or less numerous batch processors in a software system is a rewarding task for product owners and developers alike. That's where user stories meet code.

    In this example the user story translates to the Entry Point triggered by clicking the login button on a login dialog (pictured in the original post). The batch then retrieves what has been entered via keyboard, loads data from a user store, and finally outputs some kind of response on the screen, e.g. by displaying an error message or showing the next dialog. This is all very simple, but you see, there is not just one thing happening, but several:

    1. Get input (email address, password)
    2. Load user for email address
       2.1 If user not found, report error
    3. Check password
       3.1 Hash password
       3.2 Compare hash to hash stored in user
    4. Show next dialog

    Viewed from 10,000 feet it's all done by the Entry Point function. And of course that's technically possible; it's just a bunch of logic and calling a couple of API functions. However, I suggest taking these steps as distinct aspects of the overall requirement described by the user story. Such aspects of requirements I call Features.

    Features too are increments. Each provides some (small) value of its own to the user. Each can be checked individually by a product owner. Instead of implementing all the logic behind the Login() entry point at once you can move forward increment by increment, e.g.:

    1. First implement the dialog, let the user enter any credentials, and log him/her in without any checks. Features 1 and 4.
    2. Then hard-code a single user and check the email address. Features 2 and 2.1.
    3. Then check the password without hashing it (or use a very simple hash like the length of the password). Features 3 and 3.2.
    4. Replace the hard-coded user with a persistent user directory, but a very simple one, e.g. a CSV file. Refinement of feature 2.
    5. Calculate the real hash for the password. Feature 3.1.
    6. Switch to the final user directory technology.

    Each feature provides an opportunity to deliver results in a short amount of time and get feedback. If you're in doubt whether you can implement the whole entry point function by tomorrow night, then just go for a couple of features or even just one.

    That's also why I think you should strive for wrapping feature logic into a function of its own. It's a matter of Evolvability and Production Efficiency. A function per feature makes the code more readable, since the language of requirements analysis and design is carried over into implementation. It makes it easier to apply changes to features because it's clear where their logic is located. And finally, of course, it lets you re-use features in different contexts (read: increments). Feature functions make it easier for you to think of features as Spinning increments, to implement them independently, and to let the product owner check them for acceptance individually.

    Increments consist of features, entry point functions consist of feature functions. So you can view software as a hierarchy of requirements from broad to thin which maps to a hierarchy of functions - with entry points at the top. I like this image of software as a self-similar structure on many levels of abstraction where requirements and code match each other. That to me is true agile design: the core tenet of Agility - to move forward in increments - is carried over into implementation. Increments on paper are retained in code. This way developers can easily relate to product owners. Elusive and fuzzy requirements are not tangible. Software production is moving forward through requirements one increment at a time, and one function at a time.

    In closing

    Product owners and developers are different - but they need to work together towards a shared goal: working software. So their notions of software need to be made compatible; they need to be connected. The increments of the product owner - user stories and features - need to be mapped straightforwardly to something which is relevant to developers. To me that's functions. Yes, functions, not classes nor components nor micro services. We're talking about behavior, actions, activities, processes. Their natural representation is a function. Something has to be done, logic has to be executed - that's the purpose of functions. Later, classes and other containers are needed to stay on top of a growing amount of logic. But to connect developers and product owners, functions are the appropriate glue. Functions which represent increments.

    Can such a small increment always be found to deliver by tomorrow evening? I boldly say yes. Yes, it's always possible. But maybe you have to start thinking differently. Maybe the product owner needs to start thinking differently. Completion is not the goal anymore. Neither is checking the delivery of an increment through the user interface of the software. Product owners need to become comfortable using test beds for certain features. If it's hard to slice requirements thin enough for Spinning, the reason is too little knowledge of something. Maybe you don't yet understand the problem domain well enough? Maybe you don't yet feel comfortable with some tool or technology? Then it's time to acknowledge this fact. Be honest about not knowing. And instead of trying to deliver as a craftsman, officially become a researcher. Research and check back with the product owner every day - until your understanding has grown to a level where you are able to define the next Spinning increment.

    [2] Sometimes even thin requirement slices will cover several Entry Points, like "Add validation of email addresses to all relevant dialogs." The validation logic then will be put into a dozen functions. Still, though, it's important to determine exactly which Entry Points get affected. That's much easier if you strive for keeping the number of Entry Points per increment to 1.

    [3] If you like, call Entry Point functions event handlers, because that's what they are. They all handle events of some kind, whether that's palpable in your code or not. A public void btnSave_Click(object sender, EventArgs e) {…} might look like an event handler to you, but public static void Main() {…} is one too - for the event "program started".
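    The article stops short of showing code for this; purely as an illustrative C# sketch (mine, not the article's, and needing C# 9 or later), here is what a login entry point split into one function per feature might look like. The console stands in for the dialog, and the deliberately naive length-based hash mirrors step 3 of the feature list above:

        using System;

        // Illustrative sketch only: one entry point whose logic is split into
        // one function per feature of the "login" increment described above.
        class LoginBatch
        {
            record User(string Email, string PasswordHash);

            static readonly User[] UserStore = { new("alice@example.com", Hash("secret")) };

            // Entry point: the "program started" event; in a GUI this would be a button click handler.
            static void Main()
            {
                var (email, password) = GetCredentials();                               // feature 1: get input
                var user = LoadUser(email);                                             // feature 2: load user for email address
                if (user == null) { Console.WriteLine("Unknown email."); return; }      // feature 2.1: report error
                if (!CheckPassword(user, password))                                     // feature 3: check password
                { Console.WriteLine("Wrong password."); return; }
                Console.WriteLine("Logged in - the next dialog would open here.");      // feature 4: show next dialog
            }

            static (string, string) GetCredentials()
            {
                Console.Write("Email: ");    var e = Console.ReadLine() ?? "";
                Console.Write("Password: "); var p = Console.ReadLine() ?? "";
                return (e, p);
            }

            static User LoadUser(string email) =>
                Array.Find(UserStore, u => u.Email == email);        // null if the user is not found

            static bool CheckPassword(User user, string password) =>
                user.PasswordHash == Hash(password);                 // features 3.1 and 3.2: hash and compare

            static string Hash(string s) => s.Length.ToString();     // deliberately simple hash, as in step 3 above
        }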

    Read the article

  • How to Customize the Internet Explorer 8 Title Bar

    - by Mysticgeek
    If you’re looking for a way to personalize IE 8, one method is to customize the Title Bar. Here we look at a simple Registry hack that will get the job done. The Internet Explorer Title Bar is displayed at the top of the browser with the site name followed by Windows Internet Explorer by default. If you have a small office you might want to change it to the company name, or just change it to something more personal at home.

    Customize the IE 8 Title Bar

    Note: Before making any changes to the Registry, make sure to back it up. The first thing we need to do is open the Registry Editor by typing regedit into the Search box in the Start Menu and hitting Enter. With the Registry open, navigate to HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main. Then create a new String Value and name it Window Title. Right-click on the Window Title string, enter whatever you want to display on the Title Bar in the Value data field, and click OK. When you’re done, you should see the new string called Window Title with whatever you entered as the value. Close out of the Registry. Restart or launch Internet Explorer and you’ll now see your new text in the Title Bar. If you want to change it to something else, just go in and modify the Value data. If you want to switch it back to the default, just go back in and delete the string we created. A lot of times you’ll see corporate branding already in the title bar from your ISP or some computer company; to get rid of it, check out The Geek’s article on how to remove it. This should work with other versions of Internet Explorer as well.
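    If you prefer a command prompt over regedit, the same value can be created and removed with reg.exe. This is just a sketch using the registry path from the article; the title text is an arbitrary example:

        :: create (or update) the Window Title string value described above
        reg add "HKCU\Software\Microsoft\Internet Explorer\Main" /v "Window Title" /t REG_SZ /d "My Company Browser" /f

        :: delete it again to restore the default title bar text
        reg delete "HKCU\Software\Microsoft\Internet Explorer\Main" /v "Window Title" /f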

    Read the article

  • SOCharts: Charts by Tags

    - by abhin4v
    Screenshot

    I created this small app as a weekend hack. It shows a user's reputation, upvotes, downvotes and accepted answers broken down by the tags of their answers (see the original post for the screenshot).

    About

    I wanted to know how many upvotes I was away from getting the bronze badge for the clojure tag, but I could not find any straightforward way of doing that. So I wrote this app (in Clojure, of course). The SO API is used for the data and the charts are created using the Google Chart API. The charts are opened in the default browser.

    License

    Licensed under EPL 1.0.

    Download

    If you have Clojure and Leiningen installed, you can simply get the code from https://gist.github.com/725331, save it as socharts.clj and then run lein repl -e "(load \"socharts\")(refer 'socharts.socharts)(-main)" to launch the Swing UI. If you don't have Clojure installed but have Java, download the standalone jar from http://dl.dropbox.com/u/5247/socharts-1.0.0-standalone.jar and run it as javaw -jar socharts-1.0.0-standalone.jar. Once the UI is launched, just type your user id in the input box and press <ENTER>. It will take some time to download the data from the SO API (the progress bar shows the download progress) and then it will open the charts in your default browser. You can also run it as a command line app by running lein repl -e "(load \"socharts\")(refer 'socharts.socharts)(-main <userid>)" or java -jar socharts-1.0.0-standalone.jar <userid>, where you replace <userid> with your user id. Be warned that because of a missing feature in the SO API, it will fetch the data for each question you have answered, so the maximum limit is 10000 answers (the SO API call limit).

    Platform

    All platforms with Java 1.6.

    Contact

    You can reach me at abhinav [at] abhinavsarkar [dot] net. Please report bugs/comments/suggestions as answers to this post.

    Code

    The code was written in Clojure with the UI in Swing. It is available at https://gist.github.com/725331. It's a public gist, so you can fork it if you would like to make changes.
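    The post doesn't show how the chart URLs are built; purely as a rough illustration (not SOCharts' actual code), per-tag data can be turned into a Google Image Charts URL along these lines:

        (require '[clojure.string :as str])

        ;; Rough illustration only, not SOCharts' real implementation:
        ;; turn a tag -> count map into a Google Image Charts pie-chart URL.
        (defn chart-url [tag->count]
          (str "https://chart.googleapis.com/chart?cht=p&chs=500x220"
               "&chl=" (str/join "|" (map name (keys tag->count)))
               "&chd=t:" (str/join "," (vals tag->count))))

        ;; e.g. (chart-url {:clojure 42 :java 17 :swing 5})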

    Read the article

  • Dell XPS 15 (L502x) sound problem

    - by lauramolenaar
    I have a problem with the sound on my Dell XPS 15. When I had Windows 7 the sound was pretty good (I have a JBL 2.1 speaker system with Waves Maxx audio), but since I installed Ubuntu 12.04 it has sounded very cheap, as if I had put my laptop in a tin can. I've already tried installing alsa-hda-dkms from the alsa-daily ppa (http://ppa.launchpad.net/ubuntu-audio-dev/alsa-daily).

    This is my audio controller:

        laura@laura-XPS-L502X:~$ lspci -v | grep -A7 -i audio
        00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05)
        Subsystem: Dell Device 04b6
        Flags: bus master, fast devsel, latency 0, IRQ 51
        Memory at f1c00000 (64-bit, non-prefetchable) [size=16K]
        Capabilities: <access denied>
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd-hda-intel

    I hope you can help me, and you can always ask me for more information.

    Result of aplay -l:

        **** List of PLAYBACK Hardware Devices ****
        card 0: PCH [HDA Intel PCH], device 0: ALC665 Analog [ALC665 Analog]
          Subdevices: 0/1
          Subdevice #0: subdevice #0
        card 0: PCH [HDA Intel PCH], device 1: ALC665 Digital [ALC665 Digital]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: PCH [HDA Intel PCH], device 3: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0

    Result of aplay -L:

        default - Playback/recording through the PulseAudio sound server
        sysdefault:CARD=PCH - HDA Intel PCH, ALC665 Analog, Default Audio Device
        front:CARD=PCH,DEV=0 - HDA Intel PCH, ALC665 Analog, Front speakers
        surround40:CARD=PCH,DEV=0 - HDA Intel PCH, ALC665 Analog, 4.0 Surround output to Front and Rear speakers
        surround41:CARD=PCH,DEV=0 - HDA Intel PCH, ALC665 Analog, 4.1 Surround output to Front, Rear and Subwoofer speakers
        surround50:CARD=PCH,DEV=0 - HDA Intel PCH, ALC665 Analog, 5.0 Surround output to Front, Center and Rear speakers
        surround51:CARD=PCH,DEV=0 - HDA Intel PCH, ALC665 Analog, 5.1 Surround output to Front, Center, Rear and Subwoofer speakers
        surround71:CARD=PCH,DEV=0 - HDA Intel PCH, ALC665 Analog, 7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
        iec958:CARD=PCH,DEV=0 - HDA Intel PCH, ALC665 Digital, IEC958 (S/PDIF) Digital Audio Output
        hdmi:CARD=PCH,DEV=0 - HDA Intel PCH, HDMI 0, HDMI Audio Output
        dmix:CARD=PCH,DEV=0 - HDA Intel PCH, ALC665 Analog, Direct sample mixing device
        dmix:CARD=PCH,DEV=1 - HDA Intel PCH, ALC665 Digital, Direct sample mixing device
        dmix:CARD=PCH,DEV=3 - HDA Intel PCH, HDMI 0, Direct sample mixing device
        dsnoop:CARD=PCH,DEV=0 - HDA Intel PCH, ALC665 Analog, Direct sample snooping device
        dsnoop:CARD=PCH,DEV=1 - HDA Intel PCH, ALC665 Digital, Direct sample snooping device
        dsnoop:CARD=PCH,DEV=3 - HDA Intel PCH, HDMI 0, Direct sample snooping device
        hw:CARD=PCH,DEV=0 - HDA Intel PCH, ALC665 Analog, Direct hardware device without any conversions
        hw:CARD=PCH,DEV=1 - HDA Intel PCH, ALC665 Digital, Direct hardware device without any conversions
        hw:CARD=PCH,DEV=3 - HDA Intel PCH, HDMI 0, Direct hardware device without any conversions
        plughw:CARD=PCH,DEV=0 - HDA Intel PCH, ALC665 Analog, Hardware device with all software conversions
        plughw:CARD=PCH,DEV=1 - HDA Intel PCH, ALC665 Digital, Hardware device with all software conversions
        plughw:CARD=PCH,DEV=3 - HDA Intel PCH, HDMI 0, Hardware device with all software conversions
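    One common line of attack for Realtek HDA codecs that sound thin under Linux, offered here only as a hedged suggestion and not a confirmed fix for this particular machine, is to pass a model quirk to snd-hda-intel and reload ALSA:

        # Hedged suggestion only: the model value below is a placeholder, not a known-good
        # value for this XPS 15 / ALC665. Valid options are listed in the kernel's
        # HD-Audio-Models.txt documentation.
        echo "options snd-hda-intel model=<quirk-from-HD-Audio-Models.txt>" | sudo tee /etc/modprobe.d/alc665-quirk.conf
        sudo alsa force-reload    # or reboot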

    Read the article

  • How to Assign a Static IP to an Ubuntu 10.04 Desktop Computer

    - by Mysticgeek
    If you have a home network with several computers, assigning them static IP addresses can make troubleshooting easier. Today we take a look at switching from DHCP to a static IP in Ubuntu.

    Assign a Static IP

    Using static IPs prevents address conflicts between machines and can allow easier access to them. If you have a small home network and are satisfied with the machines getting their IP address automatically via DHCP, there won’t be anything gained by using static addresses. Static IPs aren’t necessarily for the average user, but if you’re a geek who wants to know the address assigned to each machine, they can allow for faster troubleshooting.

    To change your Ubuntu machine to a static IP go to System \ Preferences \ Network Connections. In our example, we’re on a wired system, so click on the Wired tab, then select Auto eth0 and click on Edit. Select the IPv4 Settings tab, change Method to Manual, and click the Add button. Then type in the static IP address, subnet mask, DNS servers, and default gateway, and click Apply when you’re finished. Make sure to hit Enter after typing in the default gateway, otherwise it will revert back to 0.0.0.0. You’ll need to enter your admin password before the changes go into effect.

    To verify the changes have been made successfully, launch a Terminal session and type ifconfig at the command prompt, or follow these directions. You also might want to ping the address from another machine to make sure everything is communicating. If you want to assign a static IP to your Windows machines, check out our article on how to assign a static IP on Windows systems (make sure to browse the comments as our readers have some good suggestions).

    Whether you have a small office or home network set up with a server and several machines, using a static IP on each device can help you manage them easily. Again, it isn’t for everyone, as it really depends on how your network is set up and the way you use it.
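    If you’d rather skip the GUI entirely, the classic way to do the same thing on an Ubuntu 10.04 box is the /etc/network/interfaces file; the addresses below are example values only, so substitute your own network’s details:

        # /etc/network/interfaces -- example values, adjust to your own network
        auto eth0
        iface eth0 inet static
            address 192.168.1.50
            netmask 255.255.255.0
            gateway 192.168.1.1

        # DNS servers go in /etc/resolv.conf on 10.04, e.g.
        #   nameserver 192.168.1.1

        # then restart networking
        sudo /etc/init.d/networking restart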

    Read the article

  • Impromptu-interface

    - by Sean Feldman
    While trying to solve a problem of removing conditional execution from my code, I wanted to take advantage of .NET 4.0 and its dynamic capabilities. Going with DynamicObject or ExpandoObject didn’t initially get me anywhere, since those by default support properties and indexes, but not methods. Luckily, I got a reply to my post and learned about a great OSS library called impromptu-interface. It is based on the DLR capabilities in .NET 4.0, and I have to admit that it made my code extremely simple – no more ifs :)
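    For anyone wondering what that looks like in practice, here is a small hedged sketch of the library's central idea, projecting a dynamic object onto a static interface with Impromptu.ActLike; the interface and its members are my own example rather than anything from the original post:

        using System;
        using System.Dynamic;
        using ImpromptuInterface;

        public interface ICalculator
        {
            int Add(int a, int b);
        }

        class Demo
        {
            static void Main()
            {
                dynamic expando = new ExpandoObject();
                // attach a "method" to the expando as a delegate-valued member
                expando.Add = (Func<int, int, int>)((a, b) => a + b);

                // let Impromptu-Interface wrap the dynamic object in a real interface
                ICalculator calc = Impromptu.ActLike<ICalculator>(expando);
                Console.WriteLine(calc.Add(2, 3));   // 5
            }
        }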

    Read the article

  • Add Global Hotkeys to Windows Media Player

    - by DigitalGeekery
    Do you use Windows Media Player in the background while working in other applications? The WMP Keys plug-in for Media Player adds global keyboard shortcuts that allow you to control Media Player even when it isn’t in focus. Windows Media Player has a slew of keyboard shortcuts that work only when the media player is active, but these shortcuts stop working once WMP is no longer in focus or is minimized. WMP Keys adds the following default global hotkeys for Windows Media Player 10, 11, and 12:

    Ctrl+Alt+Home – Play / Pause
    Ctrl+Alt+Right – Next track
    Ctrl+Alt+Left – Previous track
    Ctrl+Alt+Up Arrow Key – Volume Up
    Ctrl+Alt+Down Arrow Key – Volume Down
    Ctrl+Alt+F – Fast Forward
    Ctrl+Alt+B – Fast Backward
    Ctrl+Alt+[1-5] – Rate 1-5 stars

    Note: Tapping Ctrl+Alt+F and Ctrl+Alt+B will skip ahead or back in 5-second intervals.

    Close out of Windows Media Player and then download and install WMP Keys (link below). After you’ve installed WMP Keys, you’ll need to enable it. Select Organize and then Options… In the Options window, select the Plug-ins tab, click Background in the Category window, then check the box for Wmpkeys Plugin. Click OK to save and exit. You can also enable the plug-in by selecting Tools > Plug-ins and clicking Wmpkeys Plugin. You can view and edit the global hotkeys in the WMPKeys settings window: select Tools > Plug-in properties and click Wmpkeys Plugin. To change any of the shortcuts, select the text box, press the new keyboard shortcut, and click OK when finished.

    WMP Keys is a very simple little plug-in that makes using WMP while you’re multitasking just a little bit easier and more efficient. Looking for more plugins for Windows Media Player? Check out our previous articles on adding new features with Media Player Plus, and displaying song lyrics with Lyrics Plugin.

    Download WMP Keys
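    As background, system-wide hotkeys like these are normally implemented through the Win32 RegisterHotKey API. The sketch below only illustrates that mechanism in C#; it is not WMP Keys' actual code. It registers Ctrl+Alt+Home and reacts to it from a message window:

        using System;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        // Illustrative only: roughly how a plug-in can listen for a global hotkey.
        class HotkeyWindow : Form
        {
            [DllImport("user32.dll")] static extern bool RegisterHotKey(IntPtr hWnd, int id, uint fsModifiers, uint vk);
            [DllImport("user32.dll")] static extern bool UnregisterHotKey(IntPtr hWnd, int id);

            const uint MOD_ALT = 0x1, MOD_CONTROL = 0x2;
            const int WM_HOTKEY = 0x0312, HOTKEY_ID = 1;

            protected override void OnLoad(EventArgs e)
            {
                base.OnLoad(e);
                RegisterHotKey(Handle, HOTKEY_ID, MOD_CONTROL | MOD_ALT, (uint)Keys.Home);
            }

            protected override void WndProc(ref Message m)
            {
                if (m.Msg == WM_HOTKEY && (int)m.WParam == HOTKEY_ID)
                    Console.WriteLine("Ctrl+Alt+Home pressed - toggle play/pause here");
                base.WndProc(ref m);
            }

            protected override void OnFormClosed(FormClosedEventArgs e)
            {
                UnregisterHotKey(Handle, HOTKEY_ID);
                base.OnFormClosed(e);
            }

            [STAThread]
            static void Main() => Application.Run(new HotkeyWindow());
        }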

    Read the article

  • Ubuntu 11.10 - can't adjust brightness on my laptop

    - by Danny
    I'm unable to change my laptop brightness with any method I've tried; it's stuck on maximum brightness. Using the slider in the "Screen" window in the control panel doesn't do anything, and using the Fn keys doesn't do anything either.

    Some info about my system:

    - The laptop is an MSI VR420 running Ubuntu 11.10; the video card is an integrated Intel card.
    - Brightness used to work when I ran Ubuntu 10.10 and earlier versions (not sure if it worked out of the box or if I inadvertently fixed it in previous versions while installing lots of other packages).
    - The brightness slider in the "Screen" window doesn't do anything, and "dim when on battery power" doesn't do anything.
    - When I use the Fn+F4/F5 keys to adjust brightness there is a popup showing that the input is received, but I can only go between 0 brightness and max brightness (that is what the popup shows; the actual brightness does not change).
    - When attempting to change the brightness with Fn+F4/F5, dmesg reports "ACPI: Failed to switch the brightness".

    Here are some outputs from terminal commands, in case any of it is useful:

    - lspci - http://pastebin.com/EimZSGs3
    - ls /sys/class/backlight/*/brightness outputs /sys/class/backlight/intel_backlight/brightness
    - cat /sys/class/backlight/intel_backlight/brightness = 0 (when I use Fn+F4/F5 this changes between 0 and 1, but the actual brightness doesn't change)
    - cat /sys/class/backlight/intel_backlight/max_brightness = 1
    - lsmod | grep ^i915 = i915 505108 3

    Here is the list of things I've tried through searching Google:

    - Editing GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub: acpi_osi=Linux, acpi_backlight=vendor, nomodeset (as well as several different combinations of these settings being on or off)
    - Editing /etc/X11/xorg.conf (the file doesn't exist on my system)
    - Editing /proc/acpi/video/VGA/LCD/brightness (the file doesn't exist)
    - sudo setpci -s 00:02.0 F4.B=XX (does nothing)
    - xbacklight -set XX (does nothing)

    I've tried about everything with no luck. The only thing I haven't tried is adding the PPA that someone suggested here: Unable to change brightness in a Lenovo laptop. However, according to the notes on the PPA, all of the changes in it are now actually a part of 11.10, and the PPA is only for people with 11.04. Does anyone have any ideas for me?

    Edit: by setting acpi=off in my /etc/default/grub file I was able to get my Fn+F4/F5 keys to work, and "dim display to save power" now makes my laptop dim when on battery power. The brightness slider still doesn't do anything, though, and xbacklight and the other methods still don't do anything either. The thing I don't get is why setting acpi=off makes my Fn+F4/F5 keys work. Isn't ACPI supposed to be what enables the backlight to be dimmed? Does anyone know what Ubuntu decides to do behind the scenes when acpi=off, and what features, if any, I might be losing with it off?

    Read the article

  • Gvim GLib-GObject-WARNING in ubuntu 13.10

    - by naveen.panwar
    I upgraded from Ubuntu 13.04 to Ubuntu 13.10 this afternoon, and when I try starting gvim from the terminal after the upgrade I get these warnings:

        (gvim:4054): GLib-GObject-WARNING **: Attempt to add property GnomeProgram::sm-connect after class was initialised
        (gvim:4054): GLib-GObject-WARNING **: Attempt to add property GnomeProgram::show-crash-dialog after class was initialised
        (gvim:4054): GLib-GObject-WARNING **: Attempt to add property GnomeProgram::display after class was initialised
        (gvim:4054): GLib-GObject-WARNING **: Attempt to add property GnomeProgram::default-icon after class was initialised

    How can I fix these, and what exactly are these warnings about?
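    These GnomeProgram property warnings come from gvim's old libgnome/GTK integration and are typically just noise rather than a functional problem. Until a fixed package arrives, one hedged, purely cosmetic workaround is to filter them out of stderr with a small shell wrapper; this hides the warnings, it does not fix their cause:

        # ~/.bashrc -- cosmetic workaround only: hide the GLib warnings gvim prints
        gvim() { command gvim "$@" 2> >(grep -v 'GLib-GObject-WARNING' >&2); }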

    Read the article

  • IIS MIME type for XML content

    - by Rodolfo
    Recently a third-party plugin I'm using to display online magazines stopped working on mobile devices. According to their help page, this happens for people serving with IIS, and their solution is to set the MIME type for .xml to "application/xml"; IIS sets it to "text/xml" by default. Changing it does work, but would that have unintended side effects, or is "application/xml" actually the correct type and IIS just has it set wrong?
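    If it helps limit the blast radius, the override can also be scoped to just the site or application serving the magazines instead of the whole server; a hedged web.config sketch:

        <!-- web.config of the affected site/app: override the .xml MIME type
             here only, leaving the server-wide default untouched -->
        <configuration>
          <system.webServer>
            <staticContent>
              <remove fileExtension=".xml" />
              <mimeMap fileExtension=".xml" mimeType="application/xml" />
            </staticContent>
          </system.webServer>
        </configuration>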

    Read the article

  • Add the 2D Version of the New Unity Interface to Ubuntu 10.10 and 11.04

    - by Asian Angel
    Is your computer or virtualization software unable to display the new 3D version of the Unity interface in Ubuntu? Now you can access and enjoy the 2D version with just a little PPA magic added to your system!

    To add the new PPA, open the Ubuntu Software Center, go to the Edit menu, and select Software Sources. Access the Other Software tab in the Software Sources window and add the first of the PPAs shown below (outlined in red); the second PPA will be added to your system automatically. Once you have the new PPAs set up, go back to the Ubuntu Software Center and click on the PPA listing for Unity 2D on the left (highlighted with red in the image). Scroll down until you find the listing for “Unity interface for non-accelerated graphics cards – unity-2d” and click Install.

    Once that is done, go to System, then Administration, and select Login Screen in your Ubuntu menu. Unlock the screen and select Unity 2D as the default session from the drop-down list as shown here. Log out and then back in to start enjoying that Unity 2D goodness! Here is how things will look when you click on the Ubuntu menu icon. Select the category that you would like to start with (such as Web) and get ready to have fun. This definitely looks (and works) awesome! Enjoy your new Unity 2D interface!

    Unity 2D Packaging PPA [Launchpad]
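    For terminal fans, the same setup can be scripted with apt. The PPA identifier is left as a placeholder here because the article only shows it in its screenshots and the Launchpad link above; the package name unity-2d comes from the article itself:

        # substitute the Unity 2D packaging PPA from the Launchpad link above
        sudo add-apt-repository ppa:<unity-2d-packaging-ppa>
        sudo apt-get update
        sudo apt-get install unity-2d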

    Read the article

< Previous Page | 320 321 322 323 324 325 326 327 328 329 330 331  | Next Page >