Search Results

Search found 20970 results on 839 pages for 'properties settings'.


  • SQL SERVER – Difference between DATABASEPROPERTY and DATABASEPROPERTYEX

    - by pinaldave
    Earlier I asked a simple question on Facebook regarding the difference between DATABASEPROPERTY and DATABASEPROPERTYEX in SQL Server. You can view the original conversation there. The conversation immediately became very interesting and lots of healthy discussion happened on the Facebook page. The best part of having a conversation on a Facebook page is the comfort it provides and the leaner commenting interface.

    Question from SQLAuthority.com: What is the difference between DATABASEPROPERTY and DATABASEPROPERTYEX in SQL Server?

    Answer from Rakesh Kumar: DATABASEPROPERTY is supported for backward compatibility but does not provide information about the properties added in this release. Also, many properties supported by DATABASEPROPERTY have been replaced by new properties in DATABASEPROPERTYEX. - source (MSDN).

    Answer from Alphonso Jones: The only real difference I can see is, one, the number of properties contained, and the other is that DATABASEPROPERTYEX returns a sql_variant while DATABASEPROPERTY returns only int.

    Answer from Ambati Venkatasiva: Both are system metadata functions. DATABASEPROPERTYEX returns the current setting of the specified database option. DATABASEPROPERTYEX returns a sql_variant value and DATABASEPROPERTY returns an integer value.

    Answer from Rama Sankar Molleti: Here is the best example about DATABASEPROPERTYEX: SELECT DATABASEPROPERTYEX('dbname', 'Collation') Result: SQL_1xCompat_CP850_CI_AS Whereas with DATABASEPROPERTY it returns nothing, as the return type for this property is integer. The sql_variant datatype stores values of various SQL Server supported datatypes except text, ntext, image and timestamp.

    Answer from Alok Seth: SELECT DATABASEPROPERTYEX('AdventureWorks', 'Status') DatabaseStatus_DATABASEPROPERTYEX GO --Result - ONLINE SELECT DATABASEPROPERTY('AdventureWorks', 'Status') DatabaseStatus_DATABASEPROPERTY GO --Result - NULL

    Summary: Use DATABASEPROPERTYEX, as it is the only function that will be supported in future versions, and it returns the status of various database properties that do not exist with DATABASEPROPERTY.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
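    To see the return-type difference from client code, here is a minimal ADO.NET sketch (a hedged illustration: the connection string is a placeholder and the AdventureWorks database name is taken from the answers above; DATABASEPROPERTYEX surfaces as object because it returns sql_variant, while DATABASEPROPERTY yields NULL here because 'Status' is not one of its properties):

        using System;
        using System.Data.SqlClient;

        class PropertyDemo
        {
            static void Main()
            {
                // Placeholder connection string -- point it at your own server.
                const string connStr = "Server=.;Database=master;Integrated Security=true";
                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(
                    "SELECT DATABASEPROPERTYEX('AdventureWorks', 'Status'), " +
                    "DATABASEPROPERTY('AdventureWorks', 'Status')", conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        reader.Read();
                        // sql_variant arrives as object ("ONLINE" here).
                        Console.WriteLine("DATABASEPROPERTYEX: {0}", reader.GetValue(0));
                        // The legacy int-returning function gives NULL for 'Status'.
                        Console.WriteLine("DATABASEPROPERTY:   {0}",
                            reader.IsDBNull(1) ? "NULL" : reader.GetValue(1));
                    }
                }
            }
        }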

    Read the article

  • Convert .3GP and .3G2 Files to AVI / MPEG for Free

    - by DigitalGeekery
    .3GP and .3G2 are common video capture formats used on many mobile phones, but they may not be supported by your favorite media player. Today we'll show you a quick and easy way to convert those files to AVI or MPG format with the free Windows application, Pazera Free 3GP to AVI Converter.

    Download the Pazera Free 3GP to AVI Converter. You'll have to unzip the download folder, but there is no need to install the application. Just double-click the 3gptoavi.exe file to run the application. To add your 3GP or 3G2 files to the queue to be converted, click on the Add files button at the top left. Browse for your file, and click Open. Your video will be added to the Queue. You can add multiple files to the queue and convert them all at one time.

    Most users will find it preferable to use one of the pre-configured profiles for their conversion settings. To load a profile, choose one from the Profile drop down list and then click the Load button. You will see the profile update the settings in the panels at the bottom of the application. We tested Pazera Free 3GP to AVI Converter with 3GP files recorded on a Motorola Droid, and found the AVI H.264 Very High Q. profile to return the best results for AVI output, and the MPG – DVD NTSC: MPEG-2 profile the best results for MPG output. Other profiles produced smaller file sizes, but at a cost of reduced video quality.

    More advanced users may tweak video and audio settings to their liking in the lower panels. Click on the AVI button under Output file format / Video settings to adjust the settings for AVI output, or the MPG button to adjust the settings for MPG output. By default, the converted file will be output to the same location as the input directory. You can change it by clicking the text box input radio button and browsing for a different folder. When you've chosen your settings, click Convert to begin the conversion process. A conversion output box will open and display the progress. When finished, click Close. Now you're ready to enjoy your video in your favorite media player.

    Pazera Free 3GP to AVI Converter isn't the most robust media conversion tool, but it does what it is intended to do. It handles the task of 3GP to AVI / MPG conversion very well. It's easy enough for the beginner to manage without much trouble, but also has enough options to please more experienced users.

    Download Pazera Free 3GP to AVI Converter

    Read the article

  • A Look at the GridView's New Sorting Styles in ASP.NET 4.0

    Like every Web control in the ASP.NET toolbox, the GridView includes a variety of style-related properties, including <code>CssClass</code>, <code>Font</code>, <code>ForeColor</code>, <code>BackColor</code>, <code>Width</code>, <code>Height</code>, and so on. The GridView also includes style properties that apply to certain classes of rows in the grid, such as <code>RowStyle</code>, <code>AlternatingRowStyle</code>, <code>HeaderStyle</code>, and <code>PagerStyle</code>. Each of these meta-style properties offers the standard style properties (<code>CssClass</code>, <code>Font</code>, etc.) as subproperties. In ASP.NET 4.0, Microsoft added four new
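    The excerpt cuts off mid-sentence; for context, the four additions it refers to are the sorting-style properties. A minimal code-behind sketch (the grid ID and CSS class names here are hypothetical, not from the article):

        // ASP.NET 4.0 applies these styles automatically to the column
        // the grid is currently sorted by, ascending or descending.
        protected void Page_Load(object sender, EventArgs e)
        {
            GridView1.SortedAscendingHeaderStyle.CssClass = "sortAscHeader";
            GridView1.SortedAscendingCellStyle.CssClass = "sortAscCells";
            GridView1.SortedDescendingHeaderStyle.CssClass = "sortDescHeader";
            GridView1.SortedDescendingCellStyle.CssClass = "sortDescCells";
        }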

    Read the article

  • Changing CSS with jQuery syntax in Silverlight using jLight

    - by Timmy Kokke
    Lately I've run into situations where I had to change elements or request a value in the DOM from Silverlight. jLight, which was introduced in an earlier article, can help with that. jQuery offers great ways to change CSS at runtime. Silverlight can access the DOM, but it isn't as easy as jQuery. All examples shown in this article can be looked at in this online demo. The code can be downloaded here.

    Part 1: The easy stuff

    Selecting and changing properties is pretty straightforward. Setting the text color in all <B> </B> elements can be done using the following code:   jQuery.Select("b").Css("color", "red");   The Css() method is an extension method on jQueryObject, which is returned by the jQuery.Select() method. The Css() method takes two parameters. The first is the CSS style property. All properties used in CSS can be entered in this string. The second parameter is the value you want to give the property. In this case the property is “color” and it is changed to “red”. To specify which element you want to select you can add a :selector parameter to the Select() method as shown in the next example.   jQuery.Select("b:first").Css("font-family", "sans-serif");   The “:first” pseudo-class selector selects only the first element. This example changes the “font-family” property of the first <B></B> element to “sans-serif”. To make use of IntelliSense in Visual Studio I've added extension methods to help with the pseudo-classes. In the example below the “font-weight” of every “Even” <LI></LI> is set to “bold”.   jQuery.Select("li".Even()).Css("font-weight", "bold");   Because the Css() extension method returns a jQueryObject it is possible to chain calls to Css(). The following example shows setting the “color”, “background-color” and the “font-size” of all headers in one go.   jQuery.Select(":header").Css("color", "#12FF70") .Css("background-color", "yellow") .Css("font-size", "25px");

    Part 2: More complex stuff

    In only a few cases do you need to change just one style property. More often you want to change an entire set of style properties all in one go. You could chain a lot of Css() methods together. A better way is to add a class to a stylesheet and define all properties in there. With the AddClass() method you can set a style class on a set of elements. This example shows how to add the “demostyle” class to all <B></B> in the document.   jQuery.Select("b").AddClass("demostyle");   Removing the class works in the same way:   jQuery.Select("b").RemoveClass("demostyle");   jLight is built for interacting with the DOM from Silverlight using jQuery. A jQueryObjectCss object can be used to define different sets of style properties in Silverlight. The 60-plus most common CSS style properties are defined in the jQueryObjectCss class. A string indexer can be used to access all style properties (CssObject1[“background-color”] equals CssObject1.BackgroundColor). In the code below, two jQueryObjectCss objects are defined and instantiated.
private jQueryObjectCss CssObject1; private jQueryObjectCss CssObject2;   public Demo2() { CssObject1 = new jQueryObjectCss { BackgroundColor = "Lime", Color="Black", FontSize = "12pt", FontFamily = "sans-serif", FontWeight = "bold", MarginLeft = 150, LineHeight = "28px", Border = "Solid 1px #880000" }; CssObject2 = new jQueryObjectCss { FontStyle = "Italic", FontSize = "48", Color = "#225522" }; InitializeComponent(); }   Now instead of chaining to set all the different properties you can just pass one of the jQueryObjectCss objects to the Css() method. In this case all <LI></LI> elements are set to match this object.   jQuery.Select("li").Css(CssObject1); When using the jQueryObjectCss objects chaining is still possible. In the following example all headers are given a blue background color and the last is set to match CssObject2.   jQuery.Select(":header").Css(new jQueryObjectCss{BackgroundColor = "Blue"}) .Eq(-1).Css(CssObject2);

    Part 3: The fun stuff

    Having Silverlight call JavaScript and then having JavaScript call Silverlight requires a lot of plumbing code. Everything has to be registered and strings are passed back and forth to execute the JavaScript. jLight makes this kind of stuff so easy, it becomes fun to use. In a lot of situations jQuery can call a function to decide what to do, setting a style class based on complex expressions for example. jLight can do the same, but the callback methods are defined in Silverlight. This example calls the function() method for each <LI></LI> element. The callback method has to take a jQueryObject, an integer and a string as parameters. In this case jLight differs a bit from the actual jQuery implementation. jQuery uses only the index and the className parameters. A jQueryObject is added to make it simpler to access the attributes and properties of the element. If the text of the list item starts with a 'D' or an 'M' the class is set. Otherwise null is returned and nothing happens.   private void button1_Click(object sender, RoutedEventArgs e) { jQuery.Select("li").AddClass(function); }   private string function(jQueryObject obj, int index, string className) { if (obj.Text[0] == 'D' || obj.Text[0] == 'M') return "demostyle"; return null; }   The last thing I would like to demonstrate uses even more Silverlight and less jLight, but demonstrates the power of the combination: animating a style property using a Storyboard with easing functions. First a dependency property is defined. In this case it is a double named Intensity. By handling the changed event the color is set using jQuery.   public double Intensity { get { return (double)GetValue(IntensityProperty); } set { SetValue(IntensityProperty, value); } }   public static readonly DependencyProperty IntensityProperty = DependencyProperty.Register("Intensity", typeof(double), typeof(Demo3), new PropertyMetadata(0.0, IntensityChanged));   private static void IntensityChanged(DependencyObject d, DependencyPropertyChangedEventArgs e) { var i = (byte)(double)e.NewValue; jQuery.Select("span").Css("color", string.Format("#{0:X2}{0:X2}{0:X2}", i)); }   An animation has to be created. This code defines a Storyboard with one keyframe that uses a bounce ease as an easing function. The animation is set to target the Intensity dependency property defined earlier.
private Storyboard CreateAnimation(double value) { Storyboard storyboard = new Storyboard(); var da = new DoubleAnimationUsingKeyFrames(); var d = new EasingDoubleKeyFrame { EasingFunction = new BounceEase(), KeyTime = KeyTime.FromTimeSpan(TimeSpan.FromSeconds(1.0)), Value = value }; da.KeyFrames.Add(d); Storyboard.SetTarget(da, this); Storyboard.SetTargetProperty(da, new PropertyPath(Demo3.IntensityProperty)); storyboard.Children.Add(da); return storyboard; }   Initially the Intensity is set to 128, which results in a gray color. When one of the buttons is pressed, a new animation is created and played: one to animate to black, and one to animate to white.   public Demo3() { InitializeComponent(); Intensity = 128; }   private void button2_Click(object sender, RoutedEventArgs e) { CreateAnimation(255).Begin(); }   private void button3_Click(object sender, RoutedEventArgs e) { CreateAnimation(0).Begin(); }

    Conclusion

    As you can see, jLight can make the life of a Silverlight developer a lot easier when accessing the DOM. Almost all jQuery functions that are defined in jLight use the same constructions as described above. I've tried to stay as close as possible to the real jQuery. Having JavaScript perform callbacks to Silverlight using jLight will be described in more detail in a future tutorial about AJAX or eventing.

    Read the article

  • Some problems after running ubuntu tweak application

    - by sachin
    I am a first-time user of Ubuntu 12.04 and must say that I am enjoying it. Somehow Skype on Ubuntu is screwed up; the message that I am getting says (Disk IO error). Did anyone face this problem before? I also downloaded the Ubuntu Tweak application. After that application cleaned some files, the application itself is not starting now. I am quite new to Linux and fear that the Tweak application has cleaned some files which were necessary. Is there any way I can fix these problems, short of auto-recovery? I have got some more leads: below seems to be the error, but I am not sure how I can fix this.

    (ubuntu-tweak:2887): Gtk-WARNING **: Locale not supported by C library. Using the fallback 'C' locale.
    Traceback (most recent call last):
      File "/usr/bin/ubuntu-tweak", line 122, in <module>
        from ubuntutweak.main import UbuntuTweakWindow
      File "/usr/lib/python2.7/dist-packages/ubuntutweak/main.py", line 40, in <module>
        from ubuntutweak.preferences import PreferencesDialog
      File "/usr/lib/python2.7/dist-packages/ubuntutweak/preferences.py", line 32, in <module>
        from ubuntutweak.factory import WidgetFactory
      File "/usr/lib/python2.7/dist-packages/ubuntutweak/factory.py", line 24, in <module>
        from ubuntutweak.gui.widgets import *
      File "/usr/lib/python2.7/dist-packages/ubuntutweak/gui/widgets.py", line 10, in <module>
        from ubuntutweak.settings.compizsettings import CompizSetting
      File "/usr/lib/python2.7/dist-packages/ubuntutweak/settings/compizsettings.py", line 3, in <module>
        import ccm
      File "/usr/lib/python2.7/dist-packages/ubuntutweak/settings/ccm/__init__.py", line 1, in <module>
        from Conflicts import *
      File "/usr/lib/python2.7/dist-packages/ubuntutweak/settings/ccm/Conflicts.py", line 25, in <module>
        from Constants import *
      File "/usr/lib/python2.7/dist-packages/ubuntutweak/settings/ccm/Constants.py", line 77, in <module>
        locale.setlocale(locale.LC_ALL, "")
      File "/usr/lib/python2.7/locale.py", line 539, in setlocale
        return _setlocale(category, locale)
    locale.Error: unsupported locale setting

    Read the article

  • java UnitilsException: Executed scripts table "PUBLIC"."DBMAINTAIN_SCRIPTS" doesn't exist yet or is invalid [closed]

    - by Philippe
    I want to use Unitils for testing my Java program, but I get the following error message: "UnitilsException: Executed scripts table "PUBLIC"."DBMAINTAIN_SCRIPTS" doesn't exist yet or is invalid". My table is in a DDL file, and the table is not created. Could you tell me if the dataSetStructureGenerator.xsd.dirName=src/test/resources/dataset-schema property is still active in Unitils 3.3? My EMPLOYEES table is not created and DBMAINTAIN_SCRIPTS is not created either. Where is my mistake?

    My DDL file:

    SET REFERENTIAL_INTEGRITY FALSE;
    SET DATABASE COLLATION "French";
    SET SCHEMA PUBLIC;
    CREATE TABLE DBMAINTAIN_SCRIPTS (FILE_NAME VARCHAR2(150), FILE_LAST_MODIFIED_AT INTEGER, CHECKSUM VARCHAR2(50), EXECUTED_AT VARCHAR2(20), SUCCEEDED INTEGER);
    CREATE TABLE EMPLOYEES(ID IDENTITY NOT NULL,NAME VARCHAR(20),TITLE VARCHAR(20),SALARY DOUBLE,NI INTEGER NOT NULL)

    My unitils properties file:

    # comments documenting these unitils configuration properties removed for
    # brevity. look for commenting in unitils-default.properties in the root of the
    # unitils jar if needed.
    unitils.modules=database,dbunit,easymock,inject
    unitils.module.hibernate.enabled=false
    unitils.module.spring.enabled=false
    # these placeholders are set in avaje.properties
    # manages the DBUNIT configuration
    database.driverClassName=org.hsqldb.jdbcDriver
    database.url=jdbc:hsqldb:mem:unitils-example
    database.schemaNames=PUBLIC
    database.userName=SA
    database.password=
    database.dialect=hsqldb
    # unitils will construct the test database using the ddl file found in this
    # directory
    dbMaintainer.fileScriptSource.scripts.location=src/main/resources
    updateDataBaseSchema.enabled=true
    sequenceUpdater.sequencevalue.lowestacceptable=100
    dataSetStructureGenerator.xsd.dirName=src/test/resources/dataset-schema
    #dbMaintainer.autoCreateExecutedScriptsTable property to true

    Read the article

  • Creating a Training Lab on Windows Azure

    - by Michael Stephenson
    Originally posted on: http://geekswithblogs.net/michaelstephenson/archive/2013/06/17/153149.aspx

    This week we are preparing for a training course that Alan Smith will be running for the support teams at one of my customers around Windows Azure. In order to facilitate the training lab we have a few prerequisites we need to handle. One of the biggest ones is that although the support team all have MSDN accounts, the local desktops they work on are not ideal for running most of the labs, as we want to give them some additional developer background training around Azure. Some recent Azure announcements really help us in this area:
    - MSDN software can now be used on Azure VMs
    - You don't pay for Azure VMs when they are no longer used
    Since the support team only have limited experience of Windows Azure and the organisation also has an Enterprise Agreement, we decided it would be best value for money to spin up a training lab in a subscription on the EA; we can then turn the machines off when we are done. At the same time we would be able to spin them back up when the users need to do some additional lab work once the training course is completed. In order to achieve this I wanted to create a PowerShell script which would set up my training lab. The aim was to create 18 VMs based on a prebuilt template with Visual Studio and the Azure development tools. The script I used is described below.

    The Start & Variables

    The below text will set up the PowerShell environment and some variables which I will use elsewhere in the script. It will also import the Azure PowerShell cmdlets. You can see below that I will need to download my publisher settings file and know some details from my Azure account. At this point I will assume you have a basic understanding of Azure & PowerShell so already know how to do this.

    Set-ExecutionPolicy Unrestricted
    cls
    $startTime = get-date
    Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\Azure.psd1"
    # Azure Publisher Settings
    $azurePublisherSettings = '<Your settings file>.publishsettings'
    # Subscription Details
    $subscriptionName = "<Your subscription name>"
    $defaultStorageAccount = "<Your default storage account>"
    # Affinity Group Details
    $affinityGroup = '<Your affinity group>'
    $dataCenter = 'West Europe' # From Get-AzureLocation
    # VM Details
    $baseVMName = 'TRN'
    $adminUserName = '<Your admin username>'
    $password = '<Your admin password>'
    $size = 'Medium'
    $vmTemplate = '<The name of your VM template image>'
    $rdpFilePath = '<File path to save RDP files to>'
    $machineSettingsPath = '<File path to save machine info to>'

    Functions

    In the next section of the script I have some functions which are used to perform certain actions. The first is called CreateVM.
    This will do the following actions:
    - If the VM already exists it will be deleted
    - Create the cloud service
    - Create the VM from the template I have created
    - Add an endpoint so we can RDP to them all over the same port
    - Download the RDP file so there is a shortcut the trainees can use to easily access the machine
    - Write settings for the machine to a log file

    function CreateVM($machineNo)
    {
        # Specify a name for the new VM
        $machineName = "$baseVMName-$machineNo"
        Write-Host "Creating VM: $machineName"

        # Get the Azure VM Image
        $myImage = Get-AzureVMImage $vmTemplate

        # If the VM already exists delete and re-create it
        $existingVm = Get-AzureVM -Name $machineName -ServiceName $serviceName
        if($existingVm -ne $null)
        {
            Write-Host "VM already exists so deleting it"
            Remove-AzureVM -Name $machineName -ServiceName $serviceName
        }

        "Creating Service"
        $serviceName = "bupa-azure-train-$machineName"
        Remove-AzureService -Force -ServiceName $serviceName
        New-AzureService -Location $dataCenter -ServiceName $serviceName

        Write-Host "Creating VM: $machineName"
        New-AzureQuickVM -Windows -name $machineName -ServiceName $serviceName -ImageName $myImage.ImageName -InstanceSize $size -AdminUsername $adminUserName -Password $password

        Write-Host "Updating the RDP endpoint for $machineName"
        Get-AzureVM -name $machineName -ServiceName $serviceName `
            | Add-AzureEndpoint -Name RDP -Protocol TCP -LocalPort 3389 -PublicPort 550 `
            | Update-AzureVM

        Write-Host "Get the RDP File for machine $machineName"
        $machineRDPFilePath = "$rdpFilePath\$machineName.rdp"
        Get-AzureRemoteDesktopFile -name $machineName -ServiceName $serviceName -LocalPath "$machineRDPFilePath"

        WriteMachineSettings "$machineName" "$serviceName"
    }

    The DeleteMachineSettings function is used to delete the log file before we start re-running the process.

    function DeleteMachineSettings()
    {
        Write-Host "Deleting the machine settings output file"
        [System.IO.File]::Delete("$machineSettingsPath");
    }

    The WriteMachineSettings function will get the VM and then record its details to the log file. The importance of the log file is that I can easily provide the information for all of the VMs to our infrastructure team to be able to configure access to all of the VMs.

    function WriteMachineSettings([string]$vmName, [string]$vmServiceName)
    {
        Write-Host "Writing to the machine settings output file"

        $vm = Get-AzureVM -name $vmName -ServiceName $vmServiceName
        $vmEndpoint = Get-AzureEndpoint -VM $vm -Name RDP

        $sb = new-object System.Text.StringBuilder
        $sb.Append("Service Name: ");
        $sb.Append($vm.ServiceName);
        $sb.Append(", ");
        $sb.Append("VM: ");
        $sb.Append($vm.Name);
        $sb.Append(", ");
        $sb.Append("RDP Public Port: ");
        $sb.Append($vmEndpoint.Port);
        $sb.Append(", ");
        $sb.Append("Public DNS: ");
        $sb.Append($vmEndpoint.Vip);
        $sb.AppendLine("");
        [System.IO.File]::AppendAllText($machineSettingsPath, $sb.ToString());
    }
    # end functions

    Rest of Script

    In the rest of the script it is really just the bit that orchestrates the actions we want to happen.
    It will load the publisher settings, select the Azure subscription and then loop around the CreateVM function and create 16 VMs.

    Import-AzurePublishSettingsFile $azurePublisherSettings
    Set-AzureSubscription -SubscriptionName $subscriptionName -CurrentStorageAccount $defaultStorageAccount
    Select-AzureSubscription -SubscriptionName $subscriptionName

    DeleteMachineSettings

    "Starting creating Bupa International Azure Training Lab"
    $numberOfVMs = 16

    for ($index=1; $index -le $numberOfVMs; $index++)
    {
        $vmNo = "$index"
        CreateVM($vmNo);
    }

    "Finished creating Bupa International Azure Training Lab"
    # Give it a Minute
    Start-Sleep -s 60

    $endTime = get-date
    "Script run time " + ($endTime - $startTime)

    Conclusion

    As you can see there is nothing too fancy about this script, but in our case of creating a small isolated training lab which is not connected to our corporate network we can easily use it to provision the lab. I'm sure if this is of use to anyone you can easily modify it to do other things with the lab environment too. A couple of points to note are that there are some soft limits in Azure on the number of cores and services your subscription can use. You may need to contact the Azure support team to be able to increase these limits. In terms of the real business value of this approach, it was not possible to use the existing desktops to do the training on, and getting some internal virtual machines would have been relatively expensive and time consuming for our ops team to do. With the Azure option we are able to spin these machines up for a temporary period during the training course and then throw them away when we are done. We expect the cost of this test lab to be very small, especially considering we have EA pricing. As a ball park I think my 18-VM training lab environment will cost in the region of $80 per day on our EA. This is a fraction of the cost of the creation of a single VM on premise.

    Read the article

  • C#/.NET Little Wonders &ndash; Cross Calling Constructors

    - by James Michael Hare
    Just a small post today, it's the final iteration before our release and things are crazy here! This is another little tidbit that I love using, and it should be fairly common knowledge, yet I've noticed many times that less experienced developers tend to have redundant constructor code when they overload their constructors.

    The Problem – repetitive code is less maintainable

    Let's say you were designing a messaging system, and so you want to create a class to represent the properties for a Receiver, so perhaps you design a ReceiverProperties class to represent this collection of properties. Perhaps, you decide to make ReceiverProperties immutable, and so you have several constructors that you can use for alternative construction:
    1: // Constructs a set of receiver properties.
    2: public ReceiverProperties(ReceiverType receiverType, string source, bool isDurable, bool isBuffered)
    3: {
    4: ReceiverType = receiverType;
    5: Source = source;
    6: IsDurable = isDurable;
    7: IsBuffered = isBuffered;
    8: }
    9:
    10: // Constructs a set of receiver properties with buffering on by default.
    11: public ReceiverProperties(ReceiverType receiverType, string source, bool isDurable)
    12: {
    13: ReceiverType = receiverType;
    14: Source = source;
    15: IsDurable = isDurable;
    16: IsBuffered = true;
    17: }
    18:
    19: // Constructs a set of receiver properties with buffering on and durability off.
    20: public ReceiverProperties(ReceiverType receiverType, string source)
    21: {
    22: ReceiverType = receiverType;
    23: Source = source;
    24: IsDurable = false;
    25: IsBuffered = true;
    26: }
    Note: keep in mind this is just a simple example for illustration, and in some cases default parameters can also help clean this up, but they have issues of their own. While strictly speaking there is nothing wrong with this code, logically it suffers from maintainability flaws. Consider what happens if you add a new property to the class? You have to remember to guarantee that it is set appropriately in every constructor call. This can cause subtle bugs and becomes even uglier when the constructors do more complex logic, error handling, or there are numerous potential overloads (especially if you can't easily see them all in one screen's height).

    The Solution – cross-calling constructors

    I'd wager nearly everyone knows how to call your base class's constructor, but you can also cross-call to one of the constructors in the same class by using the this keyword in the same way you use base to call a base constructor.
    1: // Constructs a set of receiver properties.
    2: public ReceiverProperties(ReceiverType receiverType, string source, bool isDurable, bool isBuffered)
    3: {
    4: ReceiverType = receiverType;
    5: Source = source;
    6: IsDurable = isDurable;
    7: IsBuffered = isBuffered;
    8: }
    9:
    10: // Constructs a set of receiver properties with buffering on by default.
    11: public ReceiverProperties(ReceiverType receiverType, string source, bool isDurable)
    12: : this(receiverType, source, isDurable, true)
    13: {
    14: }
    15:
    16: // Constructs a set of receiver properties with buffering on and durability off.
    17: public ReceiverProperties(ReceiverType receiverType, string source)
    18: : this(receiverType, source, false, true)
    19: {
    20: }
    Notice, there is much less code. In addition, the code you have has no repetitive logic. You can define the main constructor that takes all arguments, and the remaining constructors with defaults simply cross-call the main constructor, passing in the defaults.
    Yes, in some cases default parameters can ease some of this for you, but default parameters only work for compile-time constants (null, string and number literals). For example, if you were creating a TradingDataAdapter that relied on an implementation of ITradingDao, which is the data access object to retrieve records from the database, you might want two constructors: one that takes an ITradingDao reference, and a default constructor which constructs a specific ITradingDao for ease of use:
    1: public TradingDataAdapter(ITradingDao dao)
    2: {
    3: _tradingDao = dao;
    4:
    5: // other constructor logic
    6: }
    7:
    8: public TradingDataAdapter()
    9: {
    10: _tradingDao = new SqlTradingDao();
    11:
    12: // same constructor logic as above
    13: }
    As you can see, this isn't something we can solve with a default parameter, but we could with cross-calling constructors:
    1: public TradingDataAdapter(ITradingDao dao)
    2: {
    3: _tradingDao = dao;
    4:
    5: // other constructor logic
    6: }
    7:
    8: public TradingDataAdapter()
    9: : this(new SqlTradingDao())
    10: {
    11: }
    So in cases like this, where you have constructors with non-compile-time-constant defaults, default parameters can't help you and cross-calling constructors is one of your best options.

    Summary

    When you have just one constructor doing the job of initializing the class, you can consolidate all your logic and error handling in one place, thus ensuring that your behavior will be consistent across the constructor calls. This makes the code more maintainable and even easier to read. There will be some cases where cross-calling constructors may be sub-optimal or not possible (if, for example, the overloaded constructors take completely different types and are not just "defaulting" behaviors). You can also use default parameters, of course, but default parameter behavior in a class hierarchy can be problematic (default values are not inherited and in fact can differ), so sometimes multiple constructors are actually preferable. Regardless of why you may need to have multiple constructors, consider cross-calling where you can to reduce redundant logic and clean up the code.
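    Since the summary warns that default-parameter values are not inherited, here is a short sketch of that pitfall (type names are made up for illustration; the behavior is standard C# overload resolution, where defaults are bound at the call site from the static type):

        using System;

        class BaseLogger
        {
            // The default value is substituted at compile time from the
            // *static* type of the reference, not the runtime type.
            public virtual void Log(string message = "base default")
            {
                Console.WriteLine(message);
            }
        }

        class DerivedLogger : BaseLogger
        {
            public override void Log(string message = "derived default")
            {
                Console.WriteLine(message);
            }
        }

        class Program
        {
            static void Main()
            {
                var d = new DerivedLogger();
                BaseLogger b = d;   // same object, different static type
                d.Log();            // prints "derived default"
                b.Log();            // prints "base default" -- same object, different default!
            }
        }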

    Read the article

  • Ubuntu 14.04 triple screen, third screen black and X cursor

    - by Horse
    I am having some issues getting my third screen working properly. I had triple screens working fine on 12.04, using 2 nvidia cards. Did a fresh install of 14.04 and having no end of problems getting it working. It either will just be disabled, or the screen is black with the cursor as an X. I can only enable it from the nvidia server settings tool. The Ubuntu native display settings won't even show the 3rd screen. I tried copying the xorg.conf from my old install, which upon restarting X worked fine on the login screen, but then it just sat there after I logged in and didn’t do anything (mouse was still working). I am using gnome-session-fallback instead of unity if that makes any difference. Still having these issues if I try unity though. How do I get my 3rd screen working and displaying a desktop? Here is my current xorg.conf # nvidia-settings: X configuration file generated by nvidia-settings # nvidia-settings: version 331.20 (buildd@roseapple) Mon Feb 3 15:07:22 UTC 2014 Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" 0 0 Screen 1 "Screen1" RightOf "Screen0" InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" Option "Xinerama" "0" EndSection Section "Files" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" # generated from default Identifier "Keyboard0" Driver "kbd" EndSection Section "Monitor" # HorizSync source: edid, VertRefresh source: edid Identifier "Monitor0" VendorName "Unknown" ModelName "DELL 1907FP" HorizSync 30.0 - 81.0 VertRefresh 56.0 - 76.0 Option "DPMS" EndSection Section "Monitor" # HorizSync source: unknown, VertRefresh source: unknown Identifier "Monitor1" VendorName "Unknown" ModelName "DELL 1907FP" HorizSync 0.0 - 0.0 VertRefresh 0.0 Option "DPMS" EndSection Section "Monitor" Identifier "Monitor2" VendorName "Unknown" ModelName "DELL 1907FP" HorizSync 30.0 - 81.0 VertRefresh 56.0 - 76.0 EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GTX 580" BusID "PCI:1:0:0" EndSection Section "Device" Identifier "Device1" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GT 520" BusID "PCI:3:0:0" EndSection Section "Device" Identifier "Device2" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GT 520" BusID "PCI:3:0:0" EndSection Section "Screen" # Removed Option "metamodes" "DVI-I-2: nvidia-auto-select +0+0, DVI-I-3: nvidia-auto-select +1280+0" # Removed Option "metamodes" "DVI-I-2: nvidia-auto-select +0+0" # Removed Option "SLI" "Off" # Removed Option "BaseMosaic" "off" # Removed Option "metamodes" "GPU-109d4eb8-b40b-87d7-3fd6-95830d1d5215.DVI-I-2: nvidia-auto-select +0+0, GPU-109d4eb8-b40b-87d7-3fd6-95830d1d5215.DVI-I-3: nvidia-auto-select +1280+0, GPU-82e96214-175e-5e6a-218c-5bdbc948daf2.DVI-I-1: nvidia-auto-select +3200+0" # Removed Option "SLI" "off" # Removed Option "BaseMosaic" "on" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 Option "Stereo" "0" Option "nvidiaXineramaInfoOrder" "DFP-0" Option "metamodes" "DVI-I-2: nvidia-auto-select +0+0, DVI-I-3: nvidia-auto-select +1280+0" Option "SLI" "Off" Option "MultiGPU" "Off" Option "BaseMosaic" "off" SubSection "Display" Depth 24 EndSubSection EndSection Section "Screen" # Removed Option "metamodes" "nvidia-auto-select +0+0" # Removed Option "metamodes" "DVI-I-3: 
nvidia-auto-select +0+0" Identifier "Screen1" Device "Device1" Monitor "Monitor1" DefaultDepth 24 Option "Stereo" "0" Option "metamodes" "nvidia-auto-select +0+0" Option "SLI" "Off" Option "MultiGPU" "Off" Option "BaseMosaic" "off" SubSection "Display" Depth 24 EndSubSection EndSection Section "Screen" Identifier "Screen2" Device "Device2" Monitor "Monitor2" DefaultDepth 24 Option "Stereo" "0" Option "metamodes" "nvidia-auto-select +0+0" Option "SLI" "Off" Option "MultiGPU" "Off" Option "BaseMosaic" "off" SubSection "Display" Depth 24 EndSubSection EndSection Here is my old 'working in 12.04' xorg.conf # nvidia-settings: X configuration file generated by nvidia-settings # nvidia-settings: version 310.19 ([email protected]) Thu Nov 8 02:08:55 PST 2012 Section "ServerLayout" # Removed Option "Xinerama" "0" Identifier "Layout0" Screen 0 "Screen0" 0 0 Screen 1 "Screen1" RightOf "Screen2" Screen 2 "Screen2" RightOf "Screen0" InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" Option "Xinerama" "1" EndSection Section "Files" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" # generated from default Identifier "Keyboard0" Driver "kbd" EndSection Section "Monitor" # HorizSync source: edid, VertRefresh source: edid Identifier "Monitor0" VendorName "Unknown" ModelName "DELL 1907FP" HorizSync 30.0 - 81.0 VertRefresh 56.0 - 76.0 Option "DPMS" EndSection Section "Monitor" # HorizSync source: unknown, VertRefresh source: unknown Identifier "Monitor1" VendorName "Unknown" ModelName "DELL 1907FP" HorizSync 30.0 - 81.0 VertRefresh 56.0 - 76.0 Option "DPMS" EndSection Section "Monitor" Identifier "Monitor2" VendorName "Unknown" ModelName "Apple Cinema HD" HorizSync 74.0 - 74.6 VertRefresh 59.9 - 60.0 EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GTX 580" BusID "PCI:1:0:0" Screen 0 EndSection Section "Device" Identifier "Device1" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GT 520" BusID "PCI:3:0:0" EndSection Section "Device" Identifier "Device2" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GTX 580" BusID "PCI:1:0:0" Screen 1 EndSection Section "Screen" # Removed Option "metamodes" "DFP-0: nvidia-auto-select +0+0, DFP-2: nvidia-auto-select +1280+0" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 Option "Stereo" "0" Option "metamodes" "DFP-0: nvidia-auto-select +0+0; DFP-0: nvidia-auto-select +0+0; DFP-0: 1280x1024_75 +0+0; DFP-0: 1152x864 +0+0; DFP-0: 1024x768 +0+0; DFP-0: 1024x768_60 +0+0; DFP-0: 800x600 +0+0; DFP-0: 800x600_60 +0+0; DFP-0: 640x480 +0+0; DFP-0: 640x480_60 +0+0; DFP-0: nvidia-auto-select +0+0 {viewportout=1280x720+0+152}" SubSection "Display" Depth 24 EndSubSection EndSection Section "Screen" Identifier "Screen1" Device "Device1" Monitor "Monitor1" DefaultDepth 24 Option "Stereo" "0" Option "metamodes" "nvidia-auto-select +0+0" SubSection "Display" Depth 24 EndSubSection EndSection Section "Screen" Identifier "Screen2" Device "Device2" Monitor "Monitor2" DefaultDepth 24 Option "Stereo" "0" Option "metamodes" "DFP-2: nvidia-auto-select +0+0" SubSection "Display" Depth 24 EndSubSection EndSection Section "Extensions" Option "Composite" "Disable" EndSection

    Read the article

  • MPI Cluster Debugger launch integration in VS2010

    Let's assume that you have all the HPC bits installed and that you have existing MPI code (or you created a "Hello World" project using the MPI project template). Of course, you create a single MPI application and at runtime it will correspond to multiple processes (of the same app) launched on multiple nodes (i.e. machines) on the cluster. So how do you debug such a situation by simply hitting the familiar "F5" keystroke (i.e. Debug - Start Debugging)?

    WATCH IT INSTEAD OF READING ABOUT IT
    If you can't bear to read through all the details below, just watch this 19-minute screencast explaining this VS2010 feature. Alternatively, or even additionally, keep on reading.

    REQUIREMENT
    When you debug an MPI application, you would want the copying of resources from your client machine (where Visual Studio is installed) to each compute node (where Windows HPC Server is installed) to take place automatically for you. 'Resources' in the previous sentence includes your application binary, plus any binary or data dependencies it may have, plus PDBs if needed, plus the debug CRT of the correct bitness, plus msvsmon for remote debugging to work. You would also want, after copying is complete, to have your app and msvsmon launched and attached so that you can hit breakpoints back in Visual Studio on your client machine. All these things that you would want are delivered in VS2010.

    STEPS TO F5
    1. In your MPI project where you have placed a breakpoint, go to Project Properties - Configuration Properties - Debugging. Ensure the "Debugger to launch" combo box value is set to MPI Cluster Debugger.
    2. There are a whole bunch of properties here and typically you can ignore all of them except one: Run Environment. By default it is set to run 1 process on your local machine, and if you change the number after that to, for example, 4, it will launch 4 processes of your app on your local machine. You want this to run on your cluster though, so go to the dropdown arrow at the end of the Run Environment cell and open it to expose the "Edit Hpc node" menu, which opens the Node Selector dialog. In this dialog you can enter (or pick from a list) the cluster head node name and then the number of processes you want to execute on the cluster, and then hit OK and… you are done.
    3. Press F5 and watch your breakpoint get hit (after giving it some time for copying, remote execution, attachment and symbol resolution to take place).

    GOING DEEPER
    In the MPI Cluster Debugger project properties above, you can see many additional properties besides the Run Environment. They are all optional, but you may want to understand them in order to fine-tune your cluster debugging. Read all about each one of these on the MSDN page Configuration Properties for the MPI Cluster Debugger. In the Node Selector dialog above you can see more options than just the Head Node name and Number of Processes to run. They should be self-explanatory, but I also cover them in depth in my screencast, showing you an example of why you would choose to schedule processes per core versus per node. You can also read about these options on MSDN as part of the page How to: Configure and Launch the MPI Cluster Debugger. To read through an example that touches on MPI project creation, project properties, node selector, and also usage of MPI with OpenMP plus MPI with PPL, read the MSDN page Walkthrough: Launching the MPI Cluster Debugger in Visual Studio 2010.

    Happy MPI debugging! Comments about this post welcome at the original blog.

    Read the article

  • Essbase 11.1.2 - AgtSvrConnections Essbase Configuration Setting

    - by Ann Donahue
    AgtSvrConnections is a documented Essbase configuration setting used in conjunction with the AgentThreads and ServerThreads settings. Basically, when a user logs into Essbase, the agent threads connect to the ESSBASE process, then AgtSvrConnections connects the ESSBASE process to the ESSSVR application process, and the server threads are then used for end-user activities. In Essbase 11.1.2, the default value of the AgtSvrConnections setting was changed to 5. In previous Essbase releases, the AgtSvrConnections default value is 1.

    It is recommended that tuning the AgtSvrConnections setting be done incrementally, by 1 or 2 at most, based on the number of concurrent Set Active/Clear Active calls. In the Essbase DBA Guide and Technical Reference, the maximum recommended setting is to not exceed what is set for AgentThreads; however, we have found that most customers do not need to exceed a setting of 10. In general, it is OK to set AgtSvrConnections close to the AgentThreads setting, but there have been customers that needed an AgentThreads setting greater than 10, and we have found that an AgtSvrConnections setting higher than 5-10 could have a negative impact on Essbase due to too many TCP ports being used unnecessarily. As with all Essbase.cfg settings, it is best to set values to what is needed based on process load and not arbitrarily set them to high values.

    In order to monitor and tune the AgtSvrConnections setting, monitor the application log for logins and Set Active/Clear Active messages. If there are a lot of logins and Set Active/Clear Active messages happening in a short period of time, making it appear that the login is taking longer, incrementally increase the AgtSvrConnections setting by 1 or 2, which can then help with login speed. The login performance tolerance is different from one customer environment to another since there are other factors that can impact this performance, e.g. network latency.

    What is happening in Essbase when a user logs in:
    1. ESSBASE issues a Set Active to the ESSSVR process. Each application has its own ESSSVR process.
    2. Set Active then calls MultipleAsyncLogout and waits on the pipe connection.
    3. MultipleAsyncLogout goes back to ESSBASE.
    4. ESSBASE then needs to send the logout back to the ESSSVR process.

    When the AgtSvrConnections setting needs to be increased from the default of 5, it is because Essbase cannot find a connection, since the previous connections are used by ESSBASE-ESSSVR. In this example, we may want to increase AgtSvrConnections from 5 to 7 to improve the login performance. Again, it is best to set Essbase settings to what is needed based on process load and not arbitrarily set them to high values. In general, stress or performance testing environments using automated tools may need higher-than-normal settings. This is because automated processes run at high speeds for logging in and logging out. Typically, in a real-life production environment, the settings are much closer to default values.
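    For reference, all three settings are plain entries in the essbase.cfg file; a minimal sketch with purely illustrative values (tune against your own login load as described above, not to these numbers):

        AGENTTHREADS 10
        SERVERTHREADS 20
        AGTSVRCONNECTIONS 7

    A restart of the Essbase agent is generally required for essbase.cfg changes to take effect.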

    Read the article

  • Problem displaying tiles using tiled map loader with SFML

    - by user1905192
    I've been searching fruitlessly for what I did wrong for the past couple of days and I was wondering if anyone here could help me. My program loads my tile map, but then crashes with an assertion error. The program breaks at this line: spacing = atoi(tilesetElement->Attribute("spacing")); Here's my main game.cpp file. #include "stdafx.h" #include "Game.h" #include "Ball.h" #include "level.h" using namespace std; Game::Game() { gameState=NotStarted; ball.setPosition(500,500); level.LoadFromFile("meow.tmx"); } void Game::Start() { if (gameState==NotStarted) { window.create(sf::VideoMode(1024,768,320),"game"); view.reset(sf::FloatRect(0,0,1000,1000));//ball drawn at 500,500 level.SetDrawingBounds(sf::FloatRect(view.getCenter().x-view.getSize().x/2,view.getCenter().y-view.getSize().y/2,view.getSize().x, view.getSize().y)); window.setView(view); gameState=Playing; } while(gameState!=Exiting) { GameLoop(); } window.close(); } void Game::GameLoop() { sf::Event CurrentEvent; window.pollEvent(CurrentEvent); switch(gameState) { case Playing: { window.clear(sf::Color::White); window.setView(view); if (CurrentEvent.type==sf::Event::Closed) { gameState=Exiting; } if ( !ball.IsFalling() &&!ball.IsJumping() &&sf::Keyboard::isKeyPressed(sf::Keyboard::Space)) { ball.setJState(); } ball.Update(view); level.Draw(window); ball.Draw(window); window.display(); break; } } } And here's the file where the error happens: /********************************************************************* Quinn Schwab 16/08/2010 SFML Tiled Map Loader The zlib license has been used to make this software fully compatible with SFML. See http://www.sfml-dev.org/license.php This software is provided 'as-is', without any express or implied warranty. In no event will the authors be held liable for any damages arising from the use of this software. Permission is granted to anyone to use this software for any purpose, including commercial applications, and to alter it and redistribute it freely, subject to the following restrictions: 1. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required. 2. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software. 3. This notice may not be removed or altered from any source distribution. *********************************************************************/ #include "level.h" #include <iostream> #include "tinyxml.h" #include <fstream> int Object::GetPropertyInt(std::string name) { int i; i = atoi(properties[name].c_str()); return i; } float Object::GetPropertyFloat(std::string name) { float f; f = strtod(properties[name].c_str(), NULL); return f; } std::string Object::GetPropertyString(std::string name) { return properties[name]; } Level::Level() { //ctor } Level::~Level() { //dtor } using namespace std; bool Level::LoadFromFile(std::string filename) { TiXmlDocument levelFile(filename.c_str()); if (!levelFile.LoadFile()) { std::cout << "Loading level \"" << filename << "\" failed." << std::endl; return false; } //Map element. This is the root element for the whole file. TiXmlElement *map; map = levelFile.FirstChildElement("map"); //Set up misc map properties. 
width = atoi(map->Attribute("width")); height = atoi(map->Attribute("height")); tileWidth = atoi(map->Attribute("tilewidth")); tileHeight = atoi(map->Attribute("tileheight")); //Tileset stuff TiXmlElement *tilesetElement; tilesetElement = map->FirstChildElement("tileset"); firstTileID = atoi(tilesetElement->Attribute("firstgid")); spacing = atoi(tilesetElement->Attribute("spacing")); margin = atoi(tilesetElement->Attribute("margin")); //Tileset image TiXmlElement *image; image = tilesetElement->FirstChildElement("image"); std::string imagepath = image->Attribute("source"); if (!tilesetImage.loadFromFile(imagepath))//Load the tileset image { std::cout << "Failed to load tile sheet." << std::endl; return false; } tilesetImage.createMaskFromColor(sf::Color(255, 0, 255)); tilesetTexture.loadFromImage(tilesetImage); tilesetTexture.setSmooth(false); //Columns and rows (of tileset image) int columns = tilesetTexture.getSize().x / tileWidth; int rows = tilesetTexture.getSize().y / tileHeight; std::vector <sf::Rect<int> > subRects;//container of subrects (to divide the tilesheet image up) //tiles/subrects are counted from 0, left to right, top to bottom for (int y = 0; y < rows; y++) { for (int x = 0; x < columns; x++) { sf::Rect <int> rect; rect.top = y * tileHeight; rect.height = y * tileHeight + tileHeight; rect.left = x * tileWidth; rect.width = x * tileWidth + tileWidth; subRects.push_back(rect); } } //Layers TiXmlElement *layerElement; layerElement = map->FirstChildElement("layer"); while (layerElement) { Layer layer; if (layerElement->Attribute("opacity") != NULL)//check if opacity attribute exists { float opacity = strtod(layerElement->Attribute("opacity"), NULL);//convert the (string) opacity element to float layer.opacity = 255 * opacity; } else { layer.opacity = 255;//if the attribute doesnt exist, default to full opacity } //Tiles TiXmlElement *layerDataElement; layerDataElement = layerElement->FirstChildElement("data"); if (layerDataElement == NULL) { std::cout << "Bad map. No layer information found." << std::endl; } TiXmlElement *tileElement; tileElement = layerDataElement->FirstChildElement("tile"); if (tileElement == NULL) { std::cout << "Bad map. No tile information found." << std::endl; return false; } int x = 0; int y = 0; while (tileElement) { int tileGID = atoi(tileElement->Attribute("gid")); int subRectToUse = tileGID - firstTileID;//Work out the subrect ID to 'chop up' the tilesheet image. if (subRectToUse >= 0)//we only need to (and only can) create a sprite/tile if there is one to display { sf::Sprite sprite;//sprite for the tile sprite.setTexture(tilesetTexture); sprite.setTextureRect(subRects[subRectToUse]); sprite.setPosition(x * tileWidth, y * tileHeight); sprite.setColor(sf::Color(255, 255, 255, layer.opacity));//Set opacity of the tile. 
//add tile to layer layer.tiles.push_back(sprite); } tileElement = tileElement->NextSiblingElement("tile"); //increment x, y x++; if (x >= width)//if x has "hit" the end (right) of the map, reset it to the start (left) { x = 0; y++; if (y >= height) { y = 0; } } } layers.push_back(layer); layerElement = layerElement->NextSiblingElement("layer"); } //Objects TiXmlElement *objectGroupElement; if (map->FirstChildElement("objectgroup") != NULL)//Check that there is atleast one object layer { objectGroupElement = map->FirstChildElement("objectgroup"); while (objectGroupElement)//loop through object layers { TiXmlElement *objectElement; objectElement = objectGroupElement->FirstChildElement("object"); while (objectElement)//loop through objects { std::string objectType; if (objectElement->Attribute("type") != NULL) { objectType = objectElement->Attribute("type"); } std::string objectName; if (objectElement->Attribute("name") != NULL) { objectName = objectElement->Attribute("name"); } int x = atoi(objectElement->Attribute("x")); int y = atoi(objectElement->Attribute("y")); int width = atoi(objectElement->Attribute("width")); int height = atoi(objectElement->Attribute("height")); Object object; object.name = objectName; object.type = objectType; sf::Rect <int> objectRect; objectRect.top = y; objectRect.left = x; objectRect.height = y + height; objectRect.width = x + width; if (objectType == "solid") { solidObjects.push_back(objectRect); } object.rect = objectRect; TiXmlElement *properties; properties = objectElement->FirstChildElement("properties"); if (properties != NULL) { TiXmlElement *prop; prop = properties->FirstChildElement("property"); if (prop != NULL) { while(prop) { std::string propertyName = prop->Attribute("name"); std::string propertyValue = prop->Attribute("value"); object.properties[propertyName] = propertyValue; prop = prop->NextSiblingElement("property"); } } } objects.push_back(object); objectElement = objectElement->NextSiblingElement("object"); } objectGroupElement = objectGroupElement->NextSiblingElement("objectgroup"); } } else { std::cout << "No object layers found..." << std::endl; } return true; } Object Level::GetObject(std::string name) { for (int i = 0; i < objects.size(); i++) { if (objects[i].name == name) { return objects[i]; } } } void Level::SetDrawingBounds(sf::Rect<float> bounds) { drawingBounds = bounds; cout<<tileHeight; //Adjust the rect so that tiles are drawn just off screen, so you don't see them disappearing. drawingBounds.top -= tileHeight; drawingBounds.left -= tileWidth; drawingBounds.width += tileWidth; drawingBounds.height += tileHeight; } void Level::Draw(sf::RenderWindow &window) { for (int layer = 0; layer < layers.size(); layer++) { for (int tile = 0; tile < layers[layer].tiles.size(); tile++) { if (drawingBounds.contains(layers[layer].tiles[tile].getPosition().x, layers[layer].tiles[tile].getPosition().y)) { window.draw(layers[layer].tiles[tile]); } } } } I really hope that one of you can help me and I'm sorry if I've made any formatting issues. Thanks!

    Read the article

  • How to edit semi-plaintext file and maintaining character structure?

    - by Raul
    I am using software (Groupmail from Infacta) that saves some settings in a specific semi-plaintext file using exact / absolute %PATH% values. This is a really bad idea because you can't move the USER folder, or, as in my case, it stops working after migrating to a new computer with a different language. For example: C:\Documents and Settings\USER\Local Settings\Application Data\Infacta is different than C:\Documents and Settings\USER\Configuración local\Datos de programa\Infacta. Obviously, the software does not work well. I tried to solve this by finding and replacing the new path with Notepad++. While the Groupmail software loads well and shows settings correctly, it fails when trying to save data to that file. I guess this is because the length or number of replaced characters is different, which also corrupts the file. Please could you help me edit this file while maintaining file integrity / structure?

    Read the article

  • ASP.NET: Using pickup directory for outgoing e-mails

    - by DigiMortal
    Sending e-mails out from web applications is a very common task. When we are working on or testing our systems with real e-mail addresses, we don't want recipients to receive e-mails (especially if we are using some subset of real data). In this posting I will show you how to make the ASP.NET SMTP client write e-mails to disk instead of sending them out.

    SMTP settings for web application

    I have seen many times code where all SMTP information is kept in app settings just to read it in code and hand it to the SMTP client. It is not necessary, because we can define all these settings under the system.net => mailSettings node. If you are using web.config to keep SMTP settings, then all you have to do in your code is create an SmtpClient with the empty constructor. var smtpClient = new SmtpClient(); The empty constructor means that all settings are read from the web.config file.

    What is a pickup directory?

    If you want to drastically raise the e-mail throughput of your SMTP server, then it is not a very wise plan to communicate with it using the SMTP protocol. It only adds overhead to your network and SMTP server. Clients making connections and sending messages out is also overhead we can avoid. If clients write their e-mails to some folder that the SMTP server can access, then e-mail forwarding is the SMTP server's only resource-eager task. File operations are way faster than communication over the SMTP protocol. The directory where clients write their e-mails as files is called the pickup directory. For example, Exchange Server has support for pickup directories. And as there are applications with a lot of users who want e-mail notifications, the .NET SMTP client supports writing e-mails to a pickup directory instead of sending them out.

    How to configure ASP.NET SMTP to use a pickup directory?

    Let's say, it is more than easy. It is very easy. This is all you need.

    <system.net>
      <mailSettings>
        <smtp deliveryMethod="SpecifiedPickupDirectory">
          <specifiedPickupDirectory pickupDirectoryLocation="c:\temp\maildrop\"/>
        </smtp>
      </mailSettings>
    </system.net>

    Now make sure you don't miss some points:
    - The pickup directory must physically exist, because it is not created automatically.
    - IIS (or Cassini) must have write permissions to the pickup directory.
    - Go through your code and look for hardcoded SMTP settings. Also take a look at all places in your code where you send out e-mails, to make sure no custom settings are used for SMTP.
    Also don't forget that your mails will now be written to the pickup directory and are not sent out to recipients anymore.

    Advanced scenario: configuring SMTP client in code

    In some advanced scenarios you may need to support multiple SMTP servers. If configuration is dynamic or it is not kept in web.config, you need to initialize your SmtpClient in code. This is all you need to do.

    var smtpClient = new SmtpClient();
    smtpClient.DeliveryMethod = SmtpDeliveryMethod.SpecifiedPickupDirectory;
    smtpClient.PickupDirectoryLocation = pickupFolder;

    Easy, isn't it? I like it when advanced scenarios end up with simple and elegant solutions and not with rocket science.

    Note for IIS SMTP service

    The SMTP service of IIS is also able to use a pickup directory. If you have set up IIS with the SMTP service, you can configure your ASP.NET application to use the IIS pickup folder. In this case you have to use the following setting for the delivery method. SmtpDeliveryMethod.PickupDirectoryFromIis You can set this setting also in the web.config file.
<system.net>   <mailSettings>     <smtp deliveryMethod="PickupDirectoryFromIis" />   </mailSettings> </system.net> Conclusion Who was still using different methods to avoid sending e-mails out in development or testing environment can now remove all the bad code from application and live on mail settings of ASP.NET. It is easy to configure and you have less code to support e-mails when you use built-in e-mail features wisely.
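To see the pickup directory in action, here is a minimal sketch of a sending routine (the addresses and the message text are made up for illustration; all delivery settings come from the web.config shown above):

using System.Net.Mail;

// Hypothetical helper: drop a test message into the pickup directory.
public static void SendTestMail()
{
    // No host, port or credentials in code; everything is read
    // from the system.net/mailSettings section of web.config.
    var smtpClient = new SmtpClient();

    using (var message = new MailMessage(
        "noreply@example.com",       // made-up sender
        "customer@example.com",      // made-up recipient
        "Order confirmation",
        "Thank you for your order!"))
    {
        // With deliveryMethod="SpecifiedPickupDirectory" this call writes
        // an .eml file to c:\temp\maildrop\ instead of contacting a server.
        smtpClient.Send(message);
    }
}

The .eml files that appear in the folder can be opened in Outlook or Windows Live Mail if you want to inspect exactly what your application would have sent.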

    Read the article

  • If ssh connection fails using SOCKS then what? Automate switch to no proxy?

    - by Benjamin Jones
Right now I am using plink in a batch routine that reconnects to my SSH server if I lose the connection. I use my plink connection & SOCKS proxy (Firefox) to forward all my browser traffic. Works great EXCEPT for one thing! If I can't get to my ssh server for some ODD reason, I have to go to options in Firefox and revert my settings back to No Proxy. It can be done, but it's annoying! So how would I keep my SOCKS proxy connection in Firefox, but automatically switch to the auto-detect proxy/no proxy settings in Firefox if I can't connect to my SSH server? I would think that I could use the Firefox command line arguments and a batch routine to do so, but I do not believe this is possible. I do see via this link where the proxy settings are stored, but does that mean I have to change the proxy settings depending on my scenario above within the .js file? http://stackoverflow.com/questions/843340/firefox-proxy-settings-via-command-line
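For reference, the setting the question is pointing at is the network.proxy.type preference in prefs.js, so a batch routine could in principle rewrite that one line. This is only a sketch of the relevant values; Firefox rewrites prefs.js on exit, so edits made while the browser is running are lost:

user_pref("network.proxy.type", 1);   // 1 = manual configuration (the SOCKS proxy is used)
user_pref("network.proxy.type", 0);   // 0 = no proxy, direct connection
user_pref("network.proxy.type", 4);   // 4 = auto-detect proxy settings for this network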

    Read the article

  • Create Auto Customization Criteria OAF Search Page

    - by PRajkumar
1. Create a New Workspace and Project
Right click Workspaces and click Create New OA Workspace and name it PRajkumarCustSearch. A new OA Project will also be created automatically. Name the project CustSearchDemo and the package prajkumar.oracle.apps.fnd.custsearchdemo

2. Create a New Application Module (AM)
Right click on CustSearchDemo > New > ADF Business Components > Application Module
Name – CustSearchAM
Package – prajkumar.oracle.apps.fnd.custsearchdemo.server

3. Enable Passivation for the Root UI Application Module (AM)
Right click on CustSearchAM > Edit CustSearchAM > Custom Properties
Name – RETENTION_LEVEL
Value – MANAGE_STATE
Click Add > Apply > OK

4. Create a test table and insert some data in it (for testing purposes)

CREATE TABLE xx_custsearch_demo (
  -- ---------------------
  -- Data Columns
  -- ---------------------
  column1              VARCHAR2(100),
  column2              VARCHAR2(100),
  column3              VARCHAR2(100),
  column4              VARCHAR2(100),
  -- ---------------------
  -- Who Columns
  -- ---------------------
  last_update_date     DATE      NOT NULL,
  last_updated_by      NUMBER    NOT NULL,
  creation_date        DATE      NOT NULL,
  created_by           NUMBER    NOT NULL,
  last_update_login    NUMBER
);

INSERT INTO xx_custsearch_demo VALUES('v1','v2','v3','v4',SYSDATE,0,SYSDATE,0,0);
INSERT INTO xx_custsearch_demo VALUES('v1','v3','v4','v5',SYSDATE,0,SYSDATE,0,0);
INSERT INTO xx_custsearch_demo VALUES('v2','v3','v4','v5',SYSDATE,0,SYSDATE,0,0);
INSERT INTO xx_custsearch_demo VALUES('v3','v4','v5','v6',SYSDATE,0,SYSDATE,0,0);

Now we have 4 records in our custom table.

5. Create a New Entity Object (EO)
Right click on CustSearchDemo > New > ADF Business Components > Entity Object
Name – CustSearchEO
Package – prajkumar.oracle.apps.fnd.custsearchdemo.schema.server
Database Objects – XX_CUSTSEARCH_DEMO
Note – By default ROWID will be the primary key if we do not make any column the primary key.
Check the Accessors, Create Method, Validation Method and Remove Method.

6. Create a New View Object (VO)
Right click on CustSearchDemo > New > ADF Business Components > View Object
Name – CustSearchVO
Package – prajkumar.oracle.apps.fnd.custsearchdemo.server
In Step 2, on the Entity page, select CustSearchEO and shuttle it to the selected list.
In Step 3, in the Attributes window, select the columns Column1, Column2, Column3 and Column4, and shuttle them to the selected list.
On the Java page, deselect Generate Java File for View Object Class: CustSearchVOImpl and select Generate Java File for View Row Class: CustSearchVORowImpl.

7. Add Your View Object to the Root UI Application Module
Right click on CustSearchAM > Application Modules > Data Model
Select CustSearchVO and shuttle it to the Data Model list.

8. Create a New Page
Right click on CustSearchDemo > New > Web Tier > OA Components > Page
Name – CustSearchPG
Package – prajkumar.oracle.apps.fnd.custsearchdemo.webui

9. Select CustSearchPG and go to the structure pane, where a default region has been created.

10. Select region1 and set the following properties:
ID – PageLayoutRN
Region Style – PageLayout
AM Definition – prajkumar.oracle.apps.fnd.custsearchdemo.server.CustSearchAM
Window Title – AutoCustomize Search Page
Title – AutoCustomization Search Page
Auto Footer – True

11. Add a Query Bean to Your Page
Right click on PageLayoutRN > New > Region
Select the new region region1 and set the following properties:
ID – QueryRN
Region Style – query
Construction Mode – autoCustomizationCriteria
Include Simple Panel – False
Include Views Panel – False
Include Advanced Panel – False

12. Create a New Region of style table
Right click on QueryRN > New > Region Using Wizard
Application Module – prajkumar.oracle.apps.fnd.custsearchdemo.server.CustSearchAM
Available View Usages – CustSearchVO1
In Step 2, in Region Properties, set the following properties:
Region ID – CustSearchTable
Region Style – Table
In Step 3, in View Attributes, shuttle all the items (Column1, Column2, Column3, Column4) from Available View Attributes to Selected View Attributes.
In Step 4, on the Region Items page, set the style to “messageStyledText” for all items.

13. Select CustSearchTable in the structure panel and set the Width property to 100%.

14. Include the Simple Search Panel
Right click on QueryRN > New > simpleSearchPanel
Automatically region2 (header region) and region1 (messageComponentLayout region) are created.
Set the following properties for region2:
ID – SimpleSearchHeader
Text – Simple Search

15. Now right click on the messageComponentLayout region, create two message text input beans and set the below properties on each:
Message TextInputBean1
ID – SearchColumn1
Search Allowed – True
Data Type – VARCHAR2
Maximum Length – 100
CSS Class – OraFieldText
Prompt – Column1
Message TextInputBean2
ID – SearchColumn2
Search Allowed – True
Data Type – VARCHAR2
Maximum Length – 100
CSS Class – OraFieldText
Prompt – Column2

16. Now right click on the query components and create simple search mappings. SimpleSearchMappings and QueryCriteriaMap1 are then created automatically.

17. Now select QueryCriteriaMap1 and set the below properties:
ID – SearchColumn1Map
Search Item – SearchColumn1
Result Item – Column1

18. Now again right click on simpleSearchMappings > New > queryCriteriaMap, and set the below properties:
ID – SearchColumn2Map
Search Item – SearchColumn2
Result Item – Column2

19. Congratulations, you have successfully finished the auto customization search page. Run your CustSearchPG page and test your work.

    Read the article

  • What is the most performant CSS property for transitioning an element?

    - by Ian Kuca
I'm wondering whether there is a performance difference between using different CSS properties to translate an element. Some properties fit different situations better than others. You can translate an element with the following properties: transform, top/left/right/bottom, and margin-top/left/right/bottom. In the case where you do not use the transition CSS property for the translation but instead drive it with some form of timer (setTimeout, requestAnimationFrame or setImmediate) or raw events, which is the most performant, i.e. which is going to make for higher FPS rates?

    Read the article

  • A strong component keeps everything together

    - by Justin Paul-Oracle
Most of the time when you implement a WebCenter Content based system, you require some sort of customization. Sometimes these customizations need a Java class or two, or libraries (for example, the JavaMail API), or database objects (like new tables, views, indexes, etc). I have seen that libraries and database objects are usually put in place using manual steps. This means that the library jar files are copied to one of the common classes directories (set in the Content CLASSPATH variable) and/or the database scripts are executed manually. I have also seen people place the custom Java classes in the common classes directory. While this may seem like an easy solution, think about a scenario where you need to disable or uninstall the component, or where you have to upgrade or migrate the system. You have to keep these manual steps documented and execute them every time you encounter the above scenarios. It is very common for some of these manual steps to be missed when you have multiple teams and people working on the system. Here are a few points to ponder:

Place all your custom Java classes within your component. Create a new directory, say ${COMPONENT_DIR}/classes, and place your code there. You can choose to bundle all your classes into a jar or you can place the entire class directory structure. Add a path entry to the Build Settings so that it is bundled with the component when you build it. You also need to update the Custom Class Path and the Custom Class Path Load Order under the Advanced Build Settings. This will ensure that the system CLASSPATH is updated to include this new directory.

Create a new component for any new library that you want to add. Add the appropriate path entries to the Build Settings so that it is bundled with the component when you build it. You also need to update the Custom Class Path, Custom Class Path Load Order and/or the Custom Library Path under the Advanced Build Settings. Enter a comma separated list of features that this component will provide. When you create other components that will use the features exposed by this component, make sure that you specify a dependency on this library component by specifying the comma separated list of features in the Advanced Build Settings.

The component wizard allows you to create custom install/uninstall Java code. The wizard will create an install filter class when you check the “Has Install” checkbox on the “Install/Uninstall Settings” tab. Consider using this filter class to create database objects when you install the component and drop the objects when you uninstall the component. If you do a lot of custom component development, consider creating an install/uninstall Java class which can execute queries defined within the component.

To sum up, whenever you write a new custom component, make sure that you bundle everything within the component.

    Read the article

  • How to configure JAAS on JBoss?

    - by AntonioP
Hey, I'm having a problem with the "Failed to load users/passwords/role files: java.io.IOException: No properties file: users.properties or defaults: defaultUsers.properties found" error from JBoss. No matter what I change in conf/login-config.xml, I always get that same error. Turning on TRACE for org.jboss.security shows that it logs "Security domain: myapp" followed by "findResource: null" and the above error. I've tried adding a users.properties to my .war (WEB-INF/classes/users.properties) to no avail. Why is JBoss behaving like this? What is this JAAS and why does JBoss need it? What does it require, and where do I put which files? If it's possible, I'll remove all of this org.jboss.security.auth.spi.UsersRolesLoginModule completely, just let me use my app. Thanks
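For context, UsersRolesLoginModule looks for two properties files on the classpath of the security domain. A minimal sketch of the setup it expects (the myapp domain name mirrors the question; the user and role entries are invented, and exact paths can vary between JBoss versions):

# users.properties  (format: username=password)
admin=secret

# roles.properties  (format: username=role1,role2)
admin=Manager,Admin

<!-- login-config.xml: point the login module at explicit files -->
<application-policy name="myapp">
  <authentication>
    <login-module code="org.jboss.security.auth.spi.UsersRolesLoginModule" flag="required">
      <module-option name="usersProperties">props/myapp-users.properties</module-option>
      <module-option name="rolesProperties">props/myapp-roles.properties</module-option>
    </login-module>
  </authentication>
</application-policy>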

    Read the article

  • The Best Articles for Backing Up and Syncing Your Data

    - by Lori Kaufman
World Backup Day is March 31st and we decided to provide you with some useful information to make backing up your data easier. We’ve published articles about backing up various types of data and settings, both offline and online. There are all kinds of settings on your computer to back up in addition to your personal data, such as Wi-Fi passwords, drivers, and settings for programs like web browsers, Office, and Windows Live Writer. There are also many tools available to help you keep your data and settings backed up.

    Read the article

  • MPI Cluster Debugger launch integration in VS2010

Let's assume that you have all the HPC bits installed and that you have existing MPI code (or you created a "Hello World" project using the MPI project template). Of course, you create a single MPI application, and at runtime it will correspond to multiple processes (of the same app) launched on multiple nodes (i.e. machines) on the cluster. So how do you debug such a situation by simply hitting the familiar "F5" keystroke (i.e. Debug - Start Debugging)?

WATCH IT INSTEAD OF READING ABOUT IT
If you can't bear to read through all the details below, just watch this 19-minute screencast explaining this VS2010 feature. Alternatively, or even additionally, keep on reading.

REQUIREMENT
When you debug an MPI application, you would want the copying of resources from your client machine (where Visual Studio is installed) to each compute node (where Windows HPC Server is installed) to take place automatically for you. 'Resources' in the previous sentence includes your application binary, plus any binary or data dependencies it may have, plus PDBs if needed, plus the debug CRT of the correct bitness, plus msvsmon for remote debugging to work. You would also want, after copying is complete, to have your app and msvsmon launched and attached so that you can hit breakpoints back in Visual Studio on your client machine. All these things that you would want are delivered in VS2010.

STEPS TO F5
1. In your MPI project where you have placed a breakpoint, go to Project Properties - Configuration Properties - Debugging. Ensure the "Debugger to launch" combo box value is set to MPI Cluster Debugger.
2. There are a whole bunch of properties here and typically you can ignore all of them except one: Run Environment. By default it is set to run 1 process on your local machine, and if you change the number after that to, for example, 4, it will launch 4 processes of your app on your local machine. You want this to run on your cluster though, so go to the dropdown arrow at the end of the Run Environment cell and open it to expose the "Edit Hpc node" menu, which opens the Node Selector dialog. In this dialog you can enter (or pick from a list) the cluster head node name and then the number of processes you want to execute on the cluster, and then hit OK and… you are done.
3. Press F5 and watch your breakpoint get hit (after giving it some time for copying, remote execution, attachment and symbol resolution to take place).

GOING DEEPER
In the MPI Cluster Debugger project properties above, you can see many additional properties besides the Run Environment. They are all optional, but you may want to understand them in order to fine-tune your cluster debugging. Read all about each one of them on the MSDN page Configuration Properties for the MPI Cluster Debugger. In the Node Selector dialog above you can see more options than just the head node name and number of processes to run. They should be self-explanatory, but I also cover them in depth in my screencast, showing an example of why you would choose to schedule processes per core versus per node. You can also read about these options on MSDN as part of the page How to: Configure and Launch the MPI Cluster Debugger. To read through an example that touches on MPI project creation, project properties, node selector, and also usage of MPI with OpenMP plus MPI with PPL, read the MSDN page Walkthrough: Launching the MPI Cluster Debugger in Visual Studio 2010.

Happy MPI debugging! Comments about this post welcome at the original blog.
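For orientation, the app being debugged is just an ordinary MPI program. Here is a minimal sketch in C# (assuming the MPI.NET bindings purely for illustration; the stock VS2010 template generates equivalent C/C++ code) where each launched process reports its rank:

using System;
using MPI;

class Program
{
    static void Main(string[] args)
    {
        // Initializes MPI on entry and finalizes it when disposed.
        using (new MPI.Environment(ref args))
        {
            Intracommunicator comm = Communicator.world;
            // A natural breakpoint line: with "Number of processes" set
            // to 4, four processes stop here, each with its own Rank.
            Console.WriteLine("Hello from rank {0} of {1}",
                              comm.Rank, comm.Size);
        }
    }
}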

    Read the article

  • Make your TSQL easier to read during a presentation

    - by fatherjack
SQL Server Management Studio 2012 has some neat settings that you can use to make your presentations at a SQL event better for the attendees, if you are willing to spend a few minutes changing some settings. Historically, I have been reluctant to make changes to my SSMS settings, as it is such a tedious process and it's not 100% clear that what you think you are changing is actually what gets changed. With SSMS 2012 this has become a lot easier and a...(read more)

    Read the article

  • php.ini modifications has no effect on joomla

    - by user488736
I have a netbook with a recently installed Ubuntu Server 12.04 64-bit. I have (only) installed Joomla on it (version 2.5) and now I am trying to tweak the php.ini settings so I can install JomSocial onto Joomla. I have changed these settings: upload_max_filesize = 2M post_max_size = 8M to this: upload_max_filesize = 20M post_max_size = 20M in all php.ini files on my system. That's six files, found using 'locate': /etc/php5/apache2/php.ini /etc/php5/cli/php.ini /usr/share/doc/php5-common/examples/php.ini-development /usr/share/php5/php.ini-production /usr/share/php5/php.ini-production-dist /usr/share/php5/php.ini-production.cli I have double-checked that these files actually contain the new settings. Still, Joomla does not recognize the new settings, stating that upload_max_filesize is still only 2M and post_max_size is still only 8M. The same is true after a complete restart of the server. What am I doing wrong here?

    Read the article

  • Screenshot Tour: Ubuntu Touch 14.04 on a Nexus 7

    - by Chris Hoffman
Ubuntu 14.04 LTS will “form the basis of the first commercially available Ubuntu tablets,” according to Canonical. We installed Ubuntu Touch 14.04 on our own hardware to see what those tablets will be like. We don’t recommend installing this yourself, as it’s still not a polished, complete experience. We’re using “Ubuntu Touch” as shorthand here — apparently this project’s new name is “Ubuntu For Devices.”

The Welcome Screen
Ubuntu’s touch interface is all about edge swipes and hidden interface elements — it has a lot in common with Windows 8, actually. You’ll see the welcome screen when you boot up or unlock a Ubuntu tablet or phone. If you have new emails, text messages, or other information, it will appear on this screen along with the time and date. If you don’t, you’ll just see a message saying “No data sources available.”

The Dash
Swipe in from the right edge of the welcome screen to access the Dash, or home screen. This is actually very similar to the Dash on Ubuntu’s Unity desktop. This isn’t a surprise — Canonical wants the desktop and touch versions of Ubuntu to use the same code. In the future, the desktop and touch versions of Ubuntu will use the same version of Unity, and Unity will adjust its interface depending on what type of device you’re using. Here you’ll find apps you have installed and apps available to install. Tap an installed app to launch it, or tap an available app to view more details and install it. Tap the My apps or Available headings to view a complete list of apps you have installed or apps you can install. Tap the Search box at the top of the screen to start searching — this is how you’d search for new apps to install. As you’d expect, a touch keyboard appears when you tap in the Search field or any other text field. The launcher isn’t just for apps. Tap the Apps heading at the top of the screen and you’ll see hidden text appear — Music, Video, and Scopes. This hidden navigation is used throughout Ubuntu’s different apps and can be easy to miss at first. Swipe to the left or right to move between these screens. These screens are also similar to the different panels in Unity on the desktop. The Scopes section allows you to view different search scopes you have installed. These are used to search different sources when you start a search from the Dash. Search from the Music or Videos scopes to search for local media files on your device or media files online. For example, searching in the Music scope will show you music results from Grooveshark by default.

Navigating Ubuntu Touch
Swipe in from the left edge anywhere on the system to open the launcher, a bar with shortcuts to apps. This launcher is very similar to the launcher on the left of Ubuntu’s Unity desktop — that’s the whole idea, after all. Once you’ve opened an app, you can leave it by swiping in from the left. The launcher will appear — keep moving your finger towards the right edge of the screen. This will swipe the current app off the screen, taking you back to the Dash. Once back on the Dash, you’ll see your open apps represented as thumbnails under Recent. Tap a thumbnail here to go back to a running app. To remove an app from here, long-press it and tap the X button that appears. Swipe in from the right edge in any app to quickly switch between recent apps. Swipe in from the right edge and hold your finger down to reveal an application switcher that shows all your recent apps and lets you choose between them. Swipe down from the top of the screen to access the indicator panel. Here you can connect to Wi-Fi networks, view upcoming events, control GPS and Bluetooth hardware, adjust sound settings, see incoming messages, and more. This panel is for quick access to hardware settings and notifications, just like the indicators on Ubuntu’s Unity desktop.

The Apps
System settings not included in the pull-down panel are available in the System Settings app. To access it, tap My apps on the Dash and tap System Settings, search for the System Settings app, or open the launcher bar and tap the settings icon. The settings here are a bit limited compared to other operating systems, but many of the important options are available. You can add Evernote, Ubuntu One, Twitter, Facebook, and Google accounts from here. A free Ubuntu One account is mandatory for downloading and updating apps. A Google account can be used to sync contacts and calendar events. Some apps on Ubuntu are native apps, while many are web apps. For example, the Twitter, Gmail, Amazon, Facebook, and eBay apps included by default are all web apps that open each service’s mobile website as an app. Other applications, such as the Weather, Calendar, Dialer, Calculator, and Notes apps, are native applications. Theoretically, both types of apps will be able to scale to different screen resolutions. Ubuntu Touch and Ubuntu desktop may one day share the same apps, which will adapt to different display sizes and input methods. Like Windows 8 apps, Ubuntu apps hide interface elements by default, providing you with a full-screen view of the content. Swipe up from the bottom of an app’s screen to view its interface elements. For example, swiping up from the bottom of the Web Browser app reveals Back, Forward, and Refresh buttons, along with an address bar and an Activity button so you can view current and recent web pages. Swipe up even more from the bottom and you’ll see a button hovering in the middle of the app. Tap the button and you’ll see many more settings. This is an overflow area for application options and functions that can’t fit on the navigation bar. The Terminal app has a few surprising Easter eggs in this panel, including a “Hack into the NSA” option. Tap it and the following text will appear in the terminal:

That’s not very nice, now tracing your location . . . . . . . . . . . .
Trace failed
You got away this time, but don’t try again.

We’d expect to see such Easter eggs disappear before Ubuntu Touch actually ships on real devices. Ubuntu Touch has come a long way, but it’s still not something you want to use today. For example, it doesn’t even have a built-in email client — you’ll have to use your email service’s mobile website. Few apps are available, and many of the ones that exist are just mobile websites. It’s not a polished operating system intended for normal users yet — it’s more of a preview for developers and device manufacturers. If you really want to try it yourself, you can install it on a Wi-Fi Nexus 7 (2013), Nexus 10, or Nexus 4 device. Follow Ubuntu’s installation instructions here.

    Read the article

< Previous Page | 106 107 108 109 110 111 112 113 114 115 116 117  | Next Page >