Search Results

  • C# Multiple Property Sort

    - by Ben Griswold
    As you can see in the snippet below, sorting is easy with LINQ. Simply provide your OrderBy criteria and you're done. If you want a secondary sort field, add a ThenBy expression to the chain. Want a third-level sort? Just add another ThenBy along with another sort expression.

        var projects = new List<Project>
            {
                new Project {Description = "A", ProjectStatusTypeId = 1},
                new Project {Description = "B", ProjectStatusTypeId = 3},
                new Project {Description = "C", ProjectStatusTypeId = 3},
                new Project {Description = "C", ProjectStatusTypeId = 2},
                new Project {Description = "E", ProjectStatusTypeId = 1},
                new Project {Description = "A", ProjectStatusTypeId = 2},
                new Project {Description = "C", ProjectStatusTypeId = 4},
                new Project {Description = "A", ProjectStatusTypeId = 3}
            };

        projects = projects
            .OrderBy(x => x.Description)
            .ThenBy(x => x.ProjectStatusTypeId)
            .ToList();

        foreach (var project in projects)
        {
            Console.Out.WriteLine("{0} {1}", project.Description,
                project.ProjectStatusTypeId);
        }

    LINQ offers a great sort solution most of the time, but what if you want or need to do it the old-fashioned way?

        projects.Sort((x, y) =>
            Comparer<String>.Default
                .Compare(x.Description, y.Description) != 0 ?
            Comparer<String>.Default
                .Compare(x.Description, y.Description) :
            Comparer<Int32>.Default
                .Compare(x.ProjectStatusTypeId, y.ProjectStatusTypeId));

        foreach (var project in projects)
        {
            Console.Out.WriteLine("{0} {1}", project.Description,
                project.ProjectStatusTypeId);
        }

    It's not that bad, right? Just for fun, let's add some additional logic to our sort. Let's say we wanted our secondary sort to be based on the name associated with the ProjectStatusTypeId.

        projects.Sort((x, y) =>
            Comparer<String>.Default
                .Compare(x.Description, y.Description) != 0 ?
            Comparer<String>.Default
                .Compare(x.Description, y.Description) :
            Comparer<String>.Default
                .Compare(GetProjectStatusTypeName(x.ProjectStatusTypeId),
                    GetProjectStatusTypeName(y.ProjectStatusTypeId)));

        foreach (var project in projects)
        {
            Console.Out.WriteLine("{0} {1}", project.Description,
                GetProjectStatusTypeName(project.ProjectStatusTypeId));
        }

    The comparer will now consider the result of GetProjectStatusTypeName and order the list accordingly. Of course, you can take this same approach with LINQ as well.
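    For completeness, here is a minimal sketch of that last sort expressed with LINQ, reusing the projects list from the snippets above and the same GetProjectStatusTypeName helper the article refers to (the helper itself is not shown in the excerpt):

        // Primary sort on Description, secondary sort on the status name
        // rather than the raw id. Assumes GetProjectStatusTypeName maps an
        // id to its display name, as in the Sort-based example above.
        projects = projects
            .OrderBy(x => x.Description)
            .ThenBy(x => GetProjectStatusTypeName(x.ProjectStatusTypeId))
            .ToList();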

    Read the article

  • Enable Full Screen Mode in Media Center Without Trapping the Mouse

    - by DigitalGeekery
    If you have a dual monitor setup and use Windows Media Center, you're probably aware that when WMC is in full screen mode, it traps the mouse so you can't work on a second monitor. Here we look at how to solve that annoyance.

    The Maxifier is an application that allows you to open Media Center in full screen mode without restricting the mouse. It relieves the annoyance of WMC capturing your mouse on a dual monitor setup. Note: If you don't have two monitors attached, most of The Maxifier's functions won't work.

    Installation and Use
    Download, extract, and install The Maxifier (see the download link below). The Maxifier runs minimized in the system tray and you access the options by right-clicking on the icon. If Media Center is not already open, you can choose Start Media Center to start WMC on the main start screen, or choose one of the other selections to open another area of Media Center. By default, Maxifier opens Media Center in full screen mode on the secondary monitor. When Media Center is open in full screen mode, you'll notice you can now freely move your mouse around your multi-monitor setup.

    When Media Center is open, you'll see five additional options. The Fit Screen options simply fit Media Center to the full screen but still show the Windows borders. The Full Screen options put WMC in full screen mode.

    The Maxifier Options allow you to choose from the various startup options. Selecting Watch for Media Center starting will prompt Maxifier to open WMC to the main start page in full screen mode on the secondary monitor automatically, even if you open Media Center without using The Maxifier. (You may need to restart for this to take effect.) If you have more than two monitors, you can define on which monitor to open Media Center, and which monitor you consider to be the main screen.

    You can also define a number of Hotkeys in The Maxifier settings. First, select the Enable Hotkeys checkbox. To create a Hotkey, click in the text field and then press the keys to use as the Hotkey. To remove a Hotkey, click in the field and press the Delete key.

    Conclusion
    The Maxifier is a simple program that enables Media Center users to take full advantage of a multi-monitor workspace. It works with both Vista and Windows 7. Version 1.4 is a stable application for Vista, and Version 1.5b is a beta application for Windows 7. Looking for more Media Center tips and tweaks? Check out some startup customizations for Windows 7 Media Center, how to automatically mount and view ISOs in WMC, and how to add background images and themes to Windows 7 Media Center.

    Download The Maxifier

    Read the article

  • SharePoint 2010 Sandboxed solution SPGridView

    - by Steve Clements
    If you didn't know, you probably will soon: the SPGridView is not available in sandboxed solutions. To be honest there doesn't seem to be a great deal of information out there about the whys and what nots; basically it is not part of the sandbox SharePoint API. Of course the error message from SharePoint is about as useful as a punch in the face:

        An unexpected error has been encountered in this Web Part.
        Error: A Web Part or Web Form Control on this Page cannot be displayed or imported.
        You don't have Add and Customize Pages permissions required to perform this action

    That's if you have debug=true set; if not, you get the classic "This webpart cannot be added"!! Love that one! But with a little digging you should find something like this:

        [TypeLoadException: Could not load type Microsoft.SharePoint.WebControls.SPGridView from assembly
        'Microsoft.SharePoint, Version=14.900.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c'.]

    Depending on what you want to do with the SPGridView, this may not help at all. But I'm looking to inherit the theme of the site and style it accordingly. After spending a bit of time with Chrome's Firebug I was able to get the required CSS classes. I created my own class inheriting from GridView (note the lack of a preceding SP!) and simply set the styles in there.

    Inherit from the standard GridView:

        public class PSGridView : GridView

    Set the styles in the constructor:

        public PSGridView()
        {
            this.CellPadding = 2;
            this.CellSpacing = 0;
            this.GridLines = GridLines.None;
            this.CssClass = "ms-listviewtable";
            this.Attributes.Add("style", "border-bottom-style: none; border-right-style: none; width: 100%; border-collapse: collapse; border-top-style: none; border-left-style: none;");
            this.HeaderStyle.CssClass = "ms-viewheadertr";
            this.RowStyle.CssClass = "ms-itmhover";
            this.SelectedRowStyle.CssClass = "s4-itm-selected";
            this.RowStyle.Height = new Unit(25);
        }

    Then, as you can't override the Columns property setter, add a custom method to add the column and set the style (a minimal sketch of such a method follows this excerpt). And that should be enough to get a nicely styled grid without the need for the SPGridView. But seriously... get the SPGridView in the sandbox!!!

    Technorati Tags: Sharepoint 2010, SPGridView, Sandbox Solutions, Sandbox
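    The excerpt does not include the custom column helper it mentions. Here is a minimal sketch of what such a method might look like, assuming a simple BoundField-based column; the method name, parameters, and item CSS class are illustrative assumptions, not the author's original code:

        // Hypothetical helper for PSGridView: adds a bound column and applies a
        // SharePoint-style cell class, since the Columns property setter itself
        // cannot be overridden (the Columns collection is still writable).
        public void AddBoundColumn(string dataField, string headerText)
        {
            var field = new BoundField
            {
                DataField = dataField,
                HeaderText = headerText
            };
            field.ItemStyle.CssClass = "ms-vb2";   // assumed cell style class
            this.Columns.Add(field);
        }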

    Read the article

  • Anatomy of a serialization killer

    - by Brian Donahue
    As I had mentioned last month, I have been working on a project to create an easy-to-use managed debugger. It's still an internal tool that we use at Red Gate as part of product support to analyze application errors on customers' computers, and as such, it should be easy to use and not require installation. Since the project has got rather large and important, I had decided to use SmartAssembly to protect all of my hard work. This was trivial for the most part, but the loading and saving of results was broken by SA after applying the obfuscation, rendering the loading and saving of XML results basically useless, although the merging and error reporting was an absolute godsend and definitely worth the price of admission. (Well, I get my Red Gate licenses for free, but you know what I mean!)

    My initial reaction was to simply exclude the serializable results class and all of its members from obfuscation, and that was just dandy, but a few weeks on I decided to look into exactly why serialization had broken and change the code to work with SA, so I could write any new code to be compatible with SmartAssembly and save myself some additional testing and changes to the SA project.

    In simple terms, SA does all that it can to prevent serialization problems. For instance, it will not obfuscate public members of a DLL and it will exclude any types with the Serializable attribute from obfuscation. This prevents public members and properties from being made private and having their names changed. If the serialization is done inside the executable, however, public members have their access changed to private and are renamed. That was my first problem, because my types were in the executable assembly and implemented ISerializable, but did not have the Serializable attribute set on them!

        public class RedFlagResults : ISerializable
        {
        }

    The second problem was caused by the pruning feature. Although RedFlagResults had public members, they were not true properties, and the class used the GetObjectData() method of ISerializable to serialize the members. For that reason, SA could not exclude these members from pruning and further broke the serialization.

        public class RedFlagResults : ISerializable
        {
            public List<RedFlag.Exception> Exceptions;

            #region ISerializable Members

            public void GetObjectData(SerializationInfo info, StreamingContext context)
            {
                info.AddValue("Exceptions", Exceptions);
            }

            #endregion
        }

    So to fix this, it was necessary to make Exceptions a proper property by implementing get and set on it. I also added the Serializable attribute so that I don't have to exclude the class from obfuscation in the SA project any more. The DoNotPrune attribute means I do not need to exclude the class from pruning.

        [Serializable, SmartAssembly.Attributes.DoNotPrune]
        public class RedFlagResults
        {
            public List<RedFlag.Exception> Exceptions { get; set; }
        }

    Similarly, the Exception class gets the Serializable and DoNotPrune attributes applied so all of its properties are excluded from obfuscation.

    Now my project has some protection from prying eyes by scrambling up the code so it's harder to reverse-engineer, without breaking anything. SmartAssembly has also provided the benefit of merging, so that the end-user doesn't need to extract all of the DLL files needed by RedFlag into a directory, and the tool can be run directly from the .zip archive.
When an error occurs (hey, I'm only human!), an exception report can be sent to me so I can see what went wrong without having to, er, debug the debugger.
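    As an aside, the ISerializable snippet above only covers the save side; the ISerializable pattern also expects a serialization constructor for loading. A minimal, self-contained sketch of the full pattern is shown below. It is not from the original article; the class name and the simplified element type are illustrative assumptions, with the member name kept from the excerpt:

        using System;
        using System.Collections.Generic;
        using System.Runtime.Serialization;

        public class RedFlagResultsSketch : ISerializable
        {
            public List<string> Exceptions;   // element type simplified for the sketch

            public RedFlagResultsSketch() { }

            // Deserialization constructor required by the ISerializable pattern.
            protected RedFlagResultsSketch(SerializationInfo info, StreamingContext context)
            {
                Exceptions = (List<string>)info.GetValue("Exceptions", typeof(List<string>));
            }

            // Save side, as in the excerpt.
            public void GetObjectData(SerializationInfo info, StreamingContext context)
            {
                info.AddValue("Exceptions", Exceptions);
            }
        }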

    Read the article

  • Oracle Data Integration 12c: Simplified, Future-Ready, High-Performance Solutions

    - by Thanos Terentes Printzios
    In today's data-driven business environment, organizations need to cost-effectively manage the ever-growing streams of information originating both inside and outside the firewall and address emerging deployment styles like cloud, big data analytics, and real-time replication. Oracle Data Integration delivers pervasive and continuous access to timely and trusted data across heterogeneous systems.

    Oracle is enhancing its data integration offering by announcing the general availability of the 12c release for the key data integration products: Oracle Data Integrator 12c and Oracle GoldenGate 12c, delivering simplified, high-performance solutions for cloud, big data analytics, and real-time replication. The new release delivers extreme performance, increases IT productivity, and simplifies deployment, while helping IT organizations keep pace with new data-oriented technology trends including cloud computing, big data analytics, and real-time business intelligence. With the 12c release, Oracle becomes the new leader in data integration and replication technologies, as no other vendor offers such a complete set of data integration capabilities for pervasive, continuous access to trusted data across Oracle platforms as well as third-party systems and applications.

    Oracle Data Integration 12c addresses data-driven organizations' critical and evolving data integration requirements under three key themes:
      - Future-Ready Solutions: supporting current and emerging initiatives
      - Extreme Performance: even higher performance than ever before
      - Fast Time-to-Value: higher IT productivity and simplified solutions

    With the new capabilities in Oracle Data Integrator 12c, customers can benefit from:
      - Superior developer productivity, ease of use, and rapid time-to-market with the new flow-based mapping model, reusable mappings, and step-by-step debugger.
      - Increased performance when executing data integration processes due to improved parallelism.
      - Improved productivity and monitoring via tighter integration with Oracle GoldenGate 12c and Oracle Enterprise Manager 12c.
      - Improved interoperability with Oracle Warehouse Builder, which enables faster and easier migration to Oracle Data Integrator's strategic data integration offering.
      - Faster implementation of business analytics through Oracle Data Integrator pre-integrated with Oracle BI Applications' latest release. Oracle Data Integrator also integrates simply and easily with Oracle Business Analytics tools, including OBI-EE and Oracle Hyperion.
      - Support for loading and transforming big and fast data, enabled by integration with big data technologies: Hadoop, Hive, HDFS, and Oracle Big Data Appliance.

    Only Oracle GoldenGate provides best-of-breed real-time replication of data in heterogeneous data environments. With the new capabilities in Oracle GoldenGate 12c, customers can benefit from:
      - Simplified setup and management of Oracle GoldenGate 12c when using multiple database delivery processes, via a new Coordinated Delivery feature for non-Oracle databases.
      - Expanded heterogeneity through added support for the latest versions of major databases such as Sybase ASE v15.7, MySQL NDB Clusters 7.2, and MySQL 5.6, as well as integration with Oracle Coherence.
      - Enhanced high availability and data protection via integration with Oracle Data Guard and Fast-Start Failover.
      - Enhanced security for credentials and encryption keys using Oracle Wallet.
      - Real-time replication for databases hosted on public cloud environments supported by third-party clouds.

    Tight integration between Oracle Data Integrator 12c, Oracle GoldenGate 12c, and other Oracle technologies, such as Oracle Database 12c and Oracle Applications, provides a number of benefits for organizations:
      - Tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c enables developers to leverage Oracle GoldenGate's low-overhead, real-time change data capture completely within the Oracle Data Integrator Studio without additional training.
      - Integration with Oracle Database 12c provides a strong foundation for seamless private cloud deployments.
      - Real-time data for reporting, zero-downtime migration, and improved performance and availability for Oracle Applications, such as Oracle E-Business Suite and ATG Web Commerce.
      - Oracle's data integration offering is optimized for Oracle Engineered Systems and is an integral part of Oracle's fast data, real-time analytics strategy on Oracle Exadata Database Machine and Oracle Exalytics In-Memory Machine.

    Oracle Data Integrator 12c and Oracle GoldenGate 12c differentiate the new data integration offering with these many new features. This is just a quick glimpse into Oracle Data Integrator 12c and Oracle GoldenGate 12c. Find out much more about the new release in the video webcast "Introducing 12c for Oracle Data Integration", where customer and partner speakers, including SolarWorld, BT, and Rittman Mead, will join us in launching the new release.

    Resource Kits
      - Meet Oracle Data Integration 12c
      - Discover what's new with Oracle GoldenGate 12c

    The Oracle EMEA DIS (Data Integration Solutions) Partner Community is available for all your questions, while additional partner-focused webcasts will be made available through our blog here, so stay connected. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com

    Stay Connected
    Oracle Newsletters

    Read the article

  • Microsoft ReportViewer SetParameters continuous refresh issue

    - by Ilya Verbitskiy
    Originally posted on: http://geekswithblogs.net/ilich/archive/2013/10/16/microsoft-reportviewer-setparameters-continuous-refresh-issue.aspx

    I am a big fan of using ASP.NET MVC for building web applications. It allows us to create simple, robust and testable solutions. However, the .NET world is not perfect. There is a ton of code written in ASP.NET Web Forms. You cannot simply ignore it, even if you want to. Sometimes ASP.NET Web Forms controls bring us non-obvious issues. A good example is the Microsoft ReportViewer control. I have an example for you.

        <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>
        <%@ Register Assembly="Microsoft.ReportViewer.WebForms, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91" Namespace="Microsoft.Reporting.WebForms" TagPrefix="rsweb" %>

        <!DOCTYPE html>
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title>Report Viewer Continuous Refresh Issue Example</title>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <asp:ScriptManager runat="server"></asp:ScriptManager>
                <rsweb:ReportViewer ID="_reportViewer" runat="server" Width="100%" Height="100%"></rsweb:ReportViewer>
            </div>
            </form>
        </body>
        </html>

    The back-end code is simple as well. I want to show a report with some parameters to a user.

        protected void Page_Load(object sender, EventArgs e)
        {
            _reportViewer.ProcessingMode = ProcessingMode.Remote;
            _reportViewer.ShowParameterPrompts = false;

            var serverReport = _reportViewer.ServerReport;
            serverReport.ReportServerUrl = new Uri("http://localhost/ReportServer_SQLEXPRESS");
            serverReport.ReportPath = "/Reports/TestReport";

            var reportParameter1 = new ReportParameter("Parameter1");
            reportParameter1.Values.Add("Hello World!");

            var reportParameter2 = new ReportParameter("Parameter2");
            reportParameter2.Values.Add("10/16/2013");

            var reportParameter3 = new ReportParameter("Parameter3");
            reportParameter3.Values.Add("10");

            serverReport.SetParameters(new[] { reportParameter1, reportParameter2, reportParameter3 });
        }

    I set ShowParameterPrompts to false because I do not want the user to refine the search. It looks good until you run the report: the report will refresh itself all the time. The problem is caused by the ServerReport.SetParameters call in Page_Load. The method causes the ReportViewer control to execute the report on the next post-back, which is why the page has continuous post-backs. The fix is very simple: do nothing if the Page_Load method is executed during a post-back.

        protected void Page_Load(object sender, EventArgs e)
        {
            if (IsPostBack)
            {
                return;
            }

            _reportViewer.ProcessingMode = ProcessingMode.Remote;
            _reportViewer.ShowParameterPrompts = false;

            var serverReport = _reportViewer.ServerReport;
            serverReport.ReportServerUrl = new Uri("http://localhost/ReportServer_SQLEXPRESS");
            serverReport.ReportPath = "/Reports/TestReport";

            var reportParameter1 = new ReportParameter("Parameter1");
            reportParameter1.Values.Add("Hello World!");

            var reportParameter2 = new ReportParameter("Parameter2");
            reportParameter2.Values.Add("10/16/2013");

            var reportParameter3 = new ReportParameter("Parameter3");
            reportParameter3.Values.Add("10");

            serverReport.SetParameters(new[] { reportParameter1, reportParameter2, reportParameter3 });
        }

    You can download the sample code from GitHub: https://github.com/ilich/Examples/tree/master/ReportViewerContinuousRefresh

    Read the article

  • A temporary disagreement

    - by Tony Davis
    Last month, Phil Factor caused a furore amongst some MVPs with an article that attempted to offer simple advice to developers regarding the use of table variables, versus local and global temporary tables, in their code. Phil makes clear that table variables do come with some fairly major limitations (no distribution statistics, no parallel query plans for queries that modify table variables), but goes on to suggest that for reasonably small-scale strategic uses, and with a bit of due care and testing, table variables are a "good thing". Not everyone shares his opinion; in fact, I imagine he was rather aghast to learn that there were those who felt his article was akin to pulling the pin out of a grenade and tossing it into the database; table variables should be avoided in almost all cases, according to their advice, in favour of temp tables. In other words, a fairly major feature of SQL Server should be more-or-less 'off limits' to developers.

    The problem with temp tables is that, because they are scoped either to the procedure or the connection, it is easy to allow them to hang around for too long, eating up precious memory and bulking up the shared tempdb database. Unless they are explicitly dropped, global temporary tables, and local temporary tables created within a connection rather than within a stored procedure, will persist until the connection is closed or, with connection pooling, until the connection is reused. It's also quite common with ASP.NET applications to have connection leaks, as Bill Vaughn explains in his chapter in the "SQL Server Deep Dives" book, meaning that the web page exits without closing the connection object, maybe due to an error condition. This will then hang around in the heap for what might be hours before being picked up by the garbage collector.

    Table variables are much safer in this regard, since they are batch-scoped and so are cleaned up automatically once the batch is complete, which also means that they are intuitive for the developer to use because they conform to scoping rules that are closer to those in procedural code. On the surface, then, they seem an ideal way to deal with issues related to tempdb memory hogging. So why did Phil qualify his recommendation to use table variables?

    This is another of those cases where, like scalar UDFs and table-valued multi-statement UDFs, developers can sometimes get into trouble with a relatively benign-looking feature, due to the way it's been implemented in SQL Server. Once again the biggest problem is how they are handled internally by the SQL Server query optimizer, which can make very poor choices for JOIN orders and so on in the absence of statistics, especially when joining to tables with highly-skewed data. The resulting execution plans can be horrible, as will be the resulting performance. If the JOIN is to a large table, that will hurt. Ideally, Microsoft would simply fix this issue so that developers can't get burned in this way; table variables have been around since SQL Server 2000, so Microsoft has had a bit of time to get it right. As I commented in regard to UDFs, when developers discover issues like this with standard features, the database becomes an alien planet to them, where death lurks around each corner, and they continue to avoid these "killer" features years after the problems have eventually been resolved. In the meantime, what is the right approach?
Is it to say "hammers can kill, don't ever use hammers", or is it to try to explain, as Phil's article and follow-up blog post have tried to do, what the feature was intended for, why care must be applied in its use, and so enable developers to make properly-informed decisions, without requiring them to delve deep into the inner workings of SQL Server? Cheers, Tony.

    Read the article

  • Functional programming constructs in non-functional programming languages

    - by Giorgio
    This question has been going through my mind quite a lot lately, and since I haven't found a convincing answer to it I would like to know if other users of this site have thought about it as well.

    In recent years, even though OOP is still the most popular programming paradigm, functional programming is getting a lot of attention. I have only used OOP languages for my work (C++ and Java) but I am trying to learn some FP in my free time because I find it very interesting. So, I started learning Haskell three years ago and Scala last summer. I plan to learn some SML and Caml as well, and to brush up my (little) knowledge of Scheme. Well, a lot of plans (too ambitious?) but I hope I will find the time to learn at least the basics of FP during the next few years. What is important for me is how functional programming works and how / whether I can use it for some real projects. I have already developed small tools in Haskell.

    In spite of my strong interest in FP, I find it difficult to understand why functional programming constructs are being added to languages like C#, Java, C++, and so on. As a developer interested in FP, I find it more natural to use, say, Scala or Haskell, instead of waiting for the next FP feature to be added to my favourite non-FP language. In other words, why would I want to have only some FP in my originally non-FP language instead of looking for a language that has better support for FP? For example, why should I be interested in having lambdas in Java if I can switch to Scala, where I have many more FP concepts and can access all the Java libraries anyway? Similarly: why do some FP in C# instead of using F# (to my knowledge, C# and F# can work together)?

    Java was designed to be OO. Fine: I can do OOP in Java (and I would like to keep using Java in that way). Scala was designed to support OOP + FP. Fine: I can use a mix of OOP and FP in Scala. Haskell was designed for FP: I can do FP in Haskell. If I need to tune the performance of a particular module, I can interface Haskell with some external routines in C. But why would I want to do OOP with just some basic FP in Java?

    So, my main point is: why are non-functional programming languages being extended with some functional concepts? Shouldn't it be more comfortable (interesting, exciting, productive) to program in a language that has been designed from the very beginning to be functional or multi-paradigm? Don't different programming paradigms integrate better in a language that was designed for them than in a language in which one paradigm was only added later?

    The first explanation I could think of is that, since FP is a new concept (it isn't new at all, but it is new for many developers), it needs to be introduced gradually. However, I remember my switch from imperative to OOP: when I started to program in C++ (coming from Pascal and C) I really had to rethink the way in which I was coding, and to do it pretty fast. It was not gradual. So, this does not seem to be a good explanation to me.

    Also, I asked myself if my impression is just plainly wrong due to lack of knowledge. E.g., do C# and C++11 support FP as extensively as, say, Scala or Caml do? In that case, my question would simply be non-existent. Or can it be that many non-FP programmers are not really interested in using functional programming, but they find it practically convenient to adopt certain FP idioms in their non-FP language?

    IMPORTANT NOTE: Just in case (because I have seen several language wars on this site): I mentioned the languages I know best; this question is in no way meant to start comparisons between different programming languages to decide which is better / worse. Also, I am not interested in a comparison of OOP versus FP (pros and cons). The point I am interested in is to understand why FP is being introduced one bit at a time into existing languages that were not designed for it, even though there exist languages that were / are specifically designed to support FP.

    Read the article

  • Make Text and Images Easier to Read with the Windows 7 Magnifier

    - by DigitalGeekery
    Do you have impaired vision or find it difficult to read small print on your computer screen? Today, we'll take a closer look at how to magnify that hard-to-read content with the Magnifier in Windows 7.

    Magnifier was available in previous versions of Windows, but the Windows 7 version comes with some notable improvements. There are now three screen modes in Magnifier. Full Screen and Lens mode, however, require Windows Aero to be enabled. If your computer doesn't support Aero, or if you're not using an Aero theme, Magnifier will only work in Docked mode.

    Using Magnifier in Windows 7
    You can find the Magnifier by going to Start > All Programs > Accessories > Ease of Access > Magnifier. Alternately, you can type magnifier into the Search box in the Start Menu and hit Enter. On the Magnifier toolbar, choose your view mode by clicking Views and choosing from the available options. Clicking the plus (+) and minus (-) buttons will zoom in or zoom out. You can change the zoom in/out percentage by adjusting the slider bar. You can also enable color inversion and select tracking options. Click OK when finished to save your settings. After a brief period, the Magnifier toolbar will switch to a magnifying glass icon. Simply click the magnifying glass to display the Magnifier toolbar again.

    Docked Mode
    In Docked mode, a portion of the screen is magnified and docked at the top of the screen. The rest of your desktop will remain in its normal state. You can then control which area of the screen is magnified by moving your mouse.

    Full Screen Mode
    This magnifies your entire screen and follows your mouse as you move it around. If you lose track of where you are on the screen, use the Ctrl + Alt + Spacebar shortcut to preview where your mouse pointer is on the screen.

    Lens Mode
    The Lens screen mode is similar to holding a magnifying glass up to your screen. Lens mode magnifies the area around the mouse, and the magnified area moves around the screen with your mouse.

    Shortcut Keys
      - Windows key + (+) to zoom in
      - Windows key + (-) to zoom out
      - Windows key + ESC to exit
      - Ctrl + Alt + F – Full screen mode
      - Ctrl + Alt + L – Lens mode
      - Ctrl + Alt + D – Dock mode
      - Ctrl + Alt + R – Resize the lens
      - Ctrl + Alt + Spacebar – Preview full screen

    Conclusion
    Windows Magnifier is a nice little tool if you have impaired vision or just need to make items on the screen easier to read.

    Read the article

  • Unreal Tournament 3 vs UDK: What Should I Choose?

    - by Matt Christian
    Many people in the mod community were very excited to see the release of the Unreal Development Kit (UDK) a few months ago. Along with generating excitement in a very dedicated community, it also introduced many new modders to a flourishing area of indie development. However, since UDK is free, most beginners jump right into UDK, which is OK, though you might just benefit more from purchasing a shelf copy of Unreal Tournament 3.

    UDK
    UDK is a free full version of UnrealEd (the editor environment used to create games like Gears of War 1/2, Bioshock 1/2, and of course Unreal Tournament 3). The editor gives you all the features of the editor from the shelf copy of the game plus some refinements in many of the tools. (One of the first things you'll find about UnrealEd is that it's a collection of tools grouped into the same editor, so it really isn't a single 'tool'.)

    Interestingly enough, Epic is allowing you to sell any game made in UDK, with a few catches. First off, you must purchase a license for your game (which, I think, starts at approximately $99). Secondly, you must pay 25% of all profits for the first $5,000 of your game revenue to them (about $1,250). Finally, you cannot use any of the 'media' provided in UDK for your game. UDK provides sample meshes, textures, materials, sounds, and other sample pieces of media pulled (mostly) from Unreal Tournament 3.

    The final point here will really determine whether you should use UDK. There is a very small amount of media provided in UDK for someone to go in and begin creating levels without first developing their own meshes, textures, and other media. Sure, you can slap together a few unique levels, though you will end up finding yourself restricted to the same items over and over and over. This is absolutely how professional game development is; you are 'given' (typically licensed or built in-house) an engine/editor and you begin creating all the content for the game and placing it. UDK is aimed toward those who really want to build their game content from scratch with a currently existing engine. It is not suited for someone who would like to simply build levels and quick mods without learning external 3D programs and image editing software.

    Unreal Tournament 3
    Unless you have a serious grudge against FPSs or Epic, or your computer sucks, there really is no reason not to own this game for PC. You can pick it up on Steam or Amazon for around $20 brand new. Not only are you provided with a full single-player and multiplayer game, but you are given the entire UnrealEd 3.0, including all of the content used to build UT3. If you want to start building levels and mods quickly for UT3, you should absolutely pick up a shelf copy.

    However, as the off-the-shelf UT3 is a few years old now, the tools have not been updated for quite a while. Compared to UDK, the menus are more difficult to navigate through and take more time to get used to. Since UDK is updated almost every month, there are new inclusions to the editor that may not be in UT3 (including the future addition of 3D!). I haven't worked enough with shelf UT3 to see if there are more features in UDK or if they both feature the same stuff in different forms, but you should remember that the Unreal Engine 3.0 has undergone numerous upgrades between its launch and Gears of War 2 (in fact, Epic had a conference to show off what changed just between the Gears of War games).
Since UT3 has much more core content, someone who wants to focus on level editing or modding the core UT3 game may find their needs better suited with an off-the-shelf copy of UT3.  If that level designer has a team that is generating custom assets, they may be better off with UDK. The choice is now yours...

    Read the article

  • Enable Multi-Column Google Searches with a User Script

    - by Asian Angel
    Do you want to improve the search results view at Google and make better use of the webpage space? With a little user script magic you can make those search results look and fit better in your favorite browser.

    Note: This user script may conflict with the AutoPager extension if you have it installed in your favorite browser.

    Before
    Here is the standard single-column view of search results at Google. Not too bad, but the available space could certainly be better utilized. Note: For the purposes of our example we are using Google Chrome, but this user script can be easily added to other browsers.

    After
    If you have never installed a user script in Chrome before, it is just as simple as installing regular extensions from the official Google website. Here you can see the details for the user script we are installing; notice that you can view the source code if desired. To add the user script to Chrome click on "Install". Once you start the install process you will see an intermediary message in the lower left corner of your browser asking if you wish to continue. Click "Continue" to move to the next step in the install process. From this point on the install process is practically identical to official extensions. You can see the final confirmation window here… click "Install" to finish adding the user script to Chrome. As with regular extensions you will see a post-install message in the upper right corner.

    So, what does a user script look like in the "Extensions Page"? You can see the user script entry here… outside of an icon it looks rather identical to a normal extension.

    After refreshing the search page shown above we now have two columns of search results (the default setting). This looks much, much better than a single-column view and there is little to no page scrolling required now. To switch to a three-column view simply use the keyboard shortcut "Alt + 3". To return to a single-column view use "Alt + 1", and for the default two-column view use "Alt + 2". Three keyboard shortcuts for three different views… definitely a good thing. Note: On our test system we needed to use the number keys at the top of our keyboard to switch views… this is most likely the result of unique settings on our test system.

    Conclusion
    If you want a better viewing experience when conducting searches at Google, then this user script will make a very nice addition to your favorite browser. For those using Firefox, you can add user scripts with the Greasemonkey & Stylish extensions. Using the Opera browser? See our how-to for adding user scripts to Opera.

    Links
    Install the Multi-Column View of Google Search Results User Script

    Read the article

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving, so I'm republishing it here.

    How I Got 15x Improvement Without Really Trying
    John Feo, Sun Microsystems

    Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques.

    Introduction
    Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and ensure that monies supporting scientific research are used as effectively as possible.

    Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran.

    Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA.

    Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor to student or student to student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes.

    Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile.

    Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties. Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize.

    Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliations are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized.

    The study's results show that personal scientific codes are running many times slower than they should and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices, but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well-known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research.

        #   cache performance   redundant operations   loop structures   performance improvement
        1   x   x   15.5
        2   x   2.8
        3   x   x   2.5
        4   x   2.1
        5   x   x   2.0
        6   x   5.0
        7   x   5.8
        8   x   6.3
        9   2.2
        10  x   x   3.3
        Table 1 — Area of improvement and performance gains of 10 codes

    The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summarizes the work and suggests a possible solution to the issues raised.

    Optimizing cache performance
    Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do. When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse.
    Six out of the ten codes studied here benefited from such high-level optimizations.

    Array Accesses
    The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing:

        do I = 0, 1010, delta_x
          IM = I - delta_x
          IP = I + delta_x
          do J = 5, 995, delta_x
            JM = J - delta_x
            JP = J + delta_x
            T1 = CA1(IP, J) + CA1(I, JP)
            T2 = CA1(IM, J) + CA1(I, JM)
            S1 = T1 + T2 - 4 * CA1(I, J)
            CA(I, J) = CA1(I, J) + D * S1
          end do
        end do

    In code 2, the culprit is conditionals:

        do I = 1, N
          do J = 1, N
            If (IFLAG(I,J) .EQ. 0) then
              T1 = Value(I, J-1)
              T2 = Value(I-1, J)
              T3 = Value(I, J)
              T4 = Value(I+1, J)
              T5 = Value(I, J+1)
              Value(I,J) = 0.25 * (T1 + T2 + T5 + T4)
              Delta = ABS(T3 - Value(I,J))
              If (Delta .GT. MaxDelta) MaxDelta = Delta
            endif
          enddo
        enddo

    I fixed both programs by inverting the loops by hand.

    Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10.

    Code 5 has four-dimensional arrays and loops are nested four deep. For all of the reasons cited above, the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is

        L1: for i        L2: for i        L3: for i
              for l            for l            for j
                for k            for j            for k
                  for j            for k            for l

    So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops, aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses, but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists.

    Array Strides
    When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, then the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes.
    Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly, providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into continuous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes

        do j = 1, GZ
          do i = 1, GZ
            T1 = CA(i+0, j-1) + CA(i-1, j+0)
            T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
            S1 = T1 + T4 - 4 * CA1(i+0, j+0)
            CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
          enddo
        enddo

    where CA and CA1 are compressed arrays of size GZ.

    Code 7 traverses a list of objects, selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection.

    Data reuse
    In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3). In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4:

        do J = 1, GZ-2, 2
          do I = 1, GZ-2, 2
            T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
            T2 = CA1(i+1, j-1) + CA1(i+0, j+0)
            T3 = CA1(i+0, j+0) + CA1(i-1, j+1)
            T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
            T5 = CA1(i+2, j+0) + CA1(i+1, j+1)
            T6 = CA1(i+1, j+1) + CA1(i+0, j+2)
            T7 = CA1(i+2, j+1) + CA1(i+1, j+2)
            S1 = T1 + T4 - 4 * CA1(i+0, j+0)
            S2 = T2 + T5 - 4 * CA1(i+1, j+0)
            S3 = T3 + T6 - 4 * CA1(i+0, j+1)
            S4 = T4 + T7 - 4 * CA1(i+1, j+1)
            CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
            CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2
            CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3
            CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4
          enddo
        enddo

    The loop body executes 12 reads, whereas the rolled loop shown in the previous section executes 20 reads to compute the same four values.
    In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before

        for (k = 0; k < NK[u]; k++) {
            sum = 0.0;
            for (y = 0; y < NY; y++) {
                sum += W[y][u][k] * delta[y];
            }
            backprop[i++] = sum;
        }

    and after code

        for (k = 0; k < KK - 8; k += 8) {
            sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
            sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
            for (y = 0; y < NY; y++) {
                sum0 += W[y][0][k+0] * delta[y];
                sum1 += W[y][0][k+1] * delta[y];
                sum2 += W[y][0][k+2] * delta[y];
                sum3 += W[y][0][k+3] * delta[y];
                sum4 += W[y][0][k+4] * delta[y];
                sum5 += W[y][0][k+5] * delta[y];
                sum6 += W[y][0][k+6] * delta[y];
                sum7 += W[y][0][k+7] * delta[y];
            }
            backprop[k+0] = sum0;
            backprop[k+1] = sum1;
            backprop[k+2] = sum2;
            backprop[k+3] = sum3;
            backprop[k+4] = sum4;
            backprop[k+5] = sum5;
            backprop[k+6] = sum6;
            backprop[k+7] = sum7;
        }

    for one of the loops unrolled 8 times. Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends.

    Reducing instruction count
    Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques.

    The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent.

    Memory operations
    The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions, since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because, in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory.

    Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3:

        for (y = 0; y < NY; y++) {
            i = 0;
            for (u = 0; u < NU; u++) {
                for (k = 0; k < NK[u]; k++) {
                    dW[y][u][k] += delta[y] * I1[i++];
                }
            }
        }

    Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y].
In reality, dW and delta do not overlap in memory, so I rewrote the loop as for (y = 0; y < NY; y++) { i = 0; Dy = delta[y]; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += Dy * I1[i++]; } } } Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler cannot determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays #define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1]) The macro is too complex for the compiler to understand, and so it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define a0 = MAT4D(a,q,0,j,k) before the loop and then replace all instances of *MAT4D(a,q,i,j,k) in the loop with a0[i]. A similar problem appears in code 6, a Fortran program. The key loop in this program is do n1 = 1, nh nx1 = (n1 - 1) / nz + 1 nz1 = n1 - nz * (nx1 - 1) do n2 = 1, nh nx2 = (n2 - 1) / nz + 1 nz2 = n2 - nz * (nx2 - 1) ndx = nx2 - nx1 ndy = nz2 - nz1 gxx = grn(1,ndx,ndy) gyy = grn(2,ndx,ndy) gxy = grn(3,ndx,ndy) balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1 balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy)*h1 end do end do The programmer has written this loop well—there are no loop invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays. Data operations Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 <= i < N, 0 <= j < M, for most values of i and j. Since the program computes most of the NM possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling. for (i = 0; i < N; i+=8) { for (j = 0; j < M; j++) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (k = 0; k < K; k++) { sum0 += A[i+0][k] * B[j][k]; sum1 += A[i+1][k] * B[j][k]; sum2 += A[i+2][k] * B[j][k]; sum3 += A[i+3][k] * B[j][k]; sum4 += A[i+4][k] * B[j][k]; sum5 += A[i+5][k] * B[j][k]; sum6 += A[i+6][k] * B[j][k]; sum7 += A[i+7][k] * B[j][k]; } C[i+0][j] = sum0; C[i+1][j] = sum1; C[i+2][j] = sum2; C[i+3][j] = sum3; C[i+4][j] = sum4; C[i+5][j] = sum5; C[i+6][j] = sum6; C[i+7][j] = sum7; }} This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer. In code 5, we have the data version of the index optimization in code 6. 
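Before moving on to code 5, here is a rough C-style sketch of the index precomputation just described for code 6. The original is Fortran, and the table names (NX, NZ) and allocation scheme are assumptions, not the author's code.

#include <stdlib.h>

/* Sketch of trading space for time in code 6: the nx/nz index calculations are
   independent of the time loop, so compute them once and reuse the tables. */
static int *NX, *NZ;

void precompute_indices(int nh, int nz)
{
    NX = malloc(nh * sizeof *NX);
    NZ = malloc(nh * sizeof *NZ);
    for (int n = 1; n <= nh; n++) {
        NX[n - 1] = (n - 1) / nz + 1;           /* same formulas as in the loop */
        NZ[n - 1] = n - nz * (NX[n - 1] - 1);
    }
}
/* Inside the time loop, the doubly nested loop then reads NX[n1-1], NZ[n1-1],
   NX[n2-1], and NZ[n2-1] instead of redoing the divisions on every iteration. */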
In code 5, a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time. The main loop in Code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index, while others are a function of the outer loop index but not the inner loop index: for (j = 0; j < N; j++) { for (i = 0; i < M; i++) { r = i * hrmax; R = A[j]; temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]); high = temp * kcoeff * B[j] * PRM[2] * PRM[4]; low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0)); kap = (R > PRM[6]) ? high * R * R / (1.0 + pow(PRM[4]*r, 2.0)) : low * pow(R/PRM[6], PRM[5]); < rest of loop omitted > }} Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array. for (i = 0; i < M; i++) { r = i * hrmax; TEMP[i] = pow(r, PRM[3]); } [N.B. – the case for PRM[3] = 0 is omitted and will be reintroduced later.] We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is for (j = 0; j < N; j++) { R = rig[j] / 1000.; tmp1 = kcoeff * par[2] * beta[j] * par[4]; tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]); tmp3 = 1.0 + (par[4] * par[4] * R * R); tmp4 = par[6] * par[6] / tmp2; tmp5 = R * R / tmp3; tmp6 = pow(R / par[6], par[5]); if ((par[3] == 0.0) && (R > par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp5; } else if ((par[3] == 0.0) && (R <= par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp4 * tmp6; } else if ((par[3] != 0.0) && (R > par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp5; } else if ((par[3] != 0.0) && (R <= par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6; } for (i = 0; i < M; i++) { kap = KAP[i]; r = i * hrmax; < rest of loop omitted > } } Maybe not the prettiest piece of code, but certainly much more efficient than the original loop. Copy operations Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages. Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers are not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2). 
Then store the problem’s initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays. The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures (a minimal C sketch of that idiom appears just before the discussion of code 10 below). Where unnecessary copying did occur is in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers. Optimizing loop structures Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate or isolate conditionals to their own loops as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet MaxDelta = 0.0 do J = 1, N do I = 1, M < code omitted > Delta = abs(OldValue - NewValue) if (Delta > MaxDelta) MaxDelta = Delta enddo enddo if (MaxDelta .gt. 0.001) goto 200 Since the only use of MaxDelta is to control the jump to 200 and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as MaxDelta = .false. do J = 1, N do I = 1, M < code omitted > Delta = abs(OldValue - NewValue) MaxDelta = MaxDelta .or. (Delta .gt. 0.001) enddo enddo if (MaxDelta) goto 200 thereby eliminating the conditional expression from the inner loop. A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefitted from loop unrolling, but none benefitted from loop fusion. This observation is not too surprising since it is the general tendency of programmers to write thick loops. As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops, eliminating the need to write and read temporary arrays. 
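For reference, and only as a sketch with assumed names, this is what the pointer-swap idiom mentioned above looks like in C; the relaxation step is a stand-in, and in real code the caller would need to know which buffer ends up holding the final values.

/* Sketch of swapping buffer roles instead of copying them at the end of each
   iteration. Names and the update step are assumptions. */
void iterate(double *old_vals, double *new_vals, int n, int steps)
{
    for (int s = 0; s < steps; s++) {
        for (int i = 1; i < n - 1; i++)
            new_vals[i] = 0.5 * (old_vals[i - 1] + old_vals[i + 1]);
        new_vals[0] = old_vals[0];              /* keep boundaries fixed     */
        new_vals[n - 1] = old_vals[n - 1];

        double *tmp = old_vals;                 /* swap roles, no O(n) copy  */
        old_vals = new_vals;
        new_vals = tmp;
    }
    /* After an odd number of steps the newest values sit in the buffer the
       caller passed as new_vals; returning the live pointer avoids ambiguity. */
}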
Returning to loop splitting, I found such an occasion in code 10, where I split the loop do i = 1, n do j = 1, m A24(j,i)= S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i) B24(j,i)= S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i) A25(j,i)= S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i) B25(j,i)= S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i) C24(j,i)= S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i) D24(j,i)= S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i) C25(j,i)= S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i) D25(j,i)= S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i) end do end do into two disjoint loops do i = 1, n do j = 1, m A24(j,i)= S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i) B24(j,i)= S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i) A25(j,i)= S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i) B25(j,i)= S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i) end do end do do i = 1, n do j = 1, m C24(j,i)= S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i) D24(j,i)= S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i) C25(j,i)= S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i) D25(j,i)= S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i) end do end do Conclusions Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to significantly improve the single-processor performance of all codes. Improvements range from 2x to 15.5x with a simple average of 4.75x. Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel despite the availability of parallel systems to all developers. Clearly, we have a problem—personal scientific research codes are highly inefficient and not running in parallel. The developers are unaware of simple optimization techniques to make programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in question have not studied the books or manuals available, and are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be of much use. The general concepts can be taught in a three- or four-day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their codes. Practice is the key to becoming proficient at optimization. I recommend that graduate students be required to take a semester-length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or can use the system effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research as well as the development of most personal scientific codes. 
These agencies should require graduate schools to offer a course in optimization and parallel programming as a requirement for funding. About the Author John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory, where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company, where he was project manager for the MTA and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.

    Read the article

  • Extend Your Applications Your Way: Oracle OpenWorld Live Poll Results

    - by Applications User Experience
    Lydia Naylor, Oracle Applications User Experience Manager At OpenWorld 2012, I attended one of our team’s very exciting sessions: “Extend Your Applications, Your Way”. It was clear that customers were engaged by the topics presented. Not only did we see many heads enthusiastically nodding in agreement during the presentation, and witness a large crowd surround our speakers Killian Evers, Kristin Desmond and Greg Nerpouni afterwards, but we can prove it…with data! Figure 1. Killian Evers, Kristin Desmond, and Greg Nerpouni of Oracle at the OOW 2012 session. At the beginning of our OOW 2012 journey, Greg Nerpouni, Fusion HCM Principal Product Manager, told me he really wanted to get feedback from the audience on our extensibility direction. Initially, we were thinking of doing a group activity at the OOW UX labs events that we hold every year, but Greg was adamant- he wanted “real-time” feedback. So, after a little tinkering, we came up with a way to use an online survey tool, a simple QR code (Quick Response code: a matrix barcode that can include information like URLs and can be read by mobile device cameras), and the audience’s mobile devices to do just that. Figure 2. Actual QR Code for survey Prior to the session, we developed a short survey in Vovici (an online survey tool), with questions to gather feedback on certain points in the presentation, as well as demographic data from our participants. We used Vovici’s feature to generate a mobile HTML version of the survey. At the session, attendees accessed the survey by simply scanning a QR code or typing in a TinyURL (a shorthand web address that is easily accessible through mobile devices). Killian, Kristin and Greg paused at certain points during the session and asked participants to answer a few survey questions about what they just presented. Figure 3. Session survey deployed on a mobile phone The nice thing about Vovici’s survey tool is that you can see the data real-time as participants are responding to questions - so we knew during the session that not only was our direction on track but we were hitting the mark and fulfilling Greg’s request. We planned on showing the live polling results to the audience at the end of the presentation but it ran just a little over time, and we were gently nudged out of the room by the session attendants. We’ve included a quick summary below and this link to the full results for your enjoyment. Figure 4. Most important extensions to Fusion Applications So what did participants think of our direction for extensibility? A total of 94% agreed that it was an improvement upon their current process. The vast majority, 80%, concurred that the extensibility model accounts for the major roles involved: end user, business systems analyst and programmer. Attendees suggested a few supporting roles such as systems administrator, data architect and integrator. Customers and partners in the audience verified that Oracle‘s Fusion Composers allow them to make changes in the most common areas they need to: user interface, business processes, reporting and analytics. Integrations were also suggested. All top 10 things customers can do on a page rated highly in importance, with all but two getting an average rating above 4.4 on a 5 point scale. The kinds of layout changes our composers allow customers to make align well with customers’ needs. The most common were adding columns to a table (94%) and resizing regions and drag and drop content (both selected by 88% of participants). 
We want to thank the attendees of the session for allowing us another great opportunity to gather valuable feedback from our customers! If you didn’t have a chance to attend the session, we will provide a link to the OOW presentation when it becomes available.

    Read the article

  • Write TSQL, win a Kindle.

    - by Fatherjack
    So recently Red Gate launched sqlmonitormetrics.red-gate.com and showed the world how to embed your own scripts harmoniously in a third party tool to get the details that you want about your SQL Server performance. The site has a way to submit your own metrics and take a copy of the ones that other people have submitted to build a library of code to keep track of key metrics of your servers performance. There have been several submissions already but they have now launched a competition to provide an incentive for you to get creative and show us what you can do with a bit of TSQL and the SQL Monitor framework*. What’s it worth? Well, if you are one of the 3 winners then you get to choose either a Kindle Fire or $199. How do you win? Simply write the T-SQL for a SQL Monitor custom metric and the relevant description and introduction for it and submit it via  sqlmonitormetrics.red-gate.com before 14th Sept 2012 and then sit back and wait while the judges review your code and your aims in writing the metric. Who are the judges and how will they judge the metrics? There are two judges for this competition, Steve Jones (Microsoft SQL Server MVP, co-founder of SQLServerCentral.com, author, blogger etc) and Jonathan Allen (um, yeah, Steve has done all the good stuff, I’m here by good fortune). We will be looking to rate the metrics on each of 3 criteria: how the metric can help with performance tuning SQL Server. how having the metric running enables DBA’s to meet best practice. how interesting /original the idea for the metric is. Our combined decision will be final etc etc **  What happens to my metric? Any metrics submitted to the competition will be automatically entered into the site library and become available for sharing once the competition is over. You’ll get full credit for metrics you submit regardless of the competition results. You can enter as many metrics as you like. How long does it take? Honestly? Once you have the T-SQL sorted then so long as you can type your name and your email address you are done : http://sqlmonitormetrics.red-gate.com/share-a-metric/ What can I monitor? If you really really want a Kindle or $199 (and let’s face it, who doesn’t? ) and are momentarily stuck for inspiration, take a look at these example custom metrics that have been written by Stuart Ainsworth, Fabiano Amorim, TJay Belt, Louis Davidson, Grant Fritchey, Brad McGehee and me  to start the library off. There are some great pieces of TSQL in those metrics gathering important stats about how SQL Server is performing.   * – framework may not be the best word here but I was under pressure and couldnt think of a better one. If you prefer try ‘engine’, or ‘application’? I don’t know, pick something that makes sense to you. ** – for the full (legal) version of the rules check the details on sqlmonitormetrics.red-gate.com or send us an email if you want any point clarified. Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided “as is” and no guarantee, warranty or accuracy is applicable or inferred, run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.

    Read the article

  • PHP Web Services - Nice try

    Thanks to the membership in the O'Reilly User Group Programme the Mauritius Software Craftsmanship Community (short: MSCC) recently received a welcome package with several book titles. Among them is the latest publication of Lorna Jane Mitchell - 'PHP Web Services: APIs for the Modern Web'. Following is the book review I put on Amazon: Nice try! Initially, I was astonished that a small book like 'PHP Web Services' would be able to cover all the interesting topics about APIs and Web Services, independently whether they are written in PHP or not. And unfortunately, the title isn't able to stand up to the readers (or at least my) expectations. Maybe as a light defense, there is no usual paragraph about the intended audience of that book, but still I have to admit that the first half (chapters 1 to 8) are well written and Lorna has her points on the various technologies. Also, the code samples in PHP are clean and easy to understand. With chapter 'Debugging Web Services' the book started to change my mind about the clarity of advice and the instructions on designing and developing good APIs. Eventually, this might be related to the fact that I'm used to other tools since years, like Telerik Fiddler as HTTP proxy in order to trace and inspect any kind of request/response handling. Including localhost monitoring, SSL certification acceptance, and the ability to debug mobile devices, especially iOS-based ones. Compared to Charles, Fiddler is available for free. What really got me off the hook is the following statement in chapter 10 about Service Type Decisions: "For users who have larger systems using technology stacks such as Java, C++, or .NET, it may be easier for them to integrate with a SOAP service." WHAT? A couple of pages earlier the author recommends to stay away from 'old-fashioned' API styles like SOAP (if possible). And on top of that I wonder why there are tons of documentation towards development of RESTful Web Services based on WebAPI. The ASP.NET stack clearly moves away from SOAP to JSON and REST since years! Honestly, as a software developer on the .NET stack this leaves a mixed feeling after all. As for the remaining chapters I simply consider them as 'blah blah' without any real value and lots of theoretical advice. Related to the chapter 13 about 'Documentation', I just had the 'pleasure' to write a C#-based client against a Java-based SOAP Web Service. Personally, I take the WSDL as the master reference in the first place and Visual Studio generates all the stub types involved in the communication. During the implementation and testing I came across a 'java.lang.NullPointerException' in various methods and for various method parameters. The WSDL and the generated types were declared as Nullable, so nothing to worry about, or? Well, I logged in a support ticket, and guess what was the response to that scenario? "The service definition in the WSDL is wrong, please refer to the documentation in order to use the methods and parameters correctly" - No comment! Lorna's title is a quick read and in some areas she has good advice on designing and implementing Web Services and APIs. But roughly 100 pages aren't enough to cover a vast topic like that. After all, nice try and I'm looking forward to an improved second edition. Honestly, I never thought that I would come across a poor review. In general, it's a good book but it clearly has a lack of depth, the PHP code samples are incomplete (closing tags missing), and there are too many assumptions and theoretical statements.

    Read the article

  • Validating Petabytes of Data with Regularity and Thoroughness

    - by rickramsey
    by Brian Zents When former Intel CEO Andy Grove said “only the paranoid survive,” he wasn’t necessarily talking about tape storage administrators, but it’s a lesson they’ve learned well. After all, tape storage is the last line of defense to prevent data loss, so tape administrators are extra cautious in making sure their data is secure. Not surprisingly, we are often asked for ways to validate tape media and the files on them. In the past, an administrator could validate the media, but doing so was often tedious or disruptive or both. The debut of the Data Integrity Validation (DIV) and Library Media Validation (LMV) features in the Oracle T10000C drive helped eliminate many of these pains. Also available with the Oracle T10000D drive, these features use hardware-assisted CRC checks that not only ensure the data is written correctly the first time, but also do so much more efficiently. Traditionally, a CRC check takes at least 25 seconds per 4GB file with a 2:1 compression ratio, but the T10000C/D drives can reduce the check to a maximum of nine seconds because the entire check is contained within the drive. No data needs to be sent to a host application. A time savings of at least 64 percent is extremely beneficial over the course of checking an entire 8.5TB T10000D tape. While the DIV and LMV features are better than anything else out there, what storage administrators really need is a way to check petabytes of data with regularity and thoroughness. With the launch of Oracle StorageTek Tape Analytics (STA) 2.0 in April, there is finally a solution that addresses this longstanding need. STA bundles these features into one interface to automate all media validation activities across all Oracle SL3000 and SL8500 tape libraries in an environment. And best of all, the validation process can be associated with the health checks an administrator would be doing already through STA. In fact, STA validates the media based on any of the following policies: Random Selection – Randomly selects media for validation whenever a validation drive in the standalone library or library complex is available. Media Health = Action – Selects media that have had a specified number of successive exchanges resulting in an Exchange Media Health of “Action.” You can specify from one to five exchanges. Media Health = Evaluate – Selects media that have had a specified number of successive exchanges resulting in an Exchange Media Health of “Evaluate.” You can specify from one to five exchanges. Media Health = Monitor – Selects media that have had a specified number of successive exchanges resulting in an Exchange Media Health of “Monitor.” You can specify from one to five exchanges. Extended Period of Non-Use – Selects media that have not had an exchange for a specified number of days. You can specify from 365 to 1,095 days (one to three years). Newly Entered – Selects media that have recently been entered into the library. Bad MIR Detected – Selects media with an exchange resulting in a “Bad MIR Detected” error. A bad media information record (MIR) indicates degraded high-speed access on the media. To avoid disrupting host operations, an administrator designates certain drives for media validation operations. If a host requests a file from media currently being validated, the host’s request takes priority. To ensure that the administrator really knows it is the media that is bad, as opposed to the drive, STA includes drive calibration and qualification features. 
In addition, validation requests can be re-prioritized or cancelled as needed. To ensure that a specific tape isn’t validated too often, STA prevents a tape from being validated twice within 24 hours via one of the policies described above. A tape can be validated more often if the administrator manually initiates the validation. When the validations are complete, STA reports the results. STA does not report simply a “good” or “bad” status. It also reports if media is even degraded so the administrator can migrate the data before there is a true failure. From that point, the administrators’ paranoia is relieved, as they have the necessary information to make a sound decision about the health of the tapes in their environment. About the Photograph Photograph taken by Rick Ramsey in Death Valley, California, May 2014 - Brian Follow OTN Garage on: Web | Facebook | Twitter | YouTube

    Read the article

  • Experience the iPad UI On Your PC

    - by Matthew Guay
    Want to test drive iPad without heading over to an Apple store?  Here’s a way you can experience some of the iPad UI straight from your browser! The iPad is the latest gadget from Apple to wow the tech world, and people even waited in line all night to be one of the first to get their hands on one.  Thanks to a simple JavaScript trick, however, you can get a feel for some of its new features without leaving your computer.  This won’t let you try out everything on the iPad, but it will let you see how the new lists and pop-over menus work just like they do in the new apps. Test drive the iPad’s UI from your browser Normally, the Apple iPhone developer library online looks like a standard webpage. But, on the iPad, it looks and feels like a full-blown native iPad app.  With a nifty JavaScript trick from boredzo.org you can use this same interface on your PC.  Since the iPad uses the Safari browser, we ran this test in Safari for Windows.  If you don’t already have it installed, you can download it from Apple (link below) and setup as normal. Now, open Safari and browse to Apple’s developer page at: http://www.developer.apple.com   Now, enter the following in the address bar, and press Enter. javascript:localStorage.setItem('debugSawtooth', 'true')   Finally, click this link to go to the iPhone OS documentation. http://developer.apple.com/iphone/library/iPad/ After a short delay, it should open in full iPad style! The left menu works just like the menus on the iPad, complete with transitions.  It feels entirely like a native application, instead of a webpage.  To scroll through text, click and pull up or down similar to the way you would use it on a touch screen. Some pages even include a pop-over menu like many of the new iPad apps use. Note that the page will be rendered for the size of your browser, and if you resize your window the page will not resize with it.  Simply press F5 to reload the page, and it will resize to fit the new window size.  If you resize your window to be tall and narrow, like the iPad in horizontal mode, the webpage will change and the left menu will disappear in lieu of a drop-down menu just like it would if you rotated the iPad. This works in Chrome as well, since it, like Safari, is based on Webkit.  However, it didn’t seem to work in our test on Firefox or other browsers. We’ve previously covered how you can experience some of the iPhone’s UI with the online iPhone user guide.  Check it out if you haven’t yet: View Mobile Websites in Windows with Safari 4 Developer Tools Conclusion Although this doesn’t let you really try out all of the iPad’s interface, it at least gives you a taste of how it works.  It’s exciting to see how much functionality can be packed into webapps today.  And don’t forget, How-to Geek is giving away an iPad to a random fan!  Head over to our Facebook page and fan How-to Geek if you haven’t already done so. Win an iPad on the How-To Geek Facebook Fan Page Similar Articles Productive Geek Tips Want an iPad? How-To Geek is Giving One Away!Why Wait? Amazing New Add-on Turns Your iPhone into an iPad! 

    Read the article

  • Patching and PCI Compliance

    - by Joel Weise
    One of my friends and master of the security universe, Darren Moffat, pointed me to Dan Anderson's blog the other day.  Dan went to Toorcon which is a security conference where he went to a talk on security patching titled, "Stop Patching, for Stronger PCI Compliance".  I realize that often times speakers will use a headline grabbing title to create interest in their talk and this one certainly got my attention.  I did not go to the conference and did not see the presentation, so I can only go by what is in the Toorcon agenda summary and on Dan's blog, but the general statement to stop patching for stronger PCI compliance seems a bit misleading to me.  Clearly patching is important to all systems management and should be a part of any organization's security hygiene.  Further, PCI does require the patching of systems to maintain compliance.  So it's important to mention that organizations should not simply stop patching their systems; and I want to believe that was not the speakers intent. So let's look at PCI requirement 6: "Unscrupulous individuals use security vulnerabilities to gain privileged access to systems. Many of these vulnerabilities are fixed by vendor- provided security patches, which must be installed by the entities that manage the systems. All critical systems must have the most recently released, appropriate software patches to protect against exploitation and compromise of cardholder data by malicious individuals and malicious software." Notice the word "appropriate" in the requirement.  This is stated to give organizations some latitude and apply patches that make sense in their environment and that target the vulnerabilities in question.  Haven't we all seen a vulnerability scanner throw a false positive and flag some module and point to a recommended patch, only to realize that the module doesn't exist on our system?  Applying such a patch would obviously not be appropriate.  This does not mean an organization can ignore the fact they need to apply security patches.  It's pretty clear they must.  Of course, organizations have other options in terms of compliance when it comes to patching.  For example, they could remove a system from scope and make sure that system does not process or contain cardholder data.  [This may or may not be a significant undertaking.  I just wanted to point out that there are always options available.] PCI DSS requirement 6.1 also includes the following note: "Note: An organization may consider applying a risk-based approach to prioritize their patch installations. For example, by prioritizing critical infrastructure (for example, public-facing devices and systems, databases) higher than less-critical internal devices, to ensure high-priority systems and devices are addressed within one month, and addressing less critical devices and systems within three months." Notice there is no mention to stop patching one's systems.  And the note also states organization may apply a risk based approach. [A smart approach but also not mandated].  Such a risk based approach is not intended to remove the requirement to patch one's systems.  It is meant, as stated, to allow one to prioritize their patch installations.   So what does this mean to an organization that must comply with PCI DSS and maintain some sanity around their patch management and overall operational readiness?  I for one like to think that most organizations take a common sense and balanced approach to their business and security posture.  
If patching is becoming an unbearable task, review why that is the case and possibly look for means to improve operational efficiencies; but also recognize that security is important to maintaining the availability and integrity of one's systems. Likewise, whether we like it or not, the cyber-world we live in is getting more complex and threatening, and I don't think it's going to get better any time soon.

    Read the article

  • Creating and using VM Groups in VirtualBox

    - by Fat Bloke
    With VirtualBox 4.2 we introduced the Groups feature which allows you to organize and manage your guest virtual machines collectively, rather than individually. Groups are quite a powerful concept and there are a few nice features you may not have discovered yet, so here's a bit more information about groups, and how they can be used.... Creating a group Groups are just ad hoc collections of virtual machines and there are several ways of creating a group: In the VirtualBox Manager GUI: Drag one VM onto another to create a group of those 2 VMs. You can then drag and drop more VMs into that group; Select multiple VMs (using Ctrl or Shift and click) then  select the menu: Machine...Group; or   press Cmd+U (Mac), or Ctrl+U(Windows); or right-click the multiple selection and choose Group, like this: From the command line: Group membership is an attribute of the vm so you can modify the vm to belong in a group. For example, to put the vm "Ubuntu" into the group "TestGroup" run this command: VBoxManage modifyvm "Ubuntu" --groups "/TestGroup" Deleting a Group Groups can be deleted by removing a group attribute from all the VMs that constitute that group. To do this via the command-line the syntax is: VBoxManage modifyvm "Ubuntu" --groups "" In the VirtualBox Manager, this is more easily done by right-clicking on a group header and selecting "Ungroup", like this: Multiple Groups Now that we understand that Groups are just attributes of VMs, it can be seen that VMs can exist in multiple groups, for example, doing this: VBoxManage modifyvm "Ubuntu" --groups "/TestGroup","/ProjectX","/ProjectY" Results in: Or via the VirtualBox Manager, you can drag VMs while pressing the Alt key (Mac) or Ctrl (other platforms). Nested Groups Just like you can drag VMs around in the VirtualBox Manager, you can also drag whole groups around. And dropping a group within a group creates a nested group. Via the command-line, nested groups are specified using a path-like syntax, like this: VBoxManage modifyvm "Ubuntu" --groups "/TestGroup/Linux" ...which creates a sub-group and puts the VM in it. Navigating Groups In the VirtualBox Manager, Groups can be collapsed and expanded by clicking on the carat to the left in the Group Header. But you can also Enter and Leave groups too, either by using the right-arrow/left-arrow keys, or by clicking on the carat on the right hand side of the Group Header, like this: . ..leading to a view of just the Group contents. You can Leave or return to the parent in the same way. Don't worry if you are imprecise with your clicking, you can use a double click on the entire right half of the Group Header to Enter a group, and the left half to Leave a group. Double-clicking on the left half when you're at the top will roll-up or collapse the group.   Group Operations The real power of Groups is not simply in arranging them prettily in the Manager. Rather it is about performing collective operations on them, once you have grouped them appropriately. For example, let's say that you are working on a project (Project X) where you have a solution stack of: Database VM, Middleware/App VM, and  a couple of client VMs which you use to test your app. With VM Groups you can start the whole stack with one operation. 
Select the Group Header, and choose Start: The full list of operations that may be performed on Groups are: Start Starts from any state (boot or resume) Start VMs in headless mode (hold Shift while starting) Pause Reset Close Save state Send Shutdown signal Poweroff Discard saved state Show in filesystem Sort Conclusion Hopefully we've shown that the introduction of VM Groups not only makes Oracle VM VirtualBox pretty, but pretty powerful too.  - FB 

    Read the article

  • Algorithm to Find the Aggregate Mass of "Granola Bar"-Like Structures?

    - by Stuart Robbins
    I'm a planetary science researcher and one project I'm working on is N-body simulations of Saturn's rings. The goal of this particular study is to watch as particles clump together under their own self-gravity and measure the aggregate mass of the clumps versus the mean velocity of all particles in the cell. We're trying to figure out if this can explain some observations made by the Cassini spacecraft during the Saturnian summer solstice when large structures were seen casting shadows on the nearly edge-on rings. Below is a screenshot of what any given timestep looks like. (Each particle is 2 m in diameter and the simulation cell itself is around 700 m across.) The code I'm using already spits out the mean velocity at every timestep. What I need to do is figure out a way to determine the mass of particles in the clumps and NOT the stray particles between them. I know every particle's position, mass, size, etc., but I don't know easily that, say, particles 30,000-40,000 along with 102,000-105,000 make up one strand that to the human eye is obvious. So, the algorithm I need to write would need to be a code with as few user-entered parameters as possible (for replicability and objectivity) that would go through all the particle positions, figure out what particles belong to clumps, and then calculate the mass. It would be great if it could do it for "each" clump/strand as opposed to everything over the cell, but I don't think I actually need it to separate them out. The only thing I was thinking of was doing some sort of N2 distance calculation where I'd calculate the distance between every particle and if, say, the closest 100 particles were within a certain distance, then that particle would be considered part of a cluster. But that seems pretty sloppy and I was hoping that you CS folks and programmers might know of a more elegant solution? Edited with My Solution: What I did was to take a sort of nearest-neighbor / cluster approach and do the quick-n-dirty N2 implementation first. So, take every particle, calculate distance to all other particles, and the threshold for in a cluster or not was whether there were N particles within d distance (two parameters that have to be set a priori, unfortunately, but as was said by some responses/comments, I wasn't going to get away with not having some of those). I then sped it up by not sorting distances but simply doing an order N search and increment a counter for the particles within d, and that sped stuff up by a factor of 6. Then I added a "stupid programmer's tree" (because I know next to nothing about tree codes). I divide up the simulation cell into a set number of grids (best results when grid size ˜7 d) where the main grid lines up with the cell, one grid is offset by half in x and y, and the other two are offset by 1/4 in ±x and ±y. The code then divides particles into the grids, then each particle N only has to have distances calculated to the other particles in that cell. Theoretically, if this were a real tree, I should get order N*log(N) as opposed to N2 speeds. I got somewhere between the two, where for a 50,000-particle sub-set I got a 17x increase in speed, and for a 150,000-particle cell, I got a 38x increase in speed. 12 seconds for the first, 53 seconds for the second, 460 seconds for a 500,000-particle cell. Those are comparable speeds to how long the code takes to run the simulation 1 timestep forward, so that's reasonable at this point. Oh -- and it's fully threaded, so it'll take as many processors as I can throw at it.
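For readers curious what the bucketed neighbor count might look like in code, here is a rough C sketch of a standard cell-list variant of the approach described above. Every name and parameter is an assumption, it searches a 3x3 cell neighborhood rather than the offset grids the poster describes, and it assumes the grid parameters cover all particle positions.

#include <stdlib.h>

typedef struct { double x, y, mass; } Particle;

/* Returns the total mass of particles that have at least min_neighbors other
   particles within distance d, using a uniform grid of cell size 'cell' to
   limit the pair tests to nearby particles. */
double clump_mass(const Particle *p, int n, double d, int min_neighbors,
                  double xmin, double ymin, double cell, int gx, int gy)
{
    int *head = malloc(gx * gy * sizeof *head); /* first particle in each cell */
    int *next = malloc(n * sizeof *next);       /* per-cell linked lists       */
    for (int c = 0; c < gx * gy; c++) head[c] = -1;

    for (int i = 0; i < n; i++) {               /* bin particles into cells    */
        int cx = (int)((p[i].x - xmin) / cell);
        int cy = (int)((p[i].y - ymin) / cell);
        int c = cy * gx + cx;
        next[i] = head[c];
        head[c] = i;
    }

    double mass = 0.0;
    for (int i = 0; i < n; i++) {
        int cx = (int)((p[i].x - xmin) / cell);
        int cy = (int)((p[i].y - ymin) / cell);
        int count = 0;
        for (int dy = -1; dy <= 1; dy++)        /* check the 3x3 neighborhood  */
            for (int dx = -1; dx <= 1; dx++) {
                int nx = cx + dx, ny = cy + dy;
                if (nx < 0 || ny < 0 || nx >= gx || ny >= gy) continue;
                for (int j = head[ny * gx + nx]; j != -1; j = next[j]) {
                    if (j == i) continue;
                    double ddx = p[i].x - p[j].x, ddy = p[i].y - p[j].y;
                    if (ddx * ddx + ddy * ddy <= d * d) count++;
                }
            }
        if (count >= min_neighbors) mass += p[i].mass;
    }
    free(head);
    free(next);
    return mass;
}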

    Read the article

  • Having fun with Reflection

    - by Nick Harrison
    I was once asked in a technical interview what I could tell them about Reflection.   My response, while a little tongue in cheek was that "I can tell you it is one of my favorite topics to talk about" I did get a laugh out of that and it was a great ice breaker.    Reflection may not be the answer for everything, but it often can be, or maybe even should be.     I have posted in the past about my favorite CopyTo method.   It can come in several forms and is often very useful.   I explain it further and expand on the basic idea here  The basic idea is to allow reflection to loop through the properties of two objects and synchronize the ones that are in common.   I love this approach for data binding and passing data across the layers in an application. Recently I have been working on a project leveraging Data Transfer Objects to pass data through WCF calls.   We won't go into how the architecture got this way, but in essence there is a partial duplicate inheritance hierarchy where there is a related Domain Object for each Data Transfer Object.     The matching objects do not share a common ancestor or common interface but they will have the same properties in common.    By passing the problems with this approach, let's talk about how Reflection and our friendly CopyTo could make the most of this bad situation without having to change too much. One of the problems is keeping the two sets of objects in synch.   For this particular project, the DO has all of the functionality and the DTO is used to simply transfer data back and forth.    Both sets of object have parallel hierarchies with the same properties being defined at the corresponding levels.   So we end with BaseDO,  BaseDTO, GenericDO, GenericDTO, ProcessAreaDO,  ProcessAreaDTO, SpecializedProcessAreaDO, SpecializedProcessAreaDTO, TableDo, TableDto. and so on. Without using Reflection and a CopyTo function, tremendous care and effort must be made to keep the corresponding objects in synch.    New properties can be added at any level in the inheritance and must be kept in synch at all subsequent layers.    For this project we have come up with a clever approach of calling a base GetDo or UpdateDto making sure that the same method at each level of inheritance is called.    Each level is responsible for updating the properties at that level. This is a lot of work and not keeping it in synch can create all manner of problems some of which are very difficult to track down.    The other problem is the type of code that this methods tend to wind up with. You end up with code like this: Transferable dto = new Transferable(); base.GetDto(dto); dto.OfficeCode = GetDtoNullSafe(officeCode); dto.AccessIndicator = GetDtoNullSafe(accessIndicator); dto.CaseStatus = GetDtoNullSafe(caseStatus); dto.CaseStatusReason = GetDtoNullSafe(caseStatusReason); dto.LevelOfService = GetDtoNullSafe(levelOfService); dto.ReferralComments = referralComments; dto.Designation = GetDtoNullSafe(designation); dto.IsGoodCauseClaimed = GetDtoNullSafe(isGoodCauseClaimed); dto.GoodCauseClaimDate = goodCauseClaimDate;       One obvious problem is that this is tedious code.   It is error prone code.    Adding helper functions like GetDtoNullSafe help out immensely, but there is still an easier way. We can bypass the tedious code, by pass the complex inheritance tricks, and reduce all of this to a single method in the base class. 
TransferObject dto = new TransferObject(); CopyTo (this, dto); return dto; In the case of this one project, such a change eliminated the need for 20% of the total code base and a whole class of unit test cases that made sure that all of the properties were in synch. The impact of such a change also needs to include the on going time savings and the improvements in quality that can arise from them.    Developers who are not worried about keeping the properties in synch across mirrored object hierarchies are freed to worry about more important things like implementing business requirements.

    Read the article

  • How to develop RPG Damage Formulas?

    - by user127817
    I'm developing a classical 2d RPG (in a similar vein to final fantasy) and I was wondering if anyone had some advice on how to do damage formulas/links to resources/examples? I'll explain my current setup. Hopefully I'm not overdoing it with this question, and I apologize if my questions is too large/broad My Characters stats are composed of the following: enum Stat { HP = 0, MP = 1, SP = 2, Strength = 3, Vitality = 4, Magic = 5, Spirit = 6, Skill = 7, Speed = 8, //Speed/Agility are the same thing Agility = 8, Evasion = 9, MgEvasion = 10, Accuracy = 11, Luck = 12, }; Vitality is basically defense to physical attacks and spirit is defense to magic attacks. All stats have fixed maximums (9999 for HP, 999 for MP/SP and 255 for the rest). With abilities, the maximums can be increased (99999 for HP, 9999 for HP/SP, 999 for the rest) with typical values (at level 100) before/after abilities+equipment+etc will be 8000/20,000 for HP, 800/2000 for SP/MP, 180/350 for other stats Late game Enemy HP will typically be in the lower millions (with a super boss having the maximum of ~12 million). I was wondering how do people actually develop proper damage formulas that scale correctly? For instance, based on this data, using the damage formulas for Final Fantasy X as a base looked very promising. A full reference here http://www.gamefaqs.com/ps2/197344-final-fantasy-x/faqs/31381 but as a quick example: Str = 127, 'Attack' command used, enemy Def = 34. 1. Physical Damage Calculation: Step 1 ------------------------------------- [{(Stat^3 ÷ 32) + 32} x DmCon ÷16] Step 2 ---------------------------------------- [{(127^3 ÷ 32) + 32} x 16 ÷ 16] Step 3 -------------------------------------- [{(2048383 ÷ 32) + 32} x 16 ÷ 16] Step 4 --------------------------------------------------- [{(64011) + 32} x 1] Step 5 -------------------------------------------------------- [{(64043 x 1)}] Step 6 ---------------------------------------------------- Base Damage = 64043 Step 7 ----------------------------------------- [{(Def - 280.4)^2} ÷ 110] + 16 Step 8 ------------------------------------------ [{(34 - 280.4)^2} ÷ 110] + 16 Step 9 ------------------------------------------------- [(-246)^2) ÷ 110] + 16 Step 10 ---------------------------------------------------- [60516 ÷ 110] + 16 Step 11 ------------------------------------------------------------ [550] + 16 Step 12 ---------------------------------------------------------- DefNum = 566 Step 13 ---------------------------------------------- [BaseDmg * DefNum ÷ 730] Step 14 --------------------------------------------------- [64043 * 566 ÷ 730] Step 15 ------------------------------------------------------ [36248338 ÷ 730] Step 16 ------------------------------------------------- Base Damage 2 = 49655 Step 17 ------------ Base Damage 2 * {730 - (Def * 51 - Def^2 ÷ 11) ÷ 10} ÷ 730 Step 18 ---------------------- 49655 * {730 - (34 * 51 - 34^2 ÷ 11) ÷ 10} ÷ 730 Step 19 ------------------------- 49655 * {730 - (1734 - 1156 ÷ 11) ÷ 10} ÷ 730 Step 20 ------------------------------- 49655 * {730 - (1734 - 105) ÷ 10} ÷ 730 Step 21 ------------------------------------- 49655 * {730 - (1629) ÷ 10} ÷ 730 Step 22 --------------------------------------------- 49655 * {730 - 162} ÷ 730 Step 23 ----------------------------------------------------- 49655 * 568 ÷ 730 Step 24 -------------------------------------------------- Final Damage = 38635 I simply modified the dividers to include the attack rating of weapons and the armor rating of armor. 
Magic Damage is calculated as follows: Mag = 255, Ultima is used, enemy MDef = 1 Step 1 ----------------------------------- [DmCon * ([Stat^2 ÷ 6] + DmCon) ÷ 4] Step 2 ------------------------------------------ [70 * ([255^2 ÷ 6] + 70) ÷ 4] Step 3 ------------------------------------------ [70 * ([65025 ÷ 6] + 70) ÷ 4] Step 4 ------------------------------------------------ [70 * (10837 + 70) ÷ 4] Step 5 ----------------------------------------------------- [70 * (10907) ÷ 4] Step 6 ------------------------------------ Base Damage = 190872 [cut to 99999] Step 7 ---------------------------------------- [{(MDef - 280.4)^2} ÷ 110] + 16 Step 8 ------------------------------------------- [{(1 - 280.4)^2} ÷ 110] + 16 Step 9 ---------------------------------------------- [{(-279.4)^2} ÷ 110] + 16 Step 10 -------------------------------------------------- [(78064) ÷ 110] + 16 Step 11 ------------------------------------------------------------ [709] + 16 Step 12 --------------------------------------------------------- MDefNum = 725 Step 13 --------------------------------------------- [BaseDmg * MDefNum ÷ 730] Step 14 --------------------------------------------------- [99999 * 725 ÷ 730] Step 15 ------------------------------------------------- Base Damage 2 = 99314 Step 16 ---------- Base Damage 2 * {730 - (MDef * 51 - MDef^2 ÷ 11) ÷ 10} ÷ 730 Step 17 ------------------------ 99314 * {730 - (1 * 51 - 1^2 ÷ 11) ÷ 10} ÷ 730 Step 18 ------------------------------ 99314 * {730 - (51 - 1 ÷ 11) ÷ 10} ÷ 730 Step 19 --------------------------------------- 99314 * {730 - (49) ÷ 10} ÷ 730 Step 20 ----------------------------------------------------- 99314 * 725 ÷ 730 Step 21 -------------------------------------------------- Final Damage = 98633 The problem is that the formulas completely fall apart once stats start going above 255. In particular Defense values over 300 or so start generating really strange behavior. High Strength + Defense stats lead to massive negative values for instance. While I might be able to modify the formulas to work correctly for my use case, it'd probably be easier just to use a completely new formula. How do people actually develop damage formulas? I was considering opening excel and trying to build the formula that way (mapping Attack Stats vs. Defense Stats for instance) but I was wondering if there's an easier way? While I can't convey the full game mechanics of my game here, might someone be able to suggest a good starting place for building a damage formula? Thanks
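Purely as an illustration, the physical-damage walkthrough quoted above translates into the small C program below. The constants come straight from the quoted steps; the function name and the use of 64-bit intermediates are assumptions, and the result differs slightly from the quoted 38,635 because the walkthrough truncates intermediate values.

#include <stdio.h>

/* The quoted FFX-style physical damage steps, written out directly. */
long long ffx_physical_damage(long long strength, long long dm_con, long long def)
{
    /* Steps 1-6: base damage from the attacker's strength. */
    long long base = ((strength * strength * strength / 32) + 32) * dm_con / 16;

    /* Steps 7-12: first defense factor, quadratic with a minimum near def = 280. */
    double def_num = ((def - 280.4) * (def - 280.4)) / 110.0 + 16.0;

    /* Steps 13-16. */
    double base2 = base * def_num / 730.0;

    /* Steps 17-24: second defense factor, also quadratic in def. */
    double factor = (730.0 - (def * 51.0 - def * def / 11.0) / 10.0) / 730.0;

    return (long long)(base2 * factor);
}

int main(void)
{
    /* Worked example from the question: Str = 127, 'Attack' (DmCon = 16), Def = 34. */
    printf("%lld\n", ffx_physical_damage(127, 16, 34));
    return 0;
}

Written out this way, two things stand out: both defense terms are quadratic and turn around once Def passes roughly 280, so damage starts rising with defense, and if intermediates are kept in 32-bit integers the Stat^3 term can overflow once stats and multipliers grow, which could explain the large negative values described. A monotonic defense term, for example scaling damage by attack / (attack + defense), is one common way to keep a formula well behaved over any stat range.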

    Read the article

  • Customize Entity Framework SSDL &amp; SQL Generation

    - by Dane Morgridge
    In almost every talk I have done on Entity Framework I get questions on how to do custom SSDL or SQL when using model first development.  Quite a few of these questions have required custom changes to the SSDL, which of course can be a problem if it is getting auto generated.  Luckily, there is a tool that can help.  In the Visual Studio Gallery on MSDN, there is the Entity Designer Database Generation Power Pack. You have the ability to select different generation strategies and it also allows you to inject custom T4 Templates into the generation workflow so that you can customize the SSDL and SQL generation.  When you select to generate a database from a model the dialog is replaced by one with more options:   You can clone the individual workflow for either the current project or current machine.  The templates are installed at “C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\Extensions\Microsoft\Entity Framework Tools\DBGen” on my local machine and you can make a copy of any template there.  If you clone the strategy and open it up, you will get the following workflow: Each item in the sequence is defining the execution of a T4 template.  The XAML for the workflow is listed below so you can see where the T4 files are defined.  You can simply make a copy of an existing template and make what ever changes you need.   1: <Activity x:Class="GenerateDatabaseScriptWorkflow" ... > 2: <x:Members> 3: <x:Property Name="Csdl" Type="InArgument(sde:EdmItemCollection)" /> 4: <x:Property Name="ExistingSsdl" Type="InArgument(s:String)" /> 5: <x:Property Name="ExistingMsl" Type="InArgument(s:String)" /> 6: <x:Property Name="Ssdl" Type="OutArgument(s:String)" /> 7: <x:Property Name="Msl" Type="OutArgument(s:String)" /> 8: <x:Property Name="Ddl" Type="OutArgument(s:String)" /> 9: <x:Property Name="SmoSsdl" Type="OutArgument(ss:SsdlServer)" /> 10: </x:Members> 11: <Sequence> 12: <dbtk:ProgressBarStartActivity /> 13: <dbtk:CsdlToSsdlTemplateActivity SsdlOutput="[Ssdl]" TemplatePath="$(VSEFTools)\DBGen\CSDLToSSDL_TPT.tt" /> 14: <dbtk:CsdlToMslTemplateActivity MslOutput="[Msl]" TemplatePath="$(VSEFTools)\DBGen\CSDLToMSL_TPT.tt" /> 15: <ded:SsdlToDdlActivity ExistingSsdlInput="[ExistingSsdl]" SsdlInput="[Ssdl]" DdlOutput="[Ddl]" /> 16: <dbtk:GenerateAlterSqlActivity DdlInputOutput="[Ddl]" DeployToScript="True" DeployToDatabase="False" /> 17: <dbtk:ProgressBarEndActivity ClosePopup="true" /> 18: </Sequence> 19: </Activity>   So as you can see, this tool enables you to make some pretty heavy customizations to how the SSDL and SQL get generated.  You can get more info and the tool can be downloaded from: http://visualstudiogallery.msdn.microsoft.com/en-us/df3541c3-d833-4b65-b942-989e7ec74c87.  There is a comments section on the site so make sure you let the team know what you like and what you don’t like.  Enjoy!

    Read the article

  • Unable to either locate any wireless networks or even connect to wifi

    - by Leo Chan
    I'm new to Linux. I currently have Ubuntu 12.10 installed. I had a previous problem with my wireless card (see my earlier question: How to enable wireless in a Fujitsu LH532?). The network menu now shows "Connect to hidden network" and "Create new wireless network", but unfortunately it simply cannot find any wireless connections. I have already looked into this problem thoroughly, for example waiting a little longer, since sometimes it cannot load all the available wireless connections that quickly. My wifi is a hidden network, and I have used the "Connect to hidden network" feature, but it keeps asking for my WEP key even though I have checked the key four times (I counted). I tried both "WEP 40/128-bit key" and "WPA & WPA2", since the network previously worked on my Windows machine; my family later decided to use WEP. My only quick fix is a USB wireless stick, and I would like a more solid fix. Thanks.

    Results from sudo iwlist wlan0 scan:

    wlan0     Scan completed :
        Cell 01 - Address: 00:1E:73:C8:62:BD
                  Channel:6
                  Frequency:2.437 GHz (Channel 6)
                  Quality=25/70  Signal level=-85 dBm
                  Encryption key:on
                  ESSID:"EnigmaHome"
                  Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s
                  Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s
                  Mode:Master
                  Extra:tsf=000000cb3bb10a5c
                  Extra: Last beacon: 696ms ago
                  IE: Unknown: 000A456E69676D61486F6D65
                  IE: Unknown: 010482848B96
                  IE: Unknown: 030106
                  IE: Unknown: 0706484B20010B1E
                  IE: Unknown: 2A0107
                  IE: Unknown: 32080C1218243048606C
                  IE: Unknown: DD180050F2020101000003A4000027A4000042435E0062322F00
        Cell 02 - Address: C8:3A:35:34:C1:60
                  Channel:6
                  Frequency:2.437 GHz (Channel 6)
                  Quality=22/70  Signal level=-88 dBm
                  Encryption key:on
                  ESSID:"Tenda"
                  Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 9 Mb/s; 18 Mb/s; 36 Mb/s; 54 Mb/s
                  Bit Rates:6 Mb/s; 12 Mb/s; 24 Mb/s; 48 Mb/s
                  Mode:Master
                  Extra:tsf=000001336e70ffdd
                  Extra: Last beacon: 716ms ago
                  IE: Unknown: 000554656E6461
                  IE: Unknown: 010882848B961224486C
                  IE: Unknown: 030106
                  IE: Unknown: 32040C183060
                  IE: Unknown: 0706434E20010D10
                  IE: Unknown: 33082001020304050607
                  IE: Unknown: 33082105060708090A0B
                  IE: Unknown: DD270050F204104A0001101044000101104700102880288028801880A880C83A3534C160103C000101
                  IE: Unknown: 050400010000
                  IE: Unknown: 2A0106
                  IE: Unknown: 2D1AEC0117FFFF0000000000000000000000000000000C0000000000
                  IE: Unknown: 3D1606000500000000000000000000000000000000000000
                  IE: Unknown: 7F0101
                  IE: IEEE 802.11i/WPA2 Version 1
                      Group Cipher : CCMP
                      Pairwise Ciphers (1) : CCMP
                      Authentication Suites (1) : PSK
                      Preauthentication Supported
                  IE: Unknown: DD180050F2020101000003A4000027A4000042435E0062322F00
                  IE: Unknown: 0B05010089127A
                  IE: Unknown: DD1E00904C33EC0117FFFF0000000000000000000000000000000C0000000000
                  IE: Unknown: DD1A00904C3406000500000000000000000000000000000000000000
                  IE: Unknown: DD07000C4304000000
        Cell 03 - Address: 00:1E:73:C8:62:BF
                  Channel:6
                  Frequency:2.437 GHz (Channel 6)
                  Quality=47/70  Signal level=-63 dBm
                  Encryption key:on
                  ESSID:"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
                  Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s
                  Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s
                  Mode:Master
                  Extra:tsf=000000cb3bac614e
                  Extra: Last beacon: 1064ms ago
                  IE: Unknown: 00110000000000000000000000000000000000
                  IE: Unknown: 010482848B96
                  IE: Unknown: 030106
                  IE: Unknown: 050C010200000000000000000000
                  IE: Unknown: 0706484B20010B1E
                  IE: Unknown: 2A0107
                  IE: Unknown: 32080C1218243048606C
                  IE: Unknown: DD070050F202000100

    Read the article

  • Solving Euler Project Problem Number 1 with Microsoft Axum

    - by Jeff Ferguson
    Note: The code below applies to version 0.3 of Microsoft Axum. If you are not using this version of Axum, then your code may differ from that shown here. I have just solved Problem 1 of Project Euler using Microsoft Axum. The problem statement is as follows: If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000. My Axum-based solution is as follows:

    namespace EulerProjectProblem1
    {
        // http://projecteuler.net/index.php?section=problems&id=1
        //
        // If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9.
        // The sum of these multiples is 23.
        // Find the sum of all the multiples of 3 or 5 below 1000.

        channel SumOfMultiples
        {
            input int Multiple1;
            input int Multiple2;
            input int UpperBound;
            output int Sum;
        }

        agent SumOfMultiplesAgent : channel SumOfMultiples
        {
            public SumOfMultiplesAgent()
            {
                int Multiple1 = receive(PrimaryChannel::Multiple1);
                int Multiple2 = receive(PrimaryChannel::Multiple2);
                int UpperBound = receive(PrimaryChannel::UpperBound);
                int Sum = 0;
                for(int Index = 1; Index < UpperBound; Index++)
                {
                    if((Index % Multiple1 == 0) || (Index % Multiple2 == 0))
                        Sum += Index;
                }
                PrimaryChannel::Sum <-- Sum;
            }
        }

        agent MainAgent : channel Microsoft.Axum.Application
        {
            public MainAgent()
            {
                var SumOfMultiples = SumOfMultiplesAgent.CreateInNewDomain();
                SumOfMultiples::Multiple1 <-- 3;
                SumOfMultiples::Multiple2 <-- 5;
                SumOfMultiples::UpperBound <-- 1000;
                var Sum = receive(SumOfMultiples::Sum);
                System.Console.WriteLine(Sum);
                System.Console.ReadLine();
                PrimaryChannel::ExitCode <-- 0;
            }
        }
    }

    Let’s take a look at the various parts of the code. I begin by setting up a channel called SumOfMultiples that accepts three inputs and one output. The first two of the three inputs will represent the two possible multiples, which are three and five in this case. The third input will represent the upper bound of the problem scope, which is 1000 in this case. The lone output of the channel represents the sum of all of the matching multiples:

    channel SumOfMultiples
    {
        input int Multiple1;
        input int Multiple2;
        input int UpperBound;
        output int Sum;
    }

    I then set up an agent that uses the channel. The agent, called SumOfMultiplesAgent, receives the three inputs from the channel, stores the results in local variables, and performs the for loop that iterates from 1 to the received upper bound.
    The agent keeps track of the sum in a local variable and stores the sum in the output portion of the channel:

    agent SumOfMultiplesAgent : channel SumOfMultiples
    {
        public SumOfMultiplesAgent()
        {
            int Multiple1 = receive(PrimaryChannel::Multiple1);
            int Multiple2 = receive(PrimaryChannel::Multiple2);
            int UpperBound = receive(PrimaryChannel::UpperBound);
            int Sum = 0;
            for(int Index = 1; Index < UpperBound; Index++)
            {
                if((Index % Multiple1 == 0) || (Index % Multiple2 == 0))
                    Sum += Index;
            }
            PrimaryChannel::Sum <-- Sum;
        }
    }

    The application’s main agent, therefore, simply creates a new SumOfMultiplesAgent in a new domain, prepares the channel with the inputs that we need, and then receives the Sum from the output portion of the channel:

    agent MainAgent : channel Microsoft.Axum.Application
    {
        public MainAgent()
        {
            var SumOfMultiples = SumOfMultiplesAgent.CreateInNewDomain();
            SumOfMultiples::Multiple1 <-- 3;
            SumOfMultiples::Multiple2 <-- 5;
            SumOfMultiples::UpperBound <-- 1000;
            var Sum = receive(SumOfMultiples::Sum);
            System.Console.WriteLine(Sum);
            System.Console.ReadLine();
            PrimaryChannel::ExitCode <-- 0;
        }
    }

    The result of the calculation (which, by the way, is 233,168) is sent to the console using good ol’ Console.WriteLine().
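
    For comparison — and only as a sanity check on that number, not as part of the original Axum sample — the same sum can be computed with plain C# and LINQ; the short program below is an illustrative sketch:

    using System;
    using System.Linq;

    class VerifyEuler1
    {
        static void Main()
        {
            // Sum every natural number below 1000 that is a multiple of 3 or 5.
            int sum = Enumerable.Range(1, 999)
                                .Where(i => i % 3 == 0 || i % 5 == 0)
                                .Sum();
            Console.WriteLine(sum);   // prints 233168, matching the Axum agent's output
        }
    }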

    Read the article
