Search Results

Search found 1325 results on 53 pages for 'factor'.


  • Browser-Incompatibility with image alignment in CSS using YUI grid (Firefox + Opera)

    - by Rotimi
    I'm having trouble with the alignment of two images in the footer of my temporary website (http://www.rotimioyewole.com). I'm new to the YUI grid, which I think may be a factor. It should look roughly like this (it works correctly in Chrome and Safari; I haven't tested IE yet): (http://cl.ly/44fH) But in Firefox and Opera it looks like this: http://cl.ly/44aO If I can get some sort of consistency, the website would at least be presentable. Ideally, I would also like to align both images on the same Y axis, as well as the text next to the icons. I had trouble figuring out how to search for a solution... can anybody help me? Thanks in advance.

    Read the article

  • postback __EVENTTARGET ID in Page.Request.Form has $ not _ in ASP.NET

    - by Industry86
    I am using the ToolkitScriptManager from the AJAX Control Toolkit and I am having a problem finding my button's ID. The IDs of my controls come back with $ symbols instead of _ symbols, like the following: Grid$ctl06$insertButton This obviously causes problems when attempting to find the control from the Page.Request.Form keys. I cannot seem to find the determining factor that would cause this. Now, I know this is the name attribute, and in my page source I see that the id uses _, so why does Page.Request.Form show up with the $ symbol instead? Has anybody encountered this before?
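
    For context: in WebForms the posted form keys carry each control's UniqueID, whose naming-container separator is '$', while the rendered HTML id attribute is the ClientID, which uses '_'. A minimal sketch (a hypothetical code-behind; the page and control names are assumptions) of how the two relate:

        using System;
        using System.Web.UI;

        public partial class SomePage : Page   // hypothetical page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // For controls that post back via __doPostBack, __EVENTTARGET holds the
                // UniqueID of the posting control, e.g. "Grid$ctl06$insertButton".
                string target = Request.Form["__EVENTTARGET"];
                if (!string.IsNullOrEmpty(target))
                {
                    // FindControl can resolve the '$'-separated UniqueID path...
                    Control postedControl = Page.FindControl(target);

                    // ...while the rendered id attribute is the same path with '_' instead of '$'.
                    string clientStyleId = target.Replace('$', '_');   // "Grid_ctl06_insertButton"
                }
            }
        }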

    Read the article

  • How to customize notches in ggplot boxplot

    - by cjy8709
    I have a question on how to change/customize the upper and lower limits of the notch on a boxplot created by ggplot2. I looked through the function stat_boxplot and found that ggplot calculates the notch limits with the equation median +/- 1.58 * IQR / sqrt(n). However, instead of that equation I want to supply my own set of upper and lower notch limits. My data has 4 factors, and for each factor I calculated the median and ran a bootstrap to get a 95% confidence interval of that median. So in the end I would like each boxplot to have its own upper and lower notch limits. I'm not sure if this is even possible in ggplot and was wondering if anyone has an idea of how to do this. Thanks again!

    Read the article

  • C# managed dll call or unmanaged dll call?

    - by 5YrsLaterDBA
    I was asked to make two dll calls from our application. These two dlls come from another group and another company. I have read a little about managed and unmanaged code, and I would prefer to make a managed call. But is the choice between managed and unmanaged up to the caller only, or does it also depend on the callee? Can all dlls be called with managed code? If the callee is also a factor, how can I know whether a given dll can be called with managed code?
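
    For context: this does depend on the callee. A managed (.NET) assembly can simply be referenced and called like any other .NET type, while a native dll has to be called through P/Invoke (or COM interop). A minimal sketch, where "ThirdParty.dll" and "DoWork" are placeholder names, showing both styles plus a quick way to tell which kind of dll you have:

        using System;
        using System.Reflection;
        using System.Runtime.InteropServices;

        class DllCallSketch
        {
            // Native (unmanaged) dll: called through P/Invoke.
            [DllImport("ThirdParty.dll")]
            static extern int DoWork(int value);

            static void Main()
            {
                // A managed dll would instead be added as a project reference and called
                // directly, e.g. new ThirdParty.Worker().DoWork(42); (hypothetical type)

                // Quick check: only managed assemblies carry .NET metadata.
                // (The path is a placeholder; the file must exist for this call.)
                try
                {
                    AssemblyName.GetAssemblyName("ThirdParty.dll");
                    Console.WriteLine("Managed assembly - reference it and call it directly.");
                }
                catch (BadImageFormatException)
                {
                    Console.WriteLine("Native dll - call it via P/Invoke, as above.");
                }
            }
        }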

    Read the article

  • How do I split the output from mysqldump into smaller files?

    - by lindelof
    I need to move entire tables from one MySQL database to another. I don't have full access to the second one, only phpMyAdmin access. I can only upload (compressed) sql files smaller than 2MB. But the compressed output from a mysqldump of the first database's tables is larger than 10MB. Is there a way to split the output from mysqldump into smaller files? I cannot use split(1) since I cannot cat(1) the files back on the remote server. Or is there another solution I have missed? Edit: The --extended-insert=FALSE option to mysqldump suggested by the first poster yields a .sql file that can then be split into importable files, provided that split(1) is called with a suitable --lines option. By trial and error I found that bzip2 compresses the .sql files by a factor of 20, so I needed to figure out how many lines of sql code correspond roughly to 40MB.
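
    For illustration, the same line-based chunking can also be scripted when split(1) is not available. A minimal C# sketch, assuming the dump was produced with --extended-insert=FALSE so each INSERT sits on its own line (the file name and chunk size are placeholders; with bzip2 compressing roughly 20:1, a chunk of about 40MB of SQL should compress to under 2MB):

        using System;
        using System.IO;

        class SqlDumpSplitter
        {
            static void Main()
            {
                const string dumpFile = "dump.sql";   // placeholder input file
                const int linesPerChunk = 200000;     // tune so each chunk is roughly 40MB of text

                int chunk = 0, count = 0;
                StreamWriter writer = null;
                foreach (string line in File.ReadLines(dumpFile))
                {
                    // Start a new chunk file at the beginning and every linesPerChunk lines.
                    if (writer == null || count == linesPerChunk)
                    {
                        writer?.Dispose();
                        writer = new StreamWriter("dump_part" + chunk.ToString("D3") + ".sql");
                        chunk++;
                        count = 0;
                    }
                    writer.WriteLine(line);
                    count++;
                }
                writer?.Dispose();
            }
        }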

    Read the article

  • Fastest C++ file compression library available?

    - by Fabien Hure
    I need to find the best library for compressing in-memory data. I am currently using zlib, but I am wondering if there is a better compression library; better in terms of performance and memory footprint. It should be able to handle multiple files in the same archive. I am looking for a C/C++ library. Performance is the key factor. The files being compressed are small to large XML files.

    Read the article

  • Multiply by a negative integer just by shifting

    - by stex
    Hi, I'm trying to find a way to multiply an integer value by a negative value just with bit shifting. Usually I do this by shifting by the power of 2 which is closest to my factor and just adding/subtracting the rest, e.g. x * 7 = ((x << 3) - x). Let's say I want to calculate x * -112. The only way I can imagine is -((x << 7) - (x << 4)), i.e. to calculate x * 112 and negate it afterwards. Is there a "prettier" way to do this?
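
    For what it's worth, since -112 = 16 - 128, the negation can be folded into the subtraction: x * -112 == (x << 4) - (x << 7). A minimal C# sketch checking both forms against plain multiplication:

        using System;

        class ShiftMultiply
        {
            static void Main()
            {
                foreach (int x in new[] { -5, 0, 3, 1234 })
                {
                    int expected    = x * -112;
                    int negatedForm = -((x << 7) - (x << 4)); // 112x computed, then negated
                    int foldedForm  = (x << 4) - (x << 7);    // 16x - 128x = -112x directly
                    Console.WriteLine($"{x}: {expected} {negatedForm} {foldedForm}");
                }
            }
        }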

    Read the article

  • Can I use Perl to download files in a PHP script?

    - by jision
    I have a file-sharing website made in PHP which lets users download files. But as I am using a hosted server, there are limitations on timeouts and other factors that come into play for a download script coded in PHP. This causes large files to get corrupted while downloading. I am looking for a solution for this. I have tried using symbolic links, but the forced download is not taking place. I am thinking of using Perl to download the files but don't have any clue whatsoever. Can anyone help me out with this problem?

    Read the article

  • Extract rows for the first occurrence of a variable in a data frame

    - by user2614883
    I have a data frame with two variables, Date and Taxa, and want to get the date of the first time each taxon occurs. There are 9 different dates and 40 different taxa in the data frame, which consists of 172 rows, but my answer should only have 40 rows. Taxa is a factor and Date is a date. For example, my data frame (called 'species') is set up like this:
        Date        Taxa
        2013-07-12  A
        2011-08-31  B
        2012-09-06  C
        2012-05-17  A
        2013-07-12  C
        2012-09-07  B
    and I would be looking for an answer like this:
        Date        Taxa
        2012-05-17  A
        2011-08-31  B
        2012-09-06  C
    I tried using t.first <- species[unique(species$Taxa),] and it gave me the correct number of rows, but there were Taxa repeated. If I just use unique(species$Taxa) it appears to give me the right answer, but then I don't know the date when each taxon first occurred. Thanks for any help.

    Read the article

  • Two pass blur shader using libgdx tile map renderer

    - by Alexandre GUIDET
    I am trying to apply the following technique: a blur effect using a two-pass shader in my libgdx game, using the OrthogonalTiledMapRenderer. The idea is to blur the background, which is also a tilemap but rendered with another camera with a different zoom applied. Here is a screen capture without the effect. Using the OrthogonalTiledMapRenderer sprite batch like this: backgroundMapRenderer.getSpriteBatch().setShader(shaderBlurX); backgroundMapRenderer.render(layerBackground); I get the following render, which is OK for the X blur pass. I then try using a frame buffer object like in this example, but the effect seems to be far too zoomed in. I may be messing up the camera and the zoom factor. Here is the code:
        private ShaderProgram shaderBlurX;
        private ShaderProgram shaderBlurY;
        private int FBO_SIZE = 800;
        private FrameBuffer targetA;
        private FrameBuffer targetB;

        targetA = new FrameBuffer(Pixmap.Format.RGBA8888, FBO_SIZE, FBO_SIZE, false);
        targetB = new FrameBuffer(Pixmap.Format.RGBA8888, FBO_SIZE, FBO_SIZE, false);

        targetA.begin();
        Gdx.gl.glClearColor(1, 1, 1, 0);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
        backgroundMapRenderer.render(layerBackground);
        targetA.end();

        targetB.begin();
        Gdx.gl.glClearColor(1, 1, 1, 0);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
        backgroundMapRenderer.getSpriteBatch().setShader(shaderBlurX);
        backgroundMapRenderer.render(layerBackground);
        targetB.end();

        TextureRegion back = new TextureRegion(targetB.getColorBufferTexture());
        back.flip(false, true);
        backgroundMapRenderer.getSpriteBatch().setProjectionMatrix(backgroundCamera.combined);
        backgroundMapRenderer.getSpriteBatch().setShader(shaderBlurY);
        backgroundMapRenderer.getSpriteBatch().begin();
        backgroundMapRenderer.getSpriteBatch().draw(back, 0, 0);
        backgroundMapRenderer.getSpriteBatch().end();
    I know I am doing something the wrong way, but I can't find any resources about applying two-pass shaders with the tile map renderer. Does someone know how to achieve this?

    Read the article

  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
    This morning we released a huge set of updates to Windows Azure.  These new capabilities include: Backup Services: General Availability of Windows Azure Backup Services Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration Active Directory: Securely manage hundreds of SaaS applications Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support All of these improvements are now available to use immediately.  Below are more details about them. Backup Service: General Availability Release of Windows Azure Backup Today we are releasing Windows Azure Backup Service as a general availability service.  This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios. Windows Azure Backup is a cloud based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service provides IT administrators and developers with the option to back up and protect critical data in an easily recoverable way from any location with no upfront hardware cost. Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, Windows Azure Backup Service also provides capability to backup data from System Center Data Protection Manager and Windows Server Essentials, to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored entirely secured and can’t be decrypted by anyone but yourself). Getting Started To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal.  Click New->Data Services->Recovery Services->Backup Vault to do this: Once the backup vault is created you’ll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it: Once the servers you want to backup are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this: Tutorial: Schedule Backups Using the Windows Azure Backup Agent This tutorial helps you with setting up a backup schedule for your registered Windows Servers. Additionally, it also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule. Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent This tutorial helps you with recovering data from a backup. Additionally, it also explains how to use Windows PowerShell cmdlets to do the same tasks. Below are some of the key benefits the Windows Azure Backup Service provides: Simple configuration and management. 
Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk, or to the cloud. Block level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block level changes and only transferring the changed blocks, hence reducing the storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changes blocks between these versions. Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in the cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can setup throttling and configure how the Windows Azure Backup service utilizes the network bandwidth when backing up or restoring information. Data integrity is verified in the cloud. In addition to the secure backups, the backed up data is also automatically checked for integrity once the backup is done. As a result, any corruptions which may arise due to data transfer can be easily identified and are fixed automatically. Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs. Hyper-V Recovery Manager: Now Available in Public Preview I’m excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM). Windows Azure Hyper-V Recovery Manager helps protect your business critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement Disaster Recovery and restore important services accurately, consistently, and with minimal downtime. Application data in an Hyper-V Recovery Manager scenarios always travels on your on-premise replication channel. Only metadata (such as names of logical clouds, virtual machines, networks etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted. You can begin using Windows Azure Hyper-V Recovery today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal.  You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson’s 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager follow our detailed step-by-step guide. Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Today’s Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines.  
These improvements include: Ability to Delete both VM Instances + Attached Disks in One Operation Prior to today’s release, when you deleted VMs within Windows Azure we would delete the VM instance – but not delete the drives attached to the VM.  You had to manually delete these yourself from the storage account.  With today’s update we’ve added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM:   We’ve also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines.  To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the “Delete” button: Warnings on Availability Sets with Only One Virtual Machine In Them One of the nice features that Windows Azure Virtual Machines supports is the concept of “Availability Sets”.  An “availability set” allows you to define a tier/role (e.g. webfrontends, databaseservers, etc) that you can map Virtual Machines into – and when you do this Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations.  This enables you to deploy applications in a high availability way. One issue we’ve seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set).  With today’s release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set to help highlight this: You can learn more about configuring the availability of your virtual machines here. Configuring SQL Server Always On SQL Server Always On is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today’s Windows Azure release makes it even easier to configure SQL Server Always On by enabling “Direct Server Return” endpoints to be configured and managed within the Windows Azure Management Portal.  Previously, setting this up required using PowerShell to complete the endpoint configuration.  Starting today you can enable this simply by checking the “Direct Server Return” checkbox: You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here. Active Directory: Application Access Enhancements This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory.  This service enables you to securely implement single-sign-on (SSO) support against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc) as well as LOB based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013). Since the initial preview we’ve enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS Applications: Earlier this month we published an update on dates and pricing for when the service will be released in general availability form.  
In this blog post we announced our intention to release the service in general availability form by the end of the year.  We also announced that the below features would be available in a free tier with it: SSO to every SaaS app we integrate with – Users can Single Sign On to any app we are integrated with at no charge. This includes all the top SAAS Apps and every app in our application gallery whether they use federation or password vaulting. Application access assignment and removal – IT Admins can assign access privileges to web applications to the users in their active directory assuring that every employee has access to the SAAS Apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges assuring data security and minimizing IP loss User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems. Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports giving you visibility into which users are using which applications, when they were using them and where they are using them from. In addition, we'll alert you to un-usual usage patterns for instance when a user logs in from multiple locations at the same time. Our Application Access Panel – Users are logging in from every type of devices including Windows, iOS, & Android. Not all of these devices handle authentication in the same manner but the user doesn't care. They need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access access and launch their apps from any device and anywhere. You can learn more about our plans for application management with Windows Azure Active Directory here.  Try out the preview and start using it today. Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have).  With today’s Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering.  Specifically: 1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them.  You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users. 2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory.  Both options are free.  The later approach is ideal for companies that wish to use their corporate user identities to sign-in and manage Windows Azure resources.  It also ensures that if an employee leaves an organization, his or her access control rights to the company’s Windows Azure resources are immediately revoked. 
3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign-in and perform management operations.  Prior to today’s release customers had to download and use management certificates (which were not scoped to individual users) to perform management operations.  We still support this management certificate approach (don’t worry – nothing will stop working).  But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward.  4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allow you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials.  This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure. Below are some details on how all of this works: Subscriptions within a Directory As part of today’s update, we have associated all existing Window Azure accounts with a Windows Azure Active Directory (and created one for you if you don’t already have one). When you login to the Windows Azure Management Portal you’ll now see the directory name in the URI of the browser.  For example, in the screen-shot below you can see that I have a “scottgu” directory that my subscriptions are hosted within: Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign-into Windows Azure.  These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don’t want to.  In the scenario above I’m actually logged in using my @hotmail.com based Microsoft ID which is now mapped to a “scottgu” active directory that was created for me.  By default everything will continue to work just like you used to before. Manage your Directory You can manage an Active Directory (including the one we now create for you by default) by clicking the “Active Directory” tab in the left-hand side of the portal.  This will list all of the directories in your account.  Clicking one the first time will display a getting started page that provides documentation and links to perform common tasks with it: You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the configure tab to do this).  You can also setup the directory to automatically sync with an on-premises Active Directory using the “Directory Integration” tab. Note that users within a directory by default do not have admin rights to login or manage Windows Azure based resources.  You still need to explicitly grant them co-admin permissions on a subscription for them to login or manage resources in Windows Azure.  You can do this by clicking the Settings tab on the left-hand side of the portal and then by clicking the administrators tab within it. 
Sign-In Integration within Visual Studio If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates.  You can now just right-click on the “Windows Azure” icon within the Server Explorer and choose the “Connect to Windows Azure” context menu option to do so: Doing this will prompt you to enter the email address of the username you wish to sign-in with (make sure this account is a user in your directory with co-admin rights on a subscription): You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based Organizational account as the email.  The dialog will update with an appropriate login prompt depending on which type of email address you enter: Once you sign-in you’ll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio server explorer and be available to start using: No downloading of management certificates required.  All of the authentication was handled using your Windows Azure Active Directory! Manage Subscriptions across Multiple Directories If you have already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions->directories as part of today’s update.  If you don’t like the default subscription-to-directory mapping we have done you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it: If you want to map a subscription under a different directory in your account, simply select the subscription from the list, and then click the “Edit Directory” button to choose which directory to map it to.  Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working.  We’ve made the directory->subscription mapping process self-service so that you always have complete control and can map things however you want. Filtering By Directory and Subscription Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions).  If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant.  This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure Account:   Windows Azure SDK 2.2 Today we are also releasing a major update of our Windows Azure SDK.  The Windows Azure SDK 2.2 release adds some great new features including: Visual Studio 2013 Support Integrated Windows Azure Sign-In support within Visual Studio Remote Debugging Cloud Services with Visual Studio Firewall Management support within Visual Studio for SQL Databases Visual Studio 2013 RTM VM Images for MSDN Subscribers Windows Azure Management Libraries for .NET Updated Windows Azure PowerShell Cmdlets and ScriptCenter I’ll post a follow-up blog shortly with more details about all of the above. 
Additional Updates In addition to the above enhancements, today’s release also includes a number of additional improvements: AutoScale: Richer time and date based scheduling support (set different rules on different dates) AutoScale: Ability to Scale to Zero Virtual Machines (very useful for Dev/Test scenarios) AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules Operation Logs: Auditing support for Service Bus management operations Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2.  It has so much goodness in it that I have a whole second blog post coming shortly on it! :-) Summary Today’s Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • SQL SERVER – When are Statistics Updated – What triggers Statistics to Update

    - by pinaldave
    If you are an SQL Server Consultant/Trainer involved with Performance Tuning and Query Optimization, I am sure you have faced the following questions many times. When are statistics updated? What is the interval of statistics updates? What is the algorithm behind updating statistics? These are puzzling questions, and there are more. I searched the Internet as well as many official MS documents in order to find answers. All of them provided an almost similar algorithm; however, in many places I have seen a bit of variation in the algorithm as well. I have finally compiled the list of various algorithms and decided to share the most common “factor” in all of them. I would like to ask for your suggestions as to whether the following details on when statistics are updated are accurate or not. I will update this blog post with accurate information after receiving your ideas. The answer I have found here is when statistics expire, not when they are automatically updated. I need your help here to answer when they are updated.
    Permanent table
    If the table has no rows, statistics are updated when there is a single change in the table.
    If the number of rows in the table is less than 500, statistics are updated for every 500 changes in the table.
    If the number of rows in the table is more than 500, statistics are updated for every 500 + 20% of rows changes in the table.
    Temporary table
    If the table has no rows, statistics are updated when there is a single change in the table.
    If the number of rows in the table is less than 6, statistics are updated for every 6 changes in the table.
    If the number of rows in the table is less than 500, statistics are updated for every 500 changes in the table.
    If the number of rows in the table is more than 500, statistics are updated for every 500 + 20% of rows changes in the table.
    Table variable
    There are no statistics for table variables.
    If you want to read further about statistics, I suggest that you read the white paper Statistics Used by the Query Optimizer in Microsoft SQL Server 2008. Let me know your opinions about statistics, as well as whether there is any update to the above algorithm. Reference: Pinal Dave (http://blog.SQLAuthority.com)
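
    As a worked illustration of the permanent-table thresholds listed above, here is a minimal C# sketch that simply encodes the rules from this post (not an official formula, and the post itself asks whether they are accurate) to show how many row changes are needed before an automatic statistics update:

        using System;

        class StatsUpdateThreshold
        {
            // Number of changes after which statistics are updated, per the
            // permanent-table rules described in this post.
            static long ChangesNeeded(long rowCount)
            {
                if (rowCount == 0) return 1;               // any single change
                if (rowCount < 500) return 500;            // every 500 changes
                return 500 + (long)(0.20 * rowCount);      // 500 + 20% of rows
            }

            static void Main()
            {
                foreach (long rows in new long[] { 0, 100, 500, 10000, 1000000 })
                    Console.WriteLine("{0,10} rows -> update after {1} changes", rows, ChangesNeeded(rows));
            }
        }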

    Read the article

  • Deciding which technology to use is a big decision when no technology is an obvious choice

    Deciding which technology to use in a new venture or project is a big decision for any company when no technology is an obvious choice. It is always best to analyze the current requirements of the project and also evaluate the existing technology climate, so that the correct technology for the situation at the time is selected. When evaluating the requirements of a new project, it is best to be open to as many technologies as possible initially so a company can be sure that the right decision gets made. Another important aspect of the technology decision is what the current network and hardware environment can handle, and what would need to be adjusted if a specific technology were selected. For example, if the current network operating system is Linux, then VB6 would force a huge change in the current computing environment. However, if the current network operating system were Windows-based, then very little change would be needed to allow for VB6, if any change had to be made at all. Finally, and most importantly, an analysis should be done of the current technical employees, covering their skills and aspirations. For example, if you have a team of Java programmers, then forcing them to build something in C# might not be an ideal situation. However, having a team of VB.NET developers who want to develop something in C# would be a better situation in this example, because they are already familiar with the .NET Framework and have a desire to use the new technology. In addition to this analysis, the cost associated with building and maintaining the project is also a key factor. If two languages are ideal for a project but one technology will increase the budget or timeline by 50%, then it might not be the best choice in that situation. An ideal situation for developing C# applications would be a project that is built on existing Microsoft technologies. An example of this would be a company that uses Windows Server 2008 as its network operating system, Windows XP Pro as its main desktop operating system, Microsoft SQL Server 2008 as its primary database, and has a team of developers experienced in the .NET Framework. In that situation Java would be a poor technology decision, based on the current computing environment and the potential lack of Java experience among the company's developers. It would take the developers longer to develop the application due to the fact that they would have to first learn the language and then become comfortable with it. Although these barriers do exist, it does not mean the project is not doable if the company and developers are committed to it.

    Read the article

  • How to Quickly Cut a Clip From a Video File with Avidemux

    - by Trevor Bekolay
    Whether you’re cutting out the boring parts of your vacation video or getting a hilarious scene for an animated GIF, Avidemux provides a quick and easy way to cut clips from any video file. It’s overkill to use a full-featured video editing program if you just want to cut a few clips from a video file. Even programs that are designed to be small can have confusing interfaces when dealing with video. We’ve found that a great free program, Avidemux, makes the job of cutting clips extremely simple. Note: While the screenshots in this guide are taken from the Windows version, Avidemux runs on all of the major platforms – Windows, Mac OS X and Linux (GTK). Image by Keith Williamson. Cutting Clips from a Video File Open up Avidemux, and load the video file that you want to work with. If you get a prompt like this one: we recommend clicking Yes to use the safer mode. Find the portion of the video that you’d like to isolate. Get as close as you can to the start of the clip you want to cut. Once you find the start of your clip, look at the “Frame Type” of the current frame. You want it to read I; if it isn’t frame type I, then use the single left and right arrow buttons to go forward or backward one frame until you find an appropriate I frame. Once you’ve found the right starting frame, click the button with the A over a red bar. This will set the start of the clip. Advance to where you want your clip to end. Click on the button with a B when you’ve found the appropriate frame. This frame can be of any type. You can now save the clip, either by going to File –> Save –> Save Video… or by pressing Ctrl+S. Give the file a name, and Avidemux will prepare your clip. And that’s it! You should now have a movie file that contains only the portion of the original file that you want. Download Avidemux free for all platforms

    Read the article

  • Fed Authentication Methods in OIF / IdP

    - by Damien Carru
    This article is a continuation of my previous entry where I explained how OIF/IdP leverages OAM to authenticate users at runtime: OIF/IdP internally forwards the user to OAM and indicates which Authentication Scheme should be used to challenge the user if needed OAM determine if the user should be challenged (user already authenticated, session timed out or not, session authentication level equal or higher than the level of the authentication scheme specified by OIF/IdP…) After identifying the user, OAM internally forwards the user back to OIF/IdP OIF/IdP can resume its operation In this article, I will discuss how OIF/IdP can be configured to map Federation Authentication Methods to OAM Authentication Schemes: When processing an Authn Request, where the SP requests a specific Federation Authentication Method with which the user should be challenged When sending an Assertion, where OIF/IdP sets the Federation Authentication Method in the Assertion Enjoy the reading! Overview The various Federation protocols support mechanisms allowing the partners to exchange information on: How the user should be challenged, when the SP/RP makes a request How the user was challenged, when the IdP/OP issues an SSO response When a remote SP partner redirects the user to OIF/IdP for Federation SSO, the message might contain data requesting how the user should be challenged by the IdP: this is treated as the Requested Federation Authentication Method. OIF/IdP will need to map that Requested Federation Authentication Method to a local Authentication Scheme, and then invoke OAM for user authentication/challenge with the mapped Authentication Scheme. OAM would authenticate the user if necessary with the scheme specified by OIF/IdP. Similarly, when an IdP issues an SSO response, most of the time it will need to include an identifier representing how the user was challenged: this is treated as the Federation Authentication Method. 
When OIF/IdP issues an Assertion, it will evaluate the Authentication Scheme with which OAM identified the user: If the Authentication Scheme can be mapped to a Federation Authentication Method, then OIF/IdP will use the result of that mapping in the outgoing SSO response: AuthenticationStatement in the SAML Assertion OpenID Response, if PAPE is enabled If the Authentication Scheme cannot be mapped, then OIF/IdP will set the Federation Authentication Method as the Authentication Scheme name in the outgoing SSO response: AuthenticationStatement in the SAML Assertion OpenID Response, if PAPE is enabled Mappings In OIF/IdP, the mapping between Federation Authentication Methods and Authentication Schemes has the following rules: One Federation Authentication Method can be mapped to several Authentication Schemes In a Federation Authentication Method <-> Authentication Schemes mapping, a single Authentication Scheme is marked as the default scheme that will be used to authenticate a user, if the SP/RP partner requests the user to be authenticated via a specific Federation Authentication Method An Authentication Scheme can be mapped to a single Federation Authentication Method Let’s examine the following example and the various use cases, based on the SAML 2.0 protocol: Mappings defined as: urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport mapped to LDAPScheme, marked as the default scheme used for authentication BasicScheme urn:oasis:names:tc:SAML:2.0:ac:classes:X509 mapped to X509Scheme, marked as the default scheme used for authentication Use cases: SP sends an AuthnRequest specifying urn:oasis:names:tc:SAML:2.0:ac:classes:X509 as the RequestedAuthnContext: OIF/IdP will authenticate the use with X509Scheme since it is the default scheme mapped for that method. SP sends an AuthnRequest specifying urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport as the RequestedAuthnContext: OIF/IdP will authenticate the use with LDAPScheme since it is the default scheme mapped for that method, not the BasicScheme SP did not request any specific methods, and user was authenticated with BasisScheme: OIF/IdP will issue an Assertion with urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport as the FederationAuthenticationMethod SP did not request any specific methods, and user was authenticated with LDAPScheme: OIF/IdP will issue an Assertion with urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport as the FederationAuthenticationMethod SP did not request any specific methods, and user was authenticated with BasisSessionlessScheme: OIF/IdP will issue an Assertion with BasisSessionlessScheme as the FederationAuthenticationMethod, since that scheme could not be mapped to any Federation Authentication Method (in this case, the administrator would need to correct that and create a mapping) Configuration Mapping Federation Authentication Methods to OAM Authentication Schemes is protocol dependent, since the methods are defined in the various protocols (SAML 2.0, SAML 1.1, OpenID 2.0). 
As such, the WLST commands to set those mappings will involve: Either the SP Partner Profile and affect all Partners referencing that profile, which do not override the Federation Authentication Method to OAM Authentication Scheme mappings Or the SP Partner entry, which will only affect the SP Partner It is important to note that if an SP Partner is configured to define one or more Federation Authentication Method to OAM Authentication Scheme mappings, then all the mappings defined in the SP Partner Profile will be ignored. Authentication Schemes As discussed in the previous article, during Federation SSO, OIF/IdP will internally forward the user to OAM for authentication/verification and specify which Authentication Scheme to use. OAM will determine if a user needs to be challenged: If the user is not authenticated yet If the user is authenticated but the session timed out If the user is authenticated, but the authentication scheme level of the original authentication is lower than the level of the authentication scheme requested by OIF/IdP So even though an SP requests a specific Federation Authentication Method to be used to challenge the user, if that method is mapped to an Authentication Scheme and that at runtime OAM deems that the user does not need to be challenged with that scheme (because the user is already authenticated, session did not time out, and the session authn level is equal or higher than the one for the specified Authentication Scheme), the flow won’t result in a challenge operation. Protocols SAML 2.0 The SAML 2.0 specifications define the following Federation Authentication Methods for SAML 2.0 flows: urn:oasis:names:tc:SAML:2.0:ac:classes:unspecified urn:oasis:names:tc:SAML:2.0:ac:classes:InternetProtocol urn:oasis:names:tc:SAML:2.0:ac:classes:Telephony urn:oasis:names:tc:SAML:2.0:ac:classes:MobileOneFactorUnregistered urn:oasis:names:tc:SAML:2.0:ac:classes:PersonalTelephony urn:oasis:names:tc:SAML:2.0:ac:classes:PreviousSession urn:oasis:names:tc:SAML:2.0:ac:classes:MobileOneFactorContract urn:oasis:names:tc:SAML:2.0:ac:classes:Smartcard urn:oasis:names:tc:SAML:2.0:ac:classes:Password urn:oasis:names:tc:SAML:2.0:ac:classes:InternetProtocolPassword urn:oasis:names:tc:SAML:2.0:ac:classes:X509 urn:oasis:names:tc:SAML:2.0:ac:classes:TLSClient urn:oasis:names:tc:SAML:2.0:ac:classes:PGP urn:oasis:names:tc:SAML:2.0:ac:classes:SPKI urn:oasis:names:tc:SAML:2.0:ac:classes:XMLDSig urn:oasis:names:tc:SAML:2.0:ac:classes:SoftwarePKI urn:oasis:names:tc:SAML:2.0:ac:classes:Kerberos urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport urn:oasis:names:tc:SAML:2.0:ac:classes:SecureRemotePassword urn:oasis:names:tc:SAML:2.0:ac:classes:NomadTelephony urn:oasis:names:tc:SAML:2.0:ac:classes:AuthenticatedTelephony urn:oasis:names:tc:SAML:2.0:ac:classes:MobileTwoFactorUnregistered urn:oasis:names:tc:SAML:2.0:ac:classes:MobileTwoFactorContract urn:oasis:names:tc:SAML:2.0:ac:classes:SmartcardPKI urn:oasis:names:tc:SAML:2.0:ac:classes:TimeSyncToken Out of the box, OIF/IdP has the following mappings for the SAML 2.0 protocol: Only urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport is defined This Federation Authentication Method is mapped to: LDAPScheme, marked as the default scheme used for authentication FAAuthScheme BasicScheme BasicFAScheme This mapping is defined in the saml20-sp-partner-profile SP Partner Profile which is the default OOTB SP Partner Profile for SAML 2.0 An example of an AuthnRequest message sent by an SP to an IdP with the SP requesting a 
specific Federation Authentication Method to be used to challenge the user would be: <samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" Destination="https://idp.com/oamfed/idp/samlv20" ID="id-8bWn-A9o4aoMl3Nhx1DuPOOjawc-" IssueInstant="2014-03-21T20:51:11Z" Version="2.0">  <saml:Issuer ...>https://acme.com/sp</saml:Issuer>  <samlp:NameIDPolicy AllowCreate="false" Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified"/>  <samlp:RequestedAuthnContext Comparison="minimum">    <saml:AuthnContextClassRef xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">      urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport </saml:AuthnContextClassRef>  </samlp:RequestedAuthnContext></samlp:AuthnRequest> An example of an Assertion issued by an IdP would be: <samlp:Response ...>    <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>    <samlp:Status>        <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>    </samlp:Status>    <saml:Assertion ...>        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>        <dsig:Signature>            ...        </dsig:Signature>        <saml:Subject>            <saml:NameID ...>[email protected]</saml:NameID>            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">                <saml:SubjectConfirmationData .../>            </saml:SubjectConfirmation>        </saml:Subject>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">            <saml:AuthnContext>                <saml:AuthnContextClassRef>                    urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport                </saml:AuthnContextClassRef>            </saml:AuthnContext>        </saml:AuthnStatement>    </saml:Assertion></samlp:Response> An administrator would be able to specify a mapping between a SAML 2.0 Federation Authentication Method and one or more OAM Authentication Schemes SAML 1.1 The SAML 1.1 specifications define the following Federation Authentication Methods for SAML 1.1 flows: urn:oasis:names:tc:SAML:1.0:am:unspecified urn:oasis:names:tc:SAML:1.0:am:HardwareToken urn:oasis:names:tc:SAML:1.0:am:password urn:oasis:names:tc:SAML:1.0:am:X509-PKI urn:ietf:rfc:2246 urn:oasis:names:tc:SAML:1.0:am:PGP urn:oasis:names:tc:SAML:1.0:am:SPKI urn:ietf:rfc:3075 urn:oasis:names:tc:SAML:1.0:am:XKMS urn:ietf:rfc:1510 urn:ietf:rfc:2945 Out of the box, OIF/IdP has the following mappings for the SAML 1.1 protocol: Only urn:oasis:names:tc:SAML:1.0:am:password is defined This Federation Authentication Method is mapped to: LDAPScheme, marked as the default scheme used for authentication FAAuthScheme BasicScheme BasicFAScheme This mapping is defined in the saml11-sp-partner-profile SP Partner Profile which is the default OOTB SP Partner Profile for SAML 1.1 An example of an Assertion issued by an IdP would be: <samlp:Response ...>    <samlp:Status>        <samlp:StatusCode Value="samlp:Success"/>    </samlp:Status>    <saml:Assertion Issuer="https://idp.com/oam/fed" ...>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        
<saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">            <saml:Subject>                <saml:NameID ...>[email protected]</saml:NameID>                <saml:SubjectConfirmation>                   <saml:ConfirmationMethod>                       urn:oasis:names:tc:SAML:1.0:cm:bearer                   </saml:ConfirmationMethod>                </saml:SubjectConfirmation>            </saml:Subject>        </saml:AuthnStatement>        <dsig:Signature>            ...        </dsig:Signature>    </saml:Assertion></samlp:Response> Note: SAML 1.1 does not define an AuthnRequest message. An administrator would be able to specify a mapping between a SAML 1.1 Federation Authentication Method and one or more OAM Authentication Schemes OpenID 2.0 The OpenID 2.0 PAPE specifications define the following Federation Authentication Methods for OpenID 2.0 flows: http://schemas.openid.net/pape/policies/2007/06/phishing-resistant http://schemas.openid.net/pape/policies/2007/06/multi-factor http://schemas.openid.net/pape/policies/2007/06/multi-factor-physical Out of the box, OIF/IdP does not define any mappings for the OpenID 2.0 Federation Authentication Methods. For OpenID 2.0, the configuration will involve mapping a list of OpenID 2.0 policies to a list of Authentication Schemes. An example of an OpenID 2.0 Request message sent by an SP/RP to an IdP/OP would be: https://idp.com/openid?openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=checkid_setup&openid.claimed_id=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.identity=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.assoc_handle=id-6a5S6zhAKaRwQNUnjTKROREdAGSjWodG1el4xyz3&openid.return_to=https%3A%2F%2Facme.com%2Fopenid%3Frefid%3Did-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.realm=https%3A%2F%2Facme.com%2Fopenid&openid.ns.ax=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ax.mode=fetch_request&openid.ax.type.attr0=http%3A%2F%2Faxschema.org%2Fcontact%2Femail&openid.ax.if_available=attr0&openid.ns.pape=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Fpape%2F1.0&openid.pape.max_auth_age=0 An example of an Open ID 2.0 SSO Response issued by an IdP/OP would be: 
https://acme.com/openid?refid=id-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=id_res&openid.op_endpoint=https%3A%2F%2Fidp.com%2Fopenid&openid.claimed_id=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.identity=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.return_to=https%3A%2F%2Facme.com%2Fopenid%3Frefid%3Did-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.response_nonce=2014-03-24T19%3A20%3A06Zid-YPa2kTNNFftZkgBb460jxJGblk2g--iNwPpDI7M1&openid.assoc_handle=id-6a5S6zhAKaRwQNUnjTKROREdAGSjWodG1el4xyz3&openid.ns.ax=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ax.mode=fetch_response&openid.ax.type.attr0=http%3A%2F%2Fsession%2Fcount&openid.ax.value.attr0=1&openid.ax.type.attr1=http%3A%2F%2Fopenid.net%2Fschema%2FnamePerson%2Ffriendly&openid.ax.value.attr1=My+name+is+Bobby+Smith&openid.ax.type.attr2=http%3A%2F%2Fschemas.openid.net%2Fax%2Fapi%2Fuser_id&openid.ax.value.attr2=bob&openid.ax.type.attr3=http%3A%2F%2Faxschema.org%2Fcontact%2Femail&openid.ax.value.attr3=bob%40oracle.com&openid.ax.type.attr4=http%3A%2F%2Fsession%2Fipaddress&openid.ax.value.attr4=10.145.120.253&openid.ns.pape=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Fpape%2F1.0&openid.pape.auth_time=2014-03-24T19%3A20%3A05Z&openid.pape.auth_policies=http%3A%2F%2Fschemas.openid.net%2Fpape%2Fpolicies%2F2007%2F06%2Fphishing-resistant&openid.signed=op_endpoint%2Cclaimed_id%2Cidentity%2Creturn_to%2Cresponse_nonce%2Cassoc_handle%2Cns.ax%2Cax.mode%2Cax.type.attr0%2Cax.value.attr0%2Cax.type.attr1%2Cax.value.attr1%2Cax.type.attr2%2Cax.value.attr2%2Cax.type.attr3%2Cax.value.attr3%2Cax.type.attr4%2Cax.value.attr4%2Cns.pape%2Cpape.auth_time%2Cpape.auth_policies&openid.sig=mYMgbGYSs22l8e%2FDom9NRPw15u8%3D In the next article, I will provide examples on how to configure OIF/IdP for the various protocols, to map OAM Authentication Schemes to Federation Authentication Methods.Cheers,Damien Carru

    Read the article

  • ASP.NET Frameworks and Raw Throughput Performance

    - by Rick Strahl
    A few days ago I had a curious thought: With all these different technologies that the ASP.NET stack has to offer, what's the most efficient technology overall to return data for a server request? When I started this it was mere curiosity rather than a real practical need or result. Different tools are used for different problems and so performance differences are to be expected. But still I was curious to see how the various technologies performed relative to each just for raw throughput of the request getting to the endpoint and back out to the client with as little processing in the actual endpoint logic as possible (aka Hello World!). I want to clarify that this is merely an informal test for my own curiosity and I'm sharing the results and process here because I thought it was interesting. It's been a long while since I've done any sort of perf testing on ASP.NET, mainly because I've not had extremely heavy load requirements and because overall ASP.NET performs very well even for fairly high loads so that often it's not that critical to test load performance. This post is not meant to make a point  or even come to a conclusion which tech is better, but just to act as a reference to help understand some of the differences in perf and give a starting point to play around with this yourself. I've included the code for this simple project, so you can play with it and maybe add a few additional tests for different things if you like. Source Code on GitHub I looked at this data for these technologies: ASP.NET Web API ASP.NET MVC WebForms ASP.NET WebPages ASMX AJAX Services  (couldn't get AJAX/JSON to run on IIS8 ) WCF Rest Raw ASP.NET HttpHandlers It's quite a mixed bag, of course and the technologies target different types of development. What started out as mere curiosity turned into a bit of a head scratcher as the results were sometimes surprising. What I describe here is more to satisfy my curiosity more than anything and I thought it interesting enough to discuss on the blog :-) First test: Raw Throughput The first thing I did is test raw throughput for the various technologies. This is the least practical test of course since you're unlikely to ever create the equivalent of a 'Hello World' request in a real life application. The idea here is to measure how much time a 'NOP' request takes to return data to the client. So for this request I create the simplest Hello World request that I could come up for each tech. Http Handler The first is the lowest level approach which is an HTTP handler. public class Handler : IHttpHandler { public void ProcessRequest(HttpContext context) { context.Response.ContentType = "text/plain"; context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString()); } public bool IsReusable { get { return true; } } } WebForms Next I added a couple of ASPX pages - one using CodeBehind and one using only a markup page. The CodeBehind page simple does this in CodeBehind without any markup in the ASPX page: public partial class HelloWorld_CodeBehind : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { Response.Write("Hello World. Time is: " + DateTime.Now.ToString() ); Response.End(); } } while the Markup page only contains some static output via an expression:<%@ Page Language="C#" AutoEventWireup="false" CodeBehind="HelloWorld_Markup.aspx.cs" Inherits="AspNetFrameworksPerformance.HelloWorld_Markup" %> Hello World. Time is <%= DateTime.Now %> ASP.NET WebPages WebPages is the freestanding Razor implementation of ASP.NET. 
Here's the simple HelloWorld.cshtml page:

Hello World @DateTime.Now

WCF REST

WCF REST was the token REST implementation for ASP.NET before WebAPI and the in-between step from ASP.NET AJAX. I'd like to forget that this technology was ever considered for production use, but I'll include it here. Here's an OperationContract class:

[ServiceContract(Namespace = "")]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class WcfService
{
    [OperationContract]
    [WebGet]
    public Stream HelloWorld()
    {
        var data = Encoding.Unicode.GetBytes("Hello World" + DateTime.Now.ToString());
        var ms = new MemoryStream(data);

        // Add your operation implementation here
        return ms;
    }
}

WCF REST can return arbitrary results by returning a Stream object and a content type. The code above turns the string result into a stream and returns that back to the client.

ASP.NET AJAX (ASMX Services)

I also wanted to test ASP.NET AJAX services because prior to WebAPI this is probably still the most widely used AJAX technology for the ASP.NET stack today. Unfortunately I was completely unable to get this running on my Windows 8 machine. Visual Studio 2012 removed the ability to add ASP.NET AJAX services, and when I tried to manually add the service and configure the script handler references it simply did not work - I always got a SOAP response for GET and POST operations. No matter what I tried I always ended up getting XML results even when explicitly adding the ScriptHandler. So, I didn't test this (but the code is there - you might be able to test this on a Windows 7 box).

ASP.NET MVC

Next up is probably the most popular ASP.NET technology at the moment: MVC. Here's the small controller:

public class MvcPerformanceController : Controller
{
    public ActionResult Index()
    {
        return View();
    }

    public ActionResult HelloWorldCode()
    {
        return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() };
    }
}

ASP.NET WebAPI

Next up is WebAPI, which looks kind of similar to MVC. Except here I have to use a StringContent result to return the response:

public class WebApiPerformanceController : ApiController
{
    [HttpGet]
    public HttpResponseMessage HelloWorldCode()
    {
        return new HttpResponseMessage()
        {
            Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain")
        };
    }
}

Testing

Take a minute to think about each of the technologies… and take a guess which you think is most efficient in raw throughput. The fastest should be pretty obvious, but the others - maybe not so much. The testing I did is pretty informal since it was mainly to satisfy my curiosity - here's how I did this: I used Apache Bench (ab.exe) from a full Apache HTTP installation to run and log the test results of hitting the server. ab.exe is a small executable that lets you hit a URL repeatedly and provides counter information about the number of requests, requests per second etc. ab.exe and the batch file are located in the \LoadTests folder of the project. An ab.exe command line looks like this:

ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld

which hits the specified URL 100,000 times with a load factor of 20 concurrent requests. This results in output like this (screenshot in the original post). It's a great way to get a quick and dirty performance summary. Run it a few times to make sure there's not a large amount of variance. You might also want to do an IISRESET to clear the Web Server.
Just make sure you do a short test run to warm up the server first - otherwise your first run is likely to be skewed downwards. ab.exe also allows you to specify headers and provide POST data and many other things if you want to get a little more fancy. Here all tests are GET requests to keep it simple.

I ran each test with:
- 100,000 iterations
- Load factor of 20 concurrent connections
- IISReset before starting
- A short warm up run for API and MVC to make sure startup cost is mitigated

Here is the batch file I used for the test:

IISRESET

REM make sure you add
REM C:\Program Files (x86)\Apache Software Foundation\Apache2.2\bin
REM to your path so ab.exe can be found

REM Warm up
ab.exe -n100 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson
ab.exe -n100 -c20 http://localhost/aspnetperf/api/HelloWorldJson
ab.exe -n100 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld

ab.exe -n100000 -c20 http://localhost/aspnetperf/handler.ashx > handler.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_CodeBehind.aspx > AspxCodeBehind.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_Markup.aspx > AspxMarkup.txt
ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld > Wcf.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldCode > Mvc.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld > WebApi.txt

I ran each of these tests 3 times and took the average score for Requests/second, with the machine otherwise idle. I did see a bit of variance when running many tests, but the values used here are the medians. Part of this has to do with the fact that I ran the tests on my local machine - results would probably be more consistent running the load test on a separate machine hitting across the network. I ran these tests locally on my laptop, which is a Dell XPS with a quad core Sandy Bridge i7-2720QM @ 2.20GHz and a fast SSD drive, on Windows 8. CPU load during tests ran to about 70% max across all 4 cores (IOW, it wasn't overloading the machine). Ideally you can try running these tests on a separate machine hitting the local machine. If I remember correctly, IIS 7 and 8 on client OSs don't throttle, so the performance here should be representative.

Results

Ok, let's cut straight to the chase. Below are the results from the tests (charted in the original post). It's not surprising that the handler was fastest. But it was a bit surprising to me that the next fastest was WebForms, and especially Web Forms with markup over a CodeBehind page. WebPages also fared fairly well. MVC and WebAPI are a little slower, and the slowest by far is WCF REST (which again I find surprising). As mentioned at the start, the raw throughput tests are not overly practical as they don't test scripting performance for the HTML generation engines or serialization performance of the data engines. All it really does is give you an idea of the raw throughput for the technology from time of request to reaching the endpoint and returning minimal text data back to the client, which indicates full round trip performance. But it's still interesting to see that Web Forms performs better in throughput than either MVC, WebAPI or WebPages. It'd be interesting to try this with a few pages that actually have some parsing logic on them, but that's beyond the scope of this throughput test. But what's also amazing about this test is the sheer amount of traffic that a laptop computer is handling.
Even the slowest tech managed 5700 requests a second, which is one hell of a lot of requests if you extrapolate that out over a 24 hour period. Remember these are not static pages, but dynamic requests that are being served.

Another test - JSON Data Service Results

For the second test I used a JSON result from several of the technologies. I didn't bother running WebForms and WebPages through this test since it doesn't make a ton of sense to return data from them (OTOH, returning text from the APIs didn't make a ton of sense either :-) In these tests I have a small Person class that gets serialized and then returned to the client. The Person class looks like this:

public class Person
{
    public Person()
    {
        Id = 10;
        Name = "Rick";
        Entered = DateTime.Now;
    }

    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime Entered { get; set; }
}

Here are the updated handler classes that use Person:

Handler

public class Handler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        var action = context.Request.QueryString["action"];

        if (action == "json")
            JsonRequest(context);
        else
            TextRequest(context);
    }

    public void TextRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString());
    }

    public void JsonRequest(HttpContext context)
    {
        var json = JsonConvert.SerializeObject(new Person(), Formatting.None);
        context.Response.ContentType = "application/json";
        context.Response.Write(json);
    }

    public bool IsReusable
    {
        get { return true; }
    }
}

This code adds a little logic to check for an action query string and route the request to an optional JSON result method. To generate JSON, I'm using the same JSON.NET serializer (JsonConvert.SerializeObject) used in Web API to create the JSON response.

WCF REST

[ServiceContract(Namespace = "")]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class WcfService
{
    [OperationContract]
    [WebGet]
    public Stream HelloWorld()
    {
        var data = Encoding.Unicode.GetBytes("Hello World " + DateTime.Now.ToString());
        var ms = new MemoryStream(data);

        // Add your operation implementation here
        return ms;
    }

    [OperationContract]
    [WebGet(ResponseFormat=WebMessageFormat.Json,BodyStyle=WebMessageBodyStyle.WrappedRequest)]
    public Person HelloWorldJson()
    {
        // Add your operation implementation here
        return new Person();
    }
}

For WCF REST all I have to do is add a method with the Person result type.

ASP.NET MVC

public class MvcPerformanceController : Controller
{
    //
    // GET: /MvcPerformance/
    public ActionResult Index()
    {
        return View();
    }

    public ActionResult HelloWorldCode()
    {
        return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() };
    }

    public JsonResult HelloWorldJson()
    {
        return Json(new Person(), JsonRequestBehavior.AllowGet);
    }
}

For MVC all I have to do for a JSON response is return a JSON result. ASP.NET MVC internally uses the JavaScriptSerializer.

ASP.NET WebAPI

public class WebApiPerformanceController : ApiController
{
    [HttpGet]
    public HttpResponseMessage HelloWorldCode()
    {
        return new HttpResponseMessage()
        {
            Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain")
        };
    }

    [HttpGet]
    public Person HelloWorldJson()
    {
        return new Person();
    }

    [HttpGet]
    public HttpResponseMessage HelloWorldJson2()
    {
        var response = new HttpResponseMessage(HttpStatusCode.OK);
        response.Content = new ObjectContent<Person>(new Person(), GlobalConfiguration.Configuration.Formatters.JsonFormatter);
        return response;
    }
}

Testing and Results

To run these data requests I used the following ab.exe commands:

REM JSON RESPONSES
ab.exe -n100000 -c20 http://localhost/aspnetperf/Handler.ashx?action=json > HandlerJson.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson > MvcJson.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorldJson > WebApiJson.txt
ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorldJson > WcfJson.txt

The results from this test run are a bit interesting in that the WebAPI test improved performance significantly over returning plain string content. Here are the results (charted in the original post): the performance for each technology drops a little bit, except for WebAPI, which is up quite a bit! From this test it appears that WebAPI actually performs significantly better when returning a JSON response rather than a plain string response.

Snag with Apache Benchmark and 'Length Failures'

I ran into a little snag with Apache Benchmark, which was reporting failures for my Web API requests when serializing. As the graph shows, performance improved significantly with JSON results, from 5580 to 6530 requests/second or so, which is roughly a 15% improvement (while all others slowed down by 3-8%). However, I was skeptical at first because the WebAPI test reports showed a bunch of errors on about 10% of the requests. Notice the Failed Request count in the report. What the hey? Is WebAPI failing on roughly 10% of requests when sending JSON? Turns out: No it's not! But it took some sleuthing to figure out why it reports these failures. At first I thought that Web API was failing, and so to make sure I re-ran the test with Fiddler attached, running the ab.exe test by using the -X switch:

ab.exe -n100 -c10 -X localhost:8888 http://localhost/aspnetperf/api/HelloWorldJson

which showed that indeed all requests were returning proper HTTP 200 results with full content. However ab.exe was reporting the errors. After some closer inspection it turned out that the dates varying in size altered the response length of the dynamic output. For example, these two results:

{"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.841926-10:00"}
{"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.8519262-10:00"}

are different in length for the number, which results in 68 and 69 bytes respectively. The same URL produces different result lengths, which is what ab.exe reports. I didn't notice it at first, but the same is happening when running the ASHX handler with the JSON.NET result, since it uses the same serializer that varies the milliseconds. Moral: You can typically ignore Length failures in Apache Benchmark, and when in doubt check the actual output with Fiddler. Note that the other failure values are accurate though.

Another interesting Side Note: Perf drops over Time

As I was running these tests repeatedly I was finding that performance steadily dropped from a startup peak to a 10-15% lower stable level. IOW, with Web API I'd start out with around 6500 req/sec and in subsequent runs it keeps dropping until it would stabilize somewhere around 5900 req/sec, occasionally jumping lower.
For these tests this is why I did the IISRESET and warm up for individual tests. This is a little puzzling. Looking at Process Monitor while the tests are running, memory very quickly levels out, as do handles and threads, on the first test run. On subsequent runs everything stays stable, but the performance starts going downwards. This applies to all the technologies - Handlers, Web Forms, MVC, Web API - curious to see if others test this and see similar results. Doing an IISRESET then resets everything and performance starts off at peak again…

Summary

As I stated at the outset, these were informal tests to satisfy my curiosity, not to prove that any technology is better or even faster than another. While there clearly are differences in performance, the differences (other than WCF REST, which was by far the slowest, and the raw handler, which was by far the highest) are relatively minor, so there is no need to feel that any one technology is a runaway standout in raw performance. Choosing a technology is about more than pure performance; it's also about the adequacy for the job and the ease of implementation. The strengths of each technology will make up for any minor performance difference we see in these tests. However, to me it's important to get an occasional reality check and compare where new technologies are heading. Oftentimes old stuff that's been optimized and designed for a time of less horsepower can utterly blow the doors off newer tech, and simple checks like this let you compare. Luckily we're seeing that much of the new stuff performs well even in V1.0, which is great.

To me it was very interesting to see Web API perform relatively badly with plain string content, which originally led me to think that Web API might not be properly optimized just yet. For those that caught my tweets late last week regarding WebAPI's slow responses: that was with string content, which is in fact considerably slower. Luckily, where it counts, with serialized JSON and XML, WebAPI actually performs better. But I do wonder what would make generic string content slower than serialized output? This stresses another point: Don't take a single test as the final gospel and don't extrapolate out from a single set of tests. Certainly Twitter can make you feel like a fool when you post something immediate that hasn't been fleshed out a little more <blush>. Egg on my face. As a result I ended up screwing around with this for a few hours today to compare different scenarios. Well worth the time… I hope you found this useful, if not for the results, maybe for the process of quickly testing a few requests for performance and charting out a comparison. Now onwards with more serious stuff…

Resources: Source Code on GitHub; Apache HTTP Server Project (ab.exe is part of the binary distribution)

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, Web API
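[Editor's note] The run-three-times-and-take-the-median cycle described above lends itself to a little automation. The sketch below is not part of the original project; it assumes ab.exe is on the path, reuses the same URLs as the tests above, and parses ab.exe's standard "Requests per second" output line. Treat it as a starting point rather than a finished harness.

using System;
using System.Diagnostics;
using System.Globalization;
using System.Linq;
using System.Text.RegularExpressions;

class AbHarness
{
    // Runs ab.exe once and returns the reported requests/second figure.
    static double RunAb(string url, int requests, int concurrency)
    {
        var psi = new ProcessStartInfo("ab.exe",
            string.Format("-n{0} -c{1} {2}", requests, concurrency, url))
        {
            RedirectStandardOutput = true,
            UseShellExecute = false
        };

        using (var proc = Process.Start(psi))
        {
            // Read all output before waiting to avoid blocking on a full output buffer.
            string output = proc.StandardOutput.ReadToEnd();
            proc.WaitForExit();

            // ab.exe prints a line like: "Requests per second:    6076.51 [#/sec] (mean)"
            var match = Regex.Match(output, @"Requests per second:\s+([\d.]+)");
            return match.Success
                ? double.Parse(match.Groups[1].Value, CultureInfo.InvariantCulture)
                : 0.0;
        }
    }

    static void Main()
    {
        var urls = new[]
        {
            "http://localhost/aspnetperf/handler.ashx",
            "http://localhost/aspnetperf/MvcPerformance/HelloWorldCode",
            "http://localhost/aspnetperf/api/HelloWorld"
        };

        foreach (var url in urls)
        {
            // Three runs per URL; report the median, as in the tests above.
            var runs = Enumerable.Range(0, 3)
                                 .Select(i => RunAb(url, 100000, 20))
                                 .OrderBy(r => r)
                                 .ToArray();

            Console.WriteLine("{0}: median {1:F1} req/sec", url, runs[1]);
        }
    }
}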

    Read the article

  • HPC Server Dynamic Job Scheduling: when jobs spawn jobs

    - by JoshReuben
HPC Job Types HPC has 3 types of jobs http://technet.microsoft.com/en-us/library/cc972750(v=ws.10).aspx · Task Flow – vanilla sequence · Parametric Sweep – concurrently run multiple instances of the same program, each with a different work unit input · MPI – message passing between master & slave tasks But when you try to go outside the box – job tasks that spawn jobs, blocking the parent task – you run the risk of resource starvation, deadlocks, and recursive, non-converging or exponential blow-up. The solution to this is to write some performance monitoring and job scheduling code. You can do this in 2 ways: manually control scheduling – allocate/de-allocate resources, change job priorities, pause & resume tasks, restrict long-running tasks to specific compute clusters; or semi-automatically – set threshold params for scheduling. How – Control Job Scheduling In order to manage the tasks and resources that are associated with a job, you will need to access the ISchedulerJob interface - http://msdn.microsoft.com/en-us/library/microsoft.hpc.scheduler.ischedulerjob_members(v=vs.85).aspx This really allows you to control how a job is run – you can access & tweak the following features: max/min resource values; whether job resources can grow/shrink and whether jobs can be pre-empted; whether the job is exclusive per node; the creator process id & the job pool; timestamp of job creation & completion; job priority, hold time & run time limit; re-queue count; job progress; max/min number of cores, nodes, sockets, RAM; dynamic task list – can add/cancel jobs on the fly; job counters (a rough sketch of this follows below). When – Poll Perf Counters Tweaking the job scheduler should be done on the basis of resource utilization according to PerfMon counters – HPC exposes 2 Perf objects: Compute Clusters, Compute Nodes http://technet.microsoft.com/en-us/library/cc720058(v=ws.10).aspx You can monitor running jobs according to dynamic thresholds – use your own discretion: percentage processor time, number of running jobs, number of running tasks, total number of processors, number of processors in use, number of processors idle, number of serial tasks, number of parallel tasks. Design your algorithms correctly Finally, don't assume you have unlimited compute resources in your cluster – design your algorithms with the following factors in mind: · Branching factor - http://en.wikipedia.org/wiki/Branching_factor - dynamically optimize the number of children per node · Cutoffs to prevent explosions - http://en.wikipedia.org/wiki/Limit_of_a_sequence - not all functions converge after n attempts; you also need a threshold of "good enough", diminishing returns · Heuristic shortcuts - http://en.wikipedia.org/wiki/Heuristic - sometimes an exhaustive search is impractical and shortcuts are suitable · Pruning http://en.wikipedia.org/wiki/Pruning_(algorithm) – remove/de-prioritize unnecessary tree branches · Avoid local minima/maxima - http://en.wikipedia.org/wiki/Local_minima - sometimes an algorithm can't converge because it gets stuck in a local saddle – try simulated annealing, hill climbing or genetic algorithms to get out of these ruts · Watch out for rounding errors – http://en.wikipedia.org/wiki/Round-off_error - multiple iterations in parallel can quickly amplify & blow up your algo! Use an epsilon, avoid floating point errors, truncations, approximations. Happy Coding!
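[Editor's note] Here is what the "manually control scheduling" option could look like in code. This is only a rough sketch against the Microsoft.Hpc.Scheduler API; the head node name, CPU threshold, core counts, worker command line and credential handling are all placeholder assumptions, so check the exact member names and signatures against the ISchedulerJob documentation linked above before relying on it.

using System;
using System.Diagnostics;
using System.Threading;
using Microsoft.Hpc.Scheduler;
using Microsoft.Hpc.Scheduler.Properties;

class DynamicJobSpawner
{
    static void Main()
    {
        // Connect to the cluster head node (placeholder name).
        IScheduler scheduler = new Scheduler();
        scheduler.Connect("HEADNODE");

        // Build the child job that a parent task wants to spawn.
        ISchedulerJob job = scheduler.CreateJob();
        ISchedulerTask task = job.CreateTask();
        task.CommandLine = "worker.exe /unit:42";   // hypothetical work unit
        job.AddTask(task);

        // Sample local CPU load via a standard PerfMon counter before deciding
        // how many resources the spawned job is allowed to take.
        var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
        cpu.NextValue();            // first sample is always 0 - prime the counter
        Thread.Sleep(1000);
        float load = cpu.NextValue();

        if (load > 80f)
        {
            // Cluster is busy: keep spawned jobs small and below the parents,
            // which is one way to avoid starving the blocking parent tasks.
            job.MaximumNumberOfCores = 2;
            job.Priority = JobPriority.BelowNormal;
        }
        else
        {
            job.MaximumNumberOfCores = 16;
            job.Priority = JobPriority.Normal;
        }

        // Null credentials are assumed here to fall back to cached credentials;
        // verify this behavior against the HPC Pack SDK you are targeting.
        scheduler.SubmitJob(job, null, null);
    }
}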

    Read the article

  • Is 4-5 years the “Midlife Crisis” for a programming career?

    - by Jeff
I’ve been programming C# professionally for a bit over 4 years now. For the past 4 years I’ve worked for a few small/medium companies, ranging from “web/ads agencies” and small industry-specific software shops to a small startup. I've been mainly doing "business apps" that involve using high-level (garbage collected) programming languages, and my overall experience is that all of the work I’ve done could have been more professional. A lot of things were done incorrectly (in a rush), mainly due to the cost factor: people always wanted something “now” and with the smallest amount of spendable money. I kept thinking that if I could work for a bigger company, or a company that’s better suited for programmers, or somewhere that's got the money and time to really build something longer term and more maintainable, I might have enjoyed my career more. I’ve never had a “mentor” who guided me through my 4-year career. I am pretty much a blog/google/self-taught programmer, apart from my bachelor IT degree. I’ve also observed another issue: most of the so-called “senior” programmers in “my working environment” are really not that senior skill-wise. They are “senior” only because they’ve been programmers for a long time, but the code they write and the decisions they make are absolutely rubbish! They don't want to learn, they don't want to be better, they just want to get paid and do what they're told to do - which makes sense, and most of us are like that. Maybe that’s why they are where they are now. But I don’t want to become like them; I want to be better. I’ve reached a mental state where I no longer intend to be a programmer for my future career. I've started to think maybe there are better things out there to work on. The more blogs I read and the more “best practices” I’ve tried, the more I feel I am drifting away from “my reality”. But I am not a great programmer; otherwise I don't think I would be where I am now. I think 4-5 years is a stage that can be a step forward career-wise, or a step out of where you are. I just wanted to hear what others have to say about what I’ve mentioned above, and whether you’ve experienced a similar situation in your past programming career and how you dealt with it. Thanks.

    Read the article

  • SQLAuthority News – Social Media Series – YouTube and Movies

    - by pinaldave
Pinal Dave on Youtube! Some people might not know it, but YouTube is actually more than a place to watch funny cat videos and people singing their favorite pop songs – it’s actually a social media site.  When you are a member of YouTube you can follow people who regularly post videos, post video responses of your own, and even gain a following for your own videos.  I myself was not aware of YouTube’s potential until recently, when I started to make SQL Server in Sixty Seconds videos. YouTube is very different from other types of social media, and a big factor is that anyone can look at videos without being a member.  On other social media sites, like Twitter and Facebook, you have to have an account in order to participate, but on YouTube you are even more anonymous.  To make and post videos you need an account, but anyone who comes to the site can look at what you’ve made without signing in or leaving any trace of having seen your material.  This makes YouTube very anonymous and hard to track. However, we should not overlook the power of video on the internet.  Over the past few months I have been making SQL Server in Sixty Second videos and have come to love it.  It is very exciting to be able to talk about a subject that I mostly write about, and for many people video is far more accessible and easy to understand.   I have really enjoyed diving into something new, and would love to have more people check out these videos and give me feedback.  You can find me at www.youtube.com/user/pinaldave. I am very excited with all the possibilities on YouTube and it might just be the technology evangelist in me, but I would love for other people to discover how fun and exciting this site can be, too.  Don’t think of it as just a place to find funny videos and waste a few minutes of your time, think of it as a place to learn and interact with interesting people.  Come watch a few of my videos while you’re there.  Remember, everything is free and there are no contracts to sign, but I hope that you get as excited as I am and join up.  We need more people creating good content on this site! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: Social Media

    Read the article

  • Cloud Application Management for Platforms

    - by user756764
Today Oracle, along with CloudBees, Cloudsoft, Huawei, Rackspace, Red Hat, and Software AG, published the Cloud Application Management for Platforms (CAMP) specification. This spec deals with application management in the context of PaaS. It defines a model (consisting of a set of resources and their relationships), a REST-based API for manipulating that model, and a packaging format for getting applications (and their attendant metadata) into and out of the platform. My colleague, Mark Carlson, has already provided an excellent writeup on the spec here. The following additional points bear emphasizing: CAMP is language, framework and platform neutral; it should be equally applicable to the task of deploying and managing Ruby on Rails applications as Java/Spring applications (or Node.js applications, etc.) CAMP only covers the interactions between a Cloud Consumer and a Cloud Provider (using the definitions of these terms provided in the NIST Cloud Computing Reference Architecture). The internal APIs used by the Cloud Provider to, for example, deploy additional platform services (e.g. a new message queuing service) are out of CAMP's scope. CAMP supports the management of the entire lifecycle of the application (e.g. start/stop, suspend/resume, etc.), not just the deployment of the components that make up the application. Complexity is the antithesis of interoperability. One of CAMP's goals is to be as broadly interoperable as possible. To this end, the authors of CAMP tried to "make things as simple as possible, but no simpler". For example, JSON is the only serialization format used in the spec (although Providers can extend this to support additional serialization formats such as XML). It remains to be seen whether we can preserve this simplicity as the spec is processed by OASIS. So far, those who have indicated an interest in collaborating on the spec seem to be of a like mind with regard to the need for simplicity. The flip side to simplicity is the knowledge that you undoubtedly missed something that is important to someone. To make up for this, CAMP is designed to be extensible. The idea is to ship what we know will work, allow implementers to extend the spec, then re-factor the spec to incorporate the most popular extensions. Anyone interested in this effort, particularly those of you using PaaS-level services, is encouraged to join the forthcoming OASIS TC. As you may have noticed, CAMP is a bit of a departure from some of the more monolithic management standards that have preceded it. The idea is to develop simple, discrete standards targeted to address specific interoperability and portability problems and tie these standards together with common patterns based on REST and HATEOAS. I'm excited to see how this idea plays out.
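[Editor's note] To make the "REST API plus JSON" part of this concrete, the sketch below shows what a consumer's call against a CAMP-style endpoint could look like. The base address, resource path and response shape are invented purely for illustration; they are not taken from the CAMP specification, which defines its own resource names and attributes.

using System;
using System.Net.Http;

class CampStyleClient
{
    static void Main()
    {
        using (var http = new HttpClient { BaseAddress = new Uri("https://paas.example.com/") })
        {
            // Hypothetical entry point; a real provider advertises its own resource URLs.
            string json = http.GetStringAsync("camp/platform").Result;

            // The spec mandates JSON serialization, so the body would be a JSON document
            // describing the platform resource and linking to related resources.
            Console.WriteLine(json);
        }
    }
}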

    Read the article

  • Fair Comments

    - by Tony Davis
To what extent is good code self-documenting? In one of the most entertaining sessions I saw at the recent PASS summit, Jeremiah Peschka (blog | twitter) got a laugh out of a sleepy post-lunch audience with the following remark: "Some developers say good code is self-documenting; I say, get off my team" I silently applauded the sentiment. It's not that all comments are useful, but that I mistrust the basic premise that "my code is so clearly written, it doesn't need any comments". I've read many pieces describing the road to self-documenting code, and my problem with most of them is that they feed the myth that comments in code are a sign of weakness. They aren't; in fact, used correctly I'd say they are essential. Regardless of how far intelligent naming can get you in describing what the code does, or how well any accompanying unit tests can explain to your fellow developers why it works that way, it's no excuse not to document fully the public interfaces to your code. Maybe I just mixed with the wrong crowd while learning my favorite language, but when I open a stored procedure I lose the will even to read it unless I see a big Phil Factor- or Jeff Moden-style header summarizing in plain English what the code does, how it fits into the broader application, and a usage example. This public interface describes the high-level process and should explain the role of the code, clearly, for fellow developers, language non-experts, and even any non-technical stakeholders in the project. When you step into the body of the code, the low-level details, then I agree that the rules are somewhat different; especially when code is subject to frequent refactoring that can quickly render comments redundant or misleading. At their worst, here, inline comments are sticking plaster to cover up the scars caused by poor naming conventions, failure in clarity when mapping a complex domain into code, or just by not entirely understanding the problem (// this is the clever part). If you design and refactor your code carefully so that it is as simple as possible, your functions do one thing only, you avoid having two completely different algorithms in the same piece of code, and your functions, classes and variables are intelligently named, then, yes, the need for inline comments should be minimal. And yet, even given this, I'd still argue that many languages (T-SQL certainly being one) just don't lend themselves to readability when performing even moderately-complex tasks. If the algorithm is complex, I still like to see the occasional helpful comment. Please, therefore, be as liberal as you see fit in the detail of the comments you apply to this editorial, for, like code, it is bound to increase its clarity and usefulness. Cheers, Tony.
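[Editor's note] The same principle translates to any language. As a C# illustration of the kind of public-interface header being argued for here, the sketch below uses invented class, method and field names; the point is the summary, remarks and usage example sitting above the implementation, not the billing logic itself.

using System.Collections.Generic;
using System.Linq;

public class Invoice
{
    public int CustomerId { get; set; }
    public bool IsPosted { get; set; }
    public decimal AmountDue { get; set; }
}

public class BillingService
{
    private readonly List<Invoice> invoices = new List<Invoice>();

    /// <summary>
    /// Returns the outstanding invoice balance for a customer, in the
    /// customer's billing currency.
    /// </summary>
    /// <remarks>
    /// Called by the nightly statement job and the customer portal.
    /// Draft (unposted) invoices are ignored.
    /// </remarks>
    /// <param name="customerId">Primary key from the Customers table.</param>
    /// <example>
    /// var balance = billing.GetOutstandingBalance(1042);
    /// </example>
    public decimal GetOutstandingBalance(int customerId)
    {
        // The low-level details live here; the header above is what a fellow
        // developer, language non-expert or stakeholder reads first.
        return invoices.Where(i => i.CustomerId == customerId && i.IsPosted)
                       .Sum(i => i.AmountDue);
    }
}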

    Read the article

  • Does Scrum turn active developers into passive developers?

    - by Saeed Neamati
I'm a web developer working in a team of three developers and one designer. It's now been about five months since we implemented the agile scrum software development methodology. But I have a weird feeling I just wanted to share on this site. One important factor in human life is the decision-making process. However, there is a big difference in the decisions you make. Some decisions are just the outcome of an internal or external force, while other decisions are completely based on your free will, and some decisions are simply something in between. The more freedom you have in making decisions, the more self-driven your work becomes. This seems to be a rule, because we tend to shape our lives ourselves. There is a big difference between deciding what to do and being told what to do. Before scrum, I felt like I had more freedom in making the decisions related to development, analysis, prioritizing implementation, etc. I had more of a feeling that I was deciding what I was doing. However, due to the scrum methodology, now many decisions simply come from the product owner. He prioritizes PBIs, he analyzes how the software should work, sometimes even how the UI and functionality should be implemented. I know that this is part of the scrum methodology, and I also know that this may result in better sales of the product in the future. However, I now feel like I'm always being told to do something, instead of deciding to do something. This syndrome has now made me more passive towards the work. I tend to search less for a better solution, approach, or technique; I don't wake up in the morning looking forward to an enjoyable day of work - rather, I feel like I'm being forced to work in order to live; I have more hunger to work on my own hobby projects after work; I won't push the team anymore to reach higher technological levels; I spend more time now on dinner or tea-breaks and have less enthusiasm to get back to work; and I find myself wishing more for the work day to finish sooner so that I can get home. The big problem is, I see and diagnose this behavior in my colleagues too. Is it the outcome of scrum? Does scrum really make the development team feel like they have no part in shaping the overall software, thus making them passive towards the project? How can I overcome this feeling?

    Read the article

  • SpaceX’s Falcon 9 Launch Success And Reusable Rockets Test Partially Successful

    - by Gopinath
Elon Musk’s SpaceX is closing in on the dream of developing reusable rockets, and likely in a year or two space launch rockets will be reusable just like aircraft, ships and cars. Today SpaceX launched an upgraded Falcon 9 rocket into space to deliver satellites as well as to test its reusable rocket technology. All on-board satellites were released into orbit, and the first stage of the rocket partially succeeded in returning back to Earth. This is a huge leap in space technology. A couple of years ago reusable rockets were considered impossible. NASA, the Russian space agency, China, India or, for that matter, any other space agency never even attempted to build reusable rockets. But SpaceX's revolutionary technology partially succeeded in doing the impossible! Elon Musk founded SpaceX with the goal of building reusable rockets and transporting humans to & from other planets like Mars. He says: If one can figure out how to effectively reuse rockets just like airplanes, the cost of access to space will be reduced by as much as a factor of a hundred.  A fully reusable vehicle has never been done before. That really is the fundamental breakthrough needed to revolutionize access to space. Normally the first stage of a rocket falls back to Earth after burning out and is destroyed. But today SpaceX reignited the first stage rocket after its separation and attempted to descend smoothly onto the ocean's surface. Though it did not fully succeed, the test was partially successful and SpaceX was able to recover portions of the first stage. Rocket booster relit twice (supersonic retro & landing), but spun up due to aero torque, so fuel centrifuged & we flamed out — Elon Musk (@elonmusk) September 29, 2013 With the partial success of recovering the first stage, SpaceX gathered a huge amount of information and experience it can use to improve Falcon 9 and build a fully reusable rocket. In the post-launch press conference Musk said that if things go "super well", they could refly a Falcon 9 first stage by the end of next year. Falcon 9 Launch Video Next reusable first stage test delayed by at least two launches SpaceX has a busy schedule for the next several months, with more than 50 missions scheduled using the new Falcon 9 rocket. Ten of those missions are to fly cargo to the International Space Station for NASA.  SpaceX announced that they will not attempt to recover the first stage of Falcon 9 in the next two missions. The next test will be conducted on the fourth mission of Falcon 9, which is planned to carry cargo to the International Space Station sometime next year. This will give SpaceX the time required to analyze the information gathered from today's mission and improve the first stage reentry systems. More reading Here are a few interesting sources to read more about today's SpaceX launch: SpaceX post mission press conference details and discussion on Reddit Giant Leaps for Space Firms Orbital, SpaceX Hacker News community discussion on SpaceX launch SpaceX Launches Next-Generation Private Falcon 9 Rocket on Big Test Flight

    Read the article

  • Digital Storage for Airline Entertainment

    - by Bill Evjen
    by Thomas Coughlin Common flash memory cards The most common flash memory products currently in use are SD cards and derivative products (e.g. mini and micro-SD cards) Some compact flash used for professional applications (such as DSLR cameras) Evolution of leading flash formats Standardization –> market expansion Market expansion –> volume iNAND –> focus is on enabling embedded X3 iSSD –> ideal for thin form factor devices Flash memory applications Phones are the #1 user of flash memory Flash memory is used as embedded and removable storage in many mobile applications Flash memory is being used in computers as USB sticks and SSDs Possible use of flash memory in computer combined with HDDs (hybrid HDDs and paired or dual storage computers) It can be a removable card or an embedded card These devices can only handle a specific number of writes Flash memory reads considerably quicker than hard drives Hybrid and dual storage in computers SSDs can provide fast performance but they are expensive HDDs can provide cheap storage but they are relatively slow Combining some flash memory with a HDD can provide costs close to those of HDDs and performance close to flash memory Seagate Momentus XT hybrid HDD Various dual storage offerings putting flash memory with HDDs Other common flash memory devices USB sticks All forms and colors Used for moving files around Some sold with content on them (Sony Movies on USB sticks) Solid State Drives (SSDs) Floating Gate Flash Memory Cell When a bit is programmed, electrons are stored upon the floating gate This has the effect of offsetting the charge on the control gate of the transistor If there is no charge upon the floating gate, then the control gate’s charge determines whether or not a current flows through the channel A strong charge on the control gate assumes that no current flows. A weak charge will allow a strong current to flow through. Similar to HDDs, flash memory must provide: Bit error correction Bad block management NAND and NOR memories are treated differently when it comes to managing wear In many NOR-based systems no management is used at all, since the NOR is simply used to store code, and data is stored in other devices. In this case, it would take a near-infinite amount of time for wear to become an issue since the only time the chip would see an erase/write cycle is when the code in the system is being upgraded, which rarely if ever happens over the life of a typical system. NAND is usually found in very different application than is NOR Flash memory wears out This is expected to get worse over time Retention: Disappearing data Bits fade away Retention decreases with increasing read/writes Bits may change when adjacent bits are read Time and traffic are concerns Controllers typically groom read disturb errors Like DRAM refresh Increases erase/write frequency Application characteristics Music – reads high / writes very low Video – r high / writes very low Internet Cache – r high / writes low On airplanes Many consumers now have their own content viewing devices – do they need the airlines? Is there a way to offer more to consumers, especially with their own viewers Additional special content tie into airplane network access to electrical power, internet Should there be fixed embedded or removable storage for on-board airline entertainment? Is there a way to leverage personal and airline viewers and content in new and entertaining ways?
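[Editor's note] One way to make the wear-out point above concrete is a back-of-the-envelope endurance estimate. The sketch below is purely illustrative: every constant (capacity, rated program/erase cycles, write amplification, daily host writes) is an assumption chosen for the example, not a figure from the talk, and real devices also reserve spare area and vary by workload.

using System;

class FlashEnduranceEstimate
{
    static void Main()
    {
        // All numbers below are illustrative assumptions, not vendor specs.
        double capacityGB = 256;          // usable capacity of the flash device
        double peCycles = 3000;           // rated program/erase cycles per cell
        double writeAmplification = 2.0;  // extra internal writes from wear leveling / garbage collection
        double hostWritesPerDayGB = 40;   // e.g. cached video plus internet cache traffic

        // Total host data that can be written before the rated wear-out point.
        double totalWriteBudgetGB = capacityGB * peCycles / writeAmplification;

        double lifetimeDays = totalWriteBudgetGB / hostWritesPerDayGB;
        Console.WriteLine("Estimated wear-out horizon: {0:F1} years", lifetimeDays / 365.0);
    }
}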

    Read the article

  • Not Dead, Just Busy

    - by MOSSLover
    So I didn’t die in a freak smelting accident yet, but I have been dealing with a lot of different things.  I had to take a bit of a break to deal with the cat death issue.  I am not fully recovered, because well it just happened a few months ago.  It kind of sucked.  Plus the apartment feels a lot bigger. Then you have the whole New York Comic Con thing where I had to plan some cosplay costumes.  I have been trying to find time to hang out with friends and have a social life.  That plus I built an entire presentation for iOS development for New York Code Camp.  I am also planning a couple MS Community dinners (namely one a week from Tuesday) plus a give camp.  I am also planning a vacation around SPS UK plus I will be at SPC.  Life is just incredibly hectic and when you factor in dating to the mix it’s gotten insane to the point where some day I just have to go dark.  Hence the lack of blogging.  I am just trying to keep up with everything and everyone without losing myself. If you guys will be at SPC or SPS UK I will be at both places this year.  Stop by the Planet Technologies booth and see me or I’ll be around somewhere.  I am really sorry if I don’t remember you from an event or if you are someone following me on twitter.  I am trying to get better at the mnemonic memory devices, but I think things broke down around the 47th event I attended or spoke at or something to that nature.  If anyone wants to talk to Cathy, Lori, or I about Women in SharePoint definitely find us at the event.  Anyway good night and good luck guys.  I promise to check back at least once before the year ends.  In the meantime twitter stalking is always possible.  Sometimes I even respond back. Technorati Tags: SPC,SPS UK,NYCC,NYC Code Camp,MOSSLover

    Read the article

< Previous Page | 31 32 33 34 35 36 37 38 39 40 41 42  | Next Page >