Search Results

Search found 1785 results on 72 pages for 'a round tuit'.

Page 2/72

  • Silverlight - round doubles away from zero

    - by Cornel
    In Silverlight, the Math.Round() method does not have an overload with a 'MidpointRounding' parameter. What is the best approach to round a double away from zero in Silverlight in this case? Example: Math.Round(1.4) = 1, Math.Round(1.5) = 2, Math.Round(1.6) = 2
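
    One common workaround is to drop the sign, add a half, and floor. A minimal sketch, assuming plain half-away-from-zero behaviour is all that's needed (the helper name is mine, not part of the Silverlight API):

        // Away-from-zero rounding without the MidpointRounding overload.
        // Hypothetical helper, not part of Silverlight itself.
        using System;

        static class RoundingHelper
        {
            public static double RoundAwayFromZero(double value)
            {
                // Math.Sign restores the sign, so the midpoint case moves
                // away from zero for negatives too: 1.5 -> 2, -1.5 -> -2,
                // and 1.4 -> 1 as expected.
                return Math.Sign(value) * Math.Floor(Math.Abs(value) + 0.5);
            }
        }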

    Read the article

  • round() for float in C++

    - by Roddy
    I need a simple floating point rounding function, thus: double round(double); round(0.1) = 0, round(-0.1) = 0, round(-0.9) = -1. I can find ceil() and floor() in math.h - but not round(). Is it present in the standard C++ library under another name, or is it missing?

    Read the article

  • SQL SERVER – Function to Round Up Time to Nearest Minutes Interval

    - by pinaldave
    Though I have written more than 2300 blog posts, I still find things I have not covered earlier on this blog. Recently I was asked if I have written a function which rounds time up or down based on the minute interval passed to it. Well, not earlier, but it is here today. Here is a very simple example of how one can do the same.

        ALTER FUNCTION [dbo].[RoundTime] (@Time DATETIME, @RoundToMin INT)
        RETURNS DATETIME
        AS
        BEGIN
            RETURN ROUND(CAST(CAST(CONVERT(VARCHAR, @Time, 121) AS DATETIME) AS FLOAT) * (1440/@RoundToMin), 0) / (1440/@RoundToMin)
        END
        GO

    The above function needs two values: 1) the time which needs to be rounded up or down, and 2) the interval in minutes (the value passed here should be between 0 and 60 - if the value is incorrect, the results will be incorrect). The function can be enhanced by adding functionality like a) validation of the parameters passed and b) accepting values like quarter hour, half hour, etc. Here are a few sample calls.

        SELECT dbo.RoundTime('17:29', 30)
        SELECT dbo.RoundTime(GETDATE(), 5)
        SELECT dbo.RoundTime('2012-11-02 07:27:07.000', 15)

    When you run the above code, it will return the rounded times. Well, do you have any other way to achieve the same result? If yes, do share it here and I will be glad to share it on the blog with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL DateTime, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
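
    For comparison, the same round-to-nearest-interval idea expressed in C#, as a minimal sketch (the helper name and the integer-tick arithmetic are mine, not from the original post):

        // Round a DateTime to the nearest N-minute interval.
        using System;

        static class TimeRounding
        {
            public static DateTime RoundToNearest(DateTime time, int minutes)
            {
                long interval = TimeSpan.FromMinutes(minutes).Ticks;
                // Add half an interval, then truncate via integer division.
                long rounded = (time.Ticks + interval / 2) / interval * interval;
                return new DateTime(rounded, time.Kind);
            }
        }

        // RoundToNearest(new DateTime(2012, 11, 2, 7, 27, 7), 15)
        // yields 2012-11-02 07:30:00, matching the T-SQL version.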

    Read the article

  • Bargain Hunter Round Up – Kicking Off The E-Commerce Holiday Season

    - by Jeri Kelley
    Everyone has a different way to tackle holiday shopping – Black Friday, Small Business Saturday, Cyber Monday; some have it done months in advance, and others wait until the very last minute. For me, I’m not big into massive crowds, so online shopping to the rescue. Others thrive on the energy of being in the stores on the busiest shopping day of the year. With last weekend marking the official kick-off to the holiday season, I thought I’d provide a round up of what’s trending:
    Online numbers are looking up: According to comScore, for the holiday season-to-date, $16.4 billion has been spent online, marking a 16-percent increase versus the corresponding days last year.
    Thanksgiving Day – why wait until Black Friday or Cyber Monday: Online shopping on Thanksgiving Day also increased, totaling $633 million in receipts, a 32 percent increase over Thanksgiving 2011.
    Black Friday – more than just in-store: Bargain hunters spent $1.042 billion online the day after Thanksgiving, a 26 percent increase over last year's Black Friday, according to new figures released by market analyst comScore.
    Cyber Monday: Cyber Monday reached $1.465 billion in online spending, up 17 percent versus a year ago, representing the heaviest online spending day in history and the second day this season (in addition to Black Friday) to surpass $1 billion in sales.
    Cyber Monday is now being dubbed Cyber Week: “The annual event is increasingly becoming Cyber Week instead of a one-day event as retailers open their arms for Americans who prefer to avoid crowds and compare prices online.” But Cyber Monday continues its importance, driving a nearly 22% increase in year-over-year (YoY) online sales. Monday sales beat Sunday, the next highest day, by a margin of 26.7%.
    Mobile shopping continues to rise: ChannelAdvisor said that mobile shopping made up 32% of all online spending over the Black Friday weekend. Mobile devices were a key part of the online shopping craziness that was November 26th; sales from smartphones and tablets doubled this year, with growth of 110% on tablets and 100% on smartphones. Mobile bar code scans on Black Friday increased 50 percent, according to a report from ScanLife.
    For more on how you can be ready for the holiday season, check out my blog post on commerce strategies for the holidays.

    Read the article

  • SCVMM – Round 2 – How to create a Private Cloud using PowerShell

    - by Waclaw Chrabaszcz
    Originally posted on: http://geekswithblogs.net/Wchrabaszcz/archive/2013/06/28/scvmm--round-2--how-to-create-a-private.aspx
    Have you ever seen the "A Bridge Too Far" movie? So as not to wake up a click too far, it is good to script some tasks. Yes, of course we can follow wizards, but some of us want to be warriors. A small tip: take a look at the credentials and system GUID examples. I don't know about you, but for me they will be really useful in the future.

        # credentials
        $credential = Get-Credential
        New-SCRunAsAccount -Name "TESTDOMAIN\Administrator" -Credential $credential

        # storage
        $opsMgrServerCredential = Get-SCRunAsAccount -Name "TESTDOMAIN\Administrator"
        New-SCStorageClassification -Name "Bronze" -Description "" -RunAsynchronously
        New-SCStorageClassification -Name "Silver" -Description "" -RunAsynchronously
        New-SCStorageClassification -Name "Gold" -Description "" -RunAsynchronously

        # add a shared storage
        Find-SCComputer -ComputerName "dc.TESTDOMAIN.net"
        Add-SCStorageProvider -AddWindowsNativeWmiProvider -Name "dc.TESTDOMAIN.net" -RunAsAccount $opsMgrServerCredential -ComputerName "dc.TESTDOMAIN.net"
        $fileServer = Get-SCStorageFileServer "dc.TESTDOMAIN.net"
        $fileShares = @()
        $fileShares += Get-SCStorageFileShare -Name "VMMLibrary"
        Set-SCStorageFileServer -StorageFileServer $fileServer -AddStorageFileShareToManagement $fileShares -RunAsynchronously

        # fabric network
        $logicalNetwork = New-SCLogicalNetwork -Name "TESTDOMAIN-Service-Network" -LogicalNetworkDefinitionIsolation $false -EnableNetworkVirtualization $true -UseGRE $true -IsPVLAN $false
        $allHostGroups = @()
        $allHostGroups += Get-SCVMHostGroup -Name "All Hosts"
        $allSubnetVlan = @()
        $allSubnetVlan += New-SCSubnetVLan -Subnet "10.0.0.0/24" -VLanID 0
        New-SCLogicalNetworkDefinition -Name "TESTDOMAIN-Service-Network_0" -LogicalNetwork $logicalNetwork -VMHostGroup $allHostGroups -SubnetVLan $allSubnetVlan

        # IP pool
        $logicalNetwork = Get-SCLogicalNetwork -Name "TESTDOMAIN-Service-Network"
        $logicalNetworkDefinition = Get-SCLogicalNetworkDefinition -LogicalNetwork $logicalNetwork -Name "TESTDOMAIN-Service-Network_0"
        # gateways
        $allGateways = @()
        $allGateways += New-SCDefaultGateway -IPAddress "10.0.0.1" -Automatic
        # DNS servers
        $allDnsServer = @("10.0.0.1")
        # DNS suffixes
        $allDnsSuffixes = @("TESTDOMAIN.net")
        # WINS servers
        $allWinsServers = @()
        New-SCStaticIPAddressPool -Name "TESTDOMAIN-Service-Network" -LogicalNetworkDefinition $logicalNetworkDefinition -Subnet "10.0.0.0/24" -IPAddressRangeStart "10.0.0.51" -IPAddressRangeEnd "10.0.0.75" -DefaultGateway $allGateways -DNSServer $allDnsServer -DNSSuffix "" -DNSSearchSuffix $allDnsSuffixes -RunAsynchronously

        # Hyper-V virtual networks
        $logicalNetwork = Get-SCLogicalNetwork -Name "TESTDOMAIN-Service-Network"
        $vmNetwork = New-SCVMNetwork -Name "TESTDOMAIN-VMN" -LogicalNetwork $logicalNetwork -IsolationType "WindowsNetworkVirtualization" -CAIPAddressPoolType "IPV4" -PAIPAddressPoolType "IPV4"
        Write-Output $vmNetwork
        $subnet = New-SCSubnetVLan -Subnet "10.0.0.0/24"
        New-SCVMSubnet -Name "Con-SN" -VMNetwork $vmNetwork -SubnetVLan $subnet

        # bind the VLAN to the network adapter
        $vmHost = Get-SCVMHost -ComputerName "VMM01.TESTDOMAIN.net"
        $vmHostNetworkAdapter = Get-SCVMHostNetworkAdapter -VMHost $vmHost # -Name "Intel 21140-Based PCI Fast Ethernet Adapter (Emulated)"
        Set-SCVMHostNetworkAdapter -VMHostNetworkAdapter $vmHostNetworkAdapter -Description "" -AvailableForPlacement $true -UsedForManagement $true
        $logicalNetwork = Get-SCLogicalNetwork -Name "TESTDOMAIN-Service-Network"
        Set-SCVMHostNetworkAdapter -VMHostNetworkAdapter $vmHostNetworkAdapter -AddOrSetLogicalNetwork $logicalNetwork
        Set-SCVMHost -VMHost $vmHost -RunAsynchronously -NumaSpanningEnabled $true

        # create a private cloud
        $Guid = [System.Guid]::NewGuid()
        Set-SCCloudCapacity -JobGroup $Guid -UseCustomQuotaCountMaximum $false -UseMemoryMBMaximum $false -UseCPUCountMaximum $false -UseStorageGBMaximum $false -UseVMCountMaximum $false -CustomQuotaCount 10 -MemoryMB 10240 -CPUCount 10 -StorageGB 386 -VMCount 10
        $resources = @()
        $resources += Get-SCLogicalNetwork -Name "TESTDOMAIN-Service-Network"
        $resources += Get-SCLoadBalancer -Manufacturer "Microsoft"
        $readonlyLibraryShares = @()
        $readonlyLibraryShares += Get-SCLibraryShare | where { $_.LibraryServer.Name -eq "dc.TESTDOMAIN.net" -and $_.Name -eq "VMMLibrary" }
        $addCapabilityProfiles = @()
        $addCapabilityProfiles += Get-SCCapabilityProfile -Name "Hyper-V"
        $Guid2 = [System.Guid]::NewGuid()
        Set-SCCloud -JobGroup $Guid2 -RunAsynchronously -AddCloudResource $resources -AddReadOnlyLibraryShare $readonlyLibraryShares -AddCapabilityProfile $addCapabilityProfiles
        $hostGroups = @()
        $hostGroups += Get-SCVMHostGroup -Name "TESTDOMAIN"
        New-SCCloud -VMHostGroup $hostGroups -Name "TESTDOMAIN-Cloud" -Description "" -RunAsynchronously

    Read the article

  • round number in JavaScript to N decimal places

    - by Richard
    In JavaScript, the typical way to round a number to N decimal places is something like: function round_number(num, dec) { return Math.round(num * Math.pow(10, dec)) / Math.pow(10, dec); } However, this approach will round to a maximum of N decimal places, while I want to always round to N decimal places. For example, "2.0" would be rounded to "2". Any ideas?
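
    For illustration, the rounding-versus-display split looks like this in C# (a sketch; the helper name is mine, and in JavaScript Number.prototype.toFixed plays the same padding role, returning a string with exactly N decimals):

        // Round numerically, then format with a fixed number of decimals.
        using System;
        using System.Globalization;

        static class FixedDecimals
        {
            public static string RoundToFixed(double num, int dec)
            {
                double rounded = Math.Round(num, dec, MidpointRounding.AwayFromZero);
                // "F2"-style format strings always emit exactly `dec`
                // decimals, so 2.0 renders as "2.00" rather than "2".
                return rounded.ToString("F" + dec, CultureInfo.InvariantCulture);
            }
        }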

    Read the article

  • First round playing with Memcached

    - by Shaun
    To be honest, I had not been very interested in caching before I joined a project which will use multi-site deployment with high connection counts and concurrency, and which is very sensitive to the user experience. That means we must cache the output data for better performance. After looking around the Internet I finally focused on Memcached.

    What is Memcached? I think the description on its main site gives us a very good and simple explanation. Free & open source, high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load. Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering. Memcached is simple yet powerful. Its simple design promotes quick deployment, ease of development, and solves many problems facing large data caches. Its API is available for most popular languages. The original Memcached was built on *nix systems and is widely used in the PHP world. Although it's not a problem to use a Memcached installed on a *nix system, there are fortunately some Windows versions available. Since we are WISC (Windows – IIS – SQL Server – C#, the opposite of LAMP), it is much easier for us to use Memcached on Windows rather than *nix. I'm using the Memcached Win x64 version provided by NorthScale. There are also x86 and other operating system versions.

    Install Memcached: Unpack the Memcached files to a folder on the machine you want it installed on. We can see that there are only 3 files, and the main file is "memcached.exe". Memcached runs on the server as a service. To install the service, open a command window, navigate to the folder which contains "memcached.exe" (let's say "C:\Memcached\"), and type "memcached.exe -d install". If you are using Windows Vista or Windows 7, please execute the command as Administrator: right-click the command item in the Start menu and choose "Run as Administrator", otherwise Memcached will not install successfully. Once installed successfully, we can type "memcached.exe -d start" to launch the service. Now it's ready to be used. The default port of Memcached is 11211, but you can change it through a command argument. You can find the help by typing "memcached -h".

    Using Memcached: Memcached has many good, ready-to-use providers for various programming languages. After comparing and reviewing them I chose the Memcached Providers. It's built on top of another 3rd-party Memcached client named enyim.com Memcached Client. The Memcached Providers make it very simple to set/get cached objects through the Memcached servers, and are easy to configure through the application configuration file (aka web.config and app.config). Let's create a console application for the demonstration and add the 3 DLL files from the Memcached Providers package to the project references. Then we need to add the configuration for the Memcached server. Create an App.config file and first add the sections at the top of it. Here we need three sections: one for the Memcached Providers, one for the enyim.com Memcached client, and one for log4net.

        <configSections>
          <section name="cacheProvider"
                   type="MemcachedProviders.Cache.CacheProviderSection, MemcachedProviders"
                   allowDefinition="MachineToApplication"
                   restartOnExternalChanges="true"/>
          <sectionGroup name="enyim.com">
            <section name="memcached"
                     type="Enyim.Caching.Configuration.MemcachedClientSection, Enyim.Caching"/>
          </sectionGroup>
          <section name="log4net"
                   type="log4net.Config.Log4NetConfigurationSectionHandler,log4net"/>
        </configSections>

    Then we add the configuration for the 3 of them in the App.config file. The Memcached server information is defined under the enyim.com section, since that component is responsible for connecting to the Memcached servers. Assuming I installed Memcached on two servers with the default port, the configuration would be like this.

        <enyim.com>
          <memcached>
            <servers>
              <!-- put your own server(s) here-->
              <add address="192.168.0.149" port="11211"/>
              <add address="10.10.20.67" port="11211"/>
            </servers>
            <socketPool minPoolSize="10" maxPoolSize="100" connectionTimeout="00:00:10" deadTimeout="00:02:00"/>
          </memcached>
        </enyim.com>

    Memcached supports multi-server deployment, which means you can install Memcached on as many servers as you need. The Memcached protocol is responsible for routing the cached objects to the proper server, so it's very easy to scale out your system with Memcached. Then define the Memcached Providers configuration. The defaultExpireTime indicates how long a cached object lives in Memcached before expiring; the default value is 2000 ms.

        <cacheProvider defaultProvider="MemcachedCacheProvider">
          <providers>
            <add name="MemcachedCacheProvider"
                 type="MemcachedProviders.Cache.MemcachedCacheProvider, MemcachedProviders"
                 keySuffix="_MySuffix_"
                 defaultExpireTime="2000"/>
          </providers>
        </cacheProvider>

    The last configuration is log4net.

        <log4net>
          <!-- Define some output appenders -->
          <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
            <layout type="log4net.Layout.PatternLayout">
              <conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline"/>
            </layout>
          </appender>
          <!--<threshold value="OFF" />-->
          <!-- Setup the root category, add the appenders and set the default priority -->
          <root>
            <priority value="WARN"/>
            <appender-ref ref="ConsoleAppender">
              <filter type="log4net.Filter.LevelRangeFilter">
                <levelMin value="WARN"/>
                <levelMax value="FATAL"/>
              </filter>
            </appender-ref>
          </root>
        </log4net>

    Get, Set and Remove the Cached Objects: Once we finish the configuration, it is very simple to consume the Memcached servers. The Memcached Providers give us a static class named DistCache that can be used to operate the Memcached servers. Get<T>: retrieve the cached object from the Memcached servers; if it fails, it returns null or the default value. Add: add an object with a unique key into the Memcached servers. Assume that we have an operation that retrieves the email for a name and is time consuming - exactly the kind of operation that should be cached. The method would be like this; I used Thread.Sleep to simulate the long-running operation.

        static string GetEmailByNameSlowly(string name)
        {
            Thread.Sleep(2000);
            return name + "@ethos.com.cn";
        }

    Then, in the real retrieving method, we first check whether the name/email pair has been searched previously and cached.
    If yes, we just return it from Memcached; otherwise we invoke the slow method to retrieve it and then cache it.

        static string GetEmailByName(string name)
        {
            var email = DistCache.Get<string>(name);
            if (string.IsNullOrEmpty(email))
            {
                Console.WriteLine("==> The name/email is not in memcached so needs slow loading. (name = {0})==>", name);
                email = GetEmailByNameSlowly(name);
                DistCache.Add(name, email);
            }
            else
            {
                Console.WriteLine("==> The name/email was already in memcached. (name = {0})==>", name);
            }
            return email;
        }

    Finally, let's finish the calling method and execute.

        static void Main(string[] args)
        {
            var name = string.Empty;
            while (name != "q")
            {
                Console.Write("==> Please enter the name to find the email: ");
                name = Console.ReadLine();

                var email = GetEmailByName(name);
                Console.WriteLine("==> The email of {0} is {1}.", name, email);
            }
        }

    The first time I entered "ziyanxu" it took about 2 seconds to get the email, since nothing was cached. But the next time I entered "ziyanxu" it returned very quickly from Memcached.

    Summary: In this post I explained a bit about why we need caching, what Memcached is, and how to use it through a C# application. The example is fairly simple, but hopefully it demonstrates how to use it. Memcached is very easy and simple to use, since it gives you the full opportunity to decide what, when and how to cache objects. And when using Memcached you don't need to think about which cache server to hit: Memcached behaves like one huge object pool in front of you. The next steps I'm thinking about now are: What kind of data should be cached, and how should the key be determined? How to implement the cache as a layer on top of the business layer, so that the application will not notice the cache is there? How to implement the cache with AOP, so that the business logic need not consider the cache at all? I will investigate these in the future and will share my thoughts and results.

    Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Plan Operator Tuesday round-up

    - by Rob Farley
    Eighteen posts for T-SQL Tuesday #43 this month, discussing Plan Operators. I put them together and made the following clickable plan. It’s 1000px wide, so I hope you have a monitor wide enough. Let me explain this plan for you (people’s names are the links to the articles on their blogs – the same links as in the plan above). It was clearly a SELECT statement. Wayne Sheffield (@dbawayne) wrote about that, so we start with a SELECT physical operator, leveraging the logical operator Wayne Sheffield. The SELECT operator calls the Paul White operator, discussed by Jason Brimhall (@sqlrnnr) in his post. The Paul White operator is quite remarkable, and can consume three streams of data. Let’s look at those streams. The first pulls data from a Table Scan – Boris Hristov (@borishristov)’s post – using parallel threads (Bradley Ball – @sqlballs) that pull the data eagerly through a Table Spool (Oliver Asmus – @oliverasmus). A scalar operation is also performed on it, thanks to Jeffrey Verheul (@devjef)’s Compute Scalar operator. The second stream of data applies Evil (I figured that must mean a procedural TVF, but could’ve been anything), courtesy of Jason Strate (@stratesql). It performs this Evil on the merging of parallel streams (Steve Jones – @way0utwest), which suck data out of a Switch (Paul White – @sql_kiwi). This Switch operator is consuming data from up to four lookups, thanks to Kalen Delaney (@sqlqueen), Rick Krueger (@dataogre), Mickey Stuewe (@sqlmickey) and Kathi Kellenberger (@auntkathi). Unfortunately Kathi’s name is a bit long and has been truncated, just like in real plans. The last stream performs a join of two others via a Nested Loop (Matan Yungman – @matanyungman). One pulls data from a Spool (my post – @rob_farley) populated from a Table Scan (Jon Morisi). The other applies a catchall operator (the catchall is because Tamera Clark (@tameraclark) didn’t specify any particular operator, and a catchall is what gets shown when SSMS doesn’t know what to show. Surprisingly, it’s showing the yellow one, which is about cursors. Hopefully that’s not what Tamera planned, but anyway...) to the output from an Index Seek operator (Sebastian Meine – @sqlity). Lastly, I think everyone put in 110% effort, so that’s what all the operators cost. That didn’t leave anything for me, unfortunately, but that’s okay. Also, because he decided to use the Paul White operator, Jason Brimhall gets 0%, and his 110% was given to Paul’s Switch operator post. I hope you’ve enjoyed this T-SQL Tuesday, and have learned something extra about Plan Operators. Keep your eye out for next month’s one by watching the Twitter Hashtag #tsql2sday, and why not contribute a post to the party? Big thanks to Adam Machanic as usual for starting all this. @rob_farley

    Read the article

  • C#: LINQ vs foreach - Round 1.

    - by James Michael Hare
    So I was reading Peter Kellner's blog entry on Resharper 5.0 and its LINQ refactoring and thought that was very cool. But that raised a point I had always been curious about in my head -- which is a better choice: manual foreach loops or LINQ?

    The answer is not really clear-cut. There are two sides to any code cost argument: performance and maintainability. The first of these is obvious and quantifiable. Given any two pieces of code that perform the same function, you can run them side-by-side and see which piece of code performs better. Unfortunately, this is not always a good measure. Well written assembly language outperforms well written C++ code, but you lose a lot in maintainability, which creates a big technical debt load that is hard to offset as the application ages. In contrast, higher level constructs make the code more brief and easier to understand, hence reducing technical cost.

    Now, obviously in this case we're not talking about two separate languages; we're comparing doing something manually in the language versus using a higher-order set of IEnumerable extensions from the System.Linq library.

    Well, before we discuss any further, let's look at some sample code and the numbers. First, let's take a look at the foreach loop and the LINQ expression. This is just a simple find comparison:

        // find implemented via LINQ
        public static bool FindViaLinq(IEnumerable<int> list, int target)
        {
            return list.Any(item => item == target);
        }

        // find implemented via standard iteration
        public static bool FindViaIteration(IEnumerable<int> list, int target)
        {
            foreach (var i in list)
            {
                if (i == target)
                {
                    return true;
                }
            }

            return false;
        }

    Okay, looking at this from a maintainability point of view, the LINQ expression is definitely more concise (8 lines down to 1) and is very readable in intention. You don't have to actually analyze the behavior of the loop to determine what it's doing.

    So let's take a look at performance metrics from 100,000 iterations of these methods on a List<int> of varying sizes filled with random data. For this test, we fill a target array with 100,000 random integers and then run the exact same pseudo-random targets through both searches.

        List<T> On 100,000 Iterations
        Method      Size     Total (ms)  Per Iteration (ms)  % Slower
        Any         10       26          0.00046             30.00%
        Iteration   10       20          0.00023             -
        Any         100      116         0.00201             18.37%
        Iteration   100      98          0.00118             -
        Any         1000     1058        0.01853             16.78%
        Iteration   1000     906         0.01155             -
        Any         10,000   10,383      0.18189             17.41%
        Iteration   10,000   8843        0.11362             -
        Any         100,000  104,004     1.8297              18.27%
        Iteration   100,000  87,941      1.13163             -

    The LINQ expression is running about 17% slower for average size collections and worse for smaller collections. Presumably, this is due to the overhead of the state machine used to track the iterators for the yield returns in the LINQ expressions, which seems about right in a tight loop such as this.

    So what about other LINQ expressions? After all, Any() is one of the more trivial ones. I decided to try the TakeWhile() algorithm using a Count() to get the position stopped, like the sample Pete was using in his blog that Resharper refactored for him into LINQ:

        // LINQ form
        public static int GetTargetPosition1(IEnumerable<int> list, int target)
        {
            return list.TakeWhile(item => item != target).Count();
        }

        // traditionally iterative form
        public static int GetTargetPosition2(IEnumerable<int> list, int target)
        {
            int count = 0;

            foreach (var i in list)
            {
                if (i == target)
                {
                    break;
                }

                ++count;
            }

            return count;
        }

    Once again, the LINQ expression is much shorter, easier to read, and should be easier to maintain over time, reducing the cost of technical debt. So I ran these through the same test data:

        List<T> On 100,000 Iterations
        Method      Size     Total (ms)  Per Iteration (ms)  % Slower
        TakeWhile   10       41          0.00041             128%
        Iteration   10       18          0.00018             -
        TakeWhile   100      171         0.00171             88%
        Iteration   100      91          0.00091             -
        TakeWhile   1000     1604        0.01604             94%
        Iteration   1000     825         0.00825             -
        TakeWhile   10,000   15765       0.15765             92%
        Iteration   10,000   8204        0.08204             -
        TakeWhile   100,000  156950      1.5695              92%
        Iteration   100,000  81635       0.81635             -

    Wow! I expected some overhead due to the state machines iterators produce, but 90% slower? That seems a little heavy to me. So then I thought, well, what if TakeWhile() is not the right tool for the job? The problem is TakeWhile returns each item for processing using yield return, whereas our for-loop really doesn't care about the item beyond using it as a stop condition to evaluate. So what if that back and forth with the iterator state machine is the problem? Well, we can quickly create an (albeit ugly) lambda that uses Any() along with a count in a closure (if a LINQ guru knows a better way PLEASE let me know!). After all, this is more consistent with what we're trying to do: we're trying to find the first occurrence of an item and halt once we find it, we just happen to be counting on the way. This more closely matches Any().

        // a new method that uses LINQ but evaluates the count in a closure.
        public static int TakeWhileViaLinq2(IEnumerable<int> list, int target)
        {
            int count = 0;
            list.Any(item =>
                {
                    if (item == target)
                    {
                        return true;
                    }

                    ++count;
                    return false;
                });
            return count;
        }

    Now how does this one compare?

        List<T> On 100,000 Iterations
        Method         Size     Total (ms)  Per Iteration (ms)  % Slower
        TakeWhile      10       41          0.00041             128%
        Any w/Closure  10       23          0.00023             28%
        Iteration      10       18          0.00018             -
        TakeWhile      100      171         0.00171             88%
        Any w/Closure  100      116         0.00116             27%
        Iteration      100      91          0.00091             -
        TakeWhile      1000     1604        0.01604             94%
        Any w/Closure  1000     1101        0.01101             33%
        Iteration      1000     825         0.00825             -
        TakeWhile      10,000   15765       0.15765             92%
        Any w/Closure  10,000   10802       0.10802             32%
        Iteration      10,000   8204        0.08204             -
        TakeWhile      100,000  156950      1.5695              92%
        Any w/Closure  100,000  108378      1.08378             33%
        Iteration      100,000  81635       0.81635             -

    Much better! It seems that the overhead of TakeWhile() returning each item and updating the state in the state machine is drastically reduced by using Any(), since Any() simply iterates forward until it finds the value we're looking for, which is all the task requires.

    So the lesson there is: make sure when you use a LINQ expression you're choosing the best expression for the job, because if you're doing more work than you really need, you'll have a slower algorithm. But this is true of any choice of algorithm or collection in general.

    Even with the Any() with the count in the closure it is still about 30% slower, but let's consider that angle carefully. For a list of 100,000 items, it was the difference between roughly 1.08 ms and 0.82 ms per iteration in a List<T>. That's really not that bad at all in the grand scheme of things. Even running at 90% slower with TakeWhile(), for the vast majority of my projects, an extra millisecond to save potential errors in the long term and improve maintainability is a small price to pay. And if your typical list is 1000 items or less, we're talking only microseconds worth of difference.

    It's like they say: 90% of your performance bottlenecks are in 2% of your code, so over-optimizing almost never pays off. So personally, I'll take the LINQ expression wherever I can, because it will be easier to read and maintain (thus reducing technical debt) and I can rely on Microsoft's developers to have coded and unit tested those algorithms fully for me, instead of relying on a developer to code the loop logic correctly.

    If something's 90% slower, yes, it's worth keeping in mind, but it's really not until you start getting orders of magnitude slower (10x, 100x, 1000x) that alarm bells should really go off. And if I ever do need that last millisecond of performance? Well then I'll optimize JUST THAT problem spot. To me it's worth it for the readability, speed-to-market, and maintainability.

    Read the article

  • Conferences: Starting the round for 2011 with Mix

    - by Enrique Lima
    There are several conferences lining up for 2011. There are some private conferences I will be participating in, and some others where there is an invitation to submit content for consideration. That is the case with Mix 2011. The date: April 12-14, 2011. The venue: Mandalay Bay, Las Vegas. Here is the general information: http://live.visitmix.com/ To submit content: http://live.visitmix.com/opencall

    Read the article

  • OAGi Architecture Council OAGIS Ten Work Group Completes first round review of Concepts for OAGIS Ten

    - by michael.rowell
    Today the OAGi Architecture Council OAGIS Ten Work Group completed the first-level review of concepts for existing content for OAGIS Ten. This is one of the first milestones for OAGIS Ten. In doing this, the concepts of key objects (the Nouns) have been identified, along with the key contexts for their use. While OAGIS Ten remains a work in progress, the work group is showing progress. Going forward, the other councils will provide additional input to these and their own concepts and the contexts for each. Additionally, sub-groups will focus on concepts for given domains. Stay tuned for future progress. If anyone is interested in joining the effort, OAGi membership is open to anyone; please see the OAGi Web site.

    Read the article

  • Round-up: Embedded Java posts and videos

    - by terrencebarr
    I’ve been collecting links to some interesting blog posts and videos related to embedded Java over the last couple of weeks. Passing these on here:
    Freescale blog – The Embedded Beat: “Let’s make it real – Internet of Things”
    Simon Ritter’s blog: “Mind Reading with Raspberry Pi”
    NightHacking with Steve Chin and Terrence Barr: “Java in the Internet of Things”
    NightHacking with Steve Chin and Aldebaran Robotics: “The NAO Robot”
    Java Magazine: “Getting Started with Java SE for embedded devices on Raspberry Pi”
    OTN video interview: “Java at ARM TechCon”
    OPN Techtalk with MX Entertainment: “Using Java and MX’s GrinXML Framework to build Blu-ray Disc and media applications”
    Oracle PartnerNetwork Blog: “M2M Architecture: Machine to Machine – The Internet of Things – It’s all about the Data”
    YouTube Java Channel: “Understanding the JVM and Low Latency Applications”
    Cheers, – Terrence
    Filed under: Mobile & Embedded. Tagged: blog, iot, Java, Java Embedded, Raspberry Pi, video

    Read the article

  • October 2013 Oracle University Round-Up: New Training & Certifications

    - by Breanne Cooley
    Here are the highlights of what is happening this month at Oracle University.
    New Technology Overview Courses: Cloud, Big Data and Security. Learn about the latest technology solutions that can transform your business. These three Training On Demand courses are taught by industry experts and help you develop an understanding of how Oracle technologies can make a positive impact on your organization: Oracle Cloud Overview; Oracle Big Data Overview; Oracle Security Overview.
    New Cloud Application Foundation Courses: Check out our brand new 12c courses for WebLogic Server administrators and Coherence developers: Oracle WebLogic Server 12c: Administration I; Oracle WebLogic Server 12c: Administration II; Oracle Coherence 12c: New Features.
    Oracle Database 12c Courses: Our Oracle Database 12c training is becoming very popular. Here are this month's featured courses: Oracle Database 12c: New Features for Administrators; Oracle Database 12c: Administration Workshop; Oracle Database 12c: Install and Upgrade Workshop; Oracle Database 12c: Admin, Install and Upgrade Accelerated. Validate your expertise and add value by earning an Oracle Database 12c Certification.
    New Certifications for MySQL: Watch our two new videos to find out what's new with Oracle MySQL Certifications. 1) Oracle MySQL 5.6 Certification: What's New for Database Administrators (recommended training: MySQL for Beginners; MySQL for Database Administrators). 2) Oracle MySQL 5.6 Certification: What's New for Developers (recommended training: MySQL for Beginners; MySQL for Developers).
    New Training & Certification for Oracle Applications: JD Edwards 9.1 Training. Additional JD Edwards EnterpriseOne 9.1 training is now available for administrators, developers and implementation team members. Cross Application Training: JD Edwards EnterpriseOne Common Foundation Rel 9.x. Human Capital Management Training: JD Edwards EnterpriseOne Payroll for Canada Rel 9.x; JD Edwards EnterpriseOne Payroll for US Rel 9.x; JD Edwards EnterpriseOne Payroll Accelerated for Canada Rel 9.x; JD Edwards EnterpriseOne Payroll Accelerated for US Rel 9.x. Financial Management Training: JD Edwards EnterpriseOne Accounts Receivable Rel 9.x; JD Edwards EnterpriseOne Financial Report Writing Rel 9.x.
    Knowledge Management 8.5 Training: Oracle Knowledge 8.5 training is now available for analysts interested in learning how to quickly spot trends in content processing and system usage with analytics dashboards: Knowledge Analytics Rel 8.5.
    Taleo Training: Updated Taleo training is now available. Taleo Business Edition (TEE) business users can learn how to create more efficient reports. Recruiters will learn how to efficiently and effectively use Taleo Business Edition (TBE) Recruit: Taleo (TEE): Advanced Reporting; Taleo (TBE): Recruit - End User Fundamentals.
    New Training for Oracle Retail 13.4.1: Updated training for Retail Predictive Application Server and Retail Demand Forecasting is now available: RPAS Administration and Configuration Fundamentals; RPAS Technical Essentials: Fusion Client 13.4.1; Retail Demand Forecasting (RDF) Business Essentials 13.4.1.
    View all available training courses, learning paths and certifications at education.oracle.com, or contact your local education representative to learn more about Oracle University's education solutions. See you in class! -Oracle University Marketing Team

    Read the article

  • Multilevel Queue Scheduling (MQS) with Round Robin

    - by stackuser
    I'm trying to use MQS to create a Gantt chart of 5 processes (P1-P5), as well as their waiting, response, and turnaround times (and the averages of those metrics) within a CPU task schedule. Here's the basic table of arrival times and bursts (shown as an image in the original post). Here's my actual work version after ticking off the finished processes (also an image). The time quantum for each of the 2 queues is TQ1 = 4 and TQ2 = 3. Note that I'm doing MQS and NOT MLFQ. It just doesn't feel like I'm doing MQS right here. I know this gets a little complex, but maybe someone can point out where I'm going totally wrong.
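
    For reference, the round-robin mechanics inside a single MQS queue can be sketched as below. This is a minimal C# sketch with placeholder data (the question's real arrival/burst table is only in the images), so treat the process list and the all-arrive-at-zero assumption as hypothetical:

        // Round-robin within one queue: run each process for at most one
        // quantum, then requeue it if it still has burst time left.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        class RoundRobinSketch
        {
            static void Main()
            {
                int quantum = 4;  // e.g. TQ1 = 4 for the first queue
                var procs = new[] { ("P1", 6), ("P2", 3), ("P3", 8) }; // hypothetical bursts
                var remaining = procs.ToDictionary(p => p.Item1, p => p.Item2);
                var ready = new Queue<string>(procs.Select(p => p.Item1)); // all assumed at t = 0

                int t = 0;
                while (ready.Count > 0)
                {
                    string p = ready.Dequeue();
                    int slice = Math.Min(quantum, remaining[p]);
                    Console.WriteLine($"{t,3} - {t + slice,3}: {p}"); // one Gantt segment
                    t += slice;
                    remaining[p] -= slice;
                    if (remaining[p] > 0) ready.Enqueue(p); // back of the queue
                }
            }
        }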

    Read the article

  • T-SQL Tuesday #006 Round-up!

    - by Mike C
    T-SQL Tuesday this month was all about LOB (large object) data. Thanks to all the great bloggers out there who participated! The participants this month posted some very impressive articles with information running the gamut from Reporting Services to SQL Server spatial data types to BLOB-handling in SSIS. One thing I noticed immediately was a trend toward articles about spatial data (SQL Server 2008 Geography and Geometry data types, a very fun topic to explore if you haven’t played around with...(read more)

    Read the article

  • B2B Commerce Best Practice Round Table

    - by Jeri Kelley
    Are you struggling to deliver customers a consistent B2B multi-channel commerce experience? If yes, then you will want to join us for a panel discussion featuring Oracle customers and B2B commerce experts on Thursday, September 27th, to learn how leading B2B companies are succeeding in the new age of commerce. Topics of discussion will include: moving B2B data and content online, multiple site management, mobile platforms, and merchandising and personalization. Don’t miss this opportunity to learn more about the latest trends, challenges and successes in B2B multi-channel commerce. Learn more and register!

    Read the article

  • There are 2 jobs available - which one sounds better all round [closed]

    - by Steve Gates
    I am currently employed at a company where we scrape by each year breaking even, sometimes making a little profit. The development environment is very relaxed and we have a laugh. My colleagues are not interested in improving their knowledge unless they have to, so trying to get them to adopt things like TDD is a non-starter. My development manager is stuck in .NET 2 land and refuses to use things like LINQ. He over-complicates architecture and writes very unreadable code; here's an example: SortedList<int, SortedList<int, SortedList<int, MyClass>>>. The MD of the company has no drive and lets the one sales guy bring in the contracts. We are not busy all the time, and this allows me time to look at new technology and learn. In terms of using things like TDD, my development manager has no problem with it and can kind of see the purpose of it, he just won't use it himself. This means I am alone in learning new things and often resort to StackOverflow to make sure I get things right. The company has a lot of flexibility: I can work from home if needs be, and when my daughter was born they let me work from home 1 day a week. However, they expect this flexibility in return, often asking me to travel occasionally on a Friday afternoon for the following week, sometimes abroad. We are also pretty much on call 24/5, as we have engineers in various countries. Also, we have no testers, so most of the testing is done by us developers and some testing by engineers. Either way, no-one likes testing!
    I have been offered a role at a company I worked at 5 years ago. They were quite Victorian in their working practices, but that appears to have relaxed now, although I suspect it is still reasonably formal. There is a new team of developers I don't know, and they are about to move to new offices. The team lead is a guy that was there when I was, and I get the impression he takes his role seriously and likes his formal procedures and documentation. I think some of the Victorian practices may have rubbed off on him. However, he did say that if things crop up then, as long as he can trust the person, they can work at home, although he prefers people in the office. The team uses SCRUM, TDD and SOLID design principles, so they are quite up to date in technology, and reasonably Microsoft focused. It appears the Technical Director might be the R&D man, researching new technology on his own and not allowing developers to play with it. He is possibly a super-developer who makes all the decisions that no one can argue with. They are currently moving from NHibernate to Entity Framework, based on issues where their queries seem to fail sometimes and a feeling that NHibernate is stagnant. They have analysts and a QA team. The MD is focused and they are an expanding company making a profit each year. I'm not sure what the team morale is and whether they have a laugh; when I had a tour around the office they were in dead silence.
    I'm really unsure which role is best for me, and going with my gut instinct is useless as I'm not sure what my gut is telling me. Based on the information above, which role would you choose, and why?

    Read the article

  • Week in Geek: Google Announces New Round of Services to be Shut Down

    - by Asian Angel
    Our latest edition of WIG is filled with news link coverage on topics such as an IE flaw that allows attackers and advertisers to track cursor movement, Microsoft retiring its Live Mesh PC-sync service in February, Yahoo revamping its e-mail service and continuing its overhaul of Flickr, and more.

    Read the article

  • Blogging Round the World

    It seems that once or twice a week, I run across an Android-developer-oriented site that I hadn’t previously noticed. There are already a few aggregators and directories, and...

    Read the article

  • PHP Round function

    - by Steve
    Is it possible to round a number where, if it's 5, you just leave it, anything below 5 rounds down, and anything above 5 rounds up? EX: 5 * 1.35 = 6.75 .... leave it; 5.2 * 1.35 = 7.02 .... 7.00; 5.5 * 1.35 = 7.56 .... 8.00. I've formatted with round($n, 0, PHP_ROUND_HALF_UP), where $n is the product from the above calc, which leaves 6.75 but returns 7.02 for the next one. I also tried round($n, -1, PHP_ROUND_HALF_UP), which gives me the 7.00 on the second calc but then of course won't return a 6.75 for the first; instead it returns 680. This is a ticket markup calculation where the user enters the first number and it is multiplied by the second. I actually remove the decimal because they don't want to see it, and they want this sort of customized rounding on the result. Thanks, Steve

    Read the article

  • Round-Robin DNS in mobile networks

    - by k7k0
    After reading about load distribution alternatives, and given my limited skills in the area, I'm biased toward a round-robin DNS strategy. From what I understood, one key aspect of DNS round-robin is setting a low TTL value, avoiding caching. My main concern is that all my traffic comes from mobile networks; almost 30% of that comes from T-Mobile 3G. Some questions: 1) Is there a chance that almost all clients on the same mobile network will be redirected to the same IP within the TTL window? That would kill the distribution technique. 2) If I choose a really low TTL (zero or one), does that directly impact client performance? Does it cause a DNS miss every time, or is it a setting that only impacts the DNS servers? Any help would be much appreciated. Thanks
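
    As an illustration of what the client side actually sees, here is a small C# sketch (nothing in it is specific to the asker's setup) that prints the full record set the resolver returns. With round-robin DNS, the ordering or the first entry rotates between fresh queries, and a carrier's caching resolver is precisely what can pin one answer for many users at once:

        // Print every address returned for a host. Run it twice against
        // a round-robin name and compare the ordering of the results.
        using System;
        using System.Net;

        class DnsProbe
        {
            static void Main()
            {
                foreach (var ip in Dns.GetHostAddresses("example.com"))
                {
                    Console.WriteLine(ip);
                }
            }
        }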

    Read the article

  • Excel: ROUND & MOD giving me strange DATE results

    - by Mike
    This is sort of related to a previous question. My formula, which seemed to work fine yesterday, now gives strange results. Today is the 30th of March (30/03/10). It's 10:11am on the clock that the computer is using for the time stamp in the NOW() part of my worksheet. Below is the formula; the original post also had a screen shot of the results/columns. QUESTION: Why does it show 1/2 day, and also where does 23 1/2 come from? The NOW() is in a hidden column (F2)... which I forgot to unhide before I took the screen shot. =IF(ISBLANK(I2),ROUND(MOD(H2-F2,24),2),ROUND(MOD(I2-F2,24),2)) Thanks Mike

    Read the article

  • Round prices up to nearest 5 after conversion in oscommerce

    - by Rhyso
    Hi there, a conversion question relating to prices in osCommerce: for a custom currency conversion I need to round the USD prices up to the nearest $5, to avoid prices being displayed at silly values such as $263. I am trying to convert to an int and round the following line: $curr->display_price($listing['products_price'], tep_get_tax_rate($listing['products_tax_class_id'])); (as for some reason the price is returned as a string, I'm guessing to include the currency sign). However, I'm not having much luck. Does anybody know where the root conversion takes place? It might be easier for me to round() or ceil() from there, while it is still a raw number. Or any other ideas of how I can round the conversion? Thanks for any help, Rhys Thomas
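
    The arithmetic itself is just ceil-divide-multiply; in PHP that is ceil($price / 5) * 5. A minimal sketch of the same step in C# (the helper name is mine, and it assumes the converted price has already been parsed to a number before the currency formatting is applied):

        // Round a price up to the nearest $5 (or any step).
        using System;

        static class PriceRounding
        {
            public static decimal RoundUpToNearest(decimal price, decimal step = 5m)
            {
                // e.g. 263 -> 265, while 260 stays 260
                return Math.Ceiling(price / step) * step;
            }
        }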

    Read the article
