Search Results

Search found 17945 results on 718 pages for 'last fm'.


  • Error Using 32 vs. 64 bit SharePoint 2007 DLLs with PowerShell

    - by Brian Jackett
    Next time you fire up PowerShell to work with the SharePoint API, make sure you launch the proper bit version of PowerShell. Last week I had an interesting error that led to this blog post. Travel back in time a little bit with me to see where this 32 vs. 64 bit debate started.

    History

    Ever since the first pre-beta bits of Office 2010 landed in my lap I have been questioning whether it's better to run 32 or 64 bit applications on a 64 bit host operating system. In relation to Office 2010 I heard a number of arguments for 32 bit, including this link from the Office 2010 Engineering team. Given my typical usage scenarios, 32 bit seemed the way to go since I wasn't a "super RAM hungry" Excel user or the like.

    The Problem

    Since I had chosen 32 bit Office 2010, I tried to stick with 32 bit versions of the other programs I run, assuming the same benefits and rules applied to other applications. This is where I was wrong. Last week I was attempting to use the 32 bit PowerShell ISE (Integrated Scripting Environment) on a 64 bit WSS 3.0 server. When trying to reference the 64 bit SharePoint DLLs I got errors about not being able to find the web application. I have run into these errors when I have hosts file issues or improper permissions to the farm / site collection, but these were not the case. After taking a quick spin around the interwebs I ran across the below forum post comment and another MSDN forum reply that explained the error. Turns out that sometimes it's not possible to run 32 bit applications against a 64 bit OS / farm / assembly / etc.

    "…the problem could also be because your SharePoint is 64-Bit but your app is running in 32-bit mode"

    I quickly exited the 32 bit PowerShell ISE and ran the same code under the 64 bit PowerShell ISE. All errors were gone and the script ran successfully.

    Conclusion

    The rules of 32 vs. 64 bit interoperability do not always apply evenly across all applications and scenarios. In my case I wasn't able to run 32 bit PowerShell against 64 bit SharePoint DLLs. I'm updating all of my links and shortcuts to use 64 bit PowerShell where appropriate. I'm quite surprised it has taken me this long to run into this error, but sometimes blind luck is all that keeps you from running into errors. Lesson learned, and hopefully this can benefit you as well. Happy SharePointing all!

    -Frog Out

    Links

    http://blogs.technet.com/b/office2010/archive/2010/02/23/understanding-64-bit-office.aspx
    http://social.msdn.microsoft.com/Forums/en-US/sharepointdevelopment/thread/a732cb83-c2ef-4133-b04e-86477b72bbe3/
    http://stackoverflow.com/questions/266255/filenotfoundexception-with-the-spsite-constructor-whats-the-problem
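    A minimal sketch (not from the original post) of how a script could guard against this: check the process bitness before loading the SharePoint assembly. The site URL below is a hypothetical placeholder.

        # Hypothetical example: bail out early if this is not a 64-bit PowerShell process,
        # since 64-bit SharePoint assemblies cannot be loaded into a 32-bit host.
        if ([IntPtr]::Size -ne 8) {
            Write-Warning "32-bit PowerShell detected - launch the 64-bit console/ISE instead."
            return
        }

        # Load the SharePoint API and open a site collection (URL is a placeholder).
        [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
        $site = New-Object -TypeName Microsoft.SharePoint.SPSite -ArgumentList "http://yourserver/sites/test"
        $site.Url
        $site.Dispose()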

    Read the article

  • SQLRally and SQLRally - Session material

    - by Hugo Kornelis
    I had a great week last week. First at SQLRally Nordic , in Stockholm, where I presented a session on how improvements to the OVER clause can help you simplify queries in SQL Server 2012 enormously. And then I continued straight on into SQLRally Amsterdam , where I delivered a session on the performance implications of using user-defined functions in T-SQL. I understand that both events will make my slides and demo code downloadable from their website, but this may take a while. So those who do not...(read more)
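    For readers unfamiliar with the SQL Server 2012 OVER clause improvements the first session covers, here is a minimal illustrative sketch (not taken from the session material; the table and column names are hypothetical): the new frame specification turns a running total into a single windowed aggregate.

        -- Running total per order date using the 2012 frame extensions.
        -- dbo.Orders, OrderDate and Amount are hypothetical names.
        SELECT OrderDate,
               Amount,
               SUM(Amount) OVER (ORDER BY OrderDate
                                 ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS RunningTotal
        FROM dbo.Orders;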

    Read the article

  • Advanced 2D and 3D Design Tools in ZWCAD 2010

    Last time I introduced you to my initial experiences with a powerful CAD software package - ZWCAD 2010 (http://www.zwcad.org/products_download_list.php?id=107). As I continued with my testing, I found more surp... [Author: Damian Chloe - Computers and Internet - March 29, 2010]

    Read the article

  • gnome-tweak-tool doesn't start due to "ImportError: No module named gi" error

    - by Khajak Vahanyan
    I am using Ubuntu 11.10 with GNOME Shell and have a problem with gnome-tweak-tool. When I click on it, nothing happens, and when I try to launch it from a terminal it gives this error:

    Traceback (most recent call last):
      File "/usr/bin/gnome-tweak-tool", line 22, in <module>
        import gi
    ImportError: No module named gi

    I googled a bit and found some suggested solutions (reinstalled some python-gobject packages), but they still didn't help.
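    A small diagnostic sketch one could try (an assumption on my part, not a confirmed fix; package names vary between releases): check which interpreter the tool uses and whether that interpreter can import gi at all.

        # Show the interpreter line gnome-tweak-tool is launched with.
        head -n1 /usr/bin/gnome-tweak-tool

        # Check whether "gi" is importable by the default python.
        python -c "import gi; print gi.__file__"

        # If the import fails, reinstalling the GObject introspection bindings may help
        # (the package name python-gobject is illustrative for this release).
        sudo apt-get install --reinstall python-gobject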

    Read the article

  • Ubuntu 12.04 LTS installation problem

    - by Zxy
    I am trying to install Ubuntu 12.04 LTS on my PC using WUBI. However, I keep getting this error:

    An error occured: Error executing command
    >>command=C:\\System32\bcdedit.exe /set {2708afc0-9ffa-11e1-bc51-d167219ffa25} device partition=E:
    >>retval=1
    >>stderr=An error has occured setting the element data. The request is not supported.
    >>stdout=
    For more information, please see the logfile:

    Logfile:

    06-11 10:57 DEBUG TaskList: ## Finished choose_disk_sizes
    06-11 10:57 DEBUG TaskList: ## Running expand_diskimage...
    06-11 10:59 DEBUG TaskList: ## Finished expand_diskimage
    06-11 10:59 DEBUG TaskList: ## Running create_swap_diskimage...
    06-11 10:59 DEBUG TaskList: ## Finished create_swap_diskimage
    06-11 10:59 DEBUG TaskList: ## Running modify_bootloader...
    06-11 10:59 DEBUG TaskList: New task modify_bcd
    06-11 10:59 DEBUG TaskList: ### Running modify_bcd...
    06-11 10:59 DEBUG WindowsBackend: modify_bcd Drive(C: hd 51255.1171875 mb free ntfs)
    06-11 10:59 ERROR TaskList: Error executing command
    >>command=C:\Windows\System32\bcdedit.exe /set {2708afc0-9ffa-11e1-bc51-d167219ffa25} device partition=E:
    >>retval=1
    >>stderr=An error has occurred setting the element data. The request is not supported.
    >>stdout=
    Traceback (most recent call last):
      File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__
      File "\lib\wubi\backends\win32\backend.py", line 697, in modify_bcd
      File "\lib\wubi\backends\common\utils.py", line 66, in run_command
    Exception: Error executing command
    >>command=C:\Windows\System32\bcdedit.exe /set {2708afc0-9ffa-11e1-bc51-d167219ffa25} device partition=E:
    >>retval=1
    >>stderr=An error has occurred setting the element data. The request is not supported.
    >>stdout=
    06-11 10:59 DEBUG TaskList: # Cancelling tasklist
    06-11 10:59 DEBUG TaskList: New task modify_bcd
    06-11 10:59 ERROR root: Error executing command
    >>command=C:\Windows\System32\bcdedit.exe /set {2708afc0-9ffa-11e1-bc51-d167219ffa25} device partition=E:
    >>retval=1
    >>stderr=An error has occurred setting the element data. The request is not supported.
    >>stdout=
    Traceback (most recent call last):
      File "\lib\wubi\application.py", line 58, in run
      File "\lib\wubi\application.py", line 132, in select_task
      File "\lib\wubi\application.py", line 158, in run_installer
      File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__
      File "\lib\wubi\backends\win32\backend.py", line 697, in modify_bcd
      File "\lib\wubi\backends\common\utils.py", line 66, in run_command
    Exception: Error executing command
    >>command=C:\Windows\System32\bcdedit.exe /set {2708afc0-9ffa-11e1-bc51-d167219ffa25} device partition=E:
    >>retval=1
    >>stderr=An error has occurred setting the element data. The request is not supported.
    >>stdout=
    06-11 10:59 DEBUG TaskList: New task modify_bcd
    06-11 10:59 DEBUG TaskList: New task modify_bcd
    06-11 10:59 DEBUG TaskList: ## Finished modify_bootloader
    06-11 10:59 DEBUG TaskList: # Finished tasklist

    Read the article

  • What’s coming up

    - by GavinPayneUK
    In the last couple of months my community activities list has had things leave it and new things join it, so I thought I'd share, and promote, my future plans. Microsoft Certified Architect: SQL Server – Giving back. Preparing for my MCA Board was the hardest, yet in hindsight the most rewarding and interesting, thing I've ever done. The subjects it covers still interest me to the extent that I'm now contributing to the MCA programme itself, allowing the next people through the certification's...(read more)

    Read the article

  • The Oracle Cash Management Secret Very Few Customers Know About

    - by Theresa Hickman
    Did you know that Oracle Cash Management has a robust cash positioning feature? I had no idea. I was under the mistaken impression that Oracle Cash Management only did bank statement reconciliations. It seems I am not alone. In fact, many Oracle Financials customers are also not aware of this, even though it is delivered for free with the Oracle Financials license. Even better, last week Oracle released an enhancement to Oracle Cash Management for Release 12 that will greatly help customers with their cash positioning needs.

    As we all know, credit is tight these days. Companies need better visibility of their cash and other liquidity positions to make better use of their cash resources. Today, many customers are managing their cash positions manually using spreadsheets. We also hear how many of them are maintaining larger than normal balances in numerous bank accounts because they just do not have the visibility, and therefore the comfort, they need. Although spreadsheets may work in the short term, they are not the best way to manage your cash positions for the long term, especially if you have dozens, or even hundreds, of bank and brokerage accounts. Also, spreadsheets are a lot more risky because they can be overwritten, deleted, difficult to audit, etc.

    With the newly enhanced positioning feature in Oracle Cash Management, customers can manage their daily cash positions using an Excel-like interface that is very flexible and user-configurable. You can link the worksheet to an unlimited number of bank accounts to automatically retrieve your opening balances, the current/intra-day cash inflows and outflows, as well as your expected cash flows from your FX, Investment and Debt positions if you have Oracle's Treasury module. Oracle Cash Management also has direct integration with Oracle Receivables, Oracle Payables, and Payroll, which adds to the comprehensive picture of what's happening with your organization's cash in real time.

    Here's a screen shot of what the cash positioning page looks like. As you can see, your Treasurers can obtain a holistic view of all cash positions across any number of bank accounts as well as other sources of cash flow movements. Depending on how they manage their accounts, they can also use this feature to initiate or monitor bank account sweeps or transfers between their zero balance accounts (ZBA) or cash pools. The cash position worksheet provides drill-down for more detail and the ability to manually enter items directly into the worksheet for even greater flexibility and control.

    The enhancements to this feature were released last week, with patches for Release 12.0.6 and 12.1.1. For more information, visit the following website: http://launch.oracle.com. PIN: yes2try

    Read the article

  • SQLPASS Summit 2011 -- I'm going but not as a speaker

    - by NeilHambly
    This post is about my attempt, and slight failure, at getting a presenting session at this year's SQLPASS Summit 2011. I had submitted two sessions for the first time (I think we had a maximum of four we could enter, but I was happy to go with just two this time: one I had already presented and one that was nearly completed). My general session (75 minutes) was the same session on "Waits" I had done at SQLBits 8 back in Brighton last April, and the other was a new half-day (3.5 hour) format session I'm completing on the SQLOS layer. Well...(read more)

    Read the article

  • Overwrite previous output in Bash instead of appending it

    - by NES
    For a bash timer I use this code:

    #!/bin/bash
    sek=60
    echo "60 Seconds Wait!"
    echo -n "One Moment please "
    while [ $sek -ge 1 ]
    do
        echo -n "$sek "
        sleep 1
        sek=$[$sek-1]
    done
    echo
    echo "ready!"

    That gives me something like this:

    One Moment please: 60 59 58 57 56 55 ...

    Is there a way to replace the last printed value with the current one, so that the output doesn't build up a long trail but instead counts the seconds down in place, like a real clock? (Hope you understand what I mean :))
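    A minimal sketch of one common way to do this (my own example, not from the question): print a carriage return (\r) before each value so every update overwrites the previous one on the same line.

        #!/bin/bash
        # Count down in place: \r moves the cursor back to the start of the line,
        # and "echo -ne" interprets the escape without adding a newline.
        sek=60
        echo "60 Seconds Wait!"
        while [ "$sek" -ge 1 ]
        do
            echo -ne "\rOne Moment please: $sek  "   # trailing spaces erase leftover digits
            sleep 1
            sek=$((sek - 1))
        done
        echo -e "\rready!                  "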

    Read the article

  • How to Make Money Online Part 4 - Keyword Research is Critical

    Last time we looked at choosing the best products on ClickBank to promote, now we will look at keyword research, and how critical it is to make money online. You may have heard the term keyword research if you have been looking around for a while, but if you are looking to make or earn money online for the first time, I will give a brief description of what keyword research is and why it is so important.

    Read the article

  • New ADF Design Paper Covering Task Flows

    - by Duncan Mills
    Just published to OTN today is a new paper that I've put together Task Flow Design Fundamentals. This paper collates a whole bunch of random thoughts about ADF Controller design that I've collected over the last couple of years. Hopefully this will be a useful aid to help you think about your task flow design in a more structured way.

    Read the article

  • Motherboard P5Q SE2 - GeForce GT210 - Intel Core 2 Duo E4400 - ubuntu compatibility

    - by Massimo
    I'm completely new to Ubuntu (and Linux in general); my goal (for my first PC) is to build a media center. As mentioned in the subject, I have the following components:

    - Motherboard Asus P5Q SE2
    - GeForce GT210 1GB
    - Intel Core 2 Duo E4400

    The remaining parts have to be purchased. I'd like to know if these are compatible with Ubuntu 12.04, which is the latest version. Thanks for any help. Massimo

    Read the article

  • This Year's SQL Christmas Card

    - by Mike C
    This year's Christmas Card is similar to last year's. I used the geometry data type again for a spatial data design. Just download the attachment, unzip the .SQL script and run it in SSMS. Then look at the Spatial Data preview tab for the result. Also don't forget to visit http://www.noradsanta.org/ if your kids want to track Santa. Merry Christmas, Happy Holidays and have a great new year!...(read more)
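    As a tiny illustrative sketch of the mechanism the card relies on (my own example, not the card's script): any query that returns a geometry value can be previewed in SSMS's spatial data preview tab.

        -- Hypothetical example: a simple square rendered in the spatial preview tab.
        SELECT geometry::STGeomFromText('POLYGON((0 0, 10 0, 10 10, 0 10, 0 0))', 0) AS Shape;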

    Read the article

  • How do I install a 32-bit Java runtime on an amd64 server with multiarch?

    - by kbyrd
    I'm a long time Ubuntu user, but I haven't been following the community for the last several versions. I just did a fresh default minimal amd64 install of Oneiric, and I need a 32-bit JRE for a particular application. I last did this on 10.10, so I am not familiar with the multiarch stuff. Instead of installing ia32-libs, I read a bit and tried:

    aptitude install default-jre-headless:i386

    But that just got me:

    The following NEW packages will be installed:
      default-jre-headless{b} openjdk-6-jre-headless{ab}
    The following packages are RECOMMENDED but will NOT be installed:
      icedtea-6-jre-cacao icedtea-6-jre-jamvm
    0 packages upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
    Need to get 27.3 MB of archives. After unpacking 82.1 MB will be used.
    The following packages have unmet dependencies:
      default-jre-headless: Depends: java-common which is a virtual package.
      openjdk-6-jre-headless: Depends: openjdk-6-jre-lib (>= 6b23~pre10-0ubuntu5) which is a virtual package.
        Depends: ca-certificates-java which is a virtual package.
        Depends: tzdata-java which is a virtual package.
        Depends: java-common (>= 0.28) which is a virtual package.
        Depends: libcups2 but it is not going to be installed.
        Depends: liblcms1 but it is not going to be installed.
        Depends: libjpeg62 but it is not going to be installed.
        Depends: libnss3-1d (>= 3.12.9+ckbi-1.82-0ubuntu4) but it is not going to be installed.
        Depends: libc6 (>= 2.11) but it is not going to be installed.
        Depends: libfreetype6 (>= 2.2.1) but it is not going to be installed.
        Depends: libgcc1 (>= 1:4.1.1) but it is not going to be installed.
        Depends: libstdc++6 (>= 4.1.1) but it is not going to be installed.
        Depends: zlib1g (>= 1:1.1.4) but it is not going to be installed.
    The following actions will resolve these dependencies:
      Keep the following packages at their current version:
      1) default-jre-headless [Not Installed]
      2) openjdk-6-jre-headless [Not Installed]
    Accept this solution? [Y/n/q/?] q

    Is aptitude not installing the 32-bit versions of the dependencies? What is the right way to do this? I'll likely want both a 64-bit and a 32-bit JRE if that matters.
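    A hedged sketch of the general multiarch route (my assumption, not an answer from the thread; on 11.10 the exact mechanism and package names may differ):

        # Allow i386 packages alongside amd64. On current releases this is done with
        # --add-architecture; on 11.10 the equivalent was a "foreign-architecture i386"
        # line in a file under /etc/dpkg/dpkg.cfg.d/.
        sudo dpkg --add-architecture i386
        sudo apt-get update

        # Then ask for the 32-bit build of the runtime explicitly.
        sudo apt-get install openjdk-6-jre-headless:i386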

    Read the article

  • Back from the OData Roadshow

    - by Fabrice Marguerie
    I'm just back from the OData Roadshow with Douglas Purdy and Jonathan Carter. Paris was the last location of seven cities around the world. If there was something you wanted to know about OData, that was the place to be! These guys gave a great tour around OData. I learned things I didn't know about OData, and I was able to give a demo of Sesame to the audience. More ideas and use cases are popping up!

    Read the article

  • How do I get my clipboard (copy and paste) working again?

    - by Alex Black
    I'm running Ubuntu 9.04, and out of the blue I can no longer cut and paste. I imagine if I restart my computer it will work again, but that's a pain. How can I fix/reset the clipboard?

    1. Type "hello" into Text Editor
    2. Highlight the text "hello"
    3. Press CTRL-C
    4. See the text become unhighlighted (is this normal?)
    5. Press CTRL-V
    6. See the word "network" get pasted in... perhaps that was the last thing I copied when it was still working?

    Read the article

  • Flash video doesn't work after upgrade to 11.10

    - by keekerdc
    I've found that after upgrading to 11.10 (32 bit), I can no longer watch streaming flash video on sites such as Ustream or Justin.tv. I've spent the last several hours installing this package and uninstalling that package, and purging everything flash-related and starting over, to no avail. I was curious if anybody else has run into similar issues and found a fix. I'm running an NVIDIA card, and I've tried both sets of drivers available.

    Read the article

  • Self-signed certificates for a known community

    - by costlow
    Recently announced changes scheduled for Java 7 update 51 (January 2014) have established that the default security slider will require code signatures and the Permissions Manifest attribute. Code signatures are a common practice recommended in the industry because they help determine that the code your computer will run is the same code that the publisher created. This post is written to help users that need to use self-signed certificates without involving a public Certificate Authority.

    The role of self-signed certificates within a known community

    You may still use self-signed certificates within a known community. The difference between self-signed and purchased-from-CA is that your users must import your self-signed certificate to indicate that it is valid, whereas Certificate Authorities are already trusted by default. This works for known communities where people will trust that my certificate is mine, but does not scale widely where I cannot actually contact or know the systems that will need to trust my certificate. Public Certificate Authorities are widely trusted already because they abide by many different requirements and frequent checks.

    An example would be students in a university class sharing their public certificates on a mailing list or web page, employees publishing on the intranet, or a system administrator rolling certificates out to end-users. Managed machines help this because you can automate the rollout, but they are not required -- the major point is simply that people will trust and import your certificate.

    How to distribute self-signed certificates for a known community

    There are several steps required to distribute a self-signed certificate to users so that they will properly trust it. These steps are:

    - Creating a public/private key pair for signing.
    - Exporting your public certificate for others.
    - Importing your certificate onto machines that should trust you.
    - Verifying the work on a different machine.

    Creating a public/private key pair for signing

    Having a public/private key pair will give you the ability both to sign items yourself and issue a Certificate Signing Request (CSR) to a certificate authority. Create your public/private key pair by following the instructions for creating key pairs. Every Certificate Authority that I looked at provided similar instructions, but for the sake of cohesiveness I will include the commands that I used here.

    Generate the key pair:

        keytool -genkeypair -alias erikcostlow -keyalg EC -keysize 571 -validity 730 -keystore javakeystore_keepsecret.jks

    Provide a good password for this file. The alias "erikcostlow" is my name and therefore easy to remember. Substitute your own name or something like "mykey." The keyalg of EC (Elliptic Curve) and keysize of 571 will give your key a good strong lifetime. All keys are set to expire. Two years or 730 days is a reasonable compromise between not-long-enough and too-long. Most public Certificate Authorities will sign something for one to five years.

    You will be placing your keys in javakeystore_keepsecret.jks -- this file will contain private keys and therefore should not be shared. If someone else gets these private keys, they can impersonate your signature. Please be cautious about automated cloud backup systems and private key stores.

    Answer all the questions. It is important to provide good answers because you will stick with them for the "-validity" days that you specified above.

        What is your first and last name?  [Unknown]:  First Last
        What is the name of your organizational unit?  [Unknown]:  Line of Business
        What is the name of your organization?  [Unknown]:  MyCompany
        What is the name of your City or Locality?  [Unknown]:  City Name
        What is the name of your State or Province?  [Unknown]:  CA
        What is the two-letter country code for this unit?  [Unknown]:  US
        Is CN=First Last, OU=Line of Business, O=MyCompany, L=City, ST=CA, C=US correct?  [no]:  yes
        Enter key password for <erikcostlow>
          (RETURN if same as keystore password):

    Verify your work:

        keytool -list -keystore javakeystore_keepsecret.jks

    You should see your new key pair.

    Exporting your public certificate for others

    Public Key Infrastructure relies on two simple concepts: the public key may be made public and the private key must be private. By exporting your public certificate, you are able to share it with others who can then import the certificate to trust you.

        keytool -exportcert -keystore javakeystore_keepsecret.jks -alias erikcostlow -file erikcostlow.cer

    To verify this, you can open the .cer file by double-clicking it on most operating systems. It should show the information that you entered during the creation prompts. This is the file that you will share with others. They will use this certificate to prove that artifacts signed by this certificate came from you. If you do not manage machines directly, place the certificate file in an area that people within the known community should trust, such as an intranet page.

    Import the certificate onto machines that should trust you

    In order to trust the certificate, people within your known network must import your certificate into their keystores. The first step is to verify that the certificate is actually yours, which can be done through any channel: email, phone, in person, etc. Known networks can usually do this.

    Determine the right keystore:

    - For an individual user looking to trust another, the correct file is within that user's directory, e.g. USER_HOME\AppData\LocalLow\Sun\Java\Deployment\security\trusted.certs
    - For system-wide installations, Java's Certificate Authorities are in JAVA_HOME, e.g. C:\Program Files\Java\jre8\lib\security\cacerts
    - File paths for Mac and Linux are included in the link above.

    Follow the instructions to import the certificate into the keystore:

        keytool -importcert -keystore THEKEYSTOREFROMABOVE -alias erikcostlow -file erikcostlow.cer

    In this case, I am still using my name for the alias because it's easy for me to remember. You may also use an alias of your company name.

    Scaling distribution of the import

    The easiest way to apply your certificate across many machines is to just push the .certs or cacerts file onto them. When doing this, watch out for any changes that people would have made to this file on their machines.

    - Trusted.certs: When publishing into user directories, your file will overwrite any keys that the user has added since the last update.
    - CACerts: It is best to re-run the import command with each installation rather than just overwriting the file. If you just keep the same cacerts file between upgrades, you will overwrite any CAs that have been added or removed. By re-importing, you stay up to date with changes.

    Verify work on a different machine

    Verification is a way of checking on the client machine to ensure that it properly trusts signed artifacts after you have added your signing certificate. Many people have started using deployment rule sets. You can validate the deployment rule set by:

    1. Creating and signing the deployment rule set on the computer that holds the private key.
    2. Copying the deployment rule set onto the different machine where you have imported the signing certificate.
    3. Verifying that the Java Control Panel's security tab shows your deployment rule set.

    Verifying an individual JAR file or multiple JAR files

    You can test a certificate chain by using the jarsigner command:

        jarsigner -verify filename.jar

    If the output does not say "jar verified" then run the following command to see why:

        jarsigner -verify -verbose -certs filename.jar

    Check the output for the term "CertPath not validated."

    Read the article

  • C#/.NET Little Wonders: The Generic Func Delegates

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. Back in one of my three original “Little Wonders” Trilogy of posts, I had listed generic delegates as one of the Little Wonders of .NET.  Later, someone posted a comment saying said that they would love more detail on the generic delegates and their uses, since my original entry just scratched the surface of them. Last week, I began our look at some of the handy generic delegates built into .NET with a description of delegates in general, and the Action family of delegates.  For this week, I’ll launch into a look at the Func family of generic delegates and how they can be used to support generic, reusable algorithms and classes. Quick Delegate Recap Delegates are similar to function pointers in C++ in that they allow you to store a reference to a method.  They can store references to either static or instance methods, and can actually be used to chain several methods together in one delegate. Delegates are very type-safe and can be satisfied with any standard method, anonymous method, or a lambda expression.  They can also be null as well (refers to no method), so care should be taken to make sure that the delegate is not null before you invoke it. Delegates are defined using the keyword delegate, where the delegate’s type name is placed where you would typically place the method name: 1: // This delegate matches any method that takes string, returns nothing 2: public delegate void Log(string message); This delegate defines a delegate type named Log that can be used to store references to any method(s) that satisfies its signature (whether instance, static, lambda expression, etc.). Delegate instances then can be assigned zero (null) or more methods using the operator = which replaces the existing delegate chain, or by using the operator += which adds a method to the end of a delegate chain: 1: // creates a delegate instance named currentLogger defaulted to Console.WriteLine (static method) 2: Log currentLogger = Console.Out.WriteLine; 3:  4: // invokes the delegate, which writes to the console out 5: currentLogger("Hi Standard Out!"); 6:  7: // append a delegate to Console.Error.WriteLine to go to std error 8: currentLogger += Console.Error.WriteLine; 9:  10: // invokes the delegate chain and writes message to std out and std err 11: currentLogger("Hi Standard Out and Error!"); While delegates give us a lot of power, it can be cumbersome to re-create fairly standard delegate definitions repeatedly, for this purpose the generic delegates were introduced in various stages in .NET.  These support various method types with particular signatures. Note: a caveat with generic delegates is that while they can support multiple parameters, they do not match methods that contains ref or out parameters. If you want to a delegate to represent methods that takes ref or out parameters, you will need to create a custom delegate. We’ve got the Func… delegates Just like it’s cousin, the Action delegate family, the Func delegate family gives us a lot of power to use generic delegates to make classes and algorithms more generic.  Using them keeps us from having to define a new delegate type when need to make a class or algorithm generic. Remember that the point of the Action delegate family was to be able to perform an “action” on an item, with no return results.  
Thus Action delegates can be used to represent most methods that take 0 to 16 arguments but return void.  You can assign a method The Func delegate family was introduced in .NET 3.5 with the advent of LINQ, and gives us the power to define a function that can be called on 0 to 16 arguments and returns a result.  Thus, the main difference between Action and Func, from a delegate perspective, is that Actions return nothing, but Funcs return a result. The Func family of delegates have signatures as follows: Func<TResult> – matches a method that takes no arguments, and returns value of type TResult. Func<T, TResult> – matches a method that takes an argument of type T, and returns value of type TResult. Func<T1, T2, TResult> – matches a method that takes arguments of type T1 and T2, and returns value of type TResult. Func<T1, T2, …, TResult> – and so on up to 16 arguments, and returns value of type TResult. These are handy because they quickly allow you to be able to specify that a method or class you design will perform a function to produce a result as long as the method you specify meets the signature. For example, let’s say you were designing a generic aggregator, and you wanted to allow the user to define how the values will be aggregated into the result (i.e. Sum, Min, Max, etc…).  To do this, we would ask the user of our class to pass in a method that would take the current total, the next value, and produce a new total.  A class like this could look like: 1: public sealed class Aggregator<TValue, TResult> 2: { 3: // holds method that takes previous result, combines with next value, creates new result 4: private Func<TResult, TValue, TResult> _aggregationMethod; 5:  6: // gets or sets the current result of aggregation 7: public TResult Result { get; private set; } 8:  9: // construct the aggregator given the method to use to aggregate values 10: public Aggregator(Func<TResult, TValue, TResult> aggregationMethod = null) 11: { 12: if (aggregationMethod == null) throw new ArgumentNullException("aggregationMethod"); 13:  14: _aggregationMethod = aggregationMethod; 15: } 16:  17: // method to add next value 18: public void Aggregate(TValue nextValue) 19: { 20: // performs the aggregation method function on the current result and next and sets to current result 21: Result = _aggregationMethod(Result, nextValue); 22: } 23: } Of course, LINQ already has an Aggregate extension method, but that works on a sequence of IEnumerable<T>, whereas this is designed to work more with aggregating single results over time (such as keeping track of a max response time for a service). 
We could then use this generic aggregator to find the sum of a series of values over time, or the max of a series of values over time (among other things): 1: // creates an aggregator that adds the next to the total to sum the values 2: var sumAggregator = new Aggregator<int, int>((total, next) => total + next); 3:  4: // creates an aggregator (using static method) that returns the max of previous result and next 5: var maxAggregator = new Aggregator<int, int>(Math.Max); So, if we were timing the response time of a web method every time it was called, we could pass that response time to both of these aggregators to get an idea of the total time spent in that web method, and the max time spent in any one call to the web method: 1: // total will be 13 and max 13 2: int responseTime = 13; 3: sumAggregator.Aggregate(responseTime); 4: maxAggregator.Aggregate(responseTime); 5:  6: // total will be 20 and max still 13 7: responseTime = 7; 8: sumAggregator.Aggregate(responseTime); 9: maxAggregator.Aggregate(responseTime); 10:  11: // total will be 40 and max now 20 12: responseTime = 20; 13: sumAggregator.Aggregate(responseTime); 14: maxAggregator.Aggregate(responseTime); The Func delegate family is useful for making generic algorithms and classes, and in particular allows the caller of the method or user of the class to specify a function to be performed in order to generate a result. What is the result of a Func delegate chain? If you remember, we said earlier that you can assign multiple methods to a delegate by using the += operator to chain them.  So how does this affect delegates such as Func that return a value, when applied to something like the code below? 1: Func<int, int, int> combo = null; 2:  3: // What if we wanted to aggregate the sum and max together? 4: combo += (total, next) => total + next; 5: combo += Math.Max; 6:  7: // what is the result? 8: var comboAggregator = new Aggregator<int, int>(combo); Well, in .NET if you chain multiple methods in a delegate, they will all get invoked, but the result of the delegate is the result of the last method invoked in the chain.  Thus, this aggregator would always result in the Math.Max() result.  The other chained method (the sum) gets executed first, but it’s result is thrown away: 1: // result is 13 2: int responseTime = 13; 3: comboAggregator.Aggregate(responseTime); 4:  5: // result is still 13 6: responseTime = 7; 7: comboAggregator.Aggregate(responseTime); 8:  9: // result is now 20 10: responseTime = 20; 11: comboAggregator.Aggregate(responseTime); So remember, you can chain multiple Func (or other delegates that return values) together, but if you do so you will only get the last executed result. Func delegates and co-variance/contra-variance in .NET 4.0 Just like the Action delegate, as of .NET 4.0, the Func delegate family is contra-variant on its arguments.  In addition, it is co-variant on its return type.  To support this, in .NET 4.0 the signatures of the Func delegates changed to: Func<out TResult> – matches a method that takes no arguments, and returns value of type TResult (or a more derived type). Func<in T, out TResult> – matches a method that takes an argument of type T (or a less derived type), and returns value of type TResult(or a more derived type). Func<in T1, in T2, out TResult> – matches a method that takes arguments of type T1 and T2 (or less derived types), and returns value of type TResult (or a more derived type). 
Func<in T1, in T2, …, out TResult> – and so on up to 16 arguments, and returns value of type TResult (or a more derived type). Notice the addition of the in and out keywords before each of the generic type placeholders.  As we saw last week, the in keyword is used to specify that a generic type can be contra-variant -- it can match the given type or a type that is less derived.  However, the out keyword, is used to specify that a generic type can be co-variant -- it can match the given type or a type that is more derived. On contra-variance, if you are saying you need an function that will accept a string, you can just as easily give it an function that accepts an object.  In other words, if you say “give me an function that will process dogs”, I could pass you a method that will process any animal, because all dogs are animals.  On the co-variance side, if you are saying you need a function that returns an object, you can just as easily pass it a function that returns a string because any string returned from the given method can be accepted by a delegate expecting an object result, since string is more derived.  Once again, in other words, if you say “give me a method that creates an animal”, I can pass you a method that will create a dog, because all dogs are animals. It really all makes sense, you can pass a more specific thing to a less specific parameter, and you can return a more specific thing as a less specific result.  In other words, pay attention to the direction the item travels (parameters go in, results come out).  Keeping that in mind, you can always pass more specific things in and return more specific things out. For example, in the code below, we have a method that takes a Func<object> to generate an object, but we can pass it a Func<string> because the return type of object can obviously accept a return value of string as well: 1: // since Func<object> is co-variant, this will access Func<string>, etc... 2: public static string Sequence(int count, Func<object> generator) 3: { 4: var builder = new StringBuilder(); 5:  6: for (int i=0; i<count; i++) 7: { 8: object value = generator(); 9: builder.Append(value); 10: } 11:  12: return builder.ToString(); 13: } Even though the method above takes a Func<object>, we can pass a Func<string> because the TResult type placeholder is co-variant and accepts types that are more derived as well: 1: // delegate that's typed to return string. 2: Func<string> stringGenerator = () => DateTime.Now.ToString(); 3:  4: // This will work in .NET 4.0, but not in previous versions 5: Sequence(100, stringGenerator); Previous versions of .NET implemented some forms of co-variance and contra-variance before, but .NET 4.0 goes one step further and allows you to pass or assign an Func<A, BResult> to a Func<Y, ZResult> as long as A is less derived (or same) as Y, and BResult is more derived (or same) as ZResult. Sidebar: The Func and the Predicate A method that takes one argument and returns a bool is generally thought of as a predicate.  Predicates are used to examine an item and determine whether that item satisfies a particular condition.  Predicates are typically unary, but you may also have binary and other predicates as well. 
Predicates are often used to filter results, such as in the LINQ Where() extension method: 1: var numbers = new[] { 1, 2, 4, 13, 8, 10, 27 }; 2:  3: // call Where() using a predicate which determines if the number is even 4: var evens = numbers.Where(num => num % 2 == 0); As of .NET 3.5, predicates are typically represented as Func<T, bool> where T is the type of the item to examine.  Previous to .NET 3.5, there was a Predicate<T> type that tended to be used (which we’ll discuss next week) and is still supported, but most developers recommend using Func<T, bool> now, as it prevents confusion with overloads that accept unary predicates and binary predicates, etc.: 1: // this seems more confusing as an overload set, because of Predicate vs Func 2: public static SomeMethod(Predicate<int> unaryPredicate) { } 3: public static SomeMethod(Func<int, int, bool> binaryPredicate) { } 4:  5: // this seems more consistent as an overload set, since just uses Func 6: public static SomeMethod(Func<int, bool> unaryPredicate) { } 7: public static SomeMethod(Func<int, int, bool> binaryPredicate) { } Also, even though Predicate<T> and Func<T, bool> match the same signatures, they are separate types!  Thus you cannot assign a Predicate<T> instance to a Func<T, bool> instance and vice versa: 1: // the same method, lambda expression, etc can be assigned to both 2: Predicate<int> isEven = i => (i % 2) == 0; 3: Func<int, bool> alsoIsEven = i => (i % 2) == 0; 4:  5: // but the delegate instances cannot be directly assigned, strongly typed! 6: // ERROR: cannot convert type... 7: isEven = alsoIsEven; 8:  9: // however, you can assign by wrapping in a new instance: 10: isEven = new Predicate<int>(alsoIsEven); 11: alsoIsEven = new Func<int, bool>(isEven); So, the general advice that seems to come from most developers is that Predicate<T> is still supported, but we should use Func<T, bool> for consistency in .NET 3.5 and above. Sidebar: Func as a Generator for Unit Testing One area of difficulty in unit testing can be unit testing code that is based on time of day.  We’d still want to unit test our code to make sure the logic is accurate, but we don’t want the results of our unit tests to be dependent on the time they are run. One way (of many) around this is to create an internal generator that will produce the “current” time of day.  This would default to returning result from DateTime.Now (or some other method), but we could inject specific times for our unit testing.  Generators are typically methods that return (generate) a value for use in a class/method. For example, say we are creating a CacheItem<T> class that represents an item in the cache, and we want to make sure the item shows as expired if the age is more than 30 seconds.  
Such a class could look like: 1: // responsible for maintaining an item of type T in the cache 2: public sealed class CacheItem<T> 3: { 4: // helper method that returns the current time 5: private static Func<DateTime> _timeGenerator = () => DateTime.Now; 6:  7: // allows internal access to the time generator 8: internal static Func<DateTime> TimeGenerator 9: { 10: get { return _timeGenerator; } 11: set { _timeGenerator = value; } 12: } 13:  14: // time the item was cached 15: public DateTime CachedTime { get; private set; } 16:  17: // the item cached 18: public T Value { get; private set; } 19:  20: // item is expired if older than 30 seconds 21: public bool IsExpired 22: { 23: get { return _timeGenerator() - CachedTime > TimeSpan.FromSeconds(30.0); } 24: } 25:  26: // creates the new cached item, setting cached time to "current" time 27: public CacheItem(T value) 28: { 29: Value = value; 30: CachedTime = _timeGenerator(); 31: } 32: } Then, we can use this construct to unit test our CacheItem<T> without any time dependencies: 1: var baseTime = DateTime.Now; 2:  3: // start with current time stored above (so doesn't drift) 4: CacheItem<int>.TimeGenerator = () => baseTime; 5:  6: var target = new CacheItem<int>(13); 7:  8: // now add 15 seconds, should still be non-expired 9: CacheItem<int>.TimeGenerator = () => baseTime.AddSeconds(15); 10:  11: Assert.IsFalse(target.IsExpired); 12:  13: // now add 31 seconds, should now be expired 14: CacheItem<int>.TimeGenerator = () => baseTime.AddSeconds(31); 15:  16: Assert.IsTrue(target.IsExpired); Now we can unit test for 1 second before, 1 second after, 1 millisecond before, 1 day after, etc.  Func delegates can be a handy tool for this type of value generation to support more testable code.  Summary Generic delegates give us a lot of power to make truly generic algorithms and classes.  The Func family of delegates is a great way to be able to specify functions to calculate a result based on 0-16 arguments.  Stay tuned in the weeks that follow for other generic delegates in the .NET Framework!   Tweet Technorati Tags: .NET, C#, CSharp, Little Wonders, Generics, Func, Delegates

    Read the article

  • The HTG Guide to Hiding Your Data in a TrueCrypt Hidden Volume

    - by Jason Fitzpatrick
    Last week we showed you how to set up a simple, but strongly encrypted, TrueCrypt volume to help you protect your sensitive data. This week we're digging in deeper and showing you how to hide your encrypted data within your encrypted data.

    Read the article

  • Internet At Home

    Networking used to be just for businesses at the office, but one of the biggest changes of the last few years is the increased necessity of a network at home. Ten years ago it was common for houses ... [Author: Chris Holgate - Computers and Internet - April 06, 2010]

    Read the article

  • SQLAuthority News – Presenting at Great Indian Developer Summit 2012 – SQL Server Misconception and Resolutions

    - by pinaldave
    Earlier, during TechEd 2012, I presented a session on SQL Server Misconceptions and Resolutions. It was a pleasure to present this session with Vinod Kumar during the event. The Great Indian Developer Summit is around the corner and I will be presenting there once again on the same topic. We had an excellent response during the last event; the hall was completely full, and there were plenty who were not able to get into the session as there was no place for them to sit or stand inside. Well, here is another chance for all who missed the presentation.

    New Additions

    During the last session we were a two-presenter tag team, and we presented the session in a way that suited two speakers on one stage. But this time I am the only presenter, so I decided to present this session in a much different way. I will still assume there are two presenters. One of the presenters will be me, of course, and the second person will be YOU! Yes, you read that right – you will be presenting this session with me. If you wonder how, well, you will have to attend the session to figure it out.

    Talking Points

    We will be talking about the following topics in the session, which we will relate to SQL Server:

    - Moon Landing
    - Napoléon Bonaparte
    - Wall of China
    - Bollywood
    - …and of course, SQL Server itself.

    I promise that this 45-minute presentation will be one of the highlights of the event for you.

    Goodies

    I can only promise 20 goodies at the moment. I might bring more when you meet me there.

    Session Details

    Title: SQL Server Misconceptions and Resolution – A Practical Perspective (Add to Calendar)

    Abstract: "The earth is flat!" – An ancient common misconception, which has been proven incorrect as we progressed into modern times. In this session, we will see various database misconceptions prevailing and their resolutions with the aid of demos. In this unique session, the audience will be a part of the conversation and resolution.

    Date and Time: April 17, 2012, 16:55 to 17:40

    Location: J. N. Tata Auditorium, National Science Symposium Complex (NSSC), Bangalore, India

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

    Read the article

  • Altering a Column Which has a Default Constraint

    - by Dinesh Asanka
    Setting up a default column is a common task for developers. But are we naming those default constraints explicitly? In the table creation below, the column Sys_DateTime is given the default value getdate().

        CREATE TABLE SampleTable
        (ID int identity(1,1),
         Sys_DateTime Datetime DEFAULT getdate()
        )

    We can check the relevant information in the system catalogs with the following query:

        SELECT sc.name ColumnName, dc.name DefaultName, dc.definition,
               OBJECT_NAME(dc.parent_object_id) TableName, dc.is_system_named
        FROM sys.default_constraints dc
        INNER JOIN sys.columns sc
          ON dc.parent_object_id = sc.object_id AND dc.parent_column_id = sc.column_id

    Most of the columns returned are self-explanatory. The last column, is_system_named, identifies whether the default's name was given by the system. As you know, in the above case, since we didn't provide any name for the default, the system generates one for us. The problem with these names is that they can differ from environment to environment. For example, if I create this table in a different database, the default name could be DF__SampleTab__Sys_D__7E6CC920.

    Now let us create another default and explicitly name it:

        CREATE TABLE SampleTable2
        (ID int identity(1,1),
         Sys_DateTime Datetime
        )

        ALTER TABLE SampleTable2
        ADD CONSTRAINT DF_sys_DateTime_Getdate DEFAULT( Getdate()) FOR Sys_DateTime

    If we run the previous query again, we can see that the last created default has 0 for is_system_named.

    Now let us say I want to change the data type of the Sys_DateTime column to something else:

        ALTER TABLE SampleTable2 ALTER COLUMN Sys_DateTime Date

    This will generate the below error:

        Msg 5074, Level 16, State 1, Line 1
        The object 'DF_sys_DateTime_Getdate' is dependent on column 'Sys_DateTime'.
        Msg 4922, Level 16, State 9, Line 1
        ALTER TABLE ALTER COLUMN Sys_DateTime failed because one or more objects access this column.

    This means you need to drop the default constraint before altering the column:

        ALTER TABLE [dbo].[SampleTable2] DROP CONSTRAINT [DF_sys_DateTime_Getdate]

        ALTER TABLE SampleTable2 ALTER COLUMN Sys_DateTime Date

        ALTER TABLE [dbo].[SampleTable2] ADD CONSTRAINT [DF_sys_DateTime_Getdate] DEFAULT (getdate()) FOR [Sys_DateTime]

    If you have a system-named default constraint whose name can differ from environment to environment, and so you cannot drop it by name as above, you can use the below code template:

        DECLARE @defaultname VARCHAR(255)
        DECLARE @executesql VARCHAR(1000)

        SELECT @defaultname = dc.name
        FROM sys.default_constraints dc
        INNER JOIN sys.columns sc
          ON dc.parent_object_id = sc.object_id AND dc.parent_column_id = sc.column_id
        WHERE OBJECT_NAME (parent_object_id) = 'SampleTable' AND sc.name ='Sys_DateTime'

        SET @executesql = 'ALTER TABLE SampleTable DROP CONSTRAINT ' + @defaultname
        EXEC( @executesql)

        ALTER TABLE SampleTable ALTER COLUMN Sys_DateTime Date

        ALTER TABLE [dbo].[SampleTable] ADD DEFAULT (Getdate()) FOR [Sys_DateTime]

    Read the article
