Search Results


  • How can I improve my select query for storing large versioned data sets?

    - by Jason Francis
    At work, we build large multi-page web applications, consisting mostly of radio and check boxes. The primary purpose of each application is to gather data, but as users return to a page they have previously visited, we report back to them their previous responses. Worst-case scenario, we might have up to 900 distinct variables and around 1.5 million users. For several reasons, it makes sense to use an insert-only approach to storing the data (as opposed to update-in-place) so that we can capture historical data about repeated interactions with variables. The net result is that we might have several responses per user per variable. Our table to collect the responses looks something like this: CREATE TABLE [dbo].[results]( [id] [bigint] IDENTITY(1,1) NOT NULL, [userid] [int] NULL, [variable] [varchar](8) NULL, [value] [tinyint] NULL, [submitted] [smalldatetime] NULL) Where id serves as the primary key. Virtually every request results in a series of insert statements (one per variable submitted), and then we run a select to produce previous responses for the next page (something like this): SELECT t.id, t.variable, t.value FROM results t WITH (NOLOCK) WHERE t.userid = '2111846' AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete') AND t.id IN (SELECT MAX(id) AS id FROM results WITH (NOLOCK) WHERE userid = '2111846' AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete') GROUP BY variable) Which, in this case, would return the most recent responses for the variables "internat", "veteran", and "athlete" for user 2111846. We have followed the advice of the database tuning tools in indexing the tables, and against our data, this is the best-performing version of the select query that we have been able to come up with. Even so, there seems to be significant performance degradation as the table approaches 1 million records (and we might have about 150x that). We have a fairly-elegant solution in place for sharding the data across multiple tables which has been working quite well, but I am open for any advice about how I might construct a better version of the select query. We use this structure frequently for storing lots of independent data points, and we like the benefits it provides. So the question is, how can I improve the performance of the select query? I assume the nested select statement is a bad idea, but I have yet to find an alternative that performs as well. Thanks in advance. NB: Since we emphasize creating over reading in this case, and since we never update in place, there doesn't seem to be any penalty (and some advantage) for using the NOLOCK directive in this case.
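
    One alternative worth profiling against the nested MAX() subquery is a single pass with a window function. The sketch below is illustrative only: the table, columns and literal values are taken from the question, and whether it actually beats the existing plan depends on the indexes (an index on userid, variable, id that includes value would typically support it well).

        ;WITH latest AS (
            SELECT t.id, t.variable, t.value,
                   ROW_NUMBER() OVER (PARTITION BY t.variable
                                      ORDER BY t.id DESC) AS rn  -- newest response per variable
            FROM results t WITH (NOLOCK)
            WHERE t.userid = '2111846'
              AND t.variable IN ('internat', 'veteran', 'athlete')
        )
        SELECT id, variable, value
        FROM latest
        WHERE rn = 1;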


  • Binding on a port with netpipes/netcat

    - by mindas
    I am trying to write a simple bash script that is listening on a port and responding with a trivial HTTP response. My specific issue is that I am not sure if the port is available and in case of bind failure I fall back to next port until bind succeeds. So far to me the easiest way to achieve this was something like: for (( i=$PORT_BASE; i < $(($PORT_BASE+$PORT_RANGE)); i++ )) do if [ $DEBUG -eq 1 ] ; then echo trying to bind on $i fi /usr/bin/faucet $i --out --daemon echo test 2>/dev/null if [ $? -eq 0 ] ; then #success? port=$i if [ $DEBUG -eq 1 ] ; then echo "bound on port $port" fi break fi done Here I am using faucet from netpipes Ubuntu package. The problem with this is that if I simply print "test" to the output, curl complains about non-standard HTTP response (error code 18). That's fair enough as I don't print HTTP-compatible response. If I replace echo test with echo -ne "HTTP/1.0 200 OK\r\n\r\ntest", curl still complains: user@server:$ faucet 10020 --out --daemon echo -ne "HTTP/1.0 200 OK\r\n\r\ntest" ... user@client:$ curl ip.of.the.server:10020 curl: (56) Failure when receiving data from the peer I think the problem lies in how faucet is printing the response and handling the connection. For example if I do the server side in netcat, curl works fine: user@server:$ echo -ne "HTTP/1.0 200 OK\r\n\r\ntest\r\n" | nc -l 10020 ... user@client:$ curl ip.of.the.server:10020 test user@client:$ I would be more than happy to replace faucet with netcat in my main script, but the problem is that I want to spawn independent server process to be able to run client from the same base shell. faucet has a very handy --daemon parameter as it forks to background and I can use $? (exit status code) to check if bind succeeded. If I was to use netcat for a similar purpose, I would have to fork it using & and $? would not work. Does anybody know why faucet isn't responding correctly in this particular case and/or can suggest a solution to this problem. I am not married neither to faucet nor netcat but would like the solution to be implemented using bash or it's utilities (as opposed to write something in yet another scripting language, such as Perl or Python).
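
    For comparison, here is a minimal sketch of the netcat fallback described above, replacing faucet's --daemon exit status with a background job plus a liveness check. It assumes the OpenBSD-style nc shipped with recent Ubuntu releases; netcat-traditional wants "nc -l -p $i" instead, so treat the flags as an assumption.

        #!/bin/bash
        for (( i = PORT_BASE; i < PORT_BASE + PORT_RANGE; i++ )); do
            # serve a single canned HTTP response in the background
            printf 'HTTP/1.0 200 OK\r\n\r\ntest\r\n' | nc -l "$i" &
            nc_pid=$!
            sleep 1                              # give nc a moment to bind or exit
            if kill -0 "$nc_pid" 2>/dev/null; then
                port=$i                          # still running, so the bind presumably succeeded
                break
            fi
        done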


  • javascript multidimensional array?

    - by megapool020
    Hi there, I hope I can make myself clear in English about what I want to create. I'll start with what I want: an IBAN calculator that can generate 1-n IBAN numbers and also validate a given IBAN number. IBAN numbers are used in many countries for payments, and the tool I want to make can be used to generate numbers for testing purposes. On Wikipedia (the Dutch site) I found a list of countries and the way each one defines its IBAN number. What I want to do is build a kind of array that holds all the countries with their name, country code, IBAN length, banking format and account format. The array needs to be used to: generate a select list (for selecting a country), drive the part that generates numbers, and drive the part that validates a number. I don't know if an array is the best way, but that's as far as my knowledge goes. I already made a table like this that holds the info (this table isn't used anywhere, but it was a good way to show you the structure I have in mind):

        <table>
          <tr> <td>countryname</td> <td>country code</td> <td>valid IBAN length</td> <td>Bank/Branch Code (check1, bank, branch)</td> <td>Account Number (check2, number, check3)</td> </tr>
          <tr> <td>Andorra</td> <td>AD</td> <td>24</td> <td>0 4n 4n</td> <td>0 12 0</td> </tr>
          <tr> <td>België</td> <td>BE</td> <td>16</td> <td>0 3n 0</td> <td>0 7n 2n</td> </tr>
          <tr> <td>Bosnië-Herzegovina</td> <td>BA</td> <td>20</td> <td>0 3n 3n</td> <td>0 8n 2n</td> </tr>
        </table>

    And there are many more ;-) I hope someone can help me with this. Thanks in advance.
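
    One way to hold that data (a sketch only; the property names are invented and the rows are copied from the table above) is a plain array of objects, which can drive the select list, the generator and the validator from the same place:

        var countries = [
          { name: "Andorra",            code: "AD", length: 24, bank: "0 4n 4n", account: "0 12 0"  },
          { name: "België",             code: "BE", length: 16, bank: "0 3n 0",  account: "0 7n 2n" },
          { name: "Bosnië-Herzegovina", code: "BA", length: 20, bank: "0 3n 3n", account: "0 8n 2n" }
        ];

        // fill a <select id="country"> with one option per country
        var select = document.getElementById("country");
        for (var i = 0; i < countries.length; i++) {
          var opt = document.createElement("option");
          opt.value = countries[i].code;
          opt.text = countries[i].name;
          select.add(opt);
        }

        // later, look a country up by its code, e.g. to check the length of an entered IBAN
        function findCountry(code) {
          for (var i = 0; i < countries.length; i++) {
            if (countries[i].code === code) { return countries[i]; }
          }
          return null;
        }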


  • How do I add "Press any key to boot from usb" when installing Windows from a flash drive? (Grub4dos question / how to remove a bootloader)

    - by Vincent
    Hi there! I've been struggling with this problem for a while now and finally decided to ask for help. Let me first explain the main purpose of the app: to provide a very easy-to-use way of backing up files, after which I format the drive and start Windows 7 setup. I do this by booting WinPE, which runs a script to detect Windows installations and then opens a file browser. After the file browser is closed, the script continues, formats the drive that contains the Windows installation, and starts an unattended Windows 7 install.

    Now here is the problem: when you start Windows setup or WinPE from a DVD, you get a nice option to "Press any key to boot from DVD". This is to prevent the computer from booting the DVD when the first phase of the installation is complete and the computer reboots. However, when booting from a flash drive, Windows does not provide this option: it simply boots the flash drive on every reboot. To replicate the "press any key" function, I installed Grub4Dos, which works great. It provides a small menu, the first standard item being "Continue installation", the second being "Start installation". After quite a lot of tweaking, I got everything working: "Start installation" starts WinPE, which in turn starts the Windows installation. At first reboot, the Grub4Dos menu comes up, counts 5 seconds and boots the second stage of the installation. Here, I am greeted with the error: "Windows setup could not configure Windows to run on this computer's hardware." When I boot into WinPE the normal way (put the bootmgr on the stick root) and change my BIOS to boot from the primary hdd after the first reboot, I don't get this error.

    I've been looking around, and the only thing I could find was that the BIOS automatically names the boot device hd0, and that Windows can only be run / installed on hd0. I'm not sure if this is the problem. I read about remapping to solve it, but to do that you have to know the physical location of the hard drive and partition, like hd(0,1). I want this flash drive to work on any PC, regardless of where the OS is installed, so that's not really a possibility. A possible fix I thought of is removing the bootloader from the flash drive while I'm in WinPE. That way, when the PC reboots, the BIOS will not see the flash drive as a boot drive and will boot the primary hdd instead. I have yet to find a way to do this. Thank you for reading my question; if you have any suggestions, please share them.
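
    For reference, a menu.lst along the lines described above might look like the sketch below. It is illustrative only: drive numbers depend on the BIOS (which is exactly the remapping problem mentioned), so the hd0/hd1 assignments and the map trick are assumptions to adapt, not a known-good recipe.

        timeout 5
        default 0

        title Continue installation (boot the internal hard disk)
        # when booted from USB the stick is usually hd0, so swap it with the internal disk
        map (hd0) (hd1)
        map (hd1) (hd0)
        map --hook
        rootnoverify (hd0)
        chainloader +1

        title Start installation (boot WinPE from this stick)
        root (hd0,0)
        chainloader /bootmgr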


  • installing a package with xml based configuration in python

    - by saminny
    Hi, I am planning to write a generic python module for installing a package. This script should retrieve the module from a remote machine or locally and install it on a given host and user. However, there needs to be changes made to the package files based on the host, user and given environment. My approach is to use XML to describe changes to be made to package files based on environment. It will first extract the package to the user directory and then using an xml configuration file, it should replace the file values in the package directory. The xml would look something like this: <package version="1.3.3"> <environment type="prod"> <file dir="d1/d2" name="f1"> <var id="RECV_HOST" value="santo"> <var id="RECV_PORT" value="RECV_PORT_SERVICE" type="service"> <var id="JEPL_SERVICE_NAME" value="val_omgact"> </file> <var dir="d4/d3/s2" name="f2"> <var id="PRECISION" value="true"> <var id="SEND_STATUS_CODE" value="323"> <var id="JEPL_SERVICE_NAME" value="val_omgact"> </file> </environment> <environment type="qa"> <file dir="d1/d2" name="f1"> <var id="RECV_HOST" value="test"> <var id="RECV_PORT" value="1444"> <var id="JEPL_SERVICE_NAME" value="val_tsdd"> </file> <file dir="d4/d3/s2" name="f2"> <var id="PRECISION" value="false"> <var id="SEND_STATUS_CODE" value="323"> <var id="JEPL_SERVICE_NAME" value="val_dsd"> </file> </environment> </package> What are your thoughts on this approach? Is there an existing python module, package or script that I could use for this purpose since this seems fairly generic and can be used for any installation. Thanks! Sam
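
    As a starting point, the substitution pass itself only needs the standard library. The sketch below assumes the XML is tightened up so that every <file> element is properly closed and every <var> is self-closing, and it invents an @VAR_ID@ placeholder convention for the files being patched, so adjust both to taste.

        import os
        import xml.etree.ElementTree as ET

        def load_vars(config_path, env_type):
            """Return {(dir, filename): {var_id: value}} for one environment."""
            tree = ET.parse(config_path)
            result = {}
            for env in tree.getroot().findall("environment"):
                if env.get("type") != env_type:
                    continue
                for f in env.findall("file"):
                    key = (f.get("dir"), f.get("name"))
                    result[key] = {v.get("id"): v.get("value") for v in f.findall("var")}
            return result

        def apply_vars(package_root, file_vars):
            """Replace @VAR_ID@ placeholders in each listed file."""
            for (subdir, name), pairs in file_vars.items():
                path = os.path.join(package_root, subdir, name)
                with open(path) as fh:
                    text = fh.read()
                for var_id, value in pairs.items():
                    text = text.replace("@%s@" % var_id, value)
                with open(path, "w") as fh:
                    fh.write(text)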


  • While within a switch block

    - by rursw1
    Hi, I've seen the following code, taken from the libb64 project. I'm trying to understand what is the purpose of the while loop within the switch block - switch (state_in->step) { while (1) { case step_a: do { if (codechar == code_in+length_in) { state_in->step = step_a; state_in->plainchar = *plainchar; return plainchar - plaintext_out; } fragment = (char)base64_decode_value(*codechar++); } while (fragment < 0); *plainchar = (fragment & 0x03f) << 2; case step_b: do { if (codechar == code_in+length_in) { state_in->step = step_b; state_in->plainchar = *plainchar; return plainchar - plaintext_out; } fragment = (char)base64_decode_value(*codechar++); } while (fragment < 0); *plainchar++ |= (fragment & 0x030) >> 4; *plainchar = (fragment & 0x00f) << 4; case step_c: do { if (codechar == code_in+length_in) { state_in->step = step_c; state_in->plainchar = *plainchar; return plainchar - plaintext_out; } fragment = (char)base64_decode_value(*codechar++); } while (fragment < 0); *plainchar++ |= (fragment & 0x03c) >> 2; *plainchar = (fragment & 0x003) << 6; case step_d: do { if (codechar == code_in+length_in) { state_in->step = step_d; state_in->plainchar = *plainchar; return plainchar - plaintext_out; } fragment = (char)base64_decode_value(*codechar++); } while (fragment < 0); *plainchar++ |= (fragment & 0x03f); } } What can give the while? It seems that anyway, always the switch will perform only one of the cases. Did I miss something? Thanks.
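
    For what it is worth, the while is what lets the decoder keep cycling step_a, step_b, step_c, step_d for as long as input remains within a single call, while the switch lets the next call jump back into the middle of that loop at the step saved in state_in (the same family of tricks as Duff's device). A stripped-down sketch of the pattern, not the libb64 code itself:

        #include <stdio.h>

        struct state { int step; };

        /* Consume up to n items, resuming wherever the previous call stopped. */
        void run(struct state *s, int n)
        {
            int used = 0;
            switch (s->step) {            /* jump into the loop at the saved step */
            while (1) {
            case 0:
                if (used++ == n) { s->step = 0; return; }
                puts("step A");
            case 1:
                if (used++ == n) { s->step = 1; return; }
                puts("step B");
            }
            }
        }

        int main(void)
        {
            struct state s = { 0 };
            run(&s, 3);    /* prints A, B, A and remembers it stopped before B */
            run(&s, 1);    /* resumes with B */
            return 0;
        }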


  • How can I enable a debugging mode via a command-line switch for my Perl program?

    - by Michael Mao
    I am learning Perl in a "head-first" manner. I am absolutely a newbie in this language: I am trying to have a debug_mode switch from CLI which can be used to control how my script works, by switching certain subroutines "on and off". And below is what I've got so far: #!/usr/bin/perl -s -w # purpose : make subroutine execution optional, # which is depending on a CLI switch flag use strict; use warnings; use constant DEBUG_VERBOSE => "v"; use constant DEBUG_SUPPRESS_ERROR_MSGS => "s"; use constant DEBUG_IGNORE_VALIDATION => "i"; use constant DEBUG_SETPPING_COMPUTATION => "c"; our ($debug_mode); mainMethod(); sub mainMethod # () { if(!$debug_mode) { print "debug_mode is OFF\n"; } elsif($debug_mode) { print "debug_mode is ON\n"; } else { print "OMG!\n"; exit -1; } checkArgv(); printErrorMsg("Error_Code_123", "Parsing Error at..."); verbose(); } sub checkArgv #() { print ("Number of ARGV : ".(1 + $#ARGV)."\n"); } sub printErrorMsg # ($error_code, $error_msg, ..) { if(defined($debug_mode) && !($debug_mode =~ DEBUG_SUPPRESS_ERROR_MSGS)) { print "You can only see me if -debug_mode is NOT set". " to DEBUG_SUPPRESS_ERROR_MSGS\n"; die("terminated prematurely...\n") and exit -1; } } sub verbose # () { if(defined($debug_mode) && ($debug_mode =~ DEBUG_VERBOSE)) { print "Blah blah blah...\n"; } } So far as I can tell, at least it works...: the -debug_mode switch doesn't interfere with normal ARGV the following commandlines work: ./optional.pl ./optional.pl -debug_mode ./optional.pl -debug_mode=v ./optional.pl -debug_mode=s However, I am puzzled when multiple debug_modes are "mixed", such as: ./optional.pl -debug_mode=sv ./optional.pl -debug_mode=vs I don't understand why the above lines of code "magically works". I see both of the "DEBUG_VERBOS" and "DEBUG_SUPPRESS_ERROR_MSGS" apply to the script, which is fine in this case. However, if there are some "conflicting" debug modes, I am not sure how to set the "precedence of debug_modes"? Also, I am not certain if my approach is good enough to Perlists and I hope I am getting my feet in the right direction. One biggest problem is that I now put if statements inside most of my subroutines for controlling their behavior under different modes. Is this okay? Is there a more elegant way? I know there must be a debug module from CPAN or elsewhere, but I want a real minimal solution that doesn't depend on any other module than the "default". And I cannot have any control on the environment where this script will be executed...
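
    On the precedence question, one common pattern (a sketch, not the only way) is to split the switch string into a hash once, then encode whichever precedence rules you want as explicitly ordered checks; the rule below, verbose wins over suppression, is just an example.

        #!/usr/bin/perl -s
        use strict;
        use warnings;

        our ($debug_mode);    # populated by the -s switch, as in the script above

        # turn a value like "vs" into ( v => 1, s => 1 )
        my %mode = map { $_ => 1 } split //, (defined $debug_mode ? $debug_mode : q{});

        sub is_verbose      { return $mode{v} ? 1 : 0; }
        sub suppress_errors { return ($mode{s} && !$mode{v}) ? 1 : 0; }

        print "verbose output enabled\n"    if is_verbose();
        print "error messages suppressed\n" if suppress_errors();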


  • C#, cannot understand this error?

    - by 5YrsLaterDBA
    I am using VS2008. I have a project, SystemSoftware project, connect with a database and we are using L2E. I have a RuntimeInfo class which contains some shared information there. It looks like this: public class RuntimeInfo { public const int PWD_ExpireDays = 30; private static RuntimeInfo thisObj = new RuntimeInfo(); public static string AndeDBConnStr = ConfigurationManager.ConnectionStrings["AndeDBEntities"].ConnectionString; private RuntimeInfo() { } /// <summary> /// /// </summary> /// <returns>Return this singleton object</returns> public static RuntimeInfo getRuntimeInfo() { return thisObj; } } Now I added a helper project, AndeDataViewer, to the solution which creates a simple UI to display data from the database for testing/verification purpose. I don't want to create another set of Entity Data Model in the helper project. I just added all related files as a link in the new helper project. In the AndeDataViewer project, I get the connection string from above RuntimeInfo class which is a class from my SystemSoftware project as a linked file. The code in AndeDataViewer is like this: public class DbAccess : IDisposable { private String connStr = String.Empty; public DbAccess() { connStr = RuntimeInfo.AndeDBConnStr; } } My SystemSoftware works fine that means the RuntimeInfo class has no problem there. But when I run my AndeDataViewer, the statement inside above constructor, connStr = RuntimeInfo.AndeDBConnStr; , throws an exception. The exception is copied here: System.TypeInitializationException was unhandled Message="The type initializer for 'MyCompany.SystemSoftware.SystemInfo.RuntimeInfo' threw an exception." Source="AndeDataViewer" TypeName="MyCompany.SystemSoftware.SystemInfo.RuntimeInfo" StackTrace: at AndeDataViewer.DbAccess..ctor() in C:\workspace\SystemSoftware\Other\AndeDataViewer\AndeDataViewer\DbAccess.cs:line 17 at AndeDataViewer.Window1.rbRawData_Checked(Object sender, RoutedEventArgs e) in C:\workspace\SystemSoftware\Other\AndeDataViewer\AndeDataViewer\Window1.xaml.cs:line 69 at System.Windows.RoutedEventHandlerInfo.InvokeHandler(Object target, RoutedEventArgs routedEventArgs) .... InnerException: System.NullReferenceException Message="Object reference not set to an instance of an object." Source="AndeDataViewer" StackTrace: at MyCompany.SystemSoftware.SystemInfo.RuntimeInfo..cctor() in C:\workspace\SystemSoftware\SystemSoftware\src\systeminfo\RuntimeInfo.cs:line 24 InnerException: I cannot understand this because it looks fine to me but why there is an exception? we cannot access static variable when a class is a linked class? A linked class should be the same as the local class I think. "Linked" here means when I add file I use "Add As Link".
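
    The inner NullReferenceException points at the static field initializer: ConfigurationManager.ConnectionStrings["AndeDBEntities"] returns null when the executing application's own app.config has no matching entry, and that is the most likely culprit here, since linking a .cs file does not carry its project's .config file along. Copying the connectionStrings section into AndeDataViewer's app.config is the usual fix; a defensive sketch of the initializer (illustrative only) also makes the failure easier to spot:

        using System.Configuration;   // needs a reference to System.Configuration

        public class RuntimeInfo
        {
            public static readonly string AndeDBConnStr = LoadConnectionString();

            private static string LoadConnectionString()
            {
                // null when the executing project's app.config lacks "AndeDBEntities"
                ConnectionStringSettings settings =
                    ConfigurationManager.ConnectionStrings["AndeDBEntities"];
                if (settings == null)
                {
                    throw new ConfigurationErrorsException(
                        "Connection string 'AndeDBEntities' is missing from the host application's config file.");
                }
                return settings.ConnectionString;
            }
        }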


  • Java array of array [matrix] of an integer partition with fixed term

    - by user335209
    Hello, for my study purpose I need to build an array of array filled with the partitions of an integer with fixed term. That is given an integer, suppose 10 and given the fixed number of terms, suppose 5 I need to populate an array like this 10 0 0 0 0 9 0 0 0 1 8 0 0 0 2 7 0 0 0 3 ............ 9 0 0 1 0 8 0 0 1 1 ............. 7 0 1 1 0 6 0 1 1 1 ............ ........... 0 6 1 1 1 ............. 0 0 0 0 10 am pretty new to Java and am getting confused with all the for loops. Right now my code can do the partition of the integer but unfortunately it is not with fixed term public class Partition { private static int[] riga; private static void printPartition(int[] p, int n) { for (int i= 0; i < n; i++) System.out.print(p[i]+" "); System.out.println(); } private static void partition(int[] p, int n, int m, int i) { if (n == 0) printPartition(p, i); else for (int k= m; k > 0; k--) { p[i]= k; partition(p, n-k, n-k, i+1); } } public static void main(String[] args) { riga = new int[6]; for(int i = 0; i<riga.length; i++){ riga[i] = 0; } partition(riga, 6, 1, 0); } } the output I get it from is like this: 1 5 1 4 1 1 3 2 1 3 1 1 1 2 3 1 2 2 1 1 2 1 2 1 2 1 1 1 what i'm actually trying to understand how to proceed is to have it with a fixed terms which would be the columns of my array. So, am stuck with trying to get a way to make it less dynamic. Any help?
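
    If the goal is every row of exactly five non-negative terms summing to 10 (a so-called weak composition, which is what the sample rows above show), fixing the number of terms can be done by recursing on positions instead of on remaining values. A sketch; the row order may differ from the example:

        public class FixedTermPartition {

            // choose a value for position idx, then recurse on what is left of n
            private static void compose(int n, int k, int[] parts, int idx) {
                if (idx == k - 1) {           // the last slot takes whatever remains
                    parts[idx] = n;
                    printRow(parts);
                    return;
                }
                for (int v = n; v >= 0; v--) {
                    parts[idx] = v;
                    compose(n - v, k, parts, idx + 1);
                }
            }

            private static void printRow(int[] parts) {
                StringBuilder sb = new StringBuilder();
                for (int p : parts) sb.append(p).append(' ');
                System.out.println(sb.toString().trim());
            }

            public static void main(String[] args) {
                compose(10, 5, new int[5], 0);   // 10 spread over exactly 5 terms
            }
        }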


  • Retrieve KEYWORDS from META tag in a HTML WebPage using JAVA.

    - by kooldave98
    Hello all, I want to retrieve all the content words from a HTML WebPage and all the keywords contained in the META TAG of the same HTML webpage using Java. For example, consider this html source code: <html> <head> <meta name = "keywords" content = "deception, intricacy, treachery"> </head> <body> My very short html document. <br> It has just 2 'lines'. </body> </html> The CONTENT WORDS here are: my, very, short, html, document, it, has, just, lines Note: The punctuation and the number '2' are ruled out. The KEYWORDS here are: deception, intricacy, treachery I have created a class for this purpose called WebDoc, this is as far as I have been able to get. import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; import java.net.URL; import java.util.Set; import java.util.TreeSet; public class WebDoc { protected URL _url; protected Set<String> _contentWords; protected Set<String> _keyWords public WebDoc(URL paramURL) { _url = paramURL; } public Set<String> getContents() throws IOException { //URL url = new URL(url); Set<String> contentWords = new TreeSet<String>(); BufferedReader in = new BufferedReader(new InputStreamReader(_url.openStream())); String inputLine; while ((inputLine = in.readLine()) != null) { // Process each line. contentWords.add(RemoveTag(inputLine)); //System.out.println(RemoveTag(inputLine)); } in.close(); System.out.println(contentWords); _contentWords = contentWords; return contentWords; } public String RemoveTag(String html) { html = html.replaceAll("\\<.*?>",""); html = html.replaceAll("&",""); return html; } public Set<String> getKeywords() { //NO IDEA ! return null; } public URL getURL() { return _url; } @Override public String toString() { return null; } }
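
    A minimal getKeywords() sketch that slots into the WebDoc class above and pulls the keywords out with a regular expression is shown below (the two java.util.regex imports go at the top of the file). It is good enough for well-formed pages like the example; a real HTML parser such as jsoup would be more robust.

        import java.util.regex.Matcher;   // add alongside the existing imports
        import java.util.regex.Pattern;

        public Set<String> getKeywords() throws IOException {
            Set<String> keywords = new TreeSet<String>();
            StringBuilder page = new StringBuilder();
            BufferedReader in = new BufferedReader(new InputStreamReader(_url.openStream()));
            String line;
            while ((line = in.readLine()) != null) {
                page.append(line).append('\n');
            }
            in.close();

            // matches: <meta name = "keywords" content = "deception, intricacy, treachery">
            Pattern p = Pattern.compile(
                    "<meta\\s+name\\s*=\\s*\"keywords\"\\s+content\\s*=\\s*\"([^\"]*)\"",
                    Pattern.CASE_INSENSITIVE);
            Matcher m = p.matcher(page);
            if (m.find()) {
                for (String word : m.group(1).split(",")) {
                    keywords.add(word.trim());
                }
            }
            _keyWords = keywords;
            return keywords;
        }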


  • factory class, wrong number of arguments being passed to subclass constructor

    - by Hugh Bothwell
    I was looking at Python: Exception in the separated module works wrong which uses a multi-purpose GnuLibError class to 'stand in' for a variety of different errors. Each sub-error has its own ID number and error format string. I figured it would be better written as a hierarchy of Exception classes, and set out to do so: class GNULibError(Exception): sub_exceptions = 0 # patched with dict of subclasses once subclasses are created err_num = 0 err_format = None def __new__(cls, *args): print("new {}".format(cls)) # DEBUG if len(args) and args[0] in GNULibError.sub_exceptions: print(" factory -> {} {}".format(GNULibError.sub_exceptions[args[0]], args[1:])) # DEBUG return super(GNULibError, cls).__new__(GNULibError.sub_exceptions[args[0]], *(args[1:])) else: print(" plain {} {}".format(cls, args)) # DEBUG return super(GNULibError, cls).__new__(cls, *args) def __init__(self, *args): cls = type(self) print("init {} {}".format(cls, args)) # DEBUG self.args = args if cls.err_format is None: self.message = str(args) else: self.message = "[GNU Error {}] ".format(cls.err_num) + cls.err_format.format(*args) def __str__(self): return self.message def __repr__(self): return '{}{}'.format(type(self).__name__, self.args) class GNULibError_Directory(GNULibError): err_num = 1 err_format = "destination directory does not exist: {}" class GNULibError_Config(GNULibError): err_num = 2 err_format = "configure file does not exist: {}" class GNULibError_Module(GNULibError): err_num = 3 err_format = "selected module does not exist: {}" class GNULibError_Cache(GNULibError): err_num = 4 err_format = "{} is expected to contain gl_M4_BASE({})" class GNULibError_Sourcebase(GNULibError): err_num = 5 err_format = "missing sourcebase argument: {}" class GNULibError_Docbase(GNULibError): err_num = 6 err_format = "missing docbase argument: {}" class GNULibError_Testbase(GNULibError): err_num = 7 err_format = "missing testsbase argument: {}" class GNULibError_Libname(GNULibError): err_num = 8 err_format = "missing libname argument: {}" # patch master class with subclass reference # (TO DO: auto-detect all available subclasses instead of hardcoding them) GNULibError.sub_exceptions = { 1: GNULibError_Directory, 2: GNULibError_Config, 3: GNULibError_Module, 4: GNULibError_Cache, 5: GNULibError_Sourcebase, 6: GNULibError_Docbase, 7: GNULibError_Testbase, 8: GNULibError_Libname } This starts out with GNULibError as a factory class - if you call it with an error number belonging to a recognized subclass, it returns an object belonging to that subclass, otherwise it returns itself as a default error type. Based on this code, the following should be exactly equivalent (but aren't): e = GNULibError(3, 'missing.lib') f = GNULibError_Module('missing.lib') print e # -> '[GNU Error 3] selected module does not exist: 3' print f # -> '[GNU Error 3] selected module does not exist: missing.lib' I added some strategic print statements, and the error seems to be in GNULibError.__new__: >>> e = GNULibError(3, 'missing.lib') new <class '__main__.GNULibError'> factory -> <class '__main__.GNULibError_Module'> ('missing.lib',) # good... init <class '__main__.GNULibError_Module'> (3, 'missing.lib') # NO! ^ why? I call the subclass constructor as subclass.__new__(*args[1:]) - this should drop the 3, the subclass type ID - and yet its __init__ is still getting the 3 anyway! How can I trim the argument list that gets passed to subclass.__init__?
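
    The 3 comes back because of how object construction works: type.__call__ always invokes __init__ with the original argument list whenever __new__ returns an instance of the requested class, so trimming the arguments inside __new__ never reaches __init__. The usual ways out are to strip the id inside __init__ as well, or to sidestep __new__ entirely with a factory classmethod; here is a cut-down sketch of the latter (only one subclass shown):

        class GNULibError(Exception):
            sub_exceptions = {}        # filled in once the subclasses exist
            err_num = 0
            err_format = None

            @classmethod
            def create(cls, *args):
                """Factory: route (err_num, *rest) to the matching subclass."""
                if args and args[0] in cls.sub_exceptions:
                    return cls.sub_exceptions[args[0]](*args[1:])
                return cls(*args)

            def __init__(self, *args):
                super(GNULibError, self).__init__(*args)
                if self.err_format is None:
                    self.message = str(args)
                else:
                    self.message = "[GNU Error {}] ".format(self.err_num) + self.err_format.format(*args)

            def __str__(self):
                return self.message


        class GNULibError_Module(GNULibError):
            err_num = 3
            err_format = "selected module does not exist: {}"


        GNULibError.sub_exceptions = {3: GNULibError_Module}

        e = GNULibError.create(3, 'missing.lib')
        f = GNULibError_Module('missing.lib')
        print(e)   # [GNU Error 3] selected module does not exist: missing.lib
        print(f)   # same message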


  • Multiple value array

    - by Ant..
    I am new to jScript and have written this code [which works perfectly]. Its purpose is to test that the term for the amount of loan is not exceeded. Can the process be consolidated into one array where you pass the loan amount which returns the term based on the range i.e. 6000 to 7000 = 96 function TestMaxTerm() { var LnAmt = 14000 //Testing Purposes var Term = 0 //Testing Purposes if (LnAmt > 0 && LnAmt <= 1000){Term = 0;} if (LnAmt > 1000 && LnAmt <= 2000){Term = 1;} if (LnAmt > 2000 && LnAmt <= 3000){Term = 2;} if (LnAmt > 3000 && LnAmt <= 4000){Term = 3;} if (LnAmt > 4000 && LnAmt <= 5000){Term = 4;} if (LnAmt > 5000 && LnAmt <= 6000){Term = 5;} if (LnAmt > 6000 && LnAmt <= 7000){Term = 6;} if (LnAmt > 7000 && LnAmt <= 8000){Term = 7;} if (LnAmt > 8000 && LnAmt <= 9000){Term = 8;} if (LnAmt > 9000 && LnAmt <= 10000){Term = 9;} if (LnAmt > 10000 && LnAmt <= 11000){Term = 10;} if (LnAmt > 11000 && LnAmt <= 12000){Term = 11;} if (LnAmt > 11000){Term = 12;} //Obtain Maximum Term for Loan Amount var MaxTerm = new Array(); MaxTerm[0] = 24; MaxTerm[1]=36; MaxTerm[2] = 48; MaxTerm[3] = 60; MaxTerm[5] = 72; MaxTerm[5]=84; MaxTerm[6] = 96; MaxTerm[7] = 108; MaxTerm[8] = 120; MaxTerm[9]=132; MaxTerm[10] = 164; MaxTerm[11] = 176; MaxTerm[12] = 420; var text = MaxTerm[Term]; alert(text); }
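
    Because every band is a fixed 1000 wide, the whole if-chain can usually collapse into one line of arithmetic plus the lookup array. A sketch; it assumes the intent is that anything above 12000 caps at term 12, which is what the final if in the code above effectively does for everything over 11000.

        function maxTermFor(lnAmt) {
          var maxTerm = [24, 36, 48, 60, 72, 84, 96, 108, 120, 132, 164, 176, 420];

          // 1-1000 -> index 0, 1001-2000 -> index 1, ... capped at index 12
          var index = Math.min(Math.ceil(lnAmt / 1000) - 1, 12);
          if (index < 0) { index = 0; }        // guard against amounts of 0 or less

          return maxTerm[index];
        }

        alert(maxTermFor(14000));   // 420, same as the original Term = 12 branch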



  • How to use enumeration types in C++?

    - by Sagistic
    I do not understand how to use enumeration types. I understand what they are, but I don't quite get their purpose. I have made a program that inputs three sides of a triangle and outputs whether or not they are isosceles, scalene, or equilateral. I'm suppose to incorporate the enumeration type somewhere, but don't get where and how to use them. Any help would be appreciated. #include <iostream> using namespace std; enum triangleType {scalene, isosceles, equilateral, noTriangle}; void triangleShape(double x, double y, double z); int main() { double x, y, z; cout << "Please enter the three sides of a triangle:" << endl; cout << "Enter side 1: "; cin >> x; cout << endl; cout << "Enter side 2: "; cin >> y; cout << endl; cout << "Enter side 3: "; cin >> z; cout << endl; triangleShape(x, y, z); return 0; } void triangleShape(double x, double y, double z) { if (((x+y) > z) && ((x+z) > y) && ((y+z) > x)) { cout << "You have a triangle!" << endl; if (x == y && y == z) cout << "Your triangle is an equilateral" << endl; else if (x == y || x == z || y == z) cout << "Your triangle is an isosceles" << endl; else cout << "Your triangle is a scalene" << endl; } else if ((x+y) <= z || ((x+z) <= y) || ((y+z) <= x)) cout << "You do not have a triangle." << endl; }
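
    One natural home for the enum is as the return value of the classification step: work out a triangleType first, then switch on it when printing. A sketch of that refactoring, reusing the enum and headers already in the program above (the "You have a triangle!" line is left out for brevity):

        triangleType classify(double x, double y, double z)
        {
            if ((x + y) <= z || (x + z) <= y || (y + z) <= x)
                return noTriangle;               // the sides cannot form a triangle
            if (x == y && y == z)
                return equilateral;
            if (x == y || x == z || y == z)
                return isosceles;
            return scalene;
        }

        void report(triangleType shape)
        {
            switch (shape)
            {
            case noTriangle:  cout << "You do not have a triangle." << endl; break;
            case equilateral: cout << "Your triangle is an equilateral" << endl; break;
            case isosceles:   cout << "Your triangle is an isosceles" << endl; break;
            case scalene:     cout << "Your triangle is a scalene" << endl; break;
            }
        }

        // in main: report(classify(x, y, z));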


  • Split a binary file into chunks c++

    - by L4nce0
    I've been bashing my head against trying to first divide up a file into chunks, for the purpose of sending over sockets. I can read / write a file easily without splitting it into chunks. The code below runs, works, kinda. It will write a textfile and has a garbage character. Which if this was just for txt, no problem. Jpegs aren't working with said garbage. Been at it for a few days, so I've done my research, and it's time to get some help. I do want to stick strictly to binary readers, as this need to handle any file. I've seen a lot of slick examples out there. (none of them worked for me with jpgs) Mostly something along the lines of while(file)... I subscribe to the, if you know the size, use a for-loop, not a while-loop camp. Thank you for the help!! vector<char*> readFile(const char* fn){ vector<char*> v; ifstream::pos_type size; char * memblock; ifstream file; file.open(fn,ios::in|ios::binary|ios::ate); if (file.is_open()) { size = fileS(fn); file.seekg (0, ios::beg); int bs = size/3; // arbitrary. Actual program will use the socket send size int ws = 0; int i = 0; for(i = 0; i < size; i+=bs){ if(i+bs > size) ws = size%bs; else ws = bs; memblock = new char [ws]; file.read (memblock, ws); v.push_back(memblock); } } else{ exit(-4); } return v; } int main(int argc, char **argv) { vector<char*> v = readFile("foo.txt"); ofstream myFile ("bar.txt", ios::out | ios::binary); for(vector<char*>::iterator it = v.begin(); it!=v.end(); ++it ){ myFile.write(*it,strlen(*it)); } }
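
    The garbage character (and the broken JPEGs) come from myFile.write(*it, strlen(*it)): the chunks are raw bytes, not NUL-terminated strings, so strlen stops at the first zero byte or runs past the end. Keeping each chunk together with its length avoids both problems; here is one sketch using a vector<char> per chunk (the 4096 stands in for whatever your socket send size ends up being):

        #include <fstream>
        #include <vector>
        using namespace std;

        vector< vector<char> > readFile(const char* fn, size_t chunkSize)
        {
            vector< vector<char> > chunks;
            ifstream file(fn, ios::in | ios::binary | ios::ate);
            if (!file.is_open())
                return chunks;                    // caller can treat an empty result as failure

            size_t remaining = static_cast<size_t>(file.tellg());
            file.seekg(0, ios::beg);

            while (remaining > 0)
            {
                size_t n = (remaining < chunkSize) ? remaining : chunkSize;
                vector<char> chunk(n);
                file.read(&chunk[0], n);          // each chunk carries its own size
                chunks.push_back(chunk);
                remaining -= n;
            }
            return chunks;
        }

        int main()
        {
            vector< vector<char> > v = readFile("foo.jpg", 4096);
            ofstream out("bar.jpg", ios::out | ios::binary);
            for (size_t i = 0; i < v.size(); ++i)
                out.write(&v[i][0], v[i].size()); // write the stored size, never strlen
        }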


  • Where can I find "canonical" sample programs that give quick refreshers for any given language? [on hold]

    - by acheong87
    Note to those close-voting this question: I understand this isn't a conventional programming question and I can agree with the reasoning that it's in the subjective domain (like best-of lists). In other ways though I think it's appropriate because, though it's not a "a specific programming problem," nor concerning "a software algorithm", nor (strictly) concerning "software tools commonly used by programmers", I think it is a "practical, answerable [problem that is] unique to the programming profession," and I think it is "based on an actual [problem I] face." I've been wanting this for some time now, because both approaches of (a) Googling for samples as I write every other line of code and (b) just winging it and seeing what errors crop up, distract me from coding efficiently. This note will be removed if the question gains popularity; this question will be deleted otherwise. I spend most of my time developing in C++, PHP, or Javascript, and every once in a while I have to do something in, say, VBA. In those times, it'd be convenient if I could just put up some sample code on a second monitor, something in between a cheat sheet (often too compact; and doesn't resemble anything that could actually compile/run), and a language reference (often too verbose, or segmented; requires extra steps to search or click through an index), so I can just glance at it and recall things, like how to loop through non-empty cells in a column. I think there's a hidden benefit to seeing formed code, that triggers the right spots in our brains to get back into a language we only need to brush up on. Similar in spirit is how http://ideone.com lets you click "Template" in any given language so you can get started without even doing a search. That template alone tells a lot, sometimes! Case-sensitivity, whitespace conventions, identifier conventions, the spelling of certain types, etc. I couldn't find a resource that pulled together such samples, so if there indeed doesn't exist such a repository, I was hoping this question would inspire professionals and experts to contribute links to the most useful sample code they've used for just this purpose: a keep-on-the-side, form-as-well-as-content, compilable/executable, reminder of a language's basic and oft-used features. Personally I am interested in seeing "samplers" for: VBA, Perl, Python, Java, C# (though for some of these autocompleters in Eclipse, Visual Studio, etc. help enough), awk, and sed. I'm tagging c++, php, and javascript because these are languages for which I'd best be able to evaluate whether proffered sample code matches what I had in mind.


  • How to generate a script for changing a column of varchar to xml type with data being converted?

    - by user1323981
    Initially I have a column (partner_email) of varchar.Now a recent change has come where it needs to be changed to be changed to the XML type but the previous records needs to be reserve into the new column. I have applied the below algorithm to accomplish the work /*********************************************************************** Purpose: To change the partner_email column from Varchar Type To Xml Type and convert the existing records from varchar to xml types. Programmers Notes: 1. Create a new Column by the name partner_email_temp of type XML into the Partner Table 2. Copy the Email contents from partner_email to partner_email_temp column after proper conversion N.B.~ The format will be <PartnerEmails> <Email>[email protected]</Email> <Email /> <Email /> </PartnerEmails> 3. Drop the exisitng partner_email 4. Rename partner_email_temp column to partner_email ***********************************************************************/ USE [Test] GO --===== Create a partner_email_temp column of type xml into the Partner table IF NOT EXISTS ( SELECT * FROM INFORMATION_SCHEMA.columns WHERE table_name = 'Partner' AND column_name = 'partner_email_temp' ) BEGIN ALTER TABLE [dbo].[Partner] ADD partner_email_temp XML NULL END GO --===== Copy the Email contents from partner_email to partner_email_temp column -- after proper conversion to xml type UPDATE [dbo].[Partner] SET partner_email_temp = CAST('<PartnerEmails><Email>' + REPLACE(partner_email, '&', '&amp;') + '</Email><Email></Email><Email></Email></PartnerEmails>' AS XML) GO --===== Drop the exisitng partner_email ALTER TABLE [dbo].[Partner] DROP COLUMN partner_email GO --===== Rename partner_email_temp column to partner_email Exec sp_RENAME 'Partner.partner_email_temp','partner_email','COLUMN' GO I works fine for the first time I ran. Now if I ran it for the next time, it am getting an error Msg 8116, Level 16, State 1, Line 4 Argument data type xml is invalid for argument 1 of replace function. Caution: Changing any part of an object name could break scripts and stored procedures. The intention is that, if the partner_email column is varchar, the script will change it to xml type and will convert all the data in xml format . If I ran it second time, it should ignore the statement. How to achieve this? I am trying in a different way DECLARE @columnDataType VARCHAR(50) SELECT @columnDataType = DATA_TYPE FROM INFORMATION_SCHEMA.columns WHERE table_name = 'Partner' AND column_name = 'partner_email' print @columnDataType IF (@columnDataType = 'varchar') BEGIN --===== Create a partner_email_temp column of type xml into the Partner table IF NOT EXISTS ( SELECT * FROM INFORMATION_SCHEMA.columns WHERE table_name = 'Partner' AND column_name = 'partner_email_temp' ) BEGIN ALTER TABLE [dbo].[Partner] ADD partner_email_temp XML NULL --===== Copy the Email contents from partner_email to partner_email_temp column -- after proper conversion to xml type UPDATE [dbo].[Partner] SET partner_email_temp = CAST('<PartnerEmails><Email>' + REPLACE(partner_email, '&', '&amp;') + '</Email><Email></Email><Email></Email></PartnerEmails>' AS XML) --===== Drop the exisitng partner_email ALTER TABLE [dbo].[Partner] DROP COLUMN partner_email --===== Rename partner_email_temp column to partner_email EXEC sp_RENAME 'Partner.partner_email_temp','partner_email','COLUMN' END END but getting error Msg 207, Level 16, State 1, Line 29 Invalid column name 'partner_email_temp'. Help needed
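
    The "Invalid column name" error is a compile-time problem: SQL Server parses the whole IF block before the ALTER TABLE inside it ever runs, so the UPDATE cannot resolve partner_email_temp (and, when the first script is re-run, REPLACE is being compiled against a column that is already xml, hence Msg 8116). Pushing the schema-dependent statements into dynamic SQL defers that compilation until execution time; a sketch:

        IF EXISTS ( SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
                    WHERE TABLE_NAME = 'Partner'
                      AND COLUMN_NAME = 'partner_email'
                      AND DATA_TYPE   = 'varchar' )
        BEGIN
            EXEC (N'ALTER TABLE [dbo].[Partner] ADD partner_email_temp XML NULL');

            -- compiled only when executed, so the new column and the varchar type both resolve
            EXEC (N'UPDATE [dbo].[Partner]
                    SET partner_email_temp =
                        CAST(''<PartnerEmails><Email>''
                             + REPLACE(partner_email, ''&'', ''&amp;'')
                             + ''</Email><Email></Email><Email></Email></PartnerEmails>'' AS XML)');

            EXEC (N'ALTER TABLE [dbo].[Partner] DROP COLUMN partner_email');
            EXEC sp_rename 'dbo.Partner.partner_email_temp', 'partner_email', 'COLUMN';
        END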


  • XNA Notes 009

    - by George Clingerman
    This past week the MVPs (myself included) were on Microsoft campus for the MVP summit. So I apologize in advance if you did something cool or heard of something cool happening with XNA and XBLIGs and it’s not in my notes. I did my best to stay on top of things, but honestly this community is fast and furious with what it’s doing and creating. I really can’t keep up and that’s fantastic! But here’s what I *did* notice while I was there on Microsoft Campus (and I did make sure to point out to the XNA team several of these very cool happenings while I had their ears). Time Critical XNA News: The XNA team wants you to know that Dream Build Play registration is now open! http://blogs.msdn.com/b/xna/archive/2011/02/28/registration-now-open-for-dream-build-play-2011-challenge.aspx Join the XNA-UK create on March 24, 2011 at the Microsoft Tech Days Conference http://xna-uk.net/blogs/darkgenesis/archive/2011/02/27/join-the-xna-uk-crew-at-the-microsoft-tech-days-conference-on-24th-march-2011.aspx XNA Team: Shawn Hargreaves shares one of the coolest things that’s happened in the XNA community http://blogs.msdn.com/b/shawnhar/archive/2011/03/02/xbox-indies-pivot-view.aspx Nick Gravelyn continues his unique marketing/work prioritization strategy as he tries to get to 5,000 Pixel Man users before he makes Pixel Man 2 (and he’s almost there!) http://nickgravelyn.com/pixelman2/ XNA MVPs: A lot of the XNA MVPs were at the Microsoft MVP Summit 2011. Due to NDAs, most things can’t be shared, but I’m sure if you’re curious you could ask them about the general vibe and feeling they got from the team and the future of XNA/XBLIG and more. Catalin Zima and team release the free WP7 game Chickens Can Dream http://twitter.com/CatalinZima/statuses/41174062923390976 http://www.amusedsloth.com/2011/02/chickens-can-dream-is-live/ Charles Humphrey (NemoKrad) posts his March talk source and PowerPoint http://xna-uk.net/blogs/randomchaos/archive/2011/03/04/march-2011-talk-post-processing-framework.aspx XNA Developers: Michael B. McLaughlin posts about ANTS Memory Profile and creates a CheckMemoryAllocationGame sample (extremely useful if you’re looking to see how much memory some operation allocates!) http://geekswithblogs.net/mikebmcl/archive/2011/02/28/ants-memory-profiler-7.0-review.aspx http://geekswithblogs.net/mikebmcl/archive/2011/03/01/checkmemoryallocationgame-sample.aspx Andy Schatz (2009 IGF winner for Monaco) talking XNA at GDC 2011 http://www.gamasutra.com/view/news/33313/GDC_2011_Andy_Schatz_Ill_Make_My_Last_Game_When_I_Die.php Xbox LIVE Indie Games (XBLIG): Clover: A Curious Tale by BinaryTweed is coming as a Deal of the Week during St. 
Patricks Day http://majornelson.com/archive/2011/03/03/comingsoontothexboxlivemarketplacemarchthird.aspx Ska Studios away at GDC but still very post happy as always http://www.ska-studios.com/2011/03/02/swamped-picture-pack/ http://www.ska-studios.com/2011/02/28/the-february-showcase/ http://www.ska-studios.com/2011/02/25/good-morning-gato-51-smelling-the-roses/ Just Press Start interviews Matthew Mikuszewski of Darkwind Media about Blocks Indie http://justpressstart.net/?p=516 Gamergeddon Xbox Indie Game Round Up - February 27th http://www.gamergeddon.com/2011/02/27/xbox-indie-game-round-up-february-27th/ http://www.gamergeddon.com/category/xbox-360/indie-games/ GameMarx does a round up of all the Xbox Live Indie Game podcasts that are currently available http://www.gamemarx.com/news/2011/02/27/xbox-live-indie-game-podcasts.aspx GameMarx episode 11 http://www.gamemarx.com/video/the-show/26/ep-11-february-25-2011.aspx In perhaps what I feel is the most exciting news I’ve heard all week, Michael C. Neel (ViNull of GameMarx fame) re-launch XboxIndies.com! http://www.gamemarx.com/news/2011/03/01/the-relaunch-of-xboxindies-com.aspx http://xboxindies.com/ Armless Octopus shares a little of what they heard from Luke Schneider of Radiangames during his GDC 2011 talk http://www.armlessoctopus.com/2011/03/02/gdc-2011-luke-schneider-offers-insight-into-radiangames-success/ VVGindiecast Episode 1 with guests Derek Strickland(Mr_Deeke), Kris Steele(Kriswd40 from FunInfused Games) and Dave Voyles(From armlessoctopus.com) http://vvgtv.com/2011/02/25/vvgindiecast-xblig-podcast/ If you’re doing Xbox LIVE Indie Game Reviews get in touch with XboxIndies.com to get into their aggregated feed http://forums.create.msdn.com/forums/p/76931/467189.aspx#467189 B.U.T.T.O.N and Flotilla represented XNA very well at the Independent Games Festival (are there any more games entered that were created using XNA? Stand up and be heard!) http://www.igf.com/php-bin/entry2011.php?id=374 Armless Ocotopus interview at GDC 2011 with Soulcaster creator Ian Stocker http://www.armlessoctopus.com/2011/03/04/gdc-2011-interview-with-soulcaster-creator-ian-stocker/ MommysBestGames gets a nod in the DarkBasic newsletter where it features the Explosionade Editor (just do a search for Explosionade to get to the interesting bits!) http://www.thegamecreators.com/pages/newsletters/newsletter_issue_98.html You may be hearing the cries of FortressCraft (coming soon to XBLIG) being so wrong for stealing the idea from MineCraft. But did you know the the game MineCraft started from was an XNA game called Infiniminer? XNA is getting it’s fingers into EVERYTHING! http://www.minecraftwiki.net/wiki/Infiniminer XNA Development: TorqueX is NOT dead thanks to the tremendous efforts of the XNA Community working on the CEV (special thanks to @PinoEire for all his hard work on making that happen!) http://www.garagegames.com/community/blogs/view/20878 http://torquecev.com/ Dave Henry has posted XNA 3.x adding platformer start kit to the network game state management on his new site http://twitter.com/#!/mort8088/status/43407715908853760 http://mort8088.com/2011/03/03/xna-3-x-adding-platformer-starter-kit-to-network-game-state-management/ Mark Bamford releases XNAViewer 4.0, great for running XNA games inside of a Windows Form (for building level editors, etc.) 
http://twitter.com/#!/xzodia04/status/43466830412660736 http://xnaviewer.codeplex.com/ Unit testing an XNA game with Resharper and NUnit http://smnbss.wordpress.com/2011/02/28/planetx-unit-testing-an-xna-game-with-resharper-and-nunit-wp7-xbox-xna/ XNA for Silverlight developers: Part 5 - Input (touch + gestures) http://ht.ly/1bxwUE Mike McLaughlin shares a link he stumbled across for those looking to understand vector and matrix math http://twitter.com/#!/mikebmcl/status/42587074725036032 http://chortle.ccsu.edu/VectorLessons/vectorIndex.html DigitalRune Resources Pooling in XNA (Part 1) http://www.digitalrune.com/Support/Blog/tabid/719/EntryId/84/DigitalRune-Helper-Library-Resource-Pooling-in-XNA-Part-1.aspx JohnK “bobthecbuilder” released a new SunBurn Update that lowers the requirements for Windows Games http://twitter.com/#!/bobthecbuilder/status/43457306578522112 http://www.synapsegaming.com/blogs/johnk/archive/2011/03/03/sunburn-update-windows-redistributable.aspx Quick update on the Indiefreaks Game Framework v0.4 development status http://indiefreaks.com/2011/03/04/quick-update-on-igf-v0-4-development/


  • Parallelism in .NET – Part 10, Cancellation in PLINQ and the Parallel class

    - by Reed
    Many routines are parallelized because they are long-running processes. When writing an algorithm that will run for a long period of time, it's typically good practice to allow that routine to be cancelled. I previously discussed terminating a parallel loop from within, but have not demonstrated how a routine can be cancelled from the caller's perspective. Cancellation in PLINQ and the Task Parallel Library is handled through a new, unified cooperative cancellation model introduced with .NET 4.0.

    Cancellation in .NET 4 is based around a new, lightweight struct called CancellationToken. A CancellationToken is a small, thread-safe value type which is generated via a CancellationTokenSource. There are many goals which led to this design. For our purposes, we will focus on a couple of specific design decisions: cancellation is cooperative (a calling method can request a cancellation, but it is up to the processing routine to terminate; it is not forced) and cancellation is consistent (a single method call requests a cancellation on every copied CancellationToken in the routine).

    Let's begin by looking at how we can cancel a PLINQ query. Suppose we wanted to provide the option to cancel our query from Part 6:

        double min = collection
            .AsParallel()
            .Min(item => item.PerformComputation());

    We would rewrite this to allow for cancellation by adding a call to ParallelEnumerable.WithCancellation as follows:

        var cts = new CancellationTokenSource();

        // Pass cts here to a routine that could,
        // in parallel, request a cancellation
        try
        {
            double min = collection
                .AsParallel()
                .WithCancellation(cts.Token)
                .Min(item => item.PerformComputation());
        }
        catch (OperationCanceledException e)
        {
            // Query was cancelled before it finished
        }

    Here, if the user calls cts.Cancel() before the PLINQ query completes, the query will stop processing, and an OperationCanceledException will be raised. Be aware, however, that cancellation will not be instantaneous. When cts.Cancel() is called, the query will only stop after the current item.PerformComputation() elements all finish processing. cts.Cancel() will prevent PLINQ from scheduling a new task for a new element, but will not stop items which are currently being processed.
This goes back to the first goal I mentioned – Cancellation is cooperative.  Here, we’re requesting the cancellation, but it’s up to PLINQ to terminate. If we wanted to allow cancellation to occur within our routine, we would need to change our routine to accept a CancellationToken, and modify it to handle this specific case: public void PerformComputation(CancellationToken token) { for (int i=0; i<this.iterations; ++i) { // Add a check to see if we've been canceled // If a cancel was requested, we'll throw here token.ThrowIfCancellationRequested(); // Do our processing now this.RunIteration(i); } } With this overload of PerformComputation, each internal iteration checks to see if a cancellation request was made, and will throw an OperationCanceledException at that point, instead of waiting until the method returns.  This is good, since it allows us, as developers, to plan for cancellation, and terminate our routine in a clean, safe state. This is handled by changing our PLINQ query to: try { double min = collection .AsParallel() .WithCancellation(cts.Token) .Min(item => item.PerformComputation(cts.Token)); } catch (OperationCanceledException e) { // Query was cancelled before it finished } PLINQ is very good about handling this exception, as well.  There is a very good chance that multiple items will raise this exception, since the entire purpose of PLINQ is to have multiple items be processed concurrently.  PLINQ will take all of the OperationCanceledException instances raised within these methods, and merge them into a single OperationCanceledException in the call stack.  This is done internally because we added the call to ParallelEnumerable.WithCancellation. If, however, a different exception is raised by any of the elements, the OperationCanceledException as well as the other Exception will be merged into a single AggregateException. The Task Parallel Library uses the same cancellation model, as well.  Here, we supply our CancellationToken as part of the configuration.  The ParallelOptions class contains a property for the CancellationToken.  This allows us to cancel a Parallel.For or Parallel.ForEach routine in a very similar manner to our PLINQ query.  As an example, we could rewrite our Parallel.ForEach loop from Part 2 to support cancellation by changing it to: try { var cts = new CancellationTokenSource(); var options = new ParallelOptions() { CancellationToken = cts.Token }; Parallel.ForEach(customers, options, customer => { // Run some process that takes some time... DateTime lastContact = theStore.GetLastContact(customer); TimeSpan timeSinceContact = DateTime.Now - lastContact; // Check for cancellation here options.CancellationToken.ThrowIfCancellationRequested(); // If it's been more than two weeks, send an email, and update... if (timeSinceContact.Days > 14) { theStore.EmailCustomer(customer); customer.LastEmailContact = DateTime.Now; } }); } catch (OperationCanceledException e) { // The loop was cancelled } Notice that here we use the same approach taken in PLINQ.  The Task Parallel Library will automatically handle our cancellation in the same manner as PLINQ, providing a clean, unified model for cancellation of any parallel routine.  The TPL performs the same aggregation of the cancellation exceptions as PLINQ, as well, which is why a single exception handler for OperationCanceledException will cleanly handle this scenario.  This works because we’re using the same CancellationToken provided in the ParallelOptions.  
If a different exception was thrown by one thread, or a CancellationToken from a different CancellationTokenSource was used to raise our exception, we would instead receive all of our individual exceptions merged into one AggregateException.
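
    For completeness, the caller's side of this cooperative model can be as simple as the sketch below (illustrative only; collection and PerformComputation are the same placeholders used throughout the post, and Task.Factory.StartNew stands in for however you choose to trigger the cancellation):

        var cts = new CancellationTokenSource();

        // request cancellation from another thread after two seconds
        Task.Factory.StartNew(() =>
        {
            Thread.Sleep(TimeSpan.FromSeconds(2));
            cts.Cancel();
        });

        try
        {
            double min = collection
                .AsParallel()
                .WithCancellation(cts.Token)
                .Min(item => item.PerformComputation(cts.Token));
        }
        catch (OperationCanceledException)
        {
            // the query observed the request and stopped early
        }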


  • Creating a dynamic proxy generator with c# – Part 2 – Interceptor Design

    - by SeanMcAlinden
    Creating a dynamic proxy generator – Part 1 – Creating the Assembly builder, Module builder and caching mechanism For the latest code go to http://rapidioc.codeplex.com/ Before getting too involved in generating the proxy, I thought it would be worth while going through the intended design, this is important as the next step is to start creating the constructors for the proxy. Each proxy derives from a specified type The proxy has a corresponding constructor for each of the base type constructors The proxy has overrides for all methods and properties marked as Virtual on the base type For each overridden method, there is also a private method whose sole job is to call the base method. For each overridden method, a delegate is created whose sole job is to call the private method that calls the base method. The following class diagram shows the main classes and interfaces involved in the interception process. I’ll go through each of them to explain their place in the overall proxy.   IProxy Interface The proxy implements the IProxy interface for the sole purpose of adding custom interceptors. This allows the created proxy interface to be cast as an IProxy and then simply add Interceptors by calling it’s AddInterceptor method. This is done internally within the proxy building process so the consumer of the API doesn’t need knowledge of this. IInterceptor Interface The IInterceptor interface has one method: Handle. The handle method accepts a IMethodInvocation parameter which contains methods and data for handling method interception. Multiple classes that implement this interface can be added to the proxy. Each method override in the proxy calls the handle method rather than simply calling the base method. How the proxy fully works will be explained in the next section MethodInvocation. IMethodInvocation Interface & MethodInvocation class The MethodInvocation will contain one main method and multiple helper properties. Continue Method The method Continue() has two functions hidden away from the consumer. When Continue is called, if there are multiple Interceptors, the next Interceptors Handle method is called. If all Interceptors Handle methods have been called, the Continue method then calls the base class method. Properties The MethodInvocation will contain multiple helper properties including at least the following: Method Name (Read Only) Method Arguments (Read and Write) Method Argument Types (Read Only) Method Result (Read and Write) – this property remains null if the method return type is void Target Object (Read Only) Return Type (Read Only) DefaultInterceptor class The DefaultInterceptor class is a simple class that implements the IInterceptor interface. Here is the code: DefaultInterceptor namespace Rapid.DynamicProxy.Interception {     /// <summary>     /// Default interceptor for the proxy.     /// </summary>     /// <typeparam name="TBase">The base type.</typeparam>     public class DefaultInterceptor<TBase> : IInterceptor<TBase> where TBase : class     {         /// <summary>         /// Handles the specified method invocation.         /// </summary>         /// <param name="methodInvocation">The method invocation.</param>         public void Handle(IMethodInvocation<TBase> methodInvocation)         {             methodInvocation.Continue();         }     } } This is automatically created in the proxy and is the first interceptor that each method override calls. It’s sole function is to ensure that if no interceptors have been added, the base method is still called. 
Custom Interceptor Example A consumer of the Rapid.DynamicProxy API could create an interceptor for logging when the FirstName property of the User class is set. Just for illustration, I have also wrapped a transaction around the methodInvocation.Coninue() method. This means that any overriden methods within the user class will run within a transaction scope. MyInterceptor public class MyInterceptor : IInterceptor<User<int, IRepository>> {     public void Handle(IMethodInvocation<User<int, IRepository>> methodInvocation)     {         if (methodInvocation.Name == "set_FirstName")         {             Logger.Log("First name seting to: " + methodInvocation.Arguments[0]);         }         using (TransactionScope scope = new TransactionScope())         {             methodInvocation.Continue();         }         if (methodInvocation.Name == "set_FirstName")         {             Logger.Log("First name has been set to: " + methodInvocation.Arguments[0]);         }     } } Overridden Method Example To show a taster of what the overridden methods on the proxy would look like, the setter method for the property FirstName used in the above example would look something similar to the following (this is not real code but will look similar): set_FirstName public override void set_FirstName(string value) {     set_FirstNameBaseMethodDelegate callBase =         new set_FirstNameBaseMethodDelegate(this.set_FirstNameProxyGetBaseMethod);     object[] arguments = new object[] { value };     IMethodInvocation<User<IRepository>> methodInvocation =         new MethodInvocation<User<IRepository>>(this, callBase, "set_FirstName", arguments, interceptors);          this.Interceptors[0].Handle(methodInvocation); } As you can see, a delegate instance is created which calls to a private method on the class, the private method calls the base method and would look like the following: calls base setter private void set_FirstNameProxyGetBaseMethod(string value) {     base.set_FirstName(value); } The delegate is invoked when methodInvocation.Continue() is called within an interceptor. The set_FirstName parameters are loaded into an object array. The current instance, delegate, method name and method arguments are passed into the methodInvocation constructor (there will be more data not illustrated here passed in when created including method info, return types, argument types etc.) The DefaultInterceptor’s Handle method is called with the methodInvocation instance as it’s parameter. Obviously methods can have return values, ref and out parameters etc. in these cases the generated method override body will be slightly different from above. I’ll go into more detail on these aspects as we build them. Conclusion I hope this has been useful, I can’t guarantee that the proxy will look exactly like the above, but at the moment, this is pretty much what I intend to do. Always worth downloading the code at http://rapidioc.codeplex.com/ to see the latest. There will also be some tests that you can debug through to help see what’s going on. Cheers, Sean.

    Read the article

  • Enabling Kerberos Authentication for Reporting Services

    - by robcarrol
    Recently, I’ve helped several customers with Kerberos authentication problems with Reporting Services and Analysis Services, so I’ve decided to write this blog post and pull together some useful resources in one place (there are 2 whitepapers in particular that I found invaluable when configuring Kerberos authentication, and these can be found in the references section at the bottom of this post). In most of these cases, the problem has manifested itself with the Login failed for User ‘NT Authority\Anonymous’ (“double-hop”) error.
By default, Reporting Services uses Windows Integrated Authentication, which includes the Kerberos and NTLM protocols for network authentication. Additionally, Windows Integrated Authentication includes the negotiate security header, which prompts the client to select Kerberos or NTLM for authentication. The client can access reports which have the appropriate permissions by using Kerberos for authentication. Servers that use Kerberos authentication can impersonate those clients and use their security context to access network resources. You can configure Reporting Services to use both Kerberos and NTLM authentication; however this may lead to a failure to authenticate. With negotiate, if Kerberos cannot be used, the authentication method will default to NTLM. When negotiate is enabled, the Kerberos protocol is always used except when:
Clients/servers that are involved in the authentication process cannot use Kerberos.
The client does not provide the information necessary to use Kerberos.
An in-depth discussion of Kerberos authentication is beyond the scope of this post; however, when users execute reports that are configured to use Windows Integrated Authentication, their logon credentials are passed from the report server to the server hosting the data source. Delegation needs to be set on the report server and Service Principal Names (SPNs) set for the relevant services. When a user processes a report, the request must go through a Web server on its way to a database server for processing. Kerberos authentication enables the Web server to request a service ticket from the domain controller; impersonate the client when passing the request to the database server; and then restrict the request based on the user’s permissions. Each time a server is required to pass the request to another server, the same process must be used.
Kerberos authentication is supported in both native and SharePoint integrated mode, but I’ll focus on native mode for the purpose of this post (I’ll explain configuring SharePoint integrated mode and Kerberos authentication in a future post). Configuring Kerberos avoids the authentication failures due to double-hop issues. These double-hop errors occur when a user’s Windows domain credentials can’t be passed to another server to complete the user’s request. In the case of my customers, users were executing Reporting Services reports that were configured to query Analysis Services cubes on a separate machine using Windows Integrated security. The double-hop issue occurs as NTLM credentials are valid for only one network hop; subsequent hops result in anonymous authentication. The client attempts to connect to the report server by making a request from a browser (or some other application), and the connection process begins with authentication. With NTLM authentication, client credentials are presented to Computer 2. However, Computer 2 can’t use the same credentials to access Computer 3 (so we get the Anonymous login error).
To access Computer 3 it is necessary to configure the connection string with stored credentials, which is what a number of customers I have worked with have done to work around the double-hop authentication error. However, to get the benefits of Windows Integrated security, a better solution is to enable Kerberos authentication. Again, the connection process begins with authentication. With Kerberos authentication, the client and the server must demonstrate to one another that they are genuine, at which point authentication is successful and a secure client/server session is established.
In the illustration above, the tiers represent the following:
Client tier (computer 1): The client computer from which an application makes a request.
Middle tier (computer 2): The Web server or farm where the client’s request is directed. Both the SharePoint and Reporting Services server(s) comprise the middle tier (but we’re only concentrating on native deployments just now).
Back-end tier (computer 3): The Database/Analysis Services server/cluster where the requested data is stored.
In order to enable Kerberos authentication for Reporting Services it’s necessary to configure the relevant SPNs, configure trust for delegation for server accounts, configure Kerberos with full delegation and configure the authentication types for Reporting Services.
Service Principal Names (SPNs) are unique identifiers for services and identify the account’s type of service. If an SPN is not configured for a service, a client account will be unable to authenticate to the servers using Kerberos. You need to be a domain administrator to add an SPN, which can be added using the SetSPN utility. For Reporting Services in native mode, the following SPNs need to be registered:
--SQL Server Service
SETSPN -S mssqlsvc/servername:1433 Domain\SQL
For named instances, or if the default instance is running under a different port, then the specific port number should be used.
--Reporting Services Service
SETSPN -S http/servername Domain\SSRS
SETSPN -S http/servername.domain.com Domain\SSRS
The SPN should be set for the NETBIOS name of the server and the FQDN. If you access the reports using a host header or DNS alias, then that should also be registered:
SETSPN -S http/www.reports.com Domain\SSRS
--Analysis Services Service
SETSPN -S msolapsvc.3/servername Domain\SSAS
Next, you need to configure trust for delegation, which refers to enabling a computer to impersonate an authenticated user to services on another computer:
Client:
1. The requesting application must support the Kerberos authentication protocol.
2. The user account making the request must be configured on the domain controller. Confirm that the following option is not selected: Account is sensitive and cannot be delegated.
Servers:
1. The service accounts must be trusted for delegation on the domain controller.
2. The service accounts must have SPNs registered on the domain controller. If the service account is a domain user account, the domain administrator must register the SPNs.
In Active Directory Users and Computers, verify that the domain user accounts used to access reports have been configured for delegation (the ‘Account is sensitive and cannot be delegated’ option should not be selected). We then need to configure the Reporting Services service account and computer to use Kerberos with full delegation. We also need to do the same for the SQL Server or Analysis Services service accounts and computers (depending on what type of data source you are connecting to in your reports).
Finally, and this is the part that sometimes gets overlooked, we need to configure the authentication type correctly for Reporting Services to use Kerberos authentication. This is configured in the Authentication section of the RSReportServer.config file on the report server.
<Authentication>
<AuthenticationTypes>
          <RSWindowsNegotiate/>
</AuthenticationTypes>
<EnableAuthPersistence>true</EnableAuthPersistence>
</Authentication>
This will enable Kerberos authentication for Internet Explorer. For other browsers, see the link below. The report server instance must be restarted for these changes to take effect. Once these changes have been made, all that’s left to do is test that Kerberos authentication is working properly by running a report from Report Manager that is configured to use Windows Integrated authentication (either connecting to an Analysis Services or SQL Server back end).
Resources:
Manage Kerberos Authentication Issues in a Reporting Services Environment
http://download.microsoft.com/download/B/E/1/BE1AABB3-6ED8-4C3C-AF91-448AB733B1AF/SSRSKerberos.docx
Configuring Kerberos Authentication for Microsoft SharePoint 2010 Products
http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=23176
How to: Configure Windows Authentication in Reporting Services
http://msdn.microsoft.com/en-us/library/cc281253.aspx
RSReportServer Configuration File
http://msdn.microsoft.com/en-us/library/ms157273.aspx#Authentication
Planning for Browser Support
http://msdn.microsoft.com/en-us/library/ms156511.aspx
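As a final check, it can be useful to confirm which protocol actually authenticated a request once everything is configured. The snippet below is not part of the Reporting Services configuration itself; it is just a small diagnostic sketch (standard ASP.NET APIs) that could be dropped on the middle tier to show whether Kerberos or NTLM was used for the current user.

// Simple ASP.NET diagnostic page (code-behind) to display the authentication
// protocol used for the current request on the web/report tier.
using System;
using System.Web.UI;

public partial class AuthCheck : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // LogonUserIdentity is the Windows identity the request was authenticated as.
        var identity = Request.LogonUserIdentity;

        // AuthenticationType typically reports "Kerberos" when a service ticket was used,
        // or "NTLM" when the connection fell back, which is a hint that SPNs or
        // delegation are not configured correctly.
        Response.Write(identity.Name + " authenticated via " + identity.AuthenticationType);
    }
}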

    Read the article

  • TFS 2010 Basic Concepts

    - by jehan
    Here, I’m going to discuss some key architectural changes and concepts that have taken place in TFS 2010 when compared to TFS 2008. With TFS 2010, you first perform the installation and then configure the installation features from the available features. This is a bit similar to SharePoint installation, where you first do the installation and then configure the SharePoint farms.
1) Installation Features available in TFS 2010:
a) Basic: It is the most compact TFS installation possible. It will install and configure Source Control, Work Item Tracking and Build Services only. (SharePoint and Reporting integration will not be possible.)
b) Standard Single Server: This is suitable for a single-server deployment of TFS. It will install and configure Windows SharePoint Services for you and will use the default instance of SQL Server.
c) Advanced: It is suitable if you want to use remote servers for the SQL Server databases, SharePoint Products and Technologies and SQL Server Reporting Services.
d) Application Tier Only: If you want to configure high availability for Team Foundation Server in a load-balanced environment (NLB), move Team Foundation Server from one server to another, or restore TFS.
e) Upgrade: If you want to upgrade from a prior version of TFS.
Note: One more important thing to know about TFS 2010 Basic is that it can be installed on client operating systems (Windows 7 and Windows Vista SP3), whereas earlier versions of TFS (2005 and 2008) could not be installed on a client OS.
2) Team Project Collections:
Connect to TFS dialog box in TFS 2008: In TFS 2008, the TFS server contains a set of team projects; each project may or may not be independent of other projects, every check-in gets an ever-increasing changeset ID irrespective of the team project in which it is checked in, and the same applies to work items, which also get unique work item IDs. The main problem with this approach was that there were certain things which were impossible to do, even though they were required by the application development process:
a) If something has gone wrong in one team project and you now want to restore it back to an earlier state where it was working properly, you have to restore the Team Foundation Server database from the backup you have taken as per your maintenance plans, and because of this the other team projects may lose the work which is not backed up.
b) Your company had a merge with some other company and now you have two TFS servers.
One TFS server which you are working on, and another TFS server which the other company was working on; now, after the merge, you want to integrate the team projects from the two TFS servers into one, which is almost impossible to achieve in TFS 2008. You can manually recreate the team projects you want to integrate from the other TFS server (in source control), but you will lose the history of changesets, work items and other data, which is very important. There were a few more issues of this sort which were difficult to resolve in TFS 2008.
To resolve issues related to these kinds of scenarios, which were mainly related to TFS maintenance, integration, migration and security, Microsoft has come up with the Team Project Collections concept in TFS 2010. This concept is similar to SharePoint site collections, and if you are familiar with SharePoint architecture it will help you to understand the TFS 2010 architecture easily.
Connect to TFS dialog box in TFS 2010: In the above dialog box, as you can see, there are two Team Project Collections; each collection can contain any number of team projects, and on the right side it shows the two team projects in the Team Project Collection (Default Collection) which I have chosen.
Note: You can connect to only one Team Project Collection at a time using an instance of TFS Team Explorer.
How does it work? To introduce Team Project Collections, the TFS databases have been reorganized. TFS 2008 was composed of 5–7 databases partitioned by subsystem (one each for Version Control, Work Item Tracking, Build, Integration, Project Management...).
New TFS 2010 database architecture:
TFS_Config: It’s the root database and it contains centralized TFS configuration data, including the list of all team projects that exist in the TFS server.
TFS_Warehouse: The data warehouse contains all the reporting data served by this server (farm).
TFS_*: These contain the individual team project collection data. Each such database contains all the operational data of a team project collection regardless of subsystem. In addition to this, you will have databases for SharePoint and Report Server.
3) TFS Farms: As TFS 2010 can be configured with multiple application tiers and multiple database tiers, it is more appropriate to call it a TFS farm if you are going for a multi-server installation of TFS.
NLB support for TFS application tiers – with TFS 2010 you can configure multiple TFS application tier machines to serve the same set of Team Project Collections. The primary purpose of NLB support is to enable cleaner and more complete high availability than in TFS 2008. If any application tier in the farm fails, the farm will automatically continue to work with hardly any indication to end users of a problem.
SQL data tiers: With 2010 you can configure many SQL Servers. Each database can be configured to be on any SQL Server because each Team Project Collection is an independent database. This feature can also be used to load balance databases across SQL Servers. These new capabilities will significantly change the way enterprises manage their TFS installations in the future. With Team Project Collections and TFS farms, you can create a single, arbitrarily large TFS installation. You can grow it incrementally by adding ATs and SQL Servers as needed.
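To make the collection concept concrete from the client side, here is a small sketch using the TFS 2010 client object model (Microsoft.TeamFoundation.Client). It connects to a single Team Project Collection, just as Team Explorer does, and lists the team projects inside it. The server name and collection URL are placeholders for your own environment.

// Connect to one Team Project Collection and list its team projects.
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

class ListTeamProjects
{
    static void Main()
    {
        // A client connects to one collection at a time, identified by its URL.
        Uri collectionUri = new Uri("http://tfsserver:8080/tfs/DefaultCollection"); // placeholder

        using (TfsTeamProjectCollection collection = new TfsTeamProjectCollection(collectionUri))
        {
            collection.EnsureAuthenticated();

            // Services such as version control are scoped to this collection.
            VersionControlServer versionControl = collection.GetService<VersionControlServer>();
            foreach (TeamProject project in versionControl.GetAllTeamProjects(false))
            {
                Console.WriteLine(project.Name);
            }
        }
    }
}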

    Read the article

  • Windows Azure: Announcing Support for Windows Server 2012 R2 + Some Nice Price Cuts

    - by ScottGu
    Today we released some great updates to Windows Azure:
Virtual Machines: Support for Windows Server 2012 R2
Cloud Services: Support for Windows Server 2012 R2 and .NET 4.5.1
Windows Azure Pack: Use Windows Azure features on-premises using Windows Server 2012 R2
Price Cuts: Up to 22% Price Reduction on Memory-Intensive Instances
Below are more details about each of the improvements:
Virtual Machines: Support for Windows Server 2012 R2
This morning we announced the release of Windows Server 2012 R2 – which is a fantastic update to Windows Server and includes a ton of great enhancements. This morning we are also excited to announce that the general availability image of Windows Server 2012 R2 is now supported on Windows Azure. Windows Azure is the first cloud provider to offer the final release of Windows Server 2012 R2, and it is incredibly easy to launch your own Windows Server 2012 R2 instance with it.
To create a new Windows Server 2012 R2 instance simply choose New->Compute->Virtual Machine within the Windows Azure Management Portal. You can select the “Windows Server 2012 R2” image and create a new Virtual Machine using the “Quick Create” option. Or alternatively click the “From Gallery” option if you want to customize even more configuration options (endpoints, remote PowerShell, availability set, etc).
Creating and instantiating a new Virtual Machine on Windows Azure is very fast. In fact, the Windows Server 2012 R2 image now deploys and runs 30% faster than previous versions of Windows Server. Once the VM is deployed you can drill into it to track its health and manage its settings. Clicking the “Connect” button allows you to remote desktop into the VM – at which point you can customize and manage it as a full administrator however you want.
If you haven’t tried Windows Server 2012 R2 yet – give it a try with Windows Azure. There is no easier way to get an instance of it up and running!
Cloud Services: Support for using Windows Server 2012 R2 with Web and Worker Roles
Today’s Windows Azure release also allows you to now use Windows Server 2012 R2 and .NET 4.5.1 within Web and Worker Roles within Cloud Service based applications. Enabling this is easy. You can configure an existing Cloud Service application to use Windows Server 2012 R2 by updating your Cloud Service Configuration File (.cscfg) to use the new “OS Family 4” setting. Or alternatively you can use the Windows Azure Management Portal to update cloud services that are already deployed on Windows Azure. Simply choose the configure tab on them and select Windows Server 2012 R2 in the Operating System Family dropdown.
The approaches above enable you to immediately take advantage of Windows Server 2012 R2 and .NET 4.5.1 and all the great features they provide.
Windows Azure Pack: Use Windows Azure features on Windows Server 2012 R2
Today we also made generally available the Windows Azure Pack, which is a free download that enables you to run Windows Azure technology within your own datacenter, an on-premises private cloud environment, or with one of our service provider/hosting partners who run Windows Server.
Windows Azure Pack enables you to use a management portal that has the exact same UI as the Windows Azure Management Portal, and within which you can create and manage Virtual Machines, Web Sites, and Service Bus – all of which can run on Windows Server and System Center.
The services provided with the Windows Azure Pack are consistent with the services offered within our Windows Azure public cloud offering.  This consistency enables organizations and developers to build applications and solutions that can run in any hosting environment – and which use the same development and management approach.  The end result is an offering with incredible flexibility. You can learn more about Windows Azure Pack and download/deploy it today here. Price Cuts: Up to 22% Reduction on Memory Intensive Instances Today we are also reducing prices by up to 22% on our memory-intensive VM instances (specifically our A5, A6, and A7 instances).  These price reductions apply to both Windows and Linux VM instances, as well as for Cloud Service based applications: These price reductions will take effect in November, and will enable you to run applications that demand larger memory (such as SharePoint, Databases, in-memory analytics, etc) even more cost effectively. Summary Today’s release enables you to start using Windows Server 2012 R2 within Windows Azure immediately, and take advantage of our Cloud OS vision both within our datacenters – and using the Windows Azure Pack within both your existing datacenters and those of our partners. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Creating packages in code – Execute SQL Task

    The Execute SQL Task is, for obvious reasons, very well used, so I thought if you are building packages in code the chances are you will be using it. The basic features of the task are quite straightforward to use: add the task and set some properties, just like any other task. When you start interacting with variables, though, it can be a little harder to grasp, so these samples should see you through. Some of these more advanced features are explained in much more detail in our ever popular post The Execute SQL Task; here I’ll just be showing you how to implement them in code. The abbreviated code blocks below demonstrate the different features of the task. The complete code has been encapsulated into a sample class which you can download (ExecSqlPackage.cs). Each feature described has its own method in the sample class which is mentioned after the code block.
This first sample just shows adding the task, setting the basic properties for a connection and of course an SQL statement.
Package package = new Package();
// Add the SQL OLE-DB connection
ConnectionManager sqlConnection = AddSqlConnection(package, "localhost", "master");
// Add the SQL Task
package.Executables.Add("STOCK:SQLTask");
// Get the task host wrapper
TaskHost taskHost = package.Executables[0] as TaskHost;
// Set required properties
taskHost.Properties["Connection"].SetValue(taskHost, sqlConnection.ID);
taskHost.Properties["SqlStatementSource"].SetValue(taskHost, "SELECT * FROM sysobjects");
For the full version of this code, see the CreatePackage method in the sample class. The AddSqlConnection method is a helper method that adds an OLE-DB connection to the package; it is of course in the sample class file too.
Returning a single value with a Result Set
The following sample takes a different approach, getting a reference to the ExecuteSQLTask object itself, rather than just using the non-specific TaskHost as above. Whilst it means we need to add an extra reference to our project (Microsoft.SqlServer.SQLTask), it makes coding much easier as we have compile-time validation of any properties and types we use. For the more complex properties that is very valuable and saves a lot of time during development. The query has also been changed to return a single value, one row and one column. The sample shows how we can return that value into a variable, which we also add to our package in the code. To do this manually you would set the Result Set property on the General page to Single Row and map the variable on the Result Set page in the editor.
Package package = new Package();
// Add the SQL OLE-DB connection
ConnectionManager sqlConnection = AddSqlConnection(package, "localhost", "master");
// Add the SQL Task
package.Executables.Add("STOCK:SQLTask");
// Get the task host wrapper
TaskHost taskHost = package.Executables[0] as TaskHost;
// Add variable to hold result value
package.Variables.Add("Variable", false, "User", 0);
// Get the task object
ExecuteSQLTask task = taskHost.InnerObject as ExecuteSQLTask;
// Set core properties
task.Connection = sqlConnection.Name;
task.SqlStatementSource = "SELECT id FROM sysobjects WHERE name = 'sysrowsets'";
// Set single row result set
task.ResultSetType = ResultSetType.ResultSetType_SingleRow;
// Add result set binding, map the id column to variable
task.ResultSetBindings.Add();
IDTSResultBinding resultBinding = task.ResultSetBindings.GetBinding(0);
resultBinding.ResultName = "id";
resultBinding.DtsVariableName = "User::Variable";
For the full version of this code, see the CreatePackageResultVariable method in the sample class. The other types of Result Set behaviour are just a variation on this theme: set the property and map the result binding as required.
Parameter Mapping for SQL Statements
This final example uses a parameterised SQL statement, with the parameter value coming from a variable. The syntax varies slightly between connection types, as explained in the Working with Parameters and Return Codes in the Execute SQL Task help topic, but OLE-DB is the most commonly used, for which a question mark is the parameter value placeholder.
Package package = new Package();
// Add the SQL OLE-DB connection
ConnectionManager sqlConnection = AddSqlConnection(package, ".", "master");
// Add the SQL Task
package.Executables.Add("STOCK:SQLTask");
// Get the task host wrapper
TaskHost taskHost = package.Executables[0] as TaskHost;
// Get the task object
ExecuteSQLTask task = taskHost.InnerObject as ExecuteSQLTask;
// Set core properties
task.Connection = sqlConnection.Name;
task.SqlStatementSource = "SELECT id FROM sysobjects WHERE name = ?";
// Add variable to hold parameter value
package.Variables.Add("Variable", false, "User", "sysrowsets");
// Add input parameter binding
task.ParameterBindings.Add();
IDTSParameterBinding parameterBinding = task.ParameterBindings.GetBinding(0);
parameterBinding.DtsVariableName = "User::Variable";
parameterBinding.ParameterDirection = ParameterDirections.Input;
parameterBinding.DataType = (int)OleDBDataTypes.VARCHAR;
parameterBinding.ParameterName = "0";
parameterBinding.ParameterSize = 255;
For the full version of this code, see the CreatePackageParameterVariable method in the sample class. You’ll notice the data type has to be specified for the parameter (the IDTSParameterBinding.DataType property), and these type codes are connection specific too. My enumeration shown below, which I wrote several years ago, was probably put together by reverse engineering a package and the API header file, but I recently found a very handy post that covers more connection types as well for exactly this: Setting the DataType of IDTSParameterBinding objects (Execute SQL Task).
/// <summary>
/// Enumeration of OLE-DB types, used when mapping OLE-DB parameters.
/// </summary>
private enum OleDBDataTypes
{
    BYTE = 0x11,
    CURRENCY = 6,
    DATE = 7,
    DB_VARNUMERIC = 0x8b,
    DBDATE = 0x85,
    DBTIME = 0x86,
    DBTIMESTAMP = 0x87,
    DECIMAL = 14,
    DOUBLE = 5,
    FILETIME = 0x40,
    FLOAT = 4,
    GUID = 0x48,
    LARGE_INTEGER = 20,
    LONG = 3,
    NULL = 1,
    NUMERIC = 0x83,
    NVARCHAR = 130,
    SHORT = 2,
    SIGNEDCHAR = 0x10,
    ULARGE_INTEGER = 0x15,
    ULONG = 0x13,
    USHORT = 0x12,
    VARCHAR = 0x81,
    VARIANT_BOOL = 11
}
Download
Sample code ExecSqlPackage.cs (10KB)
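Once a package has been built with any of the methods above, it can be executed in the same process, which is a quick way to confirm the result bindings are wired up correctly. The sketch below assumes the CreatePackageResultVariable sample has been changed to return the Package it builds (an assumption for illustration); the execution and variable APIs are the standard Microsoft.SqlServer.Dts.Runtime ones.

// Execute the generated package in-process and read back the bound variable.
Package package = CreatePackageResultVariable(); // assumed to return the built package

DTSExecResult result = package.Execute();
if (result == DTSExecResult.Success)
{
    // The "id" column of the single-row result set was mapped to User::Variable.
    object value = package.Variables["User::Variable"].Value;
    Console.WriteLine("Returned id: {0}", value);
}
else
{
    // Any validation or execution errors are collected on the package.
    foreach (DtsError error in package.Errors)
    {
        Console.WriteLine(error.Description);
    }
}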

    Read the article

  • Parallelism in .NET – Part 12, More on Task Decomposition

    - by Reed
    Many tasks can be decomposed using a Data Decomposition approach, but often this is not appropriate. Frequently, decomposing the problem into distinctive tasks that must be performed is a more natural abstraction. However, as I mentioned in Part 1, Task Decomposition tends to be a bit more difficult than data decomposition, and can require a bit more effort. Before we begin parallelizing our algorithm based on the tasks being performed, we need to decompose our problem, and take special care of certain considerations such as ordering and grouping of tasks.
Up to this point in this series, I’ve focused on parallelization techniques which are most appropriate when a problem space can be decomposed by data. Using PLINQ and the Parallel class, I’ve shown how problem spaces where there is a collection of data, and each element needs to be processed, can potentially be parallelized. However, there are many other routines where this is not appropriate. Often, instead of working on a collection of data, there is a single piece of data which must be processed using an algorithm or series of algorithms. Here, there is no collection of data, but there may still be opportunities for parallelism.
As I mentioned before, in cases like this, the approach is to look at your overall routine, and decompose your problem space based on tasks. The idea here is to look for discrete “tasks,” individual pieces of work which can be conceptually thought of as a single operation.
Let’s revisit the example I used in Part 1, an application startup path. Say we want our program, at startup, to do a bunch of individual actions, or “tasks”. The following is our list of duties we must perform right at startup:
Display a splash screen
Request a license from our license manager
Check for an update to the software from our web server
If an update is available, download it
Setup our menu structure based on our current license
Open and display our main, welcome Window
Hide the splash screen
The first step in Task Decomposition is breaking up the problem space into discrete tasks. This, naturally, can be abstracted as seven discrete tasks. In the serial version of our program, if we were to diagram this, the general process would appear as a single sequential flow through each of these tasks.
These tasks, obviously, provide some opportunities for parallelism. Before we can parallelize this routine, we need to analyze these tasks, and find any dependencies between tasks. In this case, our dependencies include:
The splash screen must be displayed first, and as quickly as possible.
We can’t download an update before we see whether one exists.
Our menu structure depends on our license, so we must check for the license before setting up the menus.
Since our welcome screen will notify the user of an update, we can’t show it until we’ve downloaded the update.
Since our welcome screen includes menus that are customized based off the licensing, we can’t display it until we’ve received a license.
We can’t hide the splash until our welcome screen is displayed.
By listing our dependencies, we start to see the natural ordering that must occur for the tasks to be processed correctly. The second step in Task Decomposition is determining the dependencies between tasks, and ordering tasks based on their dependencies. Looking at these tasks, and looking at all the dependencies, we quickly see that even a simple decomposition such as this one can get quite complicated.
In order to simplify the problem of defining the dependencies, it’s often a useful practice to group our tasks into larger, discrete tasks.  The goal when grouping tasks is that you want to make each task “group” have as few dependencies as possible to other tasks or groups, and then work out the dependencies within that group.  Typically, this works best when any external dependency is based on the “last” task within the group when it’s ordered, although that is not a firm requirement.  This process is often called Grouping Tasks.  In our case, we can easily group together tasks, effectively turning this into four discrete task groups: 1. Show our splash screen – This needs to be left as its own task.  First, multiple things depend on this task, mainly because we want this to start before any other action, and start as quickly as possible. 2. Check for Update and Download the Update if it Exists - These two tasks logically group together.  We know we only download an update if the update exists, so that naturally follows.  This task has one dependency as an input, and other tasks only rely on the final task within this group. 3. Request a License, and then Setup the Menus – Here, we can group these two tasks together.  Although we mentioned that our welcome screen depends on the license returned, it also depends on setting up the menu, which is the final task here.  Setting up our menus cannot happen until after our license is requested.  By grouping these together, we further reduce our problem space. 4. Display welcome and hide splash - Finally, we can display our welcome window and hide our splash screen.  This task group depends on all three previous task groups – it cannot happen until all three of the previous groups have completed. By grouping the tasks together, we reduce our problem space, and can naturally see a pattern for how this process can be parallelized.  The diagram below shows one approach: The orange boxes show each task group, with each task represented within.  We can, now, effectively take these tasks, and run a large portion of this process in parallel, including the portions which may be the most time consuming.  We’ve now created two parallel paths which our process execution can follow, hopefully speeding up the application startup time dramatically. The main point to remember here is that, when decomposing your problem space by tasks, you need to: Define each discrete action as an individual Task Discover dependencies between your tasks Group tasks based on their dependencies Order the tasks and groups of tasks
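As a rough illustration of how this grouped decomposition might be expressed with the Task Parallel Library, here is a sketch using Task and ContinueWith. The method names (ShowSplash, CheckForUpdate and so on) are placeholders for the work described in each group, and in a real WPF or WinForms application the UI-facing steps would need to run on the UI thread via an appropriate TaskScheduler; the point here is just the dependency shape, not a definitive implementation.

// Group 1: show the splash screen first, as quickly as possible.
Task splash = Task.Factory.StartNew(() => ShowSplash());

// Group 2: check for an update, then download it only if one exists.
Task update = splash.ContinueWith(_ =>
{
    if (CheckForUpdate())
    {
        DownloadUpdate();
    }
});

// Group 3: request a license, then build the menu structure from it.
Task license = splash.ContinueWith(_ =>
{
    var currentLicense = RequestLicense();
    SetupMenus(currentLicense);
});

// Group 4: once both parallel paths have finished, show the welcome window and hide the splash.
Task.Factory.ContinueWhenAll(new[] { update, license }, _ =>
{
    ShowWelcomeWindow();
    HideSplash();
});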

    Read the article

< Previous Page | 428 429 430 431 432 433 434 435 436 437 438 439  | Next Page >