Search Results

Search found 9387 results on 376 pages for 'double byte'.


  • When is a > a true?

    - by Cricri
    Right, I think I really am living a dream. I have the following piece of code, which I compile and run on an AIX machine (AIX 3 5, PowerPC_POWER5 processor type, IBM XL C/C++ for AIX, V10.1, version 10.01.0000.0003):

        #include <stdio.h>
        #include <math.h>

        #define RADIAN(x) ((x) * acos(0.0) / 90.0)

        double nearest_distance(double radius, double lon1, double lat1, double lon2, double lat2)
        {
            double rlat1 = RADIAN(lat1);
            double rlat2 = RADIAN(lat2);
            double rlon1 = lon1;
            double rlon2 = lon2;
            double a = 0, b = 0, c = 0;

            a = sin(rlat1)*sin(rlat2) + cos(rlat1)*cos(rlat2)*cos(rlon2 - rlon1);
            printf("%lf\n", a);
            if (a > 1) {
                printf("aaaaaaaaaaaaaaaa\n");
            }
            b = acos(a);
            c = radius * b;
            return radius * (acos(sin(rlat1)*sin(rlat2) + cos(rlat1)*cos(rlat2)*cos(rlon2 - rlon1)));
        }

        int main(int argc, char** argv)
        {
            nearest_distance(6367.47, 10, 64, 10, 64);
            return 0;
        }

    Now, the value of 'a' after the calculation is reported as being '1'. And on this AIX machine, it looks like 1 > 1 is true, as my 'if' is entered!!! And my acos of what I think is '1' returns NaNQ, since 'a' is apparently bigger than 1. May I ask how that is even possible? I do not know what to think anymore! The code works just fine on other architectures, where 'a' really takes the value of what I think is 1 and acos(a) is 0.
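
    A plausible explanation is excess intermediate precision: on POWER the compiler may contract the expression into fused multiply-adds or keep it in a wider register, so the value that gets compared is not the double that was stored and printed. A common defensive sketch (my own, not from the question) clamps acos()'s argument into its legal domain before the call:

        /* Hedged sketch: guard acos() against arguments a few ULPs outside
           [-1, 1], which extended precision or FMA contraction can produce. */
        #include <math.h>

        static double safe_acos(double x)
        {
            if (x > 1.0)  return 0.0;     /* acos(1)  */
            if (x < -1.0) return M_PI;    /* acos(-1) */
            return acos(x);
        }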


  • EXPORT AS INSERT STATEMENTS: but in SQL*Plus the line exceeds 2500 characters!

    - by The chicken in the kitchen
    Hello, I have to export an Oracle table as INSERT statements, but the generated INSERT statements exceed 2500 characters. Since I am obliged to execute them in SQL*Plus, I receive an error message. This is my Oracle table:

        CREATE TABLE SAMPLE_TABLE
        (
          C01 VARCHAR2(5 BYTE) NOT NULL,
          C02 NUMBER(10) NOT NULL,
          C03 NUMBER(5) NOT NULL,
          C04 NUMBER(5) NOT NULL,
          C05 VARCHAR2(20 BYTE) NOT NULL,
          c06 VARCHAR2(200 BYTE) NOT NULL,
          c07 VARCHAR2(200 BYTE) NOT NULL,
          c08 NUMBER(5) NOT NULL,
          c09 NUMBER(10) NOT NULL,
          c10 VARCHAR2(80 BYTE),
          c11 VARCHAR2(200 BYTE),
          c12 VARCHAR2(200 BYTE),
          c13 VARCHAR2(4000 BYTE),
          c14 VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL,
          c15 CHAR(1 BYTE),
          c16 CHAR(1 BYTE)
        );

    Assumptions:
    a) I am OBLIGED to export table data as INSERT statements; I am allowed to use UPDATE statements in order to avoid the SQL*Plus error "SP2-0027: input is too long (> 2499 characters)";
    b) I am OBLIGED to use SQL*Plus to execute the generated script;
    c) please assume that every record can contain special characters: CHR(10), CHR(13), and so on;
    d) I CANNOT use SQL*Loader;
    e) I CANNOT export and then import the table: I can only add the "delta" using INSERT / UPDATE statements through SQL*Plus.
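
    Given assumptions (a) and (b), one common workaround is to keep every generated line short: INSERT the row with only the first chunk of the long c13 value, then append the rest with concatenating UPDATEs, splicing control characters in with CHR(). A hedged sketch (key and chunk values are placeholders):

        -- INSERT with a short prefix of c13, mandatory columns filled in
        INSERT INTO SAMPLE_TABLE (C01, C02, C03, C04, C05, c06, c07, c08, c09, c13, c14)
        VALUES ('AAAAA', 1, 1, 1, 'X', 'X', 'X', 1, 1, 'first chunk of c13' || CHR(10), 'N');

        -- One or more follow-up UPDATEs, each well under the line limit
        UPDATE SAMPLE_TABLE
           SET c13 = c13 || 'second chunk of c13'
         WHERE C01 = 'AAAAA' AND C02 = 1;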


  • C lang. -- Error: Segmentation fault

    - by user233542
    I don't understand why this would give me a seg fault. Any ideas? This is the function that returns the signal to stop the program (plus the other function that is called within it):

        double bisect(double A0, double A1, double Sol[N], double tol, double c)
        {
            double Amid, shot;
            while (A1 - A0 > tol) {
                Amid = 0.5*(A0 + A1);
                shot = shoot(Sol, Amid, c);
                if (shot == 2.*Pi) {
                    return Amid;
                }
                if (shot > 2.*Pi) {
                    A1 = Amid;
                } else if (shot < 2.*Pi) {
                    A0 = Amid;
                }
            }
            return 0.5*(A1 + A0);
        }

        double shoot(double Sol[N], double A, double c)
        {
            int i, j;

            /* Initial conditions */
            for (i = 0; i < buff; i++) {
                Sol[i] = 0.;
            }
            for (i = buff + l; i < N; i++) {
                Sol[i] = 2.*Pi;
            }
            Sol[buff]   = 0;
            Sol[buff+1] = A*exp(sqrt(1 + 3*c)*dx);
            for (i = buff + 2; i < buff + l; i++) {
                Sol[i] = (dx*dx)*( sin(Sol[i-1]) + c*sin(3.*(Sol[i-1])) ) - Sol[i-2] + 2.*Sol[i-1];
            }
            return Sol[i-1];
        }

    The values buff, l and N are defined using #define statements: l = 401, buff = 50, N = 2000.
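
    Nothing in these two functions indexes past Sol[450] when N is 2000, so the crash may well come from the caller passing a smaller array, or from a mismatch in the #defines. A quick way to catch that, sketched below under the assumption that the code can afford asserts while debugging, is to guard every index before the access:

        /* Hedged sketch: assert-checked indexing to localize an out-of-bounds
           write; compile without -DNDEBUG so the assert can fire. */
        #include <assert.h>
        #include <stdio.h>

        #define N 2000

        static int checked(int i)
        {
            assert(i >= 0 && i < N);   /* aborts with file/line on violation */
            return i;
        }

        int main(void)
        {
            double Sol[N] = {0};
            Sol[checked(451)] = 1.0;   /* fine: 451 < 2000 */
            printf("%f\n", Sol[451]);
            /* Sol[checked(N)] = 1.0;     would abort immediately */
            return 0;
        }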


  • Alternative in PHP (from Java)

    - by Valdas
    I have a class in Java, but now I need it in PHP. Is it possible to rewrite it? Code:

        public class RealityStream {
            public RealityStream(byte abyte0[]) {
                m_dstream = null;
                m_dstream = new DataInputStream(new ByteArrayInputStream(abyte0));
            }

            public int available() throws IOException {
                return m_dstream.available();
            }

            public float readFloat() throws IOException {
                byte byte0 = m_dstream.readByte();
                byte byte1 = m_dstream.readByte();
                byte byte2 = m_dstream.readByte();
                byte byte3 = m_dstream.readByte();
                int i = (byte3 & 0xff) << 24 | (byte2 & 0xff) << 16 | (byte1 & 0xff) << 8 | byte0 & 0xff;
                return Float.intBitsToFloat(i);
            }

            public int readInt() throws IOException {
                byte byte0 = m_dstream.readByte();
                byte byte1 = m_dstream.readByte();
                byte byte2 = m_dstream.readByte();
                byte byte3 = m_dstream.readByte();
                return (byte3 & 0xff) << 24 | (byte2 & 0xff) << 16 | (byte1 & 0xff) << 8 | byte0 & 0xff;
            }

            public int readShort() throws IOException {
                byte byte0 = m_dstream.readByte();
                byte byte1 = m_dstream.readByte();
                return byte1 << 8 | byte0 & 0xff;
            }

            public int readUnsignedChar() throws IOException {
                return m_dstream.readUnsignedByte();
            }

            private DataInputStream m_dstream;
        }

    Any help is much appreciated. Thanks.
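
    The class is a little-endian binary reader, which PHP can express with substr() and unpack(). A hedged sketch of a port, assuming PHP 7.1+ (the little-endian float code 'g' was only added in 7.0.15/7.1.1); the signed corrections mirror how the Java version sign-extends:

        <?php
        // Hedged sketch: little-endian reader equivalent to RealityStream.
        class RealityStream
        {
            private $data;
            private $pos = 0;

            public function __construct(string $bytes) { $this->data = $bytes; }

            public function available(): int
            {
                return strlen($this->data) - $this->pos;
            }

            private function take(int $n): string
            {
                $chunk = substr($this->data, $this->pos, $n);
                $this->pos += $n;
                return $chunk;
            }

            public function readFloat(): float
            {
                return unpack('g', $this->take(4))[1];    // 32-bit LE float
            }

            public function readInt(): int
            {
                $v = unpack('V', $this->take(4))[1];      // unsigned 32-bit LE
                return $v >= 0x80000000 ? $v - 0x100000000 : $v;  // to signed, as Java
            }

            public function readShort(): int
            {
                $v = unpack('v', $this->take(2))[1];      // unsigned 16-bit LE
                return $v >= 0x8000 ? $v - 0x10000 : $v;  // high byte sign-extends in Java
            }

            public function readUnsignedChar(): int
            {
                return unpack('C', $this->take(1))[1];
            }
        }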


  • How to do RLE (run length encoding) in C# on a byte array?

    - by CuriousCoder
    I am trying to XOR two bitmap files (their byte arrays) to produce a byte array that can be used to change image A into image B, or vice versa. I am sending this over the network, so I would like to do some basic compression before that happens. Is there a way to do RLE (run-length encoding) in C# (using a built-in, or a fast, reliable 3rd-party library) on a byte array for this purpose? Note: if you are going to suggest an alternative to my approach, please keep in mind that the decompression and transformation on the remote machine have to be as quick and efficient as possible.
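
    .NET has no built-in RLE, though System.IO.Compression.DeflateStream is built in and already handles the long zero runs an XOR diff produces. If plain RLE is still preferred, a minimal hedged sketch of a (count, value) scheme:

        // Hedged sketch: byte-oriented RLE; each run of up to 255 equal
        // bytes is stored as two bytes (count, value). Decode is the inverse.
        using System;
        using System.Collections.Generic;

        static class Rle
        {
            public static byte[] Encode(byte[] input)
            {
                var output = new List<byte>();
                int i = 0;
                while (i < input.Length)
                {
                    byte value = input[i];
                    int run = 1;
                    while (i + run < input.Length && input[i + run] == value && run < 255)
                        run++;
                    output.Add((byte)run);
                    output.Add(value);
                    i += run;
                }
                return output.ToArray();
            }

            public static byte[] Decode(byte[] input)
            {
                var output = new List<byte>();
                for (int i = 0; i < input.Length; i += 2)
                    for (int n = 0; n < input[i]; n++)
                        output.Add(input[i + 1]);
                return output.ToArray();
            }
        }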


  • Performance: use a BinaryReader on a MemoryStream to read a byte array, or read directly?

    - by Virtlink
    I would like to know whether using a BinaryReader on a MemoryStream created from a byte array (byte[]) would reduce performance significantly. There is binary data I want to read, and I get that data as an array of bytes. I am currently deciding between two approaches to read the data, and have to implement many reading methods accordingly. After each reading action I need the position right after the read data, and therefore I am considering using a BinaryReader. The first, non-BinaryReader approach:

        object Read(byte[] data, ref int offset);

    The second approach:

        object Read(BinaryReader reader);

    Such Read() methods will be called very often, in succession, on the same data until all of it has been read. So using a BinaryReader feels more natural, but does it have much impact on performance?
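
    For comparison, a hedged sketch of what each shape looks like for one concrete field type (Int32 stands in for any of the many reading methods); the BinaryReader adds a MemoryStream plus a virtual call per read, which is usually cheap but measurable in tight loops:

        using System;
        using System.IO;

        static class Readers
        {
            // Direct approach: the offset is advanced by hand.
            public static int ReadInt32(byte[] data, ref int offset)
            {
                int value = BitConverter.ToInt32(data, offset);
                offset += sizeof(int);
                return value;
            }

            // BinaryReader approach: position tracking comes for free.
            public static int ReadInt32(BinaryReader reader)
            {
                return reader.ReadInt32();
            }
        }

        // Usage sketch: create the reader once and reuse it for every call.
        // var reader = new BinaryReader(new MemoryStream(data));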


  • Printing array variables with and without double quotes

    - by Nano HE
    Hi, while learning to print array variables, I found that white space is inserted when double quotes are used. A snippet is below; could you please tell me why?

        #!/usr/bin/perl -w
        use strict;
        use warnings;

        my @str_array = ("Perl", "array", "tutorial");
        my @int_array = (5, 7, 9, 10);

        print @str_array;
        print "\n";

        # added the double quotes
        print "@str_array";
        print "\n";

        print @int_array;
        print "\n";

        # added the double quotes
        print "@int_array";

    Output:

        Perlarraytutorial
        Perl array tutorial
        57910
        5 7 9 10
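
    What is happening: inside double quotes an array interpolates joined by the list-separator variable $" (a single space by default), while a bare print joins its arguments with $, (empty by default). A short sketch showing both knobs:

        #!/usr/bin/perl
        use strict;
        use warnings;

        my @str_array = ("Perl", "array", "tutorial");

        {
            local $" = '-';          # separator used when interpolating in "..."
            print "@str_array\n";    # Perl-array-tutorial
        }

        {
            local $, = ', ';         # output field separator for bare print
            print @str_array;        # Perl, array, tutorial
            print "\n";
        }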


  • How is conversion of float/double to int handled in printf?

    - by Sandip
    Consider this program:

        int main()
        {
            float f = 11.22;
            double d = 44.55;
            int i, j;

            i = f;  // cast float to int
            j = d;  // cast double to int

            printf("i = %d, j = %d, f = %d, d = %d", i, j, f, d);
            // This prints the following:
            // i = 11, j = 44, f = -536870912, d = 1076261027
            return 0;
        }

    Can someone explain why the casting from double/float to int works correctly in the first case, and does not work when done in printf? This program was compiled with gcc 4.1.2 on a 32-bit Linux machine.
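
    The short version: %d never converts anything; it tells printf to reinterpret the raw bytes of its argument as an int, and a float passed through the ... varargs is first promoted to double, so %d ends up reading part of an 8-byte double bit pattern (formally this is undefined behavior). A hedged sketch of the two correct spellings:

        #include <stdio.h>

        int main(void)
        {
            float  f = 11.22f;
            double d = 44.55;

            printf("f = %d, d = %d\n", (int)f, (int)d);  /* convert: 11, 44 */
            printf("f = %f, d = %f\n", f, d);            /* or print as floating point */
            return 0;
        }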


  • How do you read data from an ADODB stream in ASP as byte values?

    - by user89691
    I have an ASP routine that gets a binary file's contents and writes them to a stream. The intention is to read the data from the stream and process it at the server. So I have:

        ResponseBody = SomeRequest(SomeURL);

        var BinaryInputStream = Server.CreateObject("ADODB.Stream");
        BinaryInputStream.Type = 1;  // binary
        BinaryInputStream.Open;
        BinaryInputStream.Write(ResponseBody);
        BinaryInputStream.Position = 0;

        var DataByte = BinaryInputStream.Read(1);
        Response.Write(typeof(DataByte));  // displays "unknown"

    How do I get the numeric value of the byte I have just read from the stream? Asc() and byte() don't work (JScript). TIA


  • Are frameworks using byte-code generation creating leaky abstractions?

    - by Gabriel Šcerbák
    My point is: if you don't understand the abstraction of a framework, you can still decompile it and understand it, because you know the language, e.g. Java. However, when byte-code generation happens, you have to understand an even lower level: JVM byte-codes. I am really afraid of using any such framework, and there are many. Most of the time, I think, the reason for byte-code generation is simply a lack of language features such as metaprogramming. Do you agree? What is your opinion and argument? How do you deal with the problem of leaky abstractions in these frameworks?


  • Interop Structure: Should Unsigned Short be Mapped to byte[]?

    - by Ngu Soon Hui
    I have the following C++ structure:

        typedef struct _FILE_OP_BLOCK
        {
            unsigned short fid;                  // objective file ID
            unsigned short offset;               // operating offset
            unsigned char  len;                  // buffer length (update)
                                                 // read length (read)
            unsigned char  buff[MAX_BUFF_SIZE];
        } FILE_OP_BLOCK;

    Now I want to map it in .NET. The tricky thing is that I should pass a 2-byte array for fid and an integer for len, even though in C# fid is an unsigned short and len is an unsigned char. I wonder whether my structure (in C#) below is correct?

        public struct File_OP_Block
        {
            [MarshalAs(UnmanagedType.ByValArray, SizeConst = 2)]
            public byte[] fid;
            public ushort offset;
            public byte length;
            [MarshalAs(UnmanagedType.ByValArray, SizeConst = 240)]
            public char[] buff;
        }
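
    For comparison, a hedged sketch of a more direct mapping: the marshaller can treat fid as a ushort outright, and buff should be byte[] rather than char[] (C++'s char is one byte, while C#'s char is two). It assumes MAX_BUFF_SIZE is 240, as in the question, and default native packing:

        using System;
        using System.Runtime.InteropServices;

        // Hedged sketch: field-for-field mapping of FILE_OP_BLOCK.
        [StructLayout(LayoutKind.Sequential)]
        public struct FileOpBlock
        {
            public ushort fid;      // unsigned short -> ushort, no byte[2] needed
            public ushort offset;
            public byte   len;      // unsigned char -> byte
            [MarshalAs(UnmanagedType.ByValArray, SizeConst = 240)]  // MAX_BUFF_SIZE assumed
            public byte[] buff;     // unsigned char[] -> byte[], not char[]
        }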


  • How can I find all attributes with single quotes in a Sublime Text 2 document and replace with double quotes?

    - by Brandon Durham
    I'm feeling particularly nit-picky today. I'm working in some HTML docs that have single quotes around all attribute values, like this:

        <div class='classone classtwo'>

    I'd love to be able to do a find-and-replace in each doc and replace them with double quotes, like this:

        <div class="classone classtwo">

    Many elements in the document will have multiple attributes:

        <div class='classone classtwo' data-scripts='lazyload'>

    And some will already have the correct double quotes:

        <div class='classone classtwo' data-scripts="lazyload">

    What's the best way to replace all single quotes wrapping attribute values with double quotes?
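
    A hedged sketch of a regex find-and-replace for Sublime Text (Find > Replace, with regular expressions enabled); it assumes attribute values never contain single quotes or angle brackets themselves, so review the matches before hitting Replace All:

        Find:     ([\w-]+)='([^'<>]*)'
        Replace:  $1="$2"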


  • How can I handle the mouse double-click event on a textbox?

    - by Kar Cheng
    Hello, I want to open a child window on a double click of a textbox. I know the only event available in Silverlight is MouseLeftButtonDown, and that you have to simulate the double click yourself. However, it still doesn't work when I click in the middle of the textbox; it only works when I double-click on the border of the textbox. Does anybody have any ideas on how I can do this? If a 3rd-party library is available, that is fine too. Thanks in advance. :)
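
    A hedged sketch of the usual workaround: the TextBox marks MouseLeftButtonDown as handled internally (which is why only the border seemed to respond), so register the handler with AddHandler and handledEventsToo set to true, then time successive clicks yourself. The control name and the 300 ms threshold are assumptions:

        // In the page constructor, after InitializeComponent():
        myTextBox.AddHandler(UIElement.MouseLeftButtonDownEvent,
            new MouseButtonEventHandler(OnTextBoxMouseDown),
            true /* handledEventsToo: see events TextBox already handled */);

        private DateTime _lastClick = DateTime.MinValue;

        private void OnTextBoxMouseDown(object sender, MouseButtonEventArgs e)
        {
            DateTime now = DateTime.Now;
            if ((now - _lastClick).TotalMilliseconds <= 300)  // double-click window
            {
                var child = new ChildWindow();  // hypothetical child window class
                child.Show();
            }
            _lastClick = now;
        }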


  • Byte-sized bit patterns in C and their relevance?

    - by Nikunj Banka
    I am reading Kernighan and Ritchie's C Programming Language book, and on page 37 it mentions byte-sized bit patterns like '\013' for the vertical tab and '\007' for the bell character. My doubts: what is byte-sized about them, and what is a bit pattern? What relevance do they hold, and where can I apply them? Are they in any sense related to escape sequences? I can't seem to find any information whatsoever about these byte-sized bit patterns on the web. Please help. Thanks.
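
    In short, they are octal escape sequences: each spells out, in base 8, the bit pattern of a single byte-sized char, and the named escapes are just mnemonics for the same values. A small sketch to make that concrete:

        /* Hedged sketch: octal escapes name byte values directly.
           Octal 013 = decimal 11 = vertical tab; octal 007 = 7 = bell. */
        #include <stdio.h>

        int main(void)
        {
            char vt   = '\013';   /* same value as '\v' */
            char bell = '\007';   /* same value as '\a' */

            printf("vt = %d, bell = %d\n", vt, bell);     /* prints 11 and 7 */
            printf("'\\v' == '\\013'? %d\n", '\v' == vt); /* prints 1 */
            return 0;
        }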


  • Perl: How do I extract certain bits from a byte and then convert those bits to a hex value?

    - by Siegfried Hepp
    I need to extract certain bits of a byte and convert the extracted bits back to a hex value. Example (the value of the byte is 0xD2):

        76543210   bit position
        11010010   is 0xD2

    Bits 0-3 define the channel, which is 0010b, i.e. 0x2. Bits 4-5 define the controller, which is 01b, i.e. 0x1. Bits 6-7 define the port, which is 11b, i.e. 0x3. I somehow need to get from the byte 0xD2 to channel 0x2, controller 0x1 and port 0x3. I googled a lot and found the functions pack/unpack, vec and sprintf, but I'm scratching my head over how to use them to achieve this. Any idea how to do this in Perl?
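
    None of pack/unpack or vec is really needed here; plain mask-and-shift does it, and sprintf/printf renders the hex. A short sketch:

        #!/usr/bin/perl
        # Hedged sketch: extract the three fields of the example byte.
        use strict;
        use warnings;

        my $byte       = 0xD2;
        my $channel    =  $byte       & 0x0F;   # bits 0-3
        my $controller = ($byte >> 4) & 0x03;   # bits 4-5
        my $port       = ($byte >> 6) & 0x03;   # bits 6-7

        printf "channel=0x%X controller=0x%X port=0x%X\n",
               $channel, $controller, $port;
        # prints: channel=0x2 controller=0x1 port=0x3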


  • How to determine if a decimal/double is an integer?

    - by Jim Geurts
    How do I tell if a decimal or double value is an integer? For example:

        decimal d = 5.0m; // Would be true
        decimal f = 5.5m; // Would be false

    or

        double d = 5.0; // Would be true
        double f = 5.5; // Would be false

    The reason I would like to know this is so that I can determine programmatically whether to output the value using .ToString("N0") or .ToString("N2"). If there is no fractional part, then I don't want to show one.
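
    A hedged sketch of one way to do it: compare the value with its truncation. For double, keep in mind that values produced by arithmetic can land a hair off an exact integer, so a tolerance may be wanted in practice:

        using System;

        static class NumberFormat
        {
            public static bool IsInteger(decimal d) { return d == decimal.Truncate(d); }
            public static bool IsInteger(double d)  { return d == Math.Truncate(d); }

            public static string Format(double d)
            {
                return d.ToString(IsInteger(d) ? "N0" : "N2");
            }
        }

        // Usage: Format(5.0) -> "5"; Format(5.5) -> "5.50"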


  • Why does DateTime to Unix time use a double instead of an integer?

    - by Earlz
    I'm needing to convert a DateTime to a Unix timestamp, so I googled for some example code. In just about all the results I see, double is used as the return type of such a function, even when floor is explicitly used to convert the result to an integer. Unix timestamps are always integers, so what problem is there with using either long or int instead of double?

        static double ConvertToUnixTimestamp(DateTime date)
        {
            DateTime origin = new DateTime(1970, 1, 1, 0, 0, 0, 0);
            TimeSpan diff = date - origin;
            return Math.Floor(diff.TotalSeconds);
        }
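
    For what it's worth, a hedged sketch of the same conversion with an integral return type; it also pins both sides of the subtraction to UTC, which the original leaves implicit:

        // Hedged sketch: long holds whole seconds exactly; truncation toward
        // zero here matches Math.Floor for the post-1970 dates involved.
        static long ConvertToUnixTimestamp(DateTime date)
        {
            var origin = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
            return (long)(date.ToUniversalTime() - origin).TotalSeconds;
        }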


  • Constructing a logical expression which will count the bits in a byte.

    - by danatel
    When interviewing new candidates, we usually ask them to write a piece of C code to count the number of bits with value 1 in a given byte variable (e.g. the byte 3 has two 1-bits). I know all the common answers, such as right-shifting eight times, or indexing into a constant table of 256 precomputed results. But is there a smarter way, without using the precomputed table? What is the shortest combination of byte operations (AND, OR, XOR, +, -, binary negation, left and right shift) which computes the number of bits?
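
    One classic table-free answer is the SWAR reduction, which sums bit pairs, then nibbles, then the whole byte, using only AND, shift and +; a sketch:

        /* Hedged sketch: three-step parallel bit count for one byte. */
        #include <stdio.h>

        static unsigned popcount8(unsigned char b)
        {
            unsigned v = b;
            v = (v & 0x55) + ((v >> 1) & 0x55);  /* sums within bit pairs */
            v = (v & 0x33) + ((v >> 2) & 0x33);  /* sums within nibbles   */
            v = (v & 0x0F) + ((v >> 4) & 0x0F);  /* final byte-wide sum   */
            return v;
        }

        int main(void)
        {
            printf("%u\n", popcount8(3));    /* 2 */
            printf("%u\n", popcount8(0xD2)); /* 4 */
            return 0;
        }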


  • How to create a generic C# method that can return either double or decimal?

    - by CrimsonX
    I have a method like this:

        private static double ComputePercentage(ushort level, ushort capacity)
        {
            double percentage;
            if (capacity == 1)
                percentage = 1;
            // do calculations...
            return percentage;
        }

    Is it possible to make it generic over some type T, so that it can return either decimal or double, depending on the type the caller expects (or the type put into the function)? I tried something like the following, but I couldn't get it to work, because I cannot assign a number like 1 to a generic type. I also tried using a "where T :" constraint on the type parameter, but I still couldn't figure it out.

        private static T ComputePercentage<T>(ushort level, ushort capacity)
        {
            T percentage;
            if (capacity == 1)
                percentage = 1; // error here
            // do calculations...
            return percentage;
        }

    Is this even possible? I wasn't sure, but I thought this post might suggest that what I'm trying to do is just plain impossible.
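
    C# has no numeric constraint on type parameters, so one hedged workaround is to do all the arithmetic in double and convert to the requested type at the end; this only behaves sensibly for T = double or T = decimal, and the division below is a placeholder for the question's elided calculation:

        using System;

        static class Percentages
        {
            public static T ComputePercentage<T>(ushort level, ushort capacity)
                where T : struct, IConvertible
            {
                double percentage = (capacity == 1) ? 1.0
                                                    : (double)level / capacity;
                return (T)Convert.ChangeType(percentage, typeof(T));
            }
        }

        // Usage: var d = Percentages.ComputePercentage<double>(1, 4);   // 0.25
        //        var m = Percentages.ComputePercentage<decimal>(1, 4);  // 0.25m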


  • Does Oracle 10g automatically escape double quotes in recordsets?

    - by bitstream
    I am encountering an interesting issue with an application that was migrated from Oracle 9i to 10g. Previously, we had a problem when a field contained double quotes, since Oracle recordsets encapsulated fields in double quotes. Example:

        "field1"||"field2"||"field "Y" 3"||"field4"

    Since the move to 10g, I believe that the Oracle client-side driver is parsing the double quotes and replacing them with &quot;. Unfortunately, I don't have an old 9i environment to test my theory. Have you seen similar behavior, or can someone validate whether my theory is true?


  • 4K sectors: Why are hard drives moving to 4096-byte sectors instead of 512-byte sectors? Are 4K sectors hype or a real advantage?

    - by Chris W. Rea
    I've noticed that some Western Digital hard drives are now sporting 4K sectors; that is, the sectors are larger: 4096 bytes versus the de facto standard of 512 bytes. So: What's the big deal with 4K sectors? Is it marketing hype, or a real advantage? Why should somebody building a new PC care, or not care, about 4K sectors? Why is this transition taking place now; why didn't it happen sooner? Are there things to look out for when buying a 4K-sector hard drive, e.g. incompatibility? Anything else we should know about 4K sectors?


  • Defining a Class in Objective-C, Xcode

    - by Brett
    Hello; I am new to Objective-C, and am trying to write a class that defines a complex number. The code seems fine, but when I print to the console, my values for the instance variables are 0. Here is the code:

        // ComplexNumber.h
        // Mandelbrot Set

        #import <Foundation/Foundation.h>
        #import <stdio.h>

        @interface ComplexNumber : NSObject {
            double real;
            double imaginary;
        }

        // Getters
        - (double)real;
        - (double)imaginary;

        // Setters
        - (void)setReal:(double)a andImaginary:(double)b;

        // Function
        - (ComplexNumber *)squared;

        @end

        // ComplexNumber.m
        // Mandelbrot Set

        #import "ComplexNumber.h"
        #import <math.h>
        #import <stdio.h>

        @implementation ComplexNumber

        - (double)real {
            return self->real;
        }

        - (double)imaginary {
            return self->imaginary;
        }

        - (void)setReal:(double)a andImaginary:(double)b {
            self->real = a;
            self->imaginary = b;
        }

        - (ComplexNumber *)squared {
            double a = pow(real, 2);
            double b = pow(imaginary, 2);
            double c = 2 * real * imaginary;
            ComplexNumber *d;
            [d setReal:(a - b) andImaginary:c];
            return d;
        }

        @end

    For debugging purposes, in the app delegate I added:

        - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
            ComplexNumber *testNumber = [[ComplexNumber alloc] init];
            [testNumber setReal:55.0 andImaginary:30.0];
            NSLog(@"%d", testNumber.real);

            // Override point for customization after app launch
            [window addSubview:viewController.view];
            [window makeKeyAndVisible];
            return YES;
        }

    But the console prints 0 every time. Help?
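
    Two details look suspect here; a hedged sketch of both fixes, not a full rewrite:

        // 1) NSLog's %d expects an int; a double needs %f, otherwise the
        //    bytes are misread and 0 (or garbage) is printed.
        NSLog(@"%f", testNumber.real);

        // 2) In -squared, d is an uninitialized pointer, so messaging it is
        //    undefined (and a silent no-op if it happens to be nil).
        //    Allocate the result object before using it:
        - (ComplexNumber *)squared {
            double a = pow(real, 2);
            double b = pow(imaginary, 2);
            double c = 2 * real * imaginary;
            ComplexNumber *d = [[ComplexNumber alloc] init];
            [d setReal:(a - b) andImaginary:c];
            return d;   // under manual reference counting, consider autorelease
        }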


  • AES decryption in Java - IvParameterSpec too big

    - by user1277269
    I'm trying to decrypt a ciphertext with two keys. As you can see in the picture, we have one encrypted file which contains KEY1 (128 bytes), KEYIV (128 bytes) and KEY2 (128 bytes, not used in this case), followed by the ciphertext. The error I get here is "Exception in thread "main" java.security.InvalidAlgorithmParameterException: Wrong IV length: must be 16 bytes long", but mine is 64 bytes. Picture: http://i264.photobucket.com/albums/ii200/XeniuM05/bg_zps0a523659.png

        public class AES {
            public static void main(String[] args) throws Exception {
                byte[] encKey1 = new byte[128];
                byte[] EncIV = new byte[256];
                byte[] UnEncIV = new byte[128];
                byte[] unCrypKey = new byte[128];
                byte[] unCrypText = new byte[1424];

                File f = new File("C://ftp//ciphertext.enc");
                FileInputStream fis = new FileInputStream(f);
                byte[] EncText = new byte[(int) f.length()];
                fis.read(encKey1);
                fis.read(EncIV);
                fis.read(EncText);

                EncIV = Arrays.copyOfRange(EncIV, 128, 256);
                EncText = Arrays.copyOfRange(EncText, 384, EncText.length);
                System.out.println(EncText.length);

                KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
                char[] password = "lab1StorePass".toCharArray();
                java.io.FileInputStream fos = new java.io.FileInputStream("C://ftp//lab1Store");
                ks.load(fos, password);

                char[] passwordkey1 = "lab1KeyPass".toCharArray();
                PrivateKey Lab1EncKey = (PrivateKey) ks.getKey("lab1EncKeys", passwordkey1);

                Cipher rsaDec = Cipher.getInstance("RSA");     // set cipher to RSA decryption
                rsaDec.init(Cipher.DECRYPT_MODE, Lab1EncKey);  // initialize cipher with lab1 key
                unCrypKey = rsaDec.doFinal(encKey1);           // decrypts first key
                UnEncIV = rsaDec.doFinal(EncIV);               // decrypts IV -- OBS! this ends up 64 bytes big; we want 16?
                System.out.println("lab1key " + unCrypKey + " IV " + UnEncIV);

                // ------- CIPHERTEXT decryption ---------
                Cipher AESDec = Cipher.getInstance("AES/CBC/PKCS5Padding");

                // convert decrypted byte arrays to actual keys
                SecretKeySpec unCrypKey1 = new SecretKeySpec(unCrypKey, "AES");
                IvParameterSpec ivSpec = new IvParameterSpec(UnEncIV);
                AESDec.init(Cipher.DECRYPT_MODE, unCrypKey1, ivSpec);
                unCrypText = AESDec.doFinal(EncText);

                // convert decrypted cipher byte array to string
                String deCryptedString = new String(unCrypKey);
                System.out.println(deCryptedString);
            }
        }
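
    One hedged observation, worth verifying against how the file was produced: the layout described says KEYIV occupies the first half of the 256 bytes read into EncIV, yet copyOfRange(EncIV, 128, 256) keeps the second half, which the picture labels KEY2. Decrypting the wrong block would explain an IV of the wrong size, since RSA's doFinal() returns whatever payload was padded in. A minimal sketch of the suspected fix (an assumption, not confirmed):

        // Hedged sketch: select the KEYIV field (first 128 of the 256 bytes
        // read), not KEY2, before RSA-decrypting it down to the 16-byte IV.
        EncIV = Arrays.copyOfRange(EncIV, 0, 128);
        byte[] iv = rsaDec.doFinal(EncIV);          // expect 16 bytes now
        IvParameterSpec ivSpec = new IvParameterSpec(iv);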


  • Help with method logic in Java, hw

    - by Crystal
    I have a Loan class whose printPayments method prints the amortization table of a loan for a homework assignment. We are also to implement a "print first payment" method and a "print last payment" method. Since my calculation is done in the printPayments method, I didn't know how I could get the value from the first or last iteration of the loop and print that amount out. One way I can think of is to write a new method that returns that value, but I wasn't sure if there was a better way. Here is my code:

        public abstract class Loan {
            public void setClient(Person client) { this.client = client; }
            public Person getClient() { return client; }

            public void setLoanId() {
                loanId = nextId;
                nextId++;
            }
            public int getLoanId() { return loanId; }

            public void setInterestRate(double interestRate) { this.interestRate = interestRate; }
            public double getInterestRate() { return interestRate; }

            public void setLoanLength(int loanLength) { this.loanLength = loanLength; }
            public int getLoanLength() { return loanLength; }

            public void setLoanAmount(double loanAmount) { this.loanAmount = loanAmount; }
            public double getLoanAmount() { return loanAmount; }

            public void printPayments() {
                double monthlyInterest;
                double monthlyPrincipalPaid;
                double newPrincipal;
                int paymentNumber = 1;
                double monthlyInterestRate = interestRate / 1200;
                double monthlyPayment = loanAmount * (monthlyInterestRate)
                        / (1 - Math.pow((1 + monthlyInterestRate), (-1 * loanLength)));

                System.out.println("Payment Number | Interest | Principal | Loan Balance");

                // amortization table
                while (loanAmount >= 0) {
                    monthlyInterest = loanAmount * monthlyInterestRate;
                    monthlyPrincipalPaid = monthlyPayment - monthlyInterest;
                    newPrincipal = loanAmount - monthlyPrincipalPaid;
                    loanAmount = newPrincipal;
                    System.out.printf("%d, %.2f, %.2f, %.2f",
                            paymentNumber++, monthlyInterest, monthlyPrincipalPaid, loanAmount);
                }
            }

            /*
            // method to print first payment
            public double getFirstPayment() {
            }

            // method to print last payment
            public double getLastPayment() {
            }
            */

            private Person client;
            private int loanId;
            private double interestRate;
            private int loanLength;
            private double loanAmount;
            private static int nextId = 1;
        }

    Thanks!
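
    One hedged approach: record the first and last rows while the existing loop runs, and let the two getters return them afterwards. Field names below are suggestions, and only the interest portion is captured for brevity:

        // Suggested added fields:
        private double firstPaymentInterest;
        private double lastPaymentInterest;

        // Inside the existing while loop, right after computing monthlyInterest:
        //     if (paymentNumber == 1) { firstPaymentInterest = monthlyInterest; }
        //     lastPaymentInterest = monthlyInterest;  // overwritten each pass,
        //                                             // so it holds the last row

        public double getFirstPayment() { return firstPaymentInterest; }
        public double getLastPayment()  { return lastPaymentInterest; }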


  • Hierarchy flattening of interfaces in WCF

    - by nmarun
    Alright, so say I have my service contract interface as below:

        [ServiceContract]
        public interface ILearnWcfService
        {
            [OperationContract(Name = "AddInt")]
            int Add(int arg1, int arg2);
        }

    Say I decided to add another interface with a similar add "feature":

        [ServiceContract]
        public interface ILearnWcfServiceExtend : ILearnWcfService
        {
            [OperationContract(Name = "AddDouble")]
            double Add(double arg1, double arg2);
        }

    My class implementing ILearnWcfServiceExtend ends up as:

        public class LearnWcfService : ILearnWcfServiceExtend
        {
            public int Add(int arg1, int arg2)
            {
                return arg1 + arg2;
            }

            public double Add(double arg1, double arg2)
            {
                return arg1 + arg2;
            }
        }

    Now when I consume this service and look at the proxy that gets generated, here's what I see:

        public interface ILearnWcfServiceExtend
        {
            [System.ServiceModel.OperationContractAttribute(Action="http://tempuri.org/ILearnWcfService/AddInt", ReplyAction="http://tempuri.org/ILearnWcfService/AddIntResponse")]
            int AddInt(int arg1, int arg2);

            [System.ServiceModel.OperationContractAttribute(Action="http://tempuri.org/ILearnWcfServiceExtend/AddDouble", ReplyAction="http://tempuri.org/ILearnWcfServiceExtend/AddDoubleResponse")]
            double AddDouble(double arg1, double arg2);
        }

    Only ILearnWcfServiceExtend gets 'listed' in the proxy class, and not the (base interface) ILearnWcfService. But then, to uniquely identify the operations that the service exposes, the Action and ReplyAction properties are set. So in the above example, the AddInt operation has its Action property set to 'http://tempuri.org/ILearnWcfService/AddInt' and the AddDouble operation has an Action property of 'http://tempuri.org/ILearnWcfServiceExtend/AddDouble'. Similarly, the ReplyAction properties are set corresponding to the namespaces the operations are declared in. 'http://tempuri.org' is chosen as the default namespace, since the Namespace property of the ServiceContract is not defined.

    The other thing is the service contract itself: the Add() method. You'll see that in both interfaces the method names are the same. As you might know, this is not allowed in WSDL-based environments, even though the arguments are of different types. It is allowed only if the Name attribute of the OperationContract is set (as done above). This causes a change in the name of the service contract itself in the proxy class; see that the names are changed to AddInt / AddDouble respectively.

    Lesson learned: the interface hierarchy gets 'flattened' when the WCF service proxy class is generated.

