Search Results

Search found 9387 results on 376 pages for 'double byte'.


  • Bit converter: Get byte array from string

    - by nCdy
    When I have a string like "0xd8 0xff 0xe0" I do Text.Split(' ').Select(part => byte.Parse(part.Substring(2), System.Globalization.NumberStyles.HexNumber)).ToArray(); (the "0x" prefix has to be stripped first, since NumberStyles.HexNumber does not accept it). But if I get a string like "0xd8ffe0" I don't know what to do. I'm also open to recommendations on how to write a byte array back out as one string.
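
    A minimal sketch of one way to handle the packed form (hypothetical helper names; assumes an even number of hex digits after an optional 0x prefix):

        using System;
        using System.Globalization;
        using System.Linq;

        static class HexBytes
        {
            // "0xd8ffe0" -> { 0xd8, 0xff, 0xe0 }
            public static byte[] Parse(string s)
            {
                if (s.StartsWith("0x")) s = s.Substring(2);   // drop the prefix
                return Enumerable.Range(0, s.Length / 2)      // two hex digits per byte
                                 .Select(i => byte.Parse(s.Substring(i * 2, 2),
                                                         NumberStyles.HexNumber))
                                 .ToArray();
            }

            // Byte array back to one string: { 0xd8, 0xff, 0xe0 } -> "d8ffe0"
            public static string ToHex(byte[] bytes)
            {
                return BitConverter.ToString(bytes).Replace("-", "").ToLowerInvariant();
            }
        }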

    Read the article

  • template; Point<2, double>; Point<3, double>

    - by Oops
    Hi, I want to create my own Point struct; it is only for the purpose of learning C++. I have the following code:

        template <int dims, typename T>
        struct Point {
            T X[dims];
            Point() {}
            Point( T X0, T X1 ) { X[0] = X0; X[1] = X1; }
            Point( T X0, T X1, T X2 ) { X[0] = X0; X[1] = X1; X[2] = X2; }
            Point<dims, int> toint() {
                // how to distinguish between 2D and 3D ???
                Point<dims, int> ret = Point<dims, int>( (int)X[0], (int)X[1] );
                return ret;
            }
            std::string str() {
                // how to distinguish between 2D and 3D ???
                std::stringstream s;
                s << "{ X0: " << X[0] << " | X1: " << X[1] << " }";
                return s.str();
            }
        };

        int main(void) {
            Point<2, double> p2d = Point<2, double>( 12.3, 45.6 );
            Point<3, double> p3d = Point<3, double>( 12.3, 45.6, 78.9 );
            Point<2, int> p2i = p2d.toint();      // OK
            Point<3, int> p3i = p3d.toint();      // ???
            std::cout << p2d.str() << std::endl;  // OK
            std::cout << p3d.str() << std::endl;  // ???
            std::cout << p2i.str() << std::endl;  // ???
            std::cout << p3i.str() << std::endl;  // ???
            char c;
            std::cin >> c;
            return 0;
        }

    Of course, until now the output is not what I want. My question is: how do I take care of the dimensions of the Point (2D or 3D) in the member functions of Point? Many thanks in advance. Oops

    Read the article

  • Write string to fixed-length byte array in C#

    - by toasteroven
    Somehow I couldn't find this with a Google search, but I feel like it has to be simple... I need to convert a string to a fixed-length byte array, e.g. write "asdf" to a byte[20] array. The data is being sent over the network to a C++ app that expects a fixed-length field. It works fine if I use a BinaryWriter, write the characters one by one, and pad the field by writing '\0' an appropriate number of times, but is there a more appropriate way to do this?
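
    A minimal sketch of one alternative (assumes single-byte ASCII text that fits in the field; GetBytes throws if it does not):

        using System;
        using System.Text;

        static class FixedField
        {
            // Encode a string into a fixed-length, zero-padded buffer.
            public static byte[] ToBytes(string s, int length)
            {
                byte[] buffer = new byte[length];  // new arrays are already zero-filled
                // Encode straight into the buffer; the unused tail stays '\0'.
                Encoding.ASCII.GetBytes(s, 0, s.Length, buffer, 0);
                return buffer;
            }
        }

        // Usage with the existing BinaryWriter:
        //   writer.Write(FixedField.ToBytes("asdf", 20));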

    Read the article

  • C# : Attachment from byte array?

    - by JL
    I have a byte[] that holds a file's contents, and I would like to send it as an attachment using System.Net.Mail. I noticed the Attachment class has one overload that accepts a stream: Attachment att = new Attachment(Stream contentStream, string name); Is it possible to pass the byte[] through this overload?
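
    A minimal sketch of the usual approach (the file name, addresses, and SMTP host below are hypothetical placeholders; the stream must stay open until the message has been sent):

        using System.IO;
        using System.Net.Mail;

        static void SendWithAttachment(byte[] fileBytes)
        {
            // Wrap the byte[] in a MemoryStream and hand it to the overload.
            using (MemoryStream ms = new MemoryStream(fileBytes))
            {
                MailMessage message = new MailMessage(
                    "from@example.com", "to@example.com", "subject", "body");
                // Attachment(Stream, string) reads from the stream at send time,
                // so dispose the stream only after Send has completed.
                message.Attachments.Add(new Attachment(ms, "report.pdf"));
                new SmtpClient("smtp.example.com").Send(message);
            }
        }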

    Read the article

  • Problem converting with ToDictionary<DateTime, double>() using LINQ (C# 3.0)

    - by Newbie
    I have written the following:

        return (from p in returnObject.Portfolios.ToList()
                from childData in p.ChildData.ToList()
                from retuns in p.Returns.ToList()
                select new Dictionary<DateTime, double>() { p.EndDate, retuns.Value })
               .ToDictionary<DateTime, double>();

    and I am getting the error: No overload for method 'Add' takes '1' arguments. Where am I making the mistake? I am using C# 3.0. Thanks
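
    A sketch of the usual shape (property names taken from the post; ToDictionary wants separate key and element selectors, and a collection initializer for a Dictionary needs { key, value } pairs per entry; note a duplicate EndDate key would throw):

        // Select plain key/value pairs first, then build the dictionary
        // with explicit key and value selectors.
        return (from p in returnObject.Portfolios.ToList()
                from childData in p.ChildData.ToList()
                from retuns in p.Returns.ToList()
                select new { p.EndDate, retuns.Value })
               .ToDictionary(x => x.EndDate, x => x.Value);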

    Read the article

  • Take a string to a byte[]

    - by Vaccano
    I have a string in my database that represents an image. It looks like this: 0x89504E470D0A1A0A0000000D49484452000000F00000014008020000000D8A66040.... <truncated for brevity> When I load it in from the database it comes in as a byte[]. How can I convert the string value to a byte array myself? (I am trying to remove the DB for some testing code.)
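
    A minimal sketch using a plain loop (assumes a well-formed hex string with a single leading 0x and an even number of digits):

        using System;

        static class HexUtil
        {
            // "0x89504E47..." -> { 0x89, 0x50, 0x4E, 0x47, ... }
            public static byte[] HexStringToBytes(string hex)
            {
                if (hex.StartsWith("0x")) hex = hex.Substring(2); // drop the prefix
                byte[] result = new byte[hex.Length / 2];
                for (int i = 0; i < result.Length; i++)
                    result[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);
                return result;
            }
        }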

    Read the article

  • BYTE typedef in VC++ and windows.h

    - by jules
    Hi, I am using Visual C++, and I am trying to include a file that uses BYTE (as well as DOUBLE, LPCONTEXT...), which is not a defined type by default. If I include windows.h it works fine, but windows.h also defines GetClassName, which I don't need. I am looking for an alternative to including windows.h that would work with VC++ and would define most of the types like BYTE, DOUBLE... Thanks

    Read the article

  • Convert 2 bytes to a number

    - by Vaccano
    I have a control that has a byte array in it. Every now and then there are two bytes that tell me some info about the number of future items in the array. So as an example I could have Item[4] = 7, Item[5] = 0. The value of this is clearly 7. But what about Item[4] = 0, Item[5] = 7? Any idea what that equates to (as a normal int)? I went to binary and thought it may be 11100000000, which equals 1792. But I don't know if that is how it really works (i.e. whether it uses all 8 bits of each byte). Is there any way to know this without testing? Note: I am using C# 3.0 and Visual Studio 2008
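
    Whether {0, 7} reads as 7 or as 1792 depends on the byte order the source device uses, which only its documentation (or testing) can settle; a sketch of both readings:

        using System;

        class TwoBytes
        {
            static void Main()
            {
                byte[] item = { 0, 0, 0, 0, 0, 7 };  // item[4] = 0, item[5] = 7

                // Little-endian: the first byte is the LEAST significant.
                int little = item[4] | (item[5] << 8);  // 0 + 7*256 = 1792

                // Big-endian: the first byte is the MOST significant.
                int big = (item[4] << 8) | item[5];     // 0*256 + 7 = 7

                // BitConverter assumes the machine's native order
                // (little-endian on x86), so it also gives 1792 here.
                ushort native = BitConverter.ToUInt16(item, 4);

                Console.WriteLine("{0} {1} {2}", little, big, native);
            }
        }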

    Read the article

  • Difference between C# and Java big-endian bytes using miscutil

    - by Eric Hauser
    I'm using the miscutil library to communicate between a Java and a C# application using a socket. I am trying to figure out the difference between the following code (this is Groovy, but the Java result is the same):

        import java.io.*
        def baos = new ByteArrayOutputStream();
        def stream = new DataOutputStream(baos);
        stream.writeInt(5000)
        baos.toByteArray().each { println it }
        /* outputs - 0, 0, 19, -120 */

    and C#:

        using (var ms = new MemoryStream())
        using (EndianBinaryWriter writer = new EndianBinaryWriter(EndianBitConverter.Big, ms, Encoding.UTF8))
        {
            writer.Write(5000);
            ms.Position = 0;
            foreach (byte bb in ms.ToArray())
            {
                Console.WriteLine(bb);
            }
        }
        /* outputs - 0, 0, 19, 136 */

    As you can see, the last byte is -120 in the Java version and 136 in C#.
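
    The bytes on the wire are identical; Java's byte type is signed, so the bit pattern 0x88 prints as -120 there and as 136 in C#. A quick check of that equivalence in C# (a sketch):

        using System;

        class SignedByteDemo
        {
            static void Main()
            {
                byte b = 136;                          // 0x88, as C# prints it
                sbyte asJava = unchecked((sbyte)b);    // Java's signed view of the same bits
                Console.WriteLine(asJava);             // -120
                Console.WriteLine((byte)asJava == b);  // True: the same eight bits
            }
        }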

    Read the article

  • Python and C++ Sockets converting packet data

    - by yeus
    First of all, to clarify my goal: there are two programs, written in C, in our laboratory. I am working on a bidirectional proxy server for them (which will also manipulate the data), and I want to write that proxy server in Python. It is important to know that I know close to nothing about these two programs; I only know the definition file of the packets. Now, assuming a packet definition in one of the C++ programs reads like this:

        unsigned char Packet[0x32]; // Packet[Length]
        int z = 0;
        Packet[0] = 0x00; // Spare
        Packet[1] = 0x32; // Length
        Packet[2] = 0x01; // Source
        Packet[3] = 0x02; // Destination
        Packet[4] = 0x01; // ID
        Packet[5] = 0x00; // Spare
        for (z = 0; z <= 24; z += 8)
        {
            Packet[9-z/8]  = ((int)(720000+armcontrolpacket->dof0_rot*1000)/(int)pow((double)2,(double)z));
            Packet[13-z/8] = ((int)(720000+armcontrolpacket->dof0_speed*1000)/(int)pow((double)2,(double)z));
            Packet[17-z/8] = ((int)(720000+armcontrolpacket->dof1_rot*1000)/(int)pow((double)2,(double)z));
            Packet[21-z/8] = ((int)(720000+armcontrolpacket->dof1_speed*1000)/(int)pow((double)2,(double)z));
            Packet[25-z/8] = ((int)(720000+armcontrolpacket->dof2_rot*1000)/(int)pow((double)2,(double)z));
            Packet[29-z/8] = ((int)(720000+armcontrolpacket->dof2_speed*1000)/(int)pow((double)2,(double)z));
            Packet[33-z/8] = ((int)(720000+armcontrolpacket->dof3_rot*1000)/(int)pow((double)2,(double)z));
            Packet[37-z/8] = ((int)(720000+armcontrolpacket->dof3_speed*1000)/(int)pow((double)2,(double)z));
            Packet[41-z/8] = ((int)(720000+armcontrolpacket->dof4_rot*1000)/(int)pow((double)2,(double)z));
            Packet[45-z/8] = ((int)(720000+armcontrolpacket->dof4_speed*1000)/(int)pow((double)2,(double)z));
            Packet[49-z/8] = ((int)armcontrolpacket->timestamp/(int)pow(2.0,(double)z));
        }
        if (SendPacket(sock, (char*)&Packet, sizeof(Packet)))
            return 1;
        return 0;

    What would be the easiest way to receive that data, convert it into a readable Python format, manipulate it, and send it forward to the receiver?

    Read the article

  • Windows 2008 R2 remote desktop - Double Login

    - by Zulgrib
    After an Active Directory failure, RDP connections started to ask for credentials twice (once in the local RDP program, a second time on the remote logon screen). I already looked at "Windows 2008 R2 RDS - Double Login"; the solution provided there doesn't work for me. The server stands alone, without AD/DNS services, and the RDP service isn't installed. I tried every security setting on RDP-Tcp (RDP, Negotiate, SSL). The logon option is set to "Use credentials from the client". Both the Windows client and the server use RDP 7.1. The fPromptForPassword registry values are set to 0, and Local Computer Policy\Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Session Host\Security\Always prompt for password upon connection is set to "Disabled". Why am I sure the problem comes from the server and not the client? The problem affected a third-party RDP program on Android too (previously it went directly to "preparing desktop", on both MS RDP and the third-party program). No backups are available (otherwise the Active Directory incident wouldn't have been a failure, just a loss of time). I am wondering if a rule linked to RDP got changed after the AD install + uninstall, but I'm unable to find where. While this is not a critical problem, it is very annoying. I don't know if more information is needed; if it is, and if you are patient enough, please tell me what is missing and I'll edit this post to add it.

    Read the article

  • ghc6 install trouble: hGetContents: invalid argument (invalid UTF-8 byte sequence)

    - by olimay
    Having trouble installing ghc6 on Ubuntu Maverick via apt. Here's what seems to be the relevant error that comes up when I try to (apt-get|aptitude) install ghc6:

        A package failed to install. Trying to recover:
        Setting up ghc6 (6.12.1-13ubuntu1) ...
        ghc-pkg: /home/opm/.ghc/i386-linux-6.12.1/package.conf.d/unix-compat-0.2-edefa7bced91ebe610d455bab466e200.conf: hGetContents: invalid argument (invalid UTF-8 byte sequence)

    (Here's the full output, if you're interested: http://paste.ubuntu.com/566475/ ) This still happens after apt-get clean and apt-get update. My searching around has not really helped me understand what's going on, except that it might be caused by a mismatch in locale. So, here's the output of locale too:

        LANG=en_US.utf8
        LANGUAGE=en_US:en
        LC_CTYPE="en_US.utf8"
        LC_NUMERIC="en_US.utf8"
        LC_TIME="en_US.utf8"
        LC_COLLATE="en_US.utf8"
        LC_MONETARY="en_US.utf8"
        LC_MESSAGES="en_US.utf8"
        LC_PAPER="en_US.utf8"
        LC_NAME="en_US.utf8"
        LC_ADDRESS="en_US.utf8"
        LC_TELEPHONE="en_US.utf8"
        LC_MEASUREMENT="en_US.utf8"
        LC_IDENTIFICATION="en_US.utf8"
        LC_ALL=

    Any ideas? Additional background: this all seems very strange to me, because I used to have ghc6 installed correctly; I use XMonad as my main window manager most of the time. I tried to install haskell-platform (through apt), which failed and told me that there was something wrong with ghc6, and so I reinstalled ghc6 and began to get the above error message.

    Read the article

  • Windows Displays Double the Actual Installed Physical Memory

    - by Andrew Barber
    I have a server on which I've installed Windows Web 2008 R2, and it reports double the physical memory that is actually installed. In msinfo32, "Installed Physical Memory" shows 2x whatever the actual installed amount is, though "Total Physical Memory" shows the correct amount. The "System" info window shows installed memory as 2x, with the correct amount in parentheses listed as the "usable" amount. This server mistakenly had Windows Web 2008 (32-bit) installed on it just previously, and that OS reported the same faulty information that Win2K8R2 is reporting. The BIOS reports the correct amount, memtest was run on this server before installation, and a previous Windows 2000 instance installed on this system also reported the correct amount, as I recall. Server operation seems fine as well (it only tries to use the correct amount of memory). The server is a generic pizzabox running on a SuperMicro X6DVL-EG with dual Xeon-3.2's. The memory installed is 4 matching mt18vddf12872g-335c3 sticks (1GB PC2700 DDR ECC REG CL2.5). This behavior occurs whether two or all four are installed. So, has anyone seen something like this before? Any idea what's causing it, and how concerned I should be about it? Everything else seems good so far, and I'll be upgrading the memory before putting the server into service, but I don't want to spend too much time/money/effort on the server if something odd is going wrong here.

    UPDATE: There was a question I ran into regarding memory sparing in the BIOS and a possible (buggy) effect thereof; however, flipping that bit back and forth in the BIOS revealed that isn't the issue. Still a bit flummoxed about this one, though I still have seen no negative impacts.

    Post-Answer Update (January 13, 2011): Upgrading the system with new, larger memory has fixed this issue.

    Read the article

  • How to revert to Eclipse's old behavior when double-clicking on frames or windows, in Juno?

    - by mattquiros
    I find Eclipse Juno very counter-intuitive. In my workspace I used to have just the package view on the left and my code on the right, and only when I printed something to the console did the console frame show up from the bottom right. When you double-clicked on a particular window, it either went fullscreen within Eclipse or back to its original size alongside the other frames. In Juno, however, frames seem to be placed on layers on top of each other. When I print something to the console, the output frame shows up only as a small, useless square on my right. When I double-click, it goes full screen. When I double-click again, it occupies the full right frame, hiding my code. I double-click again and it goes full screen, and it takes more double-clicks before it shares the right frame with my code on top of it. Then when I minimize it, it goes to the side. Any way to go back to the good old days of Eclipse? Thanks.

    Read the article

  • Filling a byte array in Java

    - by Corleone
    Hey all! For part of a project I'm working on I am implementing an RTPpacket where I have to fill the header array of bytes with the RTP header fields.

        //size of the RTP header:
        static int HEADER_SIZE = 12; // bytes

        //Fields that compose the RTP header
        public int Version;        // 2 bits
        public int Padding;        // 1 bit
        public int Extension;      // 1 bit
        public int CC;             // 4 bits
        public int Marker;         // 1 bit
        public int PayloadType;    // 7 bits
        public int SequenceNumber; // 16 bits
        public int TimeStamp;      // 32 bits
        public int Ssrc;           // 32 bits

        //Bitstream of the RTP header
        public byte[] header = new byte[ HEADER_SIZE ];

    This was my approach:

        /*
         * bits 0-1: Version
         * bit 2: Padding
         * bit 3: Extension
         * bits 4-7: CC
         */
        header[0] = new Integer( (Version << 6)|(Padding << 5)|(Extension << 4)|CC ).byteValue();

        /*
         * bit 0: Marker
         * bits 1-7: PayloadType
         */
        header[1] = new Integer( (Marker << 7)|PayloadType ).byteValue();

        /* SequenceNumber takes 2 bytes = 16 bits */
        header[2] = new Integer( SequenceNumber >> 8 ).byteValue();
        header[3] = new Integer( SequenceNumber ).byteValue();

        /* TimeStamp takes 4 bytes = 32 bits */
        for ( int i = 0; i < 4; i++ )
            header[7-i] = new Integer( TimeStamp >> (8*i) ).byteValue();

        /* Ssrc takes 4 bytes = 32 bits */
        for ( int i = 0; i < 4; i++ )
            header[11-i] = new Integer( Ssrc >> (8*i) ).byteValue();

    Any other, maybe 'better' ways to do this?

    Read the article

  • Floating Point Arithmetic - Modulo Operator on Double Type

    - by CrimsonX
    So I'm trying to figure out why the modulo operator is returning such a large, unusual value. If I have the code: double result = 1.0d % 0.1d; it will give a result of 0.09999999999999995, where I would expect a value of 0. Note this problem doesn't exist with the division operator: double result = 1.0d / 0.1d; gives a result of 10.0, meaning that the remainder should be 0. Let me be clear: I'm not surprised that an error exists; I'm surprised that the error is so darn large compared to the numbers at play. 0.0999 ~= 0.1, and 0.1 is on the same order of magnitude as 0.1d and only one order of magnitude away from 1.0d. It's not like you can compare it to double.Epsilon, or say "it's equal if the difference is < 0.00001". I've read up on this topic on StackOverflow, in the following posts: one, two, three, amongst others. Can anyone explain why this error is so large? And any suggestions for avoiding the problem in the future? (I know I could use decimal instead, but I'm concerned about its performance.)
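
    A sketch of what is going on (assuming the usual IEEE 754 doubles): the stored value of the literal 0.1 is slightly more than one tenth, so it fits into 1.0 only nine times, leaving a remainder of almost a full 0.1; division hides this because the quotient rounds to exactly 10.0.

        using System;

        class ModuloDemo
        {
            static void Main()
            {
                // The double nearest to 0.1 is slightly MORE than 0.1:
                Console.WriteLine(0.1d.ToString("G17"));          // 0.10000000000000001

                // So 0.1 divides into 1.0 only 9 times, and the remainder
                // is 1.0 - 9*0.1d, i.e. just under a full 0.1:
                Console.WriteLine((1.0d % 0.1d).ToString("G17")); // ~0.09999999999999995

                // The true quotient is ~9.9999999999999998, which rounds to
                // exactly 10.0; that is why division looks clean while modulo
                // exposes the error:
                Console.WriteLine(1.0d / 0.1d);                   // 10

                // decimal represents 0.1 exactly, at some performance cost:
                Console.WriteLine(1.0m % 0.1m);                   // 0.0
            }
        }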

    Read the article

  • What is the proper approach for constructing a PhysicalAddress object from a byte array?

    - by Paul Farry
    I'm trying to understand the correct approach for a constructor that accepts a byte array, with regard to how it stores its data (specifically with PhysicalAddress). I have an array of 6 bytes (theAddress) that is constructed once, and a source array of 18 bytes (theAddresses) that is loaded from a TCP connection. I then copy the 6 bytes from theAddresses+offset into theAddress and construct the PhysicalAddress from it. The problem is that PhysicalAddress just stores the reference to the array that was passed in; therefore, if you subsequently check the addresses, they all point to the last address that was copied in. When I took a look inside PhysicalAddress with Reflector it's easy to see what's going on:

        public PhysicalAddress(byte[] address)
        {
            this.changed = true;
            this.address = address;
        }

    Now I know this can be solved by creating theAddress array on each pass, but I wanted to find out what really is the best practice here. Should the constructor of an object that accepts a byte array create its own private variable for holding the data and copy it from the original? Should it just hold the reference to what was passed in? Or should I just create theAddress on each pass of the loop?
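
    A minimal sketch of the defensive-copy side of that trade-off (hypothetical helper name; the 6 is the address length from the post). Since PhysicalAddress keeps the reference it is given rather than copying, the caller has to supply a fresh array per address:

        using System;
        using System.Net.NetworkInformation;

        static class MacSlices
        {
            // Copy each 6-byte slice into a FRESH array before constructing
            // the PhysicalAddress, since its constructor keeps the reference.
            public static PhysicalAddress AddressAt(byte[] source, int offset)
            {
                byte[] slice = new byte[6];
                Buffer.BlockCopy(source, offset, slice, 0, 6);
                return new PhysicalAddress(slice); // safe: nothing else holds 'slice'
            }
        }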

    Read the article

  • Double null-terminated string

    - by wengseng
    I need to format a string as a double null-terminated string in order to use SHFileOperation. The interesting part is that I found one of the following working, but not both:

        // Example 1
        CString szDir(_T("D:\\Test"));
        szDir = szDir + _T('\0') + _T('\0');

        // Example 2
        CString szDir(_T("D:\\Test"));
        szDir = szDir + _T("\0\0");

        // Delete folder
        SHFILEOPSTRUCT fileop;
        fileop.hwnd = NULL;                   // no status display
        fileop.wFunc = FO_DELETE;             // delete operation
        fileop.pFrom = szDir;                 // source file name as double null terminated string
        fileop.pTo = NULL;                    // no destination needed
        fileop.fFlags = FOF_NOCONFIRMATION | FOF_SILENT; // do not prompt the user
        fileop.fAnyOperationsAborted = FALSE;
        fileop.lpszProgressTitle = NULL;
        fileop.hNameMappings = NULL;
        int ret = SHFileOperation(&fileop);

    Does anyone have an idea about this? Is there another way to append the double termination?

    Read the article

  • KSH: Variables containing double quotes

    - by nitrobass24
    I have a string called STRING1 that could contain double quotes. I am echoing the string through sed to pull out punctuation, then sending it to an array to count certain words. The problem is I cannot echo variables containing double quotes through to sed. I am crawling our filesystems looking for files that use FTP commands, grepping each file for "FTP": STRING1=`grep -i ftp $filename` If you echo $STRING1, this is the output (just one example):

        myserver> echo "Your file `basename $1` is too large to e-mail. You must ftp the file to BMC tech support. \c"
        echo "Then, ftp it to ftp.bmc.com with the user name 'anonymous.' \c"
        echo "When the ftp is successful, notify technical support by phone (800-537-1813) or by e-mail ([email protected].)"

    Then I have this code: STRING2=`echo $STRING1|sed 's/[^a-zA-Z0-9]/ /g'` I have tried double-quoting $STRING1, like STRING2=`echo "$STRING1"|sed 's/[^a-zA-Z0-9]/ /g'`, but that does not work. Single quotes just send the literal string $STRING1 to sed, so that did not work either. What else can I do here?

    Read the article

  • Java HashMap array to double array

    - by Tweety
    Hi, I declared a LinkedHashMap<String, float[]> and now I want to convert the float[] values into a double[][]. I am using the following code:

        LinkedHashMap<String, float[]> fData;
        double data[][] = null;
        Iterator<String> iter = fData.keySet().iterator();
        int i = 0;
        while (iter.hasNext()) {
            faName = iter.next();
            tValue = fData.get(faName);
            //data = new double[fData.size()][tValue.length];
            for (int j = 0; j < tValue.length; j++) {
                data[i][j] = tValue[j];
            }
            i++;
        }

    When I try to print data with System.out.println(Arrays.deepToString(data)); it doesn't show the data :( I tried to debug my code and figured out that I have to initialize data outside the while loop, but then I don't know the array dimensions :( How do I solve this? Thanks

    Read the article

  • Double Layer DVD+R burning problem - I/O Error

    - by Mehper C. Palavuzlar
    I have a modern PC (quad-core CPU, 4 GB RAM, Win7 Home Premium 64-bit), but I have a problem burning .dvd images to Double Layer (8.5 GB) DVDs. I have wasted too many DVD+R DL discs to no avail. Here is a short explanation of what I did: I'm using ImgBurn v2.5.0.0 (the latest version), trying to burn an image file (.dvd) which sits together with the related .iso file in the same folder. In ImgBurn I select the file with the .dvd extension and set the writing speed to 2.4x. The burning process starts normally, but at around 7% it gives an I/O Write Error. I wasted 3 discs (Magic, made in Taiwan, DVD+R DL, 8.5 GB) trying the same thing. My DVD writer is an LG GH22NP20 with an IDE connection. I updated its firmware from 1.04 to 2.00, but with no success in burning. Then my cousin brought his LG (an older model) which, he claims, successfully wrote DL discs of the same brand (Magic). I unplugged my LG, plugged the older one in, and tried to burn the image again. It also gave an I/O error, without even reaching 7%. I tried another burning program (CloneCD) but failed again. Then I bought other brands (TDK and Verbatim) and tried to burn the image. The burning process started successfully, but failed at around 14% (for Verbatim) and 25% (for TDK). I've burned lots of 4.7 GB DVD+Rs and DVD-Rs successfully with this LG writer, without a single error, so this case is very bothersome for me. What should I do? Should I buy a new DVD writer other than LG? Could this be related to Windows or my hardware configuration? Thanks for your help.

    Edit: My burner works on my cousin's machine, so the problem must be related to my system. What could be the reason?

    Latest news: I borrowed an external USB DVD writer from a friend, a PHILIPS SPD3000CC (an old model). Guess what! It burns DVD+R DLs successfully! How come the internal DVD writer of a brand new computer system cannot burn DL DVDs? Now I'm considering buying a new internal DVD writer with a SATA connection instead of IDE...

    Read the article

  • Uncompiled WCF on IIS7: The type could not be found

    - by Jimmy
    Hello, I've been trying to follow this tutorial for deploying a WCF sample to IIS, and I can't get it to work. This is a hosted site, but I do have IIS Manager access to the server. However, in step 2 of the tutorial, I can't "create a new IIS application that is physically located in this application directory". I can't seem to find a menu item, context menu item, or anything else to create a new application; I've been right-clicking everywhere like crazy and still can't figure it out. I suppose that's probably the root issue, but I tried a few other things (described below) just in case it actually is not.

    This is "deployed" at http://test.com.cws1.my-hosting-panel.com/IISHostedCalcService/Service.svc . The error says:

        The type 'Microsoft.ServiceModel.Samples.CalculatorService', provided as the Service attribute value in the ServiceHost directive, or provided in the configuration element system.serviceModel/serviceHostingEnvironment/serviceActivations could not be found.

    I also tried to create a virtual dir (IISHostedCalc) in dotnetpanel that points to IISHostedCalcService. When I navigate to http://test.com.cws1.my-hosting-panel.com/IISHostedCalc/Service.svc , there is a different error:

        This collection already contains an address with scheme http. There can be at most one address per scheme in this collection.

    As per the tutorial, there was no compiling involved; I just dropped the files on the server as follows inside the folder IISHostedCalcService:

        service.svc
        Web.config
        Service.cs

    service.svc contains:

        <%@ServiceHost language=c# Debug="true" Service="Microsoft.ServiceModel.Samples.CalculatorService"%>

    (I tried with quotes around the c# attribute, as this looks a little strange without quotes, but it made no difference.) Web.config contains:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.serviceModel>
            <services>
              <service name="Microsoft.ServiceModel.Samples.CalculatorService">
                <!-- This endpoint is exposed at the base address provided by the host:
                     http://localhost/servicemodelsamples/service.svc -->
                <endpoint address="" binding="wsHttpBinding"
                          contract="Microsoft.ServiceModel.Samples.ICalculator" />
                <!-- The mex endpoint is exposed at
                     http://localhost/servicemodelsamples/service.svc/mex -->
                <endpoint address="mex" binding="mexHttpBinding"
                          contract="IMetadataExchange" />
              </service>
            </services>
          </system.serviceModel>
          <system.web>
            <customErrors mode="Off"/>
          </system.web>
        </configuration>

    Service.cs contains:

        using System;
        using System.ServiceModel;

        namespace Microsoft.ServiceModel.Samples
        {
            [ServiceContract]
            public interface ICalculator
            {
                [OperationContract] double Add(double n1, double n2);
                [OperationContract] double Subtract(double n1, double n2);
                [OperationContract] double Multiply(double n1, double n2);
                [OperationContract] double Divide(double n1, double n2);
            }

            public class CalculatorService : ICalculator
            {
                public double Add(double n1, double n2)      { return n1 + n2; }
                public double Subtract(double n1, double n2) { return n1 - n2; }
                public double Multiply(double n1, double n2) { return n1 * n2; }
                public double Divide(double n1, double n2)   { return n1 / n2; }
            }
        }
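
    A quick way to split the two possible causes (a sketch, not from the original post; the port and address are arbitrary test values): self-host the same types in a console app. If this also fails, the problem is the service type or namespace rather than the IIS application setup.

        using System;
        using System.ServiceModel;
        using Microsoft.ServiceModel.Samples; // the namespace from Service.cs

        class SelfHostCheck
        {
            static void Main()
            {
                Uri baseAddress = new Uri("http://localhost:8000/calc");
                using (ServiceHost host =
                       new ServiceHost(typeof(CalculatorService), baseAddress))
                {
                    host.AddServiceEndpoint(typeof(ICalculator),
                                            new WSHttpBinding(), "");
                    host.Open();
                    Console.WriteLine("Service is up; press Enter to stop.");
                    Console.ReadLine();
                }
            }
        }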

    Read the article

  • Deploying WCF Tutorial App on IIS7: "The type could not be found"

    - by Jimmy
    Hello, I've been trying to follow this tutorial for deploying a WCF sample to IIS, and I can't get it to work. This is a hosted site, but I do have IIS Manager access to the server. However, in step 2 of the tutorial, I can't "create a new IIS application that is physically located in this application directory". I can't seem to find a menu item, context menu item, or anything else to create a new application; I've been right-clicking everywhere like crazy and still can't figure it out. I suppose that's probably the root issue, but I tried a few other things (described below) just in case it actually is not. Here is a picture of what I see in IIS Manager, in case my words don't do it justice.

    This is "deployed" at http://test.com.cws1.my-hosting-panel.com/IISHostedCalcService/Service.svc . The error says:

        The type 'Microsoft.ServiceModel.Samples.CalculatorService', provided as the Service attribute value in the ServiceHost directive, or provided in the configuration element system.serviceModel/serviceHostingEnvironment/serviceActivations could not be found.

    I also tried to create a virtual dir (IISHostedCalc) in dotnetpanel that points to IISHostedCalcService. When I navigate to http://test.com.cws1.my-hosting-panel.com/IISHostedCalc/Service.svc , there is a different error:

        This collection already contains an address with scheme http. There can be at most one address per scheme in this collection.

    Interestingly enough, if I click on View Applications, it seems like the virtual directory is an application (see image below)... although, as per the error message above, it doesn't work. As per the tutorial, there was no compiling involved; I just dropped the files on the server as follows inside the folder IISHostedCalcService:

        service.svc
        Web.config
        App_Code\        (directory)
            Service.cs

    service.svc contains:

        <%@ServiceHost language=c# Debug="true" Service="Microsoft.ServiceModel.Samples.CalculatorService"%>

    (I tried with quotes around the c# attribute, as this looks a little strange without quotes, but it made no difference.) Web.config contains:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.serviceModel>
            <services>
              <service name="Microsoft.ServiceModel.Samples.CalculatorService">
                <!-- This endpoint is exposed at the base address provided by the host:
                     http://localhost/servicemodelsamples/service.svc -->
                <endpoint address="" binding="wsHttpBinding"
                          contract="Microsoft.ServiceModel.Samples.ICalculator" />
                <!-- The mex endpoint is exposed at
                     http://localhost/servicemodelsamples/service.svc/mex -->
                <endpoint address="mex" binding="mexHttpBinding"
                          contract="IMetadataExchange" />
              </service>
            </services>
          </system.serviceModel>
          <system.web>
            <customErrors mode="Off"/>
          </system.web>
        </configuration>

    Service.cs contains:

        using System;
        using System.ServiceModel;

        namespace Microsoft.ServiceModel.Samples
        {
            [ServiceContract]
            public interface ICalculator
            {
                [OperationContract] double Add(double n1, double n2);
                [OperationContract] double Subtract(double n1, double n2);
                [OperationContract] double Multiply(double n1, double n2);
                [OperationContract] double Divide(double n1, double n2);
            }

            public class CalculatorService : ICalculator
            {
                public double Add(double n1, double n2)      { return n1 + n2; }
                public double Subtract(double n1, double n2) { return n1 - n2; }
                public double Multiply(double n1, double n2) { return n1 * n2; }
                public double Divide(double n1, double n2)   { return n1 / n2; }
            }
        }

    Read the article
