Search Results

Search found 2844 results on 114 pages for 'iterative conversion'.


  • What /else/ causes this?

    - by Mordachai
    MFC Toolbox Library.lib(SimpleFileIO.obj) : error LNK2005: _wcsnlen already defined in libcmtd.lib(wcslen_s.obj) fatal error LNK1169: one or more multiply defined symbols found This is driving me nuts. Normally, one would get this if the various projects that are a part of their solution do not agree on which CRT to use (single threaded, multi-threaded, release or debug). However, I have been over this thing about 500 times now, and they all agree. Background: this is a VS 2010 project just converted from VS 2008. MFC Toolbox Library.lib is set to compile as a static library, using /MTd, as is the target .exe I am trying to compile in this solution. Further, the solution that this is being converted from (VS 2008) already compiles & links properly!!! So it's not like that there is a disagreement between the two .vcproj's - or at least there wasn't before the conversion. Furthermore, the MFC Toolbox Library is used by about 25 other projects in another solution - and in that solution (Master Build English) it compiles & links against those other projects without complaint in both debug and release targets. I have just spent the last hour going over every single project property for this target project (Cimex Header Viewer) vs. several different target exe projects in Master Build English solution - and I cannot find a difference. They appear to be identical, excepting that they're different names. I've tried doing a clean & build all. I'm simply out of ideas. Does anyone have a thought on what else I might investigate??? I think I'm ready to start chewing glass. :(

    Read the article

  • C++ iterator and const_iterator problem for own container class

    - by BaCh
    Hi there, I'm writing my own container class and have run into a problem I can't get my head around. Here's the bare-bones sample that shows the problem. It consists of a container class and two test classes: one test class using a std::vector, which compiles nicely, and a second test class which tries to use my own container class in exactly the same way but fails miserably to compile. #include <vector> #include <algorithm> #include <iterator> using namespace std; template <typename T> class MyContainer { public: class iterator { public: typedef iterator self_type; inline iterator() { } }; class const_iterator { public: typedef const_iterator self_type; inline const_iterator() { } }; iterator begin() { return iterator(); } const_iterator begin() const { return const_iterator(); } }; // This one compiles ok, using std::vector class TestClassVector { public: void test() { vector<int>::const_iterator I=myc.begin(); } private: vector<int> myc; }; // this one fails to compile. Why? class TestClassMyContainer { public: void test(){ MyContainer<int>::const_iterator I=myc.begin(); } private: MyContainer<int> myc; }; int main(int argc, char ** argv) { return 0; } gcc tells me: test2.C: In member function ‘void TestClassMyContainer::test()’: test2.C:51: error: conversion from ‘MyContainer::iterator’ to non-scalar type ‘MyContainer::const_iterator’ requested I'm not sure where or why the compiler wants to convert an iterator to a const_iterator for my own class but not for the STL vector class. What am I doing wrong?
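    The compiler picks the non-const begin() overload (myc is not const inside TestClassMyContainer::test), and unlike std::vector this container provides no conversion from iterator to const_iterator. A minimal sketch of one possible fix, assuming a converting constructor is acceptable here (this is not taken from the original post):

        template <typename T>
        class MyContainer {
        public:
            class iterator {
            public:
                typedef iterator self_type;
                iterator() {}
            };
            class const_iterator {
            public:
                typedef const_iterator self_type;
                const_iterator() {}
                // Allow the implicit iterator -> const_iterator conversion that
                // the non-const begin() overload needs.
                const_iterator(const iterator&) {}
            };
            iterator begin() { return iterator(); }
            const_iterator begin() const { return const_iterator(); }
        };

        // MyContainer<int>::const_iterator i = myc.begin();  // now compiles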

    Read the article

  • How to negate a predicate function using operator ! in C++?

    - by Chan
    Hi, I want to erase all the elements that do not satisfy a criterion. For example: delete all the characters in a string that are not digit. My solution using boost::is_digit worked well. struct my_is_digit { bool operator()( char c ) const { return c >= '0' && c <= '9'; } }; int main() { string s( "1a2b3c4d" ); s.erase( remove_if( s.begin(), s.end(), !boost::is_digit() ), s.end() ); s.erase( remove_if( s.begin(), s.end(), !my_is_digit() ), s.end() ); cout << s << endl; return 0; } Then I tried my own version, the compiler complained :( error C2675: unary '!' : 'my_is_digit' does not define this operator or a conversion to a type acceptable to the predefined operator I could use not1() adapter, however I still think the operator ! is more meaningful in my current context. How could I implement such a ! like boost::is_digit() ? Any idea? Thanks, Chan Nguyen
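    One hedged way to get a usable operator ! on a hand-written predicate (a sketch, assuming the pre-C++17 std::not1/std::unary_negate adapters are acceptable) is to derive from std::unary_function and have operator ! return the negator:

        #include <algorithm>
        #include <functional>
        #include <iostream>
        #include <string>
        using namespace std;

        struct my_is_digit : unary_function<char, bool> {
            bool operator()(char c) const { return c >= '0' && c <= '9'; }
            // !my_is_digit() now yields a negating adapter, much like boost::is_digit.
            unary_negate<my_is_digit> operator!() const { return not1(*this); }
        };

        int main() {
            string s("1a2b3c4d");
            s.erase(remove_if(s.begin(), s.end(), !my_is_digit()), s.end());
            cout << s << endl;   // prints 1234
            return 0;
        }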

    Read the article

  • How to cast a C struct to another struct type if their memory sizes are equal?

    - by Eonil
    I have two matrix structs that hold the same data but in different forms, like these: // Matrix type 1. typedef float Scalar; typedef struct { Scalar e[4]; } Vector; typedef struct { Vector e[4]; } Matrix; // Matrix type 2 (you may know this if you're an iPhone developer) struct CATransform3D { CGFloat m11, m12, m13, m14; CGFloat m21, m22, m23, m24; CGFloat m31, m32, m33, m34; CGFloat m41, m42, m43, m44; }; typedef struct CATransform3D CATransform3D; Their memory sizes are equal. So I believe there is a way to convert between these types without any pointer operations or copies, like this: // Implemented from external lib. CATransform3D CATransform3DMakeScale (CGFloat sx, CGFloat sy, CGFloat sz); Matrix m = (Matrix)CATransform3DMakeScale ( 1, 2, 3 ); Is this possible? Currently the compiler prints an "error: conversion to non-scalar type requested" message.
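    A direct cast between unrelated struct types is not allowed, which is what the "conversion to non-scalar type requested" error is saying. A hedged sketch of one workaround (not from the original post) is a byte-wise copy, which is only safe while both types really are 16 plain floats with identical layout; note that CGFloat is a double on 64-bit builds, where the sizes no longer match:

        #include <string.h>   /* memcpy */
        /* Assumes the Matrix and CATransform3D definitions from the question are in scope. */

        static inline Matrix MatrixFromCATransform3D(const CATransform3D &t)
        {
            Matrix m;
            /* Hypothetical helper: relies on sizeof(Matrix) == sizeof(CATransform3D). */
            memcpy(&m, &t, sizeof m);
            return m;
        }

        /* Usage: Matrix m = MatrixFromCATransform3D(CATransform3DMakeScale(1, 2, 3)); */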

    Read the article

  • Convert Virtual Key Code to unicode string

    - by Joshua Weinberg
    I have some code I've been using to get the current keyboard layout and convert a virtual key code into a string. This works great in most situations, but I'm having trouble with some specific cases. The one that brought this to light is the accent key next to the backspace key on german QWERTZ keyboards. http://en.wikipedia.org/wiki/File:KB_Germany.svg That key generates the VK code I'd expect kVK_ANSI_Equal but when using a QWERTZ keyboard layout I get no description back. Its ending up as a dead key because its supposed to be composed with another key. Is there any way to catch these cases and do the proper conversion? My current code is below. TISInputSourceRef currentKeyboard = TISCopyCurrentKeyboardInputSource(); CFDataRef uchr = (CFDataRef)TISGetInputSourceProperty(currentKeyboard, kTISPropertyUnicodeKeyLayoutData); const UCKeyboardLayout *keyboardLayout = (const UCKeyboardLayout*)CFDataGetBytePtr(uchr); if(keyboardLayout) { UInt32 deadKeyState = 0; UniCharCount maxStringLength = 255; UniCharCount actualStringLength = 0; UniChar unicodeString[maxStringLength]; OSStatus status = UCKeyTranslate(keyboardLayout, keyCode, kUCKeyActionDown, 0, LMGetKbdType(), kUCKeyTranslateNoDeadKeysBit, &deadKeyState, maxStringLength, &actualStringLength, unicodeString); if(actualStringLength > 0 && status == noErr) return [[NSString stringWithCharacters:unicodeString length:(NSInteger)actualStringLength] uppercaseString]; }
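    A hedged sketch of one way to catch those cases (an assumption on my part, not the poster's solution): let UCKeyTranslate keep the dead-key state, and when it reports a pending dead key with no output, translate a follow-up space key (virtual key code 49) so the accent resolves to a standalone character:

        UInt32 deadKeyState = 0;
        UniCharCount maxStringLength = 255;
        UniCharCount actualStringLength = 0;
        UniChar unicodeString[255];

        OSStatus status = UCKeyTranslate(keyboardLayout, keyCode, kUCKeyActionDown, 0,
                                         LMGetKbdType(), 0 /* allow dead keys */,
                                         &deadKeyState, maxStringLength,
                                         &actualStringLength, unicodeString);

        if (status == noErr && actualStringLength == 0 && deadKeyState != 0) {
            // The key was a dead key; resolve the pending accent with a space press.
            status = UCKeyTranslate(keyboardLayout, 49 /* space */, kUCKeyActionDown, 0,
                                    LMGetKbdType(), 0, &deadKeyState, maxStringLength,
                                    &actualStringLength, unicodeString);
        }

        // unicodeString/actualStringLength can then be wrapped into an NSString
        // exactly as in the original code.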

    Read the article

  • C++ Operator Ambiguity

    - by Scott
    Forgive me, for I am fairly new to C++, but I am having some trouble regarding operator ambiguity. I think it is compiler-specific, for the code compiled on my desktop. However, it fails to compile on my laptop. I think I know what's going wrong, but I don't see an elegant way around it. Please let me know if I am making an obvious mistake. Anyhow, here's what I'm trying to do: I have made my own vector class called Vector4 which looks something like this: class Vector4 { private: GLfloat vector[4]; ... } Then I have these operators, which are causing the problem: operator GLfloat* () { return vector; } operator const GLfloat* () const { return vector; } GLfloat& operator [] (const size_t i) { return vector[i]; } const GLfloat& operator [] (const size_t i) const { return vector[i]; } I have the conversion operator so that I can pass an instance of my Vector4 class to glVertex3fv, and I have subscripting for obvious reasons. However, calls that involve subscripting the Vector4 become ambiguous to the compiler: enum {x, y, z, w} Vector4 v(1.0, 2.0, 3.0, 4.0); glTranslatef(v[x], v[y], v[z]); Here are the candidates: candidate 1: const GLfloat& Vector4:: operator[](size_t) const candidate 2: operator[](const GLfloat*, int) <built-in> Why would it try to convert my Vector4 to a GLfloat* first when the subscript operator is already defined on Vector4? Is there a simple way around this that doesn't involve typecasting? Am I just making a silly mistake? Thanks for any help in advance.
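    A hedged sketch of one way around the ambiguity (an assumption, not necessarily the cleanest fix): add an int overload of operator [] so the enum argument matches a member operator exactly and no conversion of the object to GLfloat* is needed. In C++11 the conversion operators could instead be marked explicit.

        GLfloat&       operator [] (int i)       { return vector[i]; }
        const GLfloat& operator [] (int i) const { return vector[i]; }

        // glTranslatef(v[x], v[y], v[z]);  // now resolves to the member operator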

    Read the article

  • Storing "binary" data type in C program

    - by puchu
    I need to create a program that converts one number system to other number systems. I used itoa in Windows (Dev C++) and my only problem is that I do not know how to convert binary numbers to other number systems. All the other number-system conversions work correctly. Does this involve something like storing the input to be converted using a % conversion specifier? Here is a snippet of my work: case 2: { printf("\nEnter a binary number: "); scanf("%d", &num); itoa(num,buffer,8); printf("\nOctal %s",buffer); itoa(num,buffer,10); printf("\nDecimal %s",buffer); itoa(num,buffer,16); printf("\nHexadecimal %s \n",buffer); break; } For decimal I used %d, for octal I used %o and for hexadecimal I used %x. What could be the correct one for binary? Thanks for future answers!
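    scanf("%d") cannot parse a base-2 literal, so one hedged sketch (an assumption, not from the original post) is to read the digits as text and convert them with strtol using base 2. buffer is the existing buffer from the snippet above:

        /* needs <stdio.h> and <stdlib.h>; itoa is the non-standard Windows helper already used above */
        char bin[64];
        long num;

        printf("\nEnter a binary number: ");
        scanf("%63s", bin);
        num = strtol(bin, NULL, 2);    /* base 2: "1010" -> 10 */

        itoa((int)num, buffer, 8);   printf("\nOctal %s", buffer);
        itoa((int)num, buffer, 10);  printf("\nDecimal %s", buffer);
        itoa((int)num, buffer, 16);  printf("\nHexadecimal %s \n", buffer);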

    Read the article

  • Is there any need for me to use wstring in the following case

    - by Yan Cheng CHEOK
    Currently, I am developing an app for a Chinese customer. Chinese customers have mostly switched to GB2312 as their OS encoding. I need to write a text file, which will be encoded using GB2312. I use a std::ofstream file. I compile my application in MBCS mode, not Unicode. I use the following code to convert CString to std::string and write it to the file using ofstream: std::string Utils::ToString(CString& cString) { /* Will not work correctly, if we are compiled under unicode mode. */ return (LPCTSTR)cString; } To my surprise, it just works. I thought I would need to at least make use of wstring, so I did some investigation. Here is the MBCS.txt that is generated. I try to print a single character, ? (its value is 0xBDC5). When I use CString to carry this character, its length is 2. When I use Utils::ToString to perform the conversion to std::string, the returned string's length is 2. I write it to the file using std::ofstream. My question is: when I examine MBCS.txt using a hex editor, the value is displayed as BD (LSB) and C5 (MSB). But I am using a little-endian machine. Shouldn't the hex editor show me C5 (LSB) and BD (MSB)? I checked Wikipedia; GB2312 doesn't seem to specify endianness. It seems that using std::string + CString works just fine for my case. May I know in what cases the above methodology will not work, and when I should start to use wstring?
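    A small illustration of why the bytes come out as BD then C5 (a sketch, assuming the Utils::ToString shown above and the usual <fstream> include): GB2312 in an MBCS build is a multi-byte encoding, not a 16-bit integer, so the lead byte is simply stored before the trail byte and machine endianness never enters into it:

        CString s = _T("\xBD\xC5");               // one GB2312 character, two bytes
        std::string bytes = Utils::ToString(s);   // "\xBD\xC5", length == 2

        std::ofstream file("MBCS.txt", std::ios::binary);
        file.write(bytes.data(), bytes.size());   // the file holds BD then C5, as the hex editor shows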

    Read the article

  • Can I transform this asynchronous java network API into a monadic representation (or something else

    - by AlecZorab
    I've been given a java api for connecting to and communicating over a proprietary bus using a callback based style. I'm currently implementing a proof-of-concept application in scala, and I'm trying to work out how I might produce a slightly more idiomatic scala interface. A typical (simplified) application might look something like this in Java: DataType type = new DataType(); BusConnector con = new BusConnector(); con.waitForData(type.getClass()).addListener(new IListener<DataType>() { public void onEvent(DataType t) { //some stuff happens in here, and then we need some more data con.waitForData(anotherType.getClass()).addListener(new IListener<anotherType>() { public void onEvent(anotherType t) { //we do more stuff in here, and so on } }); } }); //now we've got the behaviours set up we call con.start(); In scala I can obviously define an implicit conversion from (T = Unit) into an IListener, which certainly makes things a bit simpler to read: implicit def func2Ilistener[T](f: (T => Unit)) : IListener[T] = new IListener[T]{ def onEvent(t:T) = f } val con = new BusConnector con.waitForData(DataType.getClass).addListener( (d:DataType) => { //some stuff, then another wait for stuff con.waitForData(OtherType.getClass).addListener( (o:OtherType) => { //etc }) }) Looking at this reminded me of both scalaz promises and f# async workflows. My question is this: Can I convert this into either a for comprehension or something similarly idiomatic (I feel like this should map to actors reasonably well too) Ideally I'd like to see something like: for( d <- con.waitForData(DataType.getClass); val _ = doSomethingWith(d); o <- con.waitForData(OtherType.getClass) //etc )

    Read the article

  • Timestamps and Intervals: NUMTOYMINTERVAL SYSDATE CALCULATION SQL QUERY

    - by MeachamRob
    I am working on a homework problem, I'm close but need some help with a data conversion I think. Or sysdate - start_date calculation The question is: Using the EX schema, write a SELECT statement that retrieves the date_id and start_date from the Date_Sample table (format below), followed by a column named Years_and_Months_Since_Start that uses an interval function to retrieve the number of years and months that have elapsed between the start_date and the sysdate. (Your values will vary based on the date you do this lab.) Display only the records with start dates having the month and day equal to Feb 28 (of any year). DATE_ID START_DATE YEARS_AND_MONTHS_SINCE_START 2 Sunday , February 28, 1999 13-8 4 Monday , February 28, 2005 7-8 5 Tuesday , February 28, 2006 6-8 Our EX schema that refers to this question is simply a Date_Sample Table with two columns: DATE_ID NUMBER NOT Null START_DATE DATE I Have written this code: SELECT date_id, TO_CHAR(start_date, 'Day, MONTH DD, YYYY') AS start_date , NUMTOYMINTERVAL((SYSDATE - start_date), 'YEAR') AS years_and_months_since_start FROM date_sample WHERE TO_CHAR(start_date, 'MM/DD') = '02/28'; But my Years and months since start column is not working properly. It's getting very high numbers for years and months when the date calculated is from 1999-ish. ie, it should be 13-8 and I'm getting 5027-2 so I know it's not correct. I used NUMTOYMINTERVAL, which should be correct, but don't think the sysdate-start_date is working. Data Type for start_date is simply date. I tried ROUND but maybe need some help to get it right. Something is wrong with my calculation and trying to figure out how to get the correct interval there. Not sure if I have provided enough information to everyone but I will let you know if I figure it out before you do. It's a question from Murach's Oracle and SQL/PL book, chapter 17 if anyone else is trying to learn that chapter. Page 559.

    Read the article

  • Dilemma with two types and operator +

    - by user35443
    I have small problem with operators. I have this code: public class A { public string Name { get; set; } public A() { } public A(string Name) { this.Name = Name; } public static implicit operator B(A a) { return new B(a.Name); } public static A operator+(A a, A b) { return new A(a.Name + " " + b.Name); } } public class B { public string Name { get; set; } public B() { } public B(string Name) { this.Name = Name; } public static implicit operator A(B b) { return new A(b.Name); } public static B operator +(B b, B a) { return new B(b.Name + " " + a.Name); } } Now I want to know, which's conversion operator will be called and which's addition operator will be called in this operation: new A("a") + new B("b"); Will it be operator of A, or of B? (Or both?) Thanks....

    Read the article

  • How do I make the app take correct input?

    - by user1824343
    This is one layout of my Windows app; it converts Celsius to Fahrenheit. The problem is that when I try to input the temperature it shows some junk (for example, if I enter '3' it displays '3.0000009'), and sometimes it even throws a stack overflow exception. The output is also not shown properly. cel.Text is the textbox for Celsius; fahre.Text is the textbox for Fahrenheit. namespace PanoramaApp1 { public partial class FahretoCel : PhoneApplicationPage { public FahretoCel() { InitializeComponent(); } private void fahre_TextChanged(object sender, TextChangedEventArgs e) { if (fahre.Text != "") { try { double F = Convert.ToDouble(fahre.Text); cel.Text = "" + ((5.0/9.0) * (F - 32)) ; //this is conversion expression } catch (FormatException) { fahre.Text = ""; cel.Text = ""; } } else { cel.Text = ""; } } private void cel_TextChanged(object sender, TextChangedEventArgs e) { if (cel.Text != "") { try { Double c = Convert.ToDouble(cel.Text); fahre.Text = "" + ((c *(9.0 / 5.0 )) + 32); } catch (FormatException) { fahre.Text = ""; cel.Text = ""; } } else { fahre.Text = ""; } } } }

    Read the article

  • C++ private inheritance and static members/types

    - by WearyMonkey
    I am trying to stop a class from being able to convert its 'this' pointer into a pointer to one of its interfaces. I do this by using private inheritance via a middle proxy class. The problem is that I find private inheritance makes all public static members and types of the base class inaccessible to all classes under the inheriting class in the hierarchy. class Base { public: enum Enum { value }; }; class Middle : private Base { }; class Child : public Middle { public: void Method() { Base::Enum e = Base::value; // doesn't compile BAD! Base* base = this; // doesn't compile GOOD! } }; I've tried this in both VS2008 (the required version) and VS2010; neither works. Can anyone think of a workaround? Or a different approach to stopping the conversion? Also, I am curious about the behavior: is it just a side effect of the compiler implementation, or is it by design? If by design, then why? I always took private inheritance to mean that nobody knows Middle inherits from Base. However, the exhibited behavior implies private inheritance means a lot more than that; in fact, Child has less access to Base than any namespace not in the class hierarchy!
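    A hedged sketch of one workaround (an assumption on my part): qualify the name with the global scope, so lookup finds the namespace-scope class Base rather than the inaccessible injected class name inside the hierarchy, while the unwanted this-to-Base* conversion stays blocked:

        class Child : public Middle {
        public:
            void Method() {
                ::Base::Enum e = ::Base::value;   // compiles: found via the global scope
                // Base* base = this;             // still rejected: Base is a private base
                (void)e;
            }
        };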

    Read the article

  • Attempted GCF app for Android

    - by Aaron
    I am new to Android and am trying to create a very basic app that calculates and displays the GCF of two numbers entered by the user. Here is a copy of my GCF.java: package com.example.GCF; import java.util.Arrays; import android.app.Activity; import android.os.Bundle; import android.view.View; import android.view.View.OnClickListener; import android.widget.Button; import android.widget.EditText; import android.widget.TextView; public class GCF extends Activity { private TextView mAnswer; private EditText mA, mB; private Button ok; private String A, B; private int iA, iB; public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); mA = (EditText) findViewById(R.id.entry); mB = (EditText) findViewById(R.id.entry1); ok = (Button) findViewById(R.id.ok); mAnswer = (TextView) findViewById(R.id.answer1); ok.setOnClickListener(new OnClickListener() { public void onClick(View v) { A = mA.getText().toString(); B = mB.getText().toString(); } }); // the String to int conversion happens here iA = Integer.parseInt(A.trim()); iB = Integer.parseInt(B.trim()); while (iA != iB) { int[] nums={ iA, iB, Math.abs(iA-iB) }; Arrays.sort(nums); iA=nums[0]; iB=nums[1]; } updateDisplay(); } private void updateDisplay() { mAnswer.setText( new StringBuilder().append(iA)); } } Any Suggestions? Thank you!

    Read the article

  • Python to C/C++ const char question

    - by tsukemonoki
    I am extending Python with some C++ code. One of the functions I'm using has the following signature: int PyArg_ParseTupleAndKeywords(PyObject *arg, PyObject *kwdict, char *format, char **kwlist, ...); (link: http://docs.python.org/release/1.5.2p2/ext/parseTupleAndKeywords.html) The parameter of interest is kwlist. In the link above, examples on how to use this function are given. In the examples, kwlist looks like: static char *kwlist[] = {"voltage", "state", "action", "type", NULL}; When I compile this using g++, I get the warning: warning: deprecated conversion from string constant to ‘char*’ So, I can change the static char* to a static const char*. Unfortunately, I can't change the Python code. So with this change, I get a different compilation error (can't convert char** to const char**). Based on what I've read here, I can turn on compiler flags to ignore the warning or I can cast each of the constant strings in the definition of kwlist to char *. Currently, I'm doing the latter. What are other solutions? Sorry if this question has been asked before. I'm new.
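    A hedged alternative to casting each literal (an assumption, not from the post; the surrounding variable names follow the linked documentation example and the question's signature): keep the array const-correct and const_cast only at the call, since PyArg_ParseTupleAndKeywords does not modify the keyword strings:

        static const char *kwlist[] = {"voltage", "state", "action", "type", NULL};

        int voltage;
        const char *state = "a stiff";
        const char *action = "voom";
        const char *type = "Norwegian Blue";

        // "s" stores a borrowed char* into each slot; reading it back through a
        // const char* is fine for our purposes.
        if (!PyArg_ParseTupleAndKeywords(arg, kwdict, "i|sss",
                                         const_cast<char **>(kwlist),
                                         &voltage, &state, &action, &type))
            return NULL;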

    Read the article

  • Nested bind expressions

    - by user328543
    This is a followup question to my previous question. #include <functional> int foo(void) {return 2;} class bar { public: int operator() (void) {return 3;}; int something(int a) {return a;}; }; template <class C> auto func(C&& c) -> decltype(c()) { return c(); } template <class C> int doit(C&& c) { return c();} template <class C> void func_wrapper(C&& c) { func( std::bind(doit<C>, std::forward<C>(c)) ); } int main(int argc, char* argv[]) { // call with a function pointer func(foo); func_wrapper(foo); // error // call with a member function bar b; func(b); func_wrapper(b); // call with a bind expression func(std::bind(&bar::something, b, 42)); func_wrapper(std::bind(&bar::something, b, 42)); // error // call with a lambda expression func( [](void)->int {return 42;} ); func_wrapper( [](void)->int {return 42;} ); return 0; } I'm getting a compile errors deep in the C++ headers: functional:1137: error: invalid initialization of reference of type ‘int (&)()’ from expression of type ‘int (*)()’ functional:1137: error: conversion from ‘int’ to non-scalar type ‘std::_Bind(bar, int)’ requested func_wrapper(foo) is supposed to execute func(doit(foo)). In the real code it packages the function for a thread to execute. func would the function executed by the other thread, doit sits in between to check for unhandled exceptions and to clean up. But the additional bind in func_wrapper messes things up...

    Read the article

  • Cannot implicitly convert type ...

    - by Newbie
    I have the following function public Dictionary<DateTime, object> GetAttributeList( EnumFactorType attributeType ,Thomson.Financial.Vestek.Util.DateRange dateRange) { DateTime startDate = dateRange.StartDate; DateTime endDate = dateRange.EndDate; return (( //Step 1: Iterate over the attribute list and filter the records by // the supplied attribute type from assetAttribute in AttributeCollection where assetAttribute.AttributeType.Equals(attributeType) //Step2:Assign the TimeSeriesData collection into a temporary variable let timeSeriesList = assetAttribute.TimeSeriesData //Step 3: Iterate over the TimeSeriesData list and filter the records by // the supplied date from timeSeries in timeSeriesList.ToList() where timeSeries.Key >= startDate && timeSeries.Key <= endDate //Finally build the needed collection select new AssetAttribute() { TimeSeriesData = PopulateTimeSeriesData(timeSeries.Key, timeSeries.Value) }).ToList<AssetAttribute>().Select(i => i.TimeSeriesData)); } private Dictionary<DateTime, object> PopulateTimeSeriesData(DateTime dateTime, object value) { Dictionary<DateTime, object> timeSeriesData = new Dictionary<DateTime, object>(); timeSeriesData.Add(dateTime, value); return timeSeriesData; } Error:Cannot implicitly convert type 'System.Collections.Generic.IEnumerable' to 'System.Collections.Generic.Dictionary'. An explicit conversion exists (are you missing a cast?) Using C#3.0 Please help

    Read the article

  • Heapsort not working in Python for list of strings using heapq module

    - by VSN
    I was reading the python 2.7 documentation when I came across the heapq module. I was interested in the heapify() and the heappop() methods. So, I decided to write a simple heapsort program for integers: from heapq import heapify, heappop user_input = raw_input("Enter numbers to be sorted: ") data = map (int, user_input.split(",")) new_data = [] for i in range(len(data)): heapify(data) new_data.append(heappop(data)) print new_data This worked like a charm. To make it more interesting, I thought I would take away the integer conversion and leave it as a string. Logically, it should make no difference and the code should work as it did for integers: from heapq import heapify, heappop user_input = raw_input("Enter numbers to be sorted: ") data = user_input.split(",") new_data = [] for i in range(len(data)): heapify(data) print data new_data.append(heappop(data)) print new_data Note: I added a print statement in the for loop to see the heapified list. Here's the output when I ran the script: `$ python heapsort.py Enter numbers to be sorted: 4, 3, 1, 9, 6, 2 [' 1', ' 3', ' 2', ' 9', ' 6', '4'] [' 2', ' 3', '4', ' 9', ' 6'] [' 3', ' 6', '4', ' 9'] [' 6', ' 9', '4'] [' 9', '4'] ['4'] [' 1', ' 2', ' 3', ' 6', ' 9', '4']` The reasoning I applied was that since the strings are being compared, the tree should be the same if they were numbers. As is evident, the heapify didn't work correctly after the third iteration. Could someone help me figure out if I am missing something here? I'm running Python 2.4.5 on RedHat 3.4.6-9. Thanks, VSN

    Read the article

  • User Defined Conversions in C++

    - by wash
    Recently, I was browsing through my copy of the C++ Pocket Reference from O'Reilly Media, and I was surprised when I came across a brief section and example regarding user-defined conversion for user-defined types: #include <iostream> class account { private: double balance; public: account (double b) { balance = b; } operator double (void) { return balance; } }; int main (void) { account acc(100.0); double balance = acc; std::cout << balance << std::endl; return 0; } I've been programming in C++ for awhile, and this is the first time I've ever seen this sort of operator overloading. The book's description of this subject is somewhat brief, leaving me with a few unanswered questions about this feature: Is this a particularly obscure feature? As I said, I've been programming in C++ for awhile and this is the first time I've ever come across this. I haven't had much luck finding more in-depth material regarding this. Is this relatively portable? (I'm compiling on GCC 4.1) Can user-defined conversions to user defined types be done? e.g. operator std::string () { /* code */ }
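    On the last question: yes, a conversion operator may target a user-defined type as well. A short hedged sketch (my own example, assuming std::ostringstream is acceptable for the formatting):

        #include <iostream>
        #include <sstream>
        #include <string>

        class account {
            double balance;
        public:
            account(double b) : balance(b) {}
            operator double() const { return balance; }
            // User-defined conversion to a user-defined type.
            operator std::string() const {
                std::ostringstream os;
                os << "balance: " << balance;
                return os.str();
            }
        };

        int main() {
            account acc(100.0);
            double d = acc;          // uses operator double()
            std::string s = acc;     // uses operator std::string()
            std::cout << d << " / " << s << std::endl;
            return 0;
        }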

    Read the article

  • Cannot understand the behaviour of the .NET compiler while instantiating a class through an interface (C#)

    - by Newbie
    I have a class that implements an interface. The interface is public interface IRiskFactory { void StartService(); void StopService(); } The class that implements the interface is public class RiskFactoryService : IRiskFactory { } Now I have a console application and one Windows service. From the console application, if I write the following code static void Main(string[] args) { IRiskFactory objIRiskFactory = new RiskFactoryService(); objIRiskFactory.StartService(); Console.ReadLine(); objIRiskFactory.StopService(); } it works fine. However, when I write the same piece of code in the Windows service public partial class RiskFactoryService : ServiceBase { IRiskFactory objIRiskFactory = null; public RiskFactoryService() { InitializeComponent(); objIRiskFactory = new RiskFactoryService(); <- ERROR } /// <summary> /// Starts the service /// </summary> /// <param name="args"></param> protected override void OnStart(string[] args) { objIRiskFactory.StartService(); } /// <summary> /// Stops the service /// </summary> protected override void OnStop() { objIRiskFactory.StopService(); } } it throws the error: Cannot implicitly convert type 'RiskFactoryService' to 'IRiskFactory'. An explicit conversion exists (are you missing a cast?) When I cast to the interface type, it started working: objIRiskFactory = (IRiskFactory)new RiskFactoryService(); My question is: why? Thanks. (C#)

    Read the article

  • Proc causing a random TypeError

    - by go____yourself
    I'm refactoring some code and this proc is causing an error randomly and I don't know why or how to debug it... Any ideas? New code with proc defense_moves, offense_moves = [], [] determine_move = ->move,side,i { side << move.count(move[i]) } defense.size.times { |i| determine_move.(defense, defense_moves, i) } offense.size.times { |i| determine_move.(offense, offense_moves, i) } dm = defense[defense_moves.index(defense_moves.max)].nil? ? [0] : defense[defense_moves.index(defense_moves.max)] om = offense[offense_moves.index(offense_moves.max)].nil? ? [0] : offense[offense_moves.index(offense_moves.max)] Original code: d = 0 defense_moves = [] loop do defense_moves << defense.count(defense[d]) break if defense.count(defense[d]).zero? d += 1 end o = 0 offense_moves = [] loop do offense_moves << offense.count(offense[o]) break if offense.count(offense[o]).zero? o += 1 end dm = defense[defense_moves.index(defense_moves.max)].nil? ? [0] : defense[defense_moves.index(defense_moves.max)] om = offense[offense_moves.index(offense_moves.max)].nil? ? [0] : offense[offense_moves.index(offense_moves.max)] TypeError ttt2.rb:95:in `[]': no implicit conversion from nil to integer (TypeError) from ttt2.rb:95:in `computer_make_move' from ttt2.rb:133:in `draw_board' from ttt2.rb:24:in `place' from ttt2.rb:209:in `block in start_new_game' from ttt2.rb:188:in `loop' from ttt2.rb:188:in `start_new_game' from ttt2.rb:199:in `block in start_new_game' from ttt2.rb:188:in `loop' from ttt2.rb:188:in `start_new_game' from ttt2.rb:199:in `block in start_new_game' from ttt2.rb:188:in `loop' from ttt2.rb:188:in `start_new_game' from ttt2.rb:199:in `block in start_new_game' from ttt2.rb:188:in `loop' from ttt2.rb:188:in `start_new_game' from ttt2.rb:199:in `block in start_new_game' from ttt2.rb:188:in `loop' from ttt2.rb:188:in `start_new_game' from ttt2.rb:234:in `<main>'

    Read the article

  • ASP.NET MVC3 ValueProvider drops string input to a double property

    - by Daniel Koverman
    I'm attempting to validate the input of a text box which corresponds to a property of type double in my model. If the user inputs "foo" I want to know about it so I can display an error. However, the ValueProvider is dropping the value silently (no errors are added to the ModelState). In a normal submission, I fill in "2" for the text box corresponding to myDouble and submit the form. Inspecting controllerContext.HttpContext.Request.Form shows that myDouble=2, among other correct inputs. bindingContext.ValueProvider.GetValue("myDouble") == 2, as expected. The bindingContext.ModelState.Count == 6 and bindingContext.ModelState["myDouble"].Errors.Count == 0. Everything is good and the model binds as expected. Then I fill in "foo" for the text box corresponding to myDouble and submitted the form. Inspecting controllerContext.HttpContext.Request.Form shows that myDouble=foo, which is what I expected. However, bindingContext.ValueProvider.GetValue("myDouble") == null and bindingContext.ModelState.Count == 5 (The exact number isn't important, but it's one less than before). Looking at the ValueProvider, is as if myDouble was never submitted and the model binding occurs as if it wasn't. This makes it difficult to differentiate between a bad input and no input. Is this the expected behavior of ValueProvider? Is there a way to get ValueProvider to report when conversion fails without implementing a custom ValueProvider? Thanks!

    Read the article

  • Detecting well behaved / well known bots

    - by Simon_Weaver
    I found this question very interesting : Programmatic Bot Detection I have a very similar question, but I'm not bothered about 'badly behaved bots'. I am tracking (in addition to google analytics) the following per visit : Entry URL Referer UserAgent Adwords (by means of query string) Whether or not the user made a purchase etc. The problem is that to calculate any kind of conversion rate I'm ending up with lots of 'bot' visits that are greatly skewing my results. I'd like to ignore as many as possible bot visits, but I want a solution that I don't need to monitor too closely, and that won't in itself be a performance hog and preferably still work if someone has javascript disabled. Are there good published lists of the top 100 bots or so? I did find a list at http://www.user-agents.org/ but that appears to contain hundreds if not thousands of bots. I don't want to check every referer against thousands of links. Here is the current googlebot UserAgent. How often does it change? Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)

    Read the article

  • Pass object from webserver to client

    - by user362914
    I developed a C# web application that calls a web-service which returns a base64 encoded array (PDF file). I then convert that array into a UCOMIStream object (I know it is obsolete, but the DLL that I am using requires it as a parameter). I use the following code to do the conversion which works perfectly. I can pass this object to the DLL so that I can print the PDF. This works great on the Webserver, but the requirement is to print it locally. Byte[] bBuffer = statementOut.statementcycle.statementdata.content; int size = bBuffer.Length; IntPtr mem = Marshal.AllocHGlobal(size); Marshal.Copy(bBuffer, 0, mem, size); // Create an OLE Stream object. System.Runtime.InteropServices.UCOMIStream str; //obsolete but the createstreamonhglobal outputs it CreateStreamOnHGlobal(mem, true, out str); The DLL resides on the client so I am able to use ActiveX to create the object using javascript and/or VBscript;however, I have not been able to figure out how to get the stream object to the client to pass to the DLL. How can this be achieved?

    Read the article

  • Writing a generic function that can take a Writer as well as an OutputStream

    - by ebruchez
    I wrote a couple of functions that look like this: def myWrite(os: OutputStream) = {} def myWrite(w: Writer) = {} Now both are very similar and I thought I would try to write a single parametrized version of the function. I started with a type with the two methods that are common in the Java OutputStream and Writer: type Writable[T] = { def close() : Unit def write(cbuf: Array[T], off: Int, len: Int): Unit } One issue is that OutputStream writes Byte and Writer writes Char, so I parametrized the type with T. Then I write my function: def myWrite[T, A[T] <: Writable[T]](out: A[T]) = {} and try to use it: val w = new java.io.StringWriter() myWrite(w) Result: <console>:9: error: type mismatch; found : java.io.StringWriter required: ?A[ ?T ] Note that implicit conversions are not applicable because they are ambiguous: both method any2ArrowAssoc in object Predef of type [A](x: A)ArrowAssoc[A] and method any2Ensuring in object Predef of type [A](x: A)Ensuring[A] are possible conversion functions from java.io.StringWriter to ?A[ ?T ] myWrite(w) I tried a few other combinations of types and parameters, to no avail so far. My question is whether there is a way of achieving this at all, and if so how. (Note that the implementation of myWrite will need, internally, to know the type T that parametrizes the write() method, because it needs to create a buffer as in new ArrayT.)

    Read the article
