Search Results

Search found 5377 results on 216 pages for 'explicit cast operator'.


  • Generic InBetween Function.

    - by Luiscencio
    I am tired of writing x > min && x < max, so I want to write a simple function, but I am not sure if I am doing it right... actually I know I am not, because I get an error: bool inBetween<T>(T x, T min, T max) where T:IComparable { return (x > min && x < max); } Errors: Operator '>' cannot be applied to operands of type 'T' and 'T'; Operator '<' cannot be applied to operands of type 'T' and 'T'. Maybe I have a bad understanding of the where clause in the function declaration. Note: for those who are going to tell me that I will be writing more code than before... think of readability =) Any help will be appreciated. EDIT: deleted because it was resolved =) ANOTHER EDIT: after some headache I came up with this (umm) thing, following @Jay's idea of extreme readability: public static class test { public static comparision Between<T>(this T a,T b) where T : IComparable { var ttt = new comparision(); ttt.init(a); ttt.result = a.CompareTo(b) > 0; return ttt; } public static bool And<T>(this comparision state, T c) where T : IComparable { return state.a.CompareTo(c) < 0 && state.result; } public class comparision { public IComparable a; public bool result; public void init<T>(T ia) where T : IComparable { a = ia; } } } Now you can compare anything with extreme readability =) What do you think? I am no performance guru, so any tweaks are welcome.

    Read the article

  • What's the recommended implementation for hashing OLE Variants?

    - by Barry Kelly
    OLE Variants, as used by older versions of Visual Basic and pervasively in COM Automation, can store lots of different types: basic types like integers and floats, more complicated types like strings and arrays, and all the way up to IDispatch implementations and pointers in the form of ByRef variants. Variants are also weakly typed: they convert the value to another type without warning depending on which operator you apply and what the current types are of the values passed to the operator. For example, comparing two variants, one containing the integer 1 and another containing the string "1", for equality will return True. So assuming that I'm working with variants at the underlying data level (e.g. VARIANT in C++ or TVarData in Delphi - i.e. the big union of different possible values), how should I hash variants consistently so that they obey the right rules? Rules: Variants that hash unequally should compare as unequal, both in sorting and direct equality Variants that compare as equal for both sorting and direct equality should hash as equal It's OK if I have to use different sorting and direct comparison rules in order to make the hashing fit. The way I'm currently working is I'm normalizing the variants to strings (if they fit), and treating them as strings, otherwise I'm working with the variant data as if it was an opaque blob, and hashing and comparing its raw bytes. That has some limitations, of course: numbers 1..10 sort as [1, 10, 2, ... 9] etc. This is mildly annoying, but it is consistent and it is very little work. However, I do wonder if there is an accepted practice for this problem.
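
    A minimal illustration of the "normalise, then hash" approach described above, using a toy tagged union rather than the real OLE VARIANT API (the type and function names here are invented for the example): values that should compare equal, such as the integer 1 and the string "1", are first reduced to a single canonical form, and only that form is hashed, so the "values that compare equal hash equal" rule holds by construction.

        #include <cstdio>
        #include <functional>
        #include <string>

        // Toy stand-in for a variant: either an integer or a string.
        struct ToyVariant {
            enum Kind { Int, Str } kind;
            long long i;
            std::string s;
        };

        // Canonical form used for both comparison and hashing: the decimal text of the value.
        static std::string canonical(const ToyVariant& v) {
            return v.kind == ToyVariant::Int ? std::to_string(v.i) : v.s;
        }

        static std::size_t hash_variant(const ToyVariant& v) {
            return std::hash<std::string>()(canonical(v));   // equal canonical forms => equal hashes
        }

        int main() {
            ToyVariant a = { ToyVariant::Int, 1, "" };
            ToyVariant b = { ToyVariant::Str, 0, "1" };
            std::printf("%zu %zu\n", hash_variant(a), hash_variant(b));   // same value printed twice
        }

    The price is the one the question already accepts: ordering follows the canonical text, so numbers sort lexicographically unless the canonical form is chosen more carefully.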

    Read the article

  • Unable to send '+' through AJAX post?

    - by Harish Kurup
    I am using the Ajax POST method to send data, but I am not able to send '+' to the server (i.e. if I want to send 1+ or 20k+ it will only send 1 or 20k... the '+' is just wiped out). The HTML code goes here: <form method='post' onsubmit='return false;' action='#'> <input type='input' name='salary' id='salary' /> <input type='submit' onclick='submitVal();' /> </form> and the JavaScript code goes here: function submitVal() { var sal=document.getElementById("salary").value; alert(sal); var request=getHttpRequest(); request.open('post','updateSal.php',false); request.setRequestHeader("Content-Type","application/x-www-form-urlencoded"); request.send("sal="+sal); if(request.readyState == 4) { alert("update"); } } function getHttpRequest() { var request=false; if(window.XMLHttpRequest) { request=new XMLHttpRequest(); } else if(window.ActiveXObject) { try { request=new ActiveXObject("Msxml2.XMLHTTP"); } catch(e) { try { request=new ActiveXObject("Microsoft.XMLHTTP"); } catch(e) { request=false; } } } return request; } In the function submitVal() it first alerts the salary value as it is (if 1+ then it alerts 1+), but when it is posted it just posts the value without the '+', which is needed... Is it a problem with the query string? The PHP backend code is working fine...
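
    The root cause is the encoding, not Ajax itself: in an application/x-www-form-urlencoded body a literal '+' is decoded as a space on the server, so the value has to be percent-encoded before it is sent; in the browser, encodeURIComponent(sal) does this. The encoding rule itself is tiny; here is a sketch of it, written in C++ purely to illustrate what the encoder has to do:

        #include <cstdio>
        #include <string>

        // Percent-encode everything outside the unreserved set, including '+'.
        static std::string url_encode(const std::string& in) {
            static const char hex[] = "0123456789ABCDEF";
            std::string out;
            for (unsigned char c : in) {
                if ((c >= '0' && c <= '9') || (c >= 'A' && c <= 'Z') ||
                    (c >= 'a' && c <= 'z') || c == '-' || c == '_' || c == '.' || c == '~') {
                    out += static_cast<char>(c);     // unreserved characters pass through
                } else {
                    out += '%';                      // '+' becomes %2B, space becomes %20, etc.
                    out += hex[c >> 4];
                    out += hex[c & 0x0F];
                }
            }
            return out;
        }

        int main() {
            std::printf("%s\n", url_encode("20k+").c_str());   // prints 20k%2B
        }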

    Read the article

  • Mutual class instances in C++

    - by SepiDev
    Hi guys. What is the issue with this code? Here we have two files: classA.h and classB.h classA.h: #ifndef _class_a_h_ #define _class_a_h_ #include "classB.h" class B; //???? class A { public: A() { ptr_b = new B(); //???? } virtual ~A() { if(ptr_b) delete ptr_b; //???? num_a = 0; } int num_a; B* ptr_b; //???? }; #endif //_class_a_h_ classB.h: #ifndef _class_b_h_ #define _class_b_h_ #include "classA.h" class A; //???? class B { public: B() { ptr_a = new A(); //???? num_b = 0; } virtual ~B() { if(ptr_a) delete ptr_a; //???? } int num_b; A* ptr_a; //???? }; #endif //_class_b_h_ when I try to compile it, the compiler (g++) says: classB.h: In constructor ‘B::B()’: classB.h:12: error: invalid use of incomplete type ‘struct A’ classB.h:6: error: forward declaration of ‘struct A’ classB.h: In destructor ‘virtual B::~B()’: classB.h:16: warning: possible problem detected in invocation of delete operator: classB.h:16: warning: invalid use of incomplete type ‘struct A’ classB.h:6: warning: forward declaration of ‘struct A’ classB.h:16: note: neither the destructor nor the class-specific operator delete will be called, even if they are declared when the class is defined.
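
    For reference, a sketch of the usual way out of this cycle (one possible fix, not the only one): each header keeps only a forward declaration of the other class, and every member that needs the complete type -- the new in the constructor, the delete in the destructor -- moves into a .cpp file where both headers can be included. Note that the original constructors would also recurse forever (A's constructor news a B, whose constructor news an A), so only one side should allocate the other.

        // classA.h -- no #include "classB.h" here, only a forward declaration.
        #ifndef CLASS_A_H
        #define CLASS_A_H
        class B;
        class A {
        public:
            A();
            virtual ~A();
            int num_a;
            B* ptr_b;
        };
        #endif

        // classA.cpp -- the complete definition of B is visible here.
        #include "classA.h"
        #include "classB.h"
        A::A() : num_a(0), ptr_b(new B()) {}
        A::~A() { delete ptr_b; }   // B is a complete type here, so ~B() runs correctly

    classB.h and classB.cpp mirror this, except that B's constructor should not allocate an A of its own (take a pointer from outside instead), otherwise construction never terminates.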

    Read the article

  • Implicit Type Conversions in Reflection

    - by bradhe
    So I've written a quick bit of code to help quickly convert between business objects and view models. Not to pimp my own blog, but you can find the details here if you're interested or need to know. One issue I've run into is that I have a custom collection type, ProductCollection, and I need to turn that into a string[] on my model. Obviously, since there is no default implicit cast, I'm getting an exception in my contract converter. So, I thought I would write the following little bit of code, which should solve the problem: public static implicit operator string[](ProductCollection collection) { var list = new List<string>(); foreach (var product in collection) { if (product.Id == null) { list.Add(null); } else { list.Add(product.Id.ToString()); } } return list.ToArray(); } However, it still fails with the same cast exception. I'm wondering if it has something to do with going through reflection? If so, is there anything that I can do here? I'm open to architectural solutions, too!

    Read the article

  • How would I code a complex formula parser manually?

    - by StormianRootSolver
    Hm, this is language-agnostic. I would prefer doing it in C# or F#, but I'm more interested this time in the question "how would that work anyway". What I want to accomplish is: a) I want to LEARN it - it's about my ego this time, it's for a fun project where I want to show myself that I'm really good at this stuff b) I know a tiny little bit about EBNF (although I don't know yet how operator precedence works in EBNF - Irony.NET does it right, I checked the examples, but this is a bit ominous to me) c) My parser should be able to take this: 5 * (3 + (2 - 9 * (5 / 7)) + 9) for example and give me the right results d) To be quite frank, this seems to be the biggest problem in writing a compiler or even an interpreter for me. I would have no problem generating even 64 bit assembler code (I CAN write assembler manually), but the formula parser... e) Another thought: even simple computers (like my old Sharp 1246S with only about 2kB of RAM) can do that... it can't be THAT hard, right? And even very, very old programming languages have formula evaluation... BASIC is from 1964 and it could already calculate the kind of formula I presented as an example f) A few ideas, a few inspirations would be really enough - I just have no clue how to do operator precedence and the parentheses - I DO, however, know that it involves an AST and that many people use a stack So, what do you think?
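
    For a concrete starting point, here is a sketch of the classic recursive-descent answer: one function per precedence level, with parentheses handled in the lowest rule (the other common answer is Dijkstra's shunting-yard algorithm, which is the explicit operator stack mentioned above). Building an AST is the same structure, with node construction in place of the arithmetic.

        #include <cctype>
        #include <cstdio>
        #include <cstdlib>

        // Grammar: expr   := term (('+'|'-') term)*
        //          term   := factor (('*'|'/') factor)*
        //          factor := number | '(' expr ')'
        struct Parser {
            const char* p;
            void skip() { while (std::isspace((unsigned char)*p)) ++p; }
            double expr() {
                double v = term();
                for (skip(); *p == '+' || *p == '-'; skip()) {
                    char op = *p++;
                    double r = term();
                    v = (op == '+') ? v + r : v - r;
                }
                return v;
            }
            double term() {
                double v = factor();
                for (skip(); *p == '*' || *p == '/'; skip()) {
                    char op = *p++;
                    double r = factor();
                    v = (op == '*') ? v * r : v / r;
                }
                return v;
            }
            double factor() {
                skip();
                if (*p == '(') { ++p; double v = expr(); skip(); ++p; return v; }   // ++p consumes ')'
                char* end;
                double v = std::strtod(p, &end);
                p = end;
                return v;
            }
        };

        int main() {
            Parser parser = { "5 * (3 + (2 - 9 * (5 / 7)) + 9)" };
            std::printf("%g\n", parser.expr());   // prints 37.8571
        }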

    Read the article

  • Accept templated parameter of stl_container_type<string>::iterator

    - by Rodion Ingles
    I have a function that takes a container holding strings (e.g. vector<string>, set<string>, list<string>) and, given a start iterator and an end iterator, goes through the iterator range processing the strings. Currently the function is declared like this: template< typename ContainerIter> void ProcessStrings(ContainerIter begin, ContainerIter end); Now this will accept any type which conforms to the implicit interface of implementing operator*, prefix operator++ and whatever other calls are in the function body. What I really want to do is have a definition like the one below which explicitly restricts the accepted input (pseudocode warning): template< typename Container<string>::iterator> void ProcessStrings(Container<string>::iterator begin, Container<string>::iterator end); so that I can use it as such: vector<string> str_vec; list<string> str_list; set<SomeOtherClass> so_set; ProcessStrings(str_vec.begin(), str_vec.end()); // OK ProcessStrings(str_list.begin(), str_list.end()); //OK ProcessStrings(so_set.begin(), so_set.end()); // Error Essentially, what I am trying to do is restrict the function specification to make it obvious to a user of the function what it accepts, and if the code fails to compile they get a message saying that they are using the wrong parameter types, rather than an error from inside the function body saying that XXX function could not be found for XXX class.
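
    One way to get that restriction without changing the call sites (a sketch; it assumes C++11 -- under C++03 the same effect needs enable_if or a Boost/tr1 static assertion): keep the single iterator parameter, but check its value_type through std::iterator_traits so that a wrong container fails with a message naming the real requirement.

        #include <iterator>
        #include <list>
        #include <set>
        #include <string>
        #include <type_traits>
        #include <vector>

        template <typename ContainerIter>
        void ProcessStrings(ContainerIter begin, ContainerIter end) {
            typedef typename std::iterator_traits<ContainerIter>::value_type value_type;
            static_assert(std::is_same<value_type, std::string>::value,
                          "ProcessStrings requires iterators over std::string");
            for (; begin != end; ++begin) { /* process *begin */ }
        }

        int main() {
            std::vector<std::string> str_vec;
            std::list<std::string> str_list;
            std::set<int> so_set;                              // stands in for set<SomeOtherClass>
            ProcessStrings(str_vec.begin(), str_vec.end());    // OK
            ProcessStrings(str_list.begin(), str_list.end());  // OK
            // ProcessStrings(so_set.begin(), so_set.end());   // fails with the static_assert message
        }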

    Read the article

  • XPath ordered priority attribute search

    - by user94000
    I want to write an XPath that can return some link elements on an HTML DOM. The syntax is wrong, but here is the gist of what I want: //web:link[@text='Login' THEN_TRY @href='login.php' THEN_TRY @index=0] THEN_TRY is a made-up operator, because I can't find what operator(s) to use. If many links exist on the page for the given set of [attribute=name] pairs, the link which matches the most left-most attribute(s) should be returned instead of any others. For example, consider a case where the above example XPath finds 3 links that match any of the given attributes: link A: text='Sign In', href='Login.php', index=0 link B: text='Login', href='Signin.php', index=15 link C: text='Login', href='Login.php', index=22 Link C ranks as the best match because it matches the First and Second attributes. Link B ranks second because it only matches the First attribute. Link A ranks last because it does not match the First attribute; it only matches the Second and Third attributes. The XPath should return the best match, Link C. If more than one link were tied for "best match", the XPath should return the first best link that it found on the page.

    Read the article

  • How do castings actually work at the CLR level?

    - by devoured elysium
    When doing an upcast or downcast, what really happens behind the scenes? I had the idea that when doing something like: string myString = "abc"; object myObject = myString; string myStringBack = (string)myObject; the cast in the last line would have as its only purpose to tell the compiler that we are sure we are not doing anything wrong. So, I had the idea that actually no casting code would be embedded in the code itself. It seems I was wrong: .maxstack 1 .locals init ( [0] string myString, [1] object myObject, [2] string myStringBack) L_0000: nop L_0001: ldstr "abc" L_0006: stloc.0 L_0007: ldloc.0 L_0008: stloc.1 L_0009: ldloc.1 L_000a: castclass string L_000f: stloc.2 L_0010: ret Why does the CLR need something like castclass string? There are two possible implementations for a downcast: You require a castclass something. When you get to the line of code that does a castclass, the CLR tries to make the cast. But then, what would happen had I omitted the castclass string line and tried to run the code? You don't require a castclass. As all reference types have a similar internal structure, if you try to use a string on a Form instance, it will throw a wrong-usage exception (because it detects a Form is not a string or any of its subtypes). Also, is the following statement from C# 4.0 in a Nutshell correct? Upcasting and downcasting between compatible reference types performs reference conversions: a new reference is created that points to the same object. Does it really create a new reference? I thought it'd be the same reference, only stored in a different type of variable. Thanks

    Read the article

  • How to avoid the following purify detected memory leak in C++?

    - by Abhijeet
    Hi, I am getting the following memory leak. It is probably being caused by std::string. How can I avoid it? PLK: 23 bytes potentially leaked at 0xeb68278 * Suppressed in /vobs/ubtssw_brrm/test/testcases/.purify [line 3] * This memory was allocated from: malloc [/vobs/ubtssw_brrm/test/test_build/linux-x86/rtlib.o] operator new(unsigned) [/vobs/MontaVista/Linux/montavista/pro/devkit/x86/586/target/usr/lib/libstdc++.so.6] operator new(unsigned) [/vobs/ubtssw_brrm/test/test_build/linux-x86/rtlib.o] std::string<char, std::char_traits<char>, std::allocator<char>>::_Rep::_S_create(unsigned, unsigned, std::allocator<char> const&) [/vobs/MontaVista/Linux/montavista/pro/devkit/x86/586/target/usr/lib/libstdc++.so.6] std::string<char, std::char_traits<char>, std::allocator<char>>::_Rep::_M_clone(std::allocator<char> const&, unsigned) [/vobs/MontaVista/Linux/montavista/pro/devkit/x86/586/target/usr/lib/libstdc++.so.6] std::string<char, std::char_traits<char>, std::allocator<char>>::string<char, std::char_traits<char>, std::allocator<char>>(std::string<char, std::char_traits<char>, std::allocator<char>> const&) [/vobs/MontaVista/Linux/montavista/pro/devkit/x86/586/target/usr/lib/libstdc++.so.6] uec_UEDir::getEntryToUpdateAfterInsertion(rcapi_ImsiGsmMap const&, rcapi_ImsiGsmMap&, std::_Rb_tree_iterator<std::pair<std::string<char, std::char_traits<char>, std::allocator<char>> const, UEDirData >>&) [/vobs/ubtssw_brrm/uectrl/linux-x86/../src/uec_UEDir.cc:2278] uec_UEDir::addUpdate(rcapi_ImsiGsmMap const&, LocalUEDirInfo&, rcapi_ImsiGsmMap&, int, unsigned char) [/vobs/ubtssw_brrm/uectrl/linux-x86/../src/uec_UEDir.cc:282] ucx_UEDirHandler::addUpdateUEDir(rcapi_ImsiGsmMap, UEDirUpdateType, acap_PresenceEvent) [/vobs/ubtssw_brrm/ucx/linux-x86/../src/ucx_UEDirHandler.cc:374]

    Read the article

  • Potential problem with C standard malloc'ing chars.

    - by paxdiablo
    When answering a comment to another answer of mine here, I found what I think may be a hole in the C standard (c1x, I haven't checked the earlier ones and yes, I know it's incredibly unlikely that I alone among all the planet's inhabitants have found a bug in the standard). Information follows: Section 6.5.3.4 ("The sizeof operator") para 2 states "The sizeof operator yields the size (in bytes) of its operand". Para 3 of that section states: "When applied to an operand that has type char, unsigned char, or signed char, (or a qualified version thereof) the result is 1". Section 7.20.3.3 describes void *malloc(size_t sz) but all it says is "The malloc function allocates space for an object whose size is specified by size and whose value is indeterminate". It makes no mention at all of what units are used for the argument. Annex E states that 8 is the minimum value for CHAR_BIT, so chars can be more than one byte in length. My question is simply this: In an environment where a char is 16 bits wide, will malloc(10 * sizeof(char)) allocate 10 chars (20 bytes) or 10 bytes? Point 1 above seems to indicate the former, point 2 indicates the latter. Anyone with more C-standard-fu than me have an answer for this?
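
    For what it's worth, the usual resolution is that there is no gap: the standard defines "byte" in terms of char, so sizeof(char) is 1 even when CHAR_BIT is 16, and malloc's size argument is counted in those same (possibly 16-bit) bytes. malloc(10 * sizeof(char)) therefore always requests exactly 10 chars; "10 bytes" and "10 chars" are the same thing on every conforming implementation, they just may not be 10 octets. A trivial check:

        #include <climits>
        #include <cstdio>
        #include <cstdlib>

        int main() {
            std::printf("CHAR_BIT     = %d\n", CHAR_BIT);        // at least 8, may be 16 on a DSP
            std::printf("sizeof(char) = %zu\n", sizeof(char));   // always 1, by definition
            char* p = (char*)std::malloc(10 * sizeof(char));     // room for exactly 10 chars
            if (p) {
                p[9] = 'x';                                      // last valid element
                std::free(p);
            }
            return 0;
        }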

    Read the article

  • Groovy as a substitute for Java when using BigDecimal?

    - by geejay
    I have just completed an evaluation of Java, Groovy and Scala. The factors I considered were: readability, precision The factors I would like to know: performance, ease of integration I needed a BigDecimal level of precision. Here are my results: Java void someOp() { BigDecimal del_theta_1 = toDec(6); BigDecimal del_theta_2 = toDec(2); BigDecimal del_theta_m = toDec(0); del_theta_m = abs(del_theta_1.subtract(del_theta_2)) .divide(log(del_theta_1.divide(del_theta_2))); } Groovy void someOp() { def del_theta_1 = 6.0 def del_theta_2 = 2.0 def del_theta_m = 0.0 del_theta_m = Math.abs(del_theta_1 - del_theta_2) / Math.log(del_theta_1 / del_theta_2); } Scala def other(){ var del_theta_1 = toDec(6); var del_theta_2 = toDec(2); var del_theta_m = toDec(0); del_theta_m = ( abs(del_theta_1 - del_theta_2) / log(del_theta_1 / del_theta_2) ) } Note that in Java and Scala I used static imports. Java: Pros: it is Java Cons: no operator overloading (lots o methods), barely readable/codeable Groovy: Pros: default BigDecimal means no visible typing, least surprising BigDecimal support for all operations (division included) Cons: another language to learn Scala: Pros: has operator overloading for BigDecimal Cons: some surprising behaviour with division (fixed with Decimal128), another language to learn

    Read the article

  • Is there any performance issue using Row_Number to implement table paging in Sql Server 2008?

    - by majkinetor
    I want to implement table paging using this method: SET @PageNum = 2; SET @PageSize = 10; WITH OrdersRN AS ( SELECT ROW_NUMBER() OVER(ORDER BY OrderDate, OrderID) AS RowNum ,* FROM dbo.Orders ) SELECT * FROM OrdersRN WHERE RowNum BETWEEN (@PageNum - 1) * @PageSize + 1 AND @PageNum * @PageSize ORDER BY OrderDate ,OrderID; Is there anything I should be aware of? The table has millions of records. Thanks. EDIT: After using the suggested MAXROWS method for some time (which works really really fast) I had to switch back to the ROW_NUMBER method because of its greater flexibility. I am also very happy about its speed so far (I am working with a view having more than 1M records with 10 columns). To use any kind of query I use the following modification: PROCEDURE [dbo].[PageSelect] ( @Sql nvarchar(512), @OrderBy nvarchar(128) = 'Id', @PageNum int = 1, @PageSize int = 0 ) AS BEGIN SET NOCOUNT ON Declare @tsql as nvarchar(1024) Declare @i int, @j int if (@PageSize <= 0) OR (@PageSize > 10000) SET @PageSize = 10000 -- never return more than 10K records SET @i = (@PageNum - 1) * @PageSize + 1 SET @j = @PageNum * @PageSize SET @tsql = 'WITH MyTableOrViewRN AS ( SELECT ROW_NUMBER() OVER(ORDER BY ' + @OrderBy + ') AS RowNum ,* FROM MyTableOrView WHERE ' + @Sql + ' ) SELECT * FROM MyTableOrViewRN WHERE RowNum BETWEEN ' + CAST(@i as varchar) + ' AND ' + cast(@j as varchar) exec(@tsql) END If you use this procedure, make sure you prevent SQL injection.

    Read the article

  • User defined literal arguments are not constexpr?

    - by Pubby
    I'm testing out user defined literals. I want to make _fac return the factorial of the number. Having it call a constexpr function works; however, it doesn't let me do it with templates, as the compiler complains that the arguments are not and cannot be constexpr. I'm confused by this - aren't literals constant expressions? The 5 in 5_fac is always a literal that can be evaluated during compile time, so why can't I use it as such? First method: constexpr int factorial_function(int x) { return (x > 0) ? x * factorial_function(x - 1) : 1; } constexpr int operator "" _fac(unsigned long long x) { return factorial_function(x); // this works } Second method: template <int N> struct factorial { static const unsigned int value = N * factorial<N - 1>::value; }; template <> struct factorial<0> { static const unsigned int value = 1; }; constexpr int operator "" _fac(unsigned long long x) { return factorial<x>::value; // doesn't work - x is not a constexpr }
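
    The standard's answer here is the literal operator template, which receives the digits of the literal as a template parameter pack -- so they really are usable as constant expressions -- instead of as a runtime unsigned long long. A sketch of _fac built that way (the digit-folding helper is hand-rolled just for this example):

        #include <cstdio>

        template <unsigned long long N>
        struct factorial {
            static const unsigned long long value = N * factorial<N - 1>::value;
        };
        template <>
        struct factorial<0> {
            static const unsigned long long value = 1;
        };

        // Fold the character pack of the literal (e.g. <'5'>) into a number at compile time.
        template <unsigned long long Acc>
        constexpr unsigned long long parse_digits() { return Acc; }
        template <unsigned long long Acc, char C, char... Rest>
        constexpr unsigned long long parse_digits() {
            return parse_digits<Acc * 10 + (C - '0'), Rest...>();
        }

        template <char... Cs>
        constexpr unsigned long long operator"" _fac() {
            return factorial<parse_digits<0, Cs...>()>::value;
        }

        int main() {
            static_assert(5_fac == 120, "evaluated at compile time");
            std::printf("%llu\n", 5_fac);   // 120
        }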

    Read the article

  • OOP C# Question: Making a Fruit a Pear

    - by Adam Kane
    Given that I have an instance of Fruit with some properties set, and I want to get those properties into a new Pear instance (because this particular Fruit happens to have the qualities of a pear), what's the best way to achieve this effect? For example, what we can't do is simply cast a Fruit to a Pear, because not all Fruits are Pears: public static class PearGenerator { public static Pear CreatePear () { // Make a new generic fruit. Fruit genericFruit = new Fruit(); // Try to downcast it to a Pear. (Throws exception: Can't cast a Fruit to a Pear.) Pear pear = (Pear)genericFruit; // Return freshly grown pear. return ( pear ); } } public class Fruit { // some code } public class Pear : Fruit { public void PutInPie () { // some code } } Thanks! Update: I don't control the "new Fruit()" code. My starting point is that I've got a Fruit to work with. I need to get that Fruit into a new Pear somehow. Maybe copy all the properties one by one?

    Read the article

  • LLVM: Passing a pointer to a struct, which holds a pointer to a function, to a JIT function

    - by Rusky
    I have an LLVM (version 2.7) module with a function that takes a pointer to a struct. That struct contains a function pointer to a C++ function. The module function is going to be JIT-compiled, and I need to build that struct in C++ using the LLVM API. I can't seem to get the pointer to the function as an LLVM value, let alone pass a pointer to the ConstantStruct that I can't build. I'm not sure if I'm even on the right track, but this is what I have so far: void print(char*); vector<Constant*> functions; functions.push_back(ConstantExpr::getIntToPtr( ConstantInt::get(Type::getInt32Ty(context), (int)print), /* function pointer type here, FunctionType::get(...) doesn't seem to work */ )); ConstantStruct* struct = cast<ConstantStruct>(ConstantStruct::get( cast<StructType>(m->getTypeByName("printer")), functions )); Function* main = m->getFunction("main"); vector<GenericValue> args; args[0].PointerVal = /* not sure what goes here */ ee->runFunction(main, args);

    Read the article

  • Java RMI Proxy issue

    - by Antony Lewis
    I am getting this error: java.lang.ClassCastException: $Proxy0 cannot be cast to rmi.engine.Call at Main.main(Main.java:39) My Abstract and Call types both extend Remote. Call: public class Call extends UnicastRemoteObject implements rmi.engine.Abstract { public Call() throws Exception { super(Store.PORT, new RClient(), new RServer()); } public String getHello() { System.out.println("CONN"); return "HEY"; } } Abstract: public interface Abstract extends Remote { String getHello() throws RemoteException; } this is my main: public static void main(String[] args) { if (args.length == 0) { try { System.out.println("We are slave "); InetAddress ip = InetAddress.getLocalHost(); Registry rr = LocateRegistry.getRegistry(ip.getHostAddress(), Store.PORT, new RClient()); Object ss = rr.lookup("FILLER"); System.out.println(ss.getClass().getCanonicalName()); System.out.println(((Call)ss).getHello()); } catch (Exception e) { e.printStackTrace(); } } else { if (args[0].equals("master")) { // Start Master try { RMIServer.start(); } catch (Exception e) { e.printStackTrace(); } } NetBeans says the problem is on line 39, which is System.out.println(((Call)ss).getHello()); the output looks like this: run: We are slave Connecting 10.0.0.212:5225 $Proxy0 java.lang.ClassCastException: $Proxy0 cannot be cast to rmi.engine.Call at Main.main(Main.java:39) BUILD SUCCESSFUL (total time: 1 second) I am running a master in cmd listening on port 5225.

    Read the article

  • Google Adwords API response parse

    - by Yun Ling
    I am trying to figure out how to parse the AdWords API query response without exceptions, and one issue that I came across is that sometimes the data itself contains a comma besides the comma between each column. Say I do a query on AdGroup, Campaign and Impressions by using <reportDefinition xmlns="https://adwords.google.com/api/adwords/cm/v201209"> <selector> <fields>CampaignName</fields> <fields>AdgroupName</fields> <fields>Impressions</fields> <predicates> <field>Status</field> <operator>IN</operator> <values>ENABLED</values> <values>PAUSED</values> </predicates> </selector> <reportName>Custom Adgroup Performance Report</reportName> <reportType>ADGROUP_PERFORMANCE_REPORT</reportType> <dateRangeType>LAST_7_DAYS</dateRangeType> <downloadFormat>CSV</downloadFormat> </reportDefinition> Since my campaign name has a comma within the string, like below: "Adgroup,Campaign,Impressions, Premium Beer, Beer, Chicago, 1000" where the adgroup is "Premium Beer" and the campaign is "Beer,Chicago", that will cause an issue if we parse this information by splitting on commas. Does anyone know how to solve this problem?

    Read the article

  • What C++ template issue is going on with this error?

    - by WilliamKF
    Running gcc v3.4.6 on the Botan v1.8.8 I get the following compile time error building my application after successfully building Botan and running its self test: ../../src/Botan-1.8.8/build/include/botan/secmem.h: In member function `Botan::MemoryVector<T>& Botan::MemoryVector<T>::operator=(const Botan::MemoryRegion<T>&)': ../../src/Botan-1.8.8/build/include/botan/secmem.h:310: error: missing template arguments before '(' token What is this compiler error telling me? Here is a snippet of secmem.h that includes line 130: [...] /** * This class represents variable length buffers that do not * make use of memory locking. */ template<typename T> class MemoryVector : public MemoryRegion<T> { public: /** * Copy the contents of another buffer into this buffer. * @param in the buffer to copy the contents from * @return a reference to *this */ MemoryVector<T>& operator=(const MemoryRegion<T>& in) { if(this != &in) set(in); return (*this); } // This is line 130! [...]
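
    That message is usually a two-phase-lookup symptom rather than a Botan bug as such: inside a class template, an unqualified name is not looked up in a dependent base class, so if some other declaration called set is visible at that point -- std::set dragged in by a using namespace std; before the header is included is the classic trigger -- the compiler resolves set(in) to the std::set class template and then complains about missing template arguments. A minimal repro and the usual fix (qualify the call with this->), sketched independently of Botan's real code:

        #include <set>
        using namespace std;                 // makes the plain name `set` mean std::set

        template <typename T>
        struct MemoryRegion {
            void set(const MemoryRegion<T>&) {}
        };

        template <typename T>
        struct MemoryVector : public MemoryRegion<T> {
            MemoryVector<T>& operator=(const MemoryRegion<T>& in) {
                // set(in);                  // error: missing template arguments before '(' token
                this->set(in);               // defers lookup to instantiation, finds the base member
                return *this;
            }
        };

        int main() {
            MemoryVector<int> a;
            MemoryRegion<int> b;
            a = b;                           // instantiates the operator; compiles with this->set
        }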

    Read the article

  • Omit return type in C++0x

    - by Clinton
    I've recently found myself using the following macro with gcc 4.5 in C++0x mode: #define RETURN(x) -> decltype(x) { return x; } And writing functions like this: template <class T> auto f(T&& x) RETURN (( g(h(std::forward<T>(x))) )) I've been doing this to avoid the inconvenience of having to effectively write the function body twice, and having to keep changes in the body and the return type in sync (which in my opinion is a disaster waiting to happen). The problem is that this technique only works on one-line functions. So when I have something like this (convoluted example): template <class T> auto f(T&& x) -> ... { auto y1 = f(x); auto y2 = h(y1, g1(x)); auto y3 = h(y1, g2(x)); if (y1) { ++y3; } return h2(y2, y3); } Then I have to put something horrible in the return type. Furthermore, whenever I update the function, I'll need to change the return type, and if I don't change it correctly, I'll get a compile error if I'm lucky, or a runtime bug in the worst case. Having to copy and paste changes to two locations and keep them in sync is, I feel, not good practice. And I can't think of a situation where I'd want an implicit cast on return instead of an explicit cast. Surely there is a way to ask the compiler to deduce this information. What is the point of the compiler keeping it a secret? I thought C++0x was designed so such duplication would not be required.
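
    For the record, later standards removed the duplication entirely: C++14 added plain return type deduction (and decltype(auto)), so a multi-statement body needs no trailing decltype at all. A sketch of the convoluted example under C++14 rules -- g1, g2, h and h2 are stand-in helpers, and the recursive f(x) call is replaced by a non-recursive helper, since a function with a deduced return type cannot call itself before its first return statement:

        int g1(int x) { return x + 1; }
        int g2(int x) { return x + 2; }
        int h(int a, int b)  { return a * b; }
        int h2(int a, int b) { return a - b; }
        int f_impl(int x)    { return x; }            // stands in for the recursive f(x) call

        template <class T>
        auto f(T&& x) {                               // C++14: return type deduced, written once
            auto y1 = f_impl(x);
            auto y2 = h(y1, g1(x));
            auto y3 = h(y1, g2(x));
            if (y1) { ++y3; }
            return h2(y2, y3);                        // deduced from here
        }

        int main() { return f(3) == -4 ? 0 : 1; }     // y1=3, y2=12, y3=16, h2 gives -4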

    Read the article

  • Conditional row count in linq to nhibernate doesn't work

    - by Lucasus
    I want to translate the following simple SQL query into LINQ to NHibernate: SELECT NewsId ,sum(n.UserHits) as 'HitsNumber' ,sum(CASE WHEN n.UserHits > 0 THEN 1 ELSE 0 END) as 'VisitorsNumber' FROM UserNews n GROUP BY n.NewsId My simplified UserNews class: public class AktualnosciUzytkownik { public virtual int UserNewsId { get; set; } public virtual int UserHits { get; set; } public virtual User User { get; set; } // UserId key in db table public virtual News News { get; set; } // NewsId key in db table } I've written the following LINQ query: var hitsPerNews = (from n in Session.Query<UserNews>() group n by n.News.NewsId into g select new { NewsId = g.Key, HitsNumber = g.Sum(x => x.UserHits), VisitorsNumber = g.Count(x => x.UserHits > 0) }).ToList(); But the generated SQL just ignores my x => x.UserHits > 0 expression and produces an unnecessary 'left outer join': SELECT news1_.NewsId AS col_0_0_, CAST(SUM(news0_.UserHits) AS INT) AS col_1_0_, CAST(COUNT(*) AS INT) AS col_2_0_ FROM UserNews news0_ LEFT OUTER JOIN News news1_ ON news0_.NewsId=news1_.NewsId GROUP BY news1_.NewsId How can I fix or work around this issue? Maybe this can be done better with QueryOver syntax?

    Read the article

  • Void* array casting to float, int32, int16, etc.

    - by Griffin
    Hey guys, I've got an array of PCM data, it could be 16 bit, 24 bit packed, 32 bit, etc.. It could be signed, or unsigned, and it could be 32 or 64 bit floating point. It is currently stored as a "void**" matrix, indexed by channel, then by frame. The goal is to allow my library to take in any PCM format and buffer it, without requiring manipulation of the data to fit a designated structure. If the A/D converter spits out 24 bit packed arrays of interleaved PCM, I need to accept it gracefully. I also need to support 16 bit non interleaved, as well as any permutation of the above formats. I know the bit depth and other information at runtime, and I'm trying to code efficiently while not duplicating code. What I need is an effective way to cast the matrix, put PCM data into the matrix, and then pull it out later. I can cast the matrix to int32_t, or int16_t for the 32 and 16 bit signed PCM respectively, I'll probably have to store the 24 bit PCM in an int32_t for 32 bit, 8 bit byte systems as well. Can anyone recommend a good way to put data into this array, and pull it out later? I'd like to avoid large sections of code which look like: switch( mFormat ) { case 1: // unsigned 8 bit for( int i = 0; i < mChannels; i++ ) framesArray = (uint8_t*)pcm[i]; break; case 2: // signed 8 bit for( int i = 0; i < mChannels; i++ ) framesArray = (int8_t*)pcm[i]; break; case 3: // unsigned 16 bit ... Limitations: I'm working in C/C++, no templates, no RTTI, no STL. Think embedded. Things get trickier when I have to port this to a DSP with 16 bit bytes. Does anybody have any useful macros they might be willing to share? Thanks, -Griff
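
    One way to keep that switch from spreading through the code base, within the stated constraints (no templates, no STL), is to confine it to a pair of accessors that read and write a normalised double, so only those two functions ever see the raw representation. A sketch of the read side -- the enum values, the scaling factors and the little-endian 24-bit packing are assumptions made for the example, not taken from the question:

        #include <stdint.h>

        enum PcmFormat { PCM_U8, PCM_S16, PCM_S24_PACKED, PCM_S32, PCM_F32, PCM_F64 };

        // Read frame `frame` of one channel pointer as a double in roughly [-1, 1).
        static double pcm_read(const void* chan, long frame, PcmFormat fmt) {
            switch (fmt) {
            case PCM_U8:  return (((const uint8_t*)chan)[frame] - 128) / 128.0;
            case PCM_S16: return ((const int16_t*)chan)[frame] / 32768.0;
            case PCM_S24_PACKED: {                       // 3 bytes per sample, little-endian
                const uint8_t* s = (const uint8_t*)chan + 3 * frame;
                int32_t v = (int32_t)(s[0] | (s[1] << 8) | (s[2] << 16));
                if (v & 0x800000) v -= 0x1000000;        // sign-extend 24 -> 32 bits
                return v / 8388608.0;                    // 2^23
            }
            case PCM_S32: return ((const int32_t*)chan)[frame] / 2147483648.0;   // 2^31
            case PCM_F32: return ((const float*)chan)[frame];
            case PCM_F64: return ((const double*)chan)[frame];
            }
            return 0.0;
        }

    A matching pcm_write does the inverse conversion; everything above these two functions then works in doubles and never repeats the format switch.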

    Read the article

  • Access to map data

    - by herzl shemuelian
    I have a complex map, defined as: typedef short short1 typedef short short2 typedef map<short1,short2> data_list; typedef map<string,data_list> table_list; I have a class that fills the table_list: class GroupingClass { table_list m_table_list; string Buildkey(OD e1){ string ostring; ostring+=string(e1.m_Date,sizeof(Date)); ostring+=string(e1.m_CT,sizeof(CT)); ostring+=string(e1.m_PT,sizeof(PT)); return ostring; } void operator() (const map<short1,short2>::value_type& myPair) { OptionsDefine e1=myPair.second; string key=Buildkey(e1); m_table_list[key][e1.m_short2]=e1.m_short2; } operator table_list() { return m_table_list; } }; and I use it like this: table_list TL2; GroupingClass gc; TL2=for_each(mapOD.begin(), mapOD.end(), gc); but when I try to access the internal map I have problems, for example: data_list tmp; tmp=TL2["AAAA"]; short i=tmp[1]; // i is not updated But if I use a loop with an iterator it works properly. Why doesn't the first way work? Thanks, herzl
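
    What is most likely biting here (assuming the usual std::map semantics) is that operator[] default-constructs and inserts a value whenever the key is missing, so tmp[1] silently yields 0 unless the inner map really contains the key 1, whereas iterating only ever visits keys that do exist -- which is exactly the difference observed. Looking the key up with find() makes this explicit; a small sketch with simplified typedefs:

        #include <cstdio>
        #include <map>
        #include <string>

        typedef std::map<short, short> data_list;
        typedef std::map<std::string, data_list> table_list;

        int main() {
            table_list TL2;
            TL2["AAAA"][7] = 42;                           // only key 7 exists in the inner map

            data_list& tmp = TL2["AAAA"];                  // reference, not a copy
            data_list::const_iterator it = tmp.find(1);
            if (it != tmp.end())
                std::printf("found: %d\n", it->second);
            else
                std::printf("key 1 is not in the map\n");  // this branch runs

            short i = tmp[1];                              // inserts {1, 0} and returns 0
            std::printf("tmp[1] == %d, size now %zu\n", (int)i, tmp.size());   // size now 2
        }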

    Read the article

  • MMGR Questions, code use and thread-safety

    - by chadb
    1) Is MMGR thread safe? 2) I was hoping someone could help me understand some code. I am looking at something where a macro is used, but I don't understand the macro. I know it contains a function call and an if check; however, the function is a void function. How does wrapping "(m_setOwner(__FILE__,__LINE__,__FUNCTION__),false)" ever change return types? #define someMacro (m_setOwner(__FILE__,__LINE__,__FUNCTION__),false) ? NULL : new ... void m_setOwner(const char *file, const unsigned int line, const char *func); 3) What is the point of the reservoir? 4) On line 770 ("void *operator new(size_t reportedSize)") there is the line "// ANSI says: allocation requests of 0 bytes will still return a valid value" Who/what is ANSI in this context? Do they mean the standards? 5) This is more of a C++ standards question, but where does "reportedSize" come from for "void *operator new(size_t reportedSize)"? 6) Is this the code that is actually doing the allocation needed? "au->actualAddress = malloc(au->actualSize);"
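
    On question 2: the macro never changes any return type. The comma operator evaluates m_setOwner(...) purely for its side effect (recording file, line and function) and then yields the literal false, so the conditional always takes the new ... branch; the false comes from the comma expression, not from the void function. A small self-contained sketch of the same trick (names are invented, this is not MMGR's actual code; __FUNCTION__ is a common compiler extension):

        #include <cstddef>
        #include <cstdio>

        static void record_owner(const char* file, unsigned line, const char* func) {
            std::printf("allocation requested at %s:%u in %s\n", file, line, func);
        }

        // Comma operator: run record_owner for its side effect, then yield false,
        // so the ?: always evaluates the `new` that follows the macro at the use site.
        #define TRACKED_NEW (record_owner(__FILE__, __LINE__, __FUNCTION__), false) ? NULL : new

        int main() {
            int* p = TRACKED_NEW int(5);   // expands to: (record_owner(...), false) ? NULL : new int(5)
            std::printf("%d\n", *p);
            delete p;
        }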

    Read the article

  • Jersey, JAXB and getting an object extending an abstract class as a parameter

    - by krajol
    I want to get an object as a parameter of a POST request. I got an abstract superclass that is called Promotion and subclasses Product and Percent. Here's how I try to get a request: @POST @Consumes(MediaType.APPLICATION_XML) @Produces(MediaType.APPLICATION_XML) @Path("promotion/") public Promotion createPromotion(Promotion promotion) { Product p = (Product) promotion; System.out.println(p.getPriceAfter()); return promotion; } and here's how I use JAXB in classes' definitions: @XmlRootElement(name="promotion") @XmlSeeAlso({Product.class,Percent.class}) public abstract class Promotion { //body } @XmlRootElement(name="promotion") public class Product extends Promotion { //body } @XmlRootElement(name="promotion") public class Percent extends Promotion { //body } So the problem now is when I send a POST request with a body like this: <promotion> <priceBefore>34.5</priceBefore> <marked>false</marked> <distance>44</distance> </promotion> and I try to cast it to Product (as in this case, fields 'marked' and 'distance' are from Promotion class and 'priceBefore' is from Product class) I get an Exception: java.lang.ClassCastException: Percent cannot be cast to Product. It seems like Percent is chosen as a 'default' subclass. Why is that and how can I get an object that is a Product?

    Read the article
