Search Results

Search found 8616 results on 345 pages for 'primitive types'.


  • question/problem regarding assigning an array of char *

    - by Fantastic Fourier
    Hi, I'm working with C and I have a question about assigning pointers.

        struct foo {
            int _bar;
            char * _car[MAXINT]; // this is meant to be an array of char * so that it can hold pointers to names of cars
        }

        int foofunc (void * arg)
        {
            int bar;
            char * car[MAXINT];
            struct foo thing = (struct foo *) arg;
            bar = arg->_bar; // this works fine
            car = arg->_car; // this gives compiler errors of incompatible types in assignment
        }

    car and _car have the same declaration, so why am I getting an error about incompatible types? My guess is that it has something to do with them being pointers (because they are pointers to arrays of char *, right?), but I don't see why that is a problem. When I declared char * car; instead of char * car[MAXINT]; it compiled fine, but I don't see how that would be useful to me later when I need to access certain info by index; it would be very annoying to access that info later. In fact, I'm not even sure I am going about this the right way. Maybe there is a better way to store a bunch of strings instead of using an array of char *?
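
    A minimal sketch of one way this is usually handled, assuming the intent is to read the names through a properly typed struct pointer (the MAXINT value and sample names below are made up): arrays cannot be assigned in C, but you can keep a char ** to the first element, and the void * argument needs to be converted to struct foo * before any member access.

        #include <stdio.h>

        #define MAXINT 8                /* assumption: small fixed capacity for the sketch */

        struct foo {
            int _bar;
            char *_car[MAXINT];         /* array of pointers to car-name strings */
        };

        static int foofunc(void *arg)
        {
            struct foo *thing = arg;    /* convert the void pointer once... */
            int bar = thing->_bar;      /* ...and go through it for every member */

            /* Arrays cannot be assigned, but a pointer to the first element
               can be kept and indexed exactly like the original array. */
            char **car = thing->_car;
            printf("bar = %d, car[0] = %s\n", bar, car[0]);
            return 0;
        }

        int main(void)
        {
            struct foo f = { 42, { "Buick", "Audi" } };
            return foofunc(&f);
        }

    An array of char * is a perfectly normal way to hold a list of strings, as long as the strings it points to stay alive for as long as the array is used.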

    Read the article

  • Do the new NoPIA and Type Equivalence features in C#/.NET 4.0 mean Microsoft.mshtml.dll is no longer needed?

    - by jpierson
    I'm maintaining a WPF based application which contains a WinForms based WebBrowser control that based on the IE web browser control. When we deploy, we have had to also supply Microsoft.mshtml.dll and do some custom configuration stuff for our ClickOnce publishing process as well in order to get things to work. I'm curious that with the new NoPIA and Type Equivalence features and dynamic type capabilities in C# 4.0 can we expect that if we upgrade that we can remove the dependencies on the Microsoft.mshtml.dll assembly? If so this will not only reduce the size of our deployment quite a bit but will also simplify our publishing process as well. It is my understanding that we should be able embed the types that normally get automatically generated into extra assemblies for COM types such as the MapPoint Control by Visual Studio. I don't know if this also applies to the Microsoft.mshtml.dll or even how it is done even in the most simple of cases. If somebody could provide an explanation about what the practical impact of these new features are on a project that relies on COM interop and especially the Microsoft.mshtml.dll assembly it would be of great help to me.
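
    For reference, the mechanism in question is the "Embed Interop Types" setting on the interop reference: with it enabled, the C# 4.0 compiler copies only the COM interface types you actually use into your own assembly, so the PIA normally no longer has to be deployed. Whether Microsoft.mshtml embeds cleanly depends on which members are used (references to the PIA's coclass wrapper classes, as opposed to the interfaces, can prevent embedding). A sketch of the project-file form of the setting, names illustrative:

        <!-- Equivalent to setting "Embed Interop Types" = True on the reference
             in Visual Studio; only the used interop types are copied locally. -->
        <Reference Include="Microsoft.mshtml">
          <EmbedInteropTypes>True</EmbedInteropTypes>
        </Reference>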

    Read the article

  • A map and set which uses contiguous memory and has a reserve function

    - by edA-qa mort-ora-y
    I use several maps and sets. The lack of contiguous memory, and high number of (de)allocations, is a performance bottleneck. I need a mainly STL-compatbile map and set class which can use a contiguous block of memory for internal objects (or multiple blocks). It also needs to have a reserve function so that I can preallocate for expected sizes. Before I write my own I'd like to check what is available first. Is there something in Boost which does this? Does somebody know of an available implementation elsewhere? Intrusive collection types are not usable here as the same objects need to exist in several collections. As far as I know STL memory pools are per-type, not per instance. These global pools are not efficient with respect to memory locality in mutli-cpu/core processing. Object pools don't work as the types will be shared between instance but their pool should not. In many cases a hash map may be an option in some cases.
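
    One existing implementation worth checking, assuming a reasonably recent Boost release: boost::container::flat_map and flat_set keep their elements in a single contiguous sorted vector and expose reserve(), at the cost of O(n) insert/erase. A minimal sketch:

        #include <boost/container/flat_map.hpp>
        #include <boost/container/flat_set.hpp>

        int main()
        {
            boost::container::flat_map<int, double> prices;
            boost::container::flat_set<int> ids;

            // Pre-allocate the underlying contiguous storage.
            prices.reserve(1024);
            ids.reserve(1024);

            prices[42] = 1.5;   // O(log n) lookup; insert shifts elements (O(n) worst case)
            ids.insert(42);
            return prices.count(42) == 1 && ids.count(42) == 1 ? 0 : 1;
        }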

    Read the article

  • strict aliasing and alignment

    - by cooky451
    I need a safe way to alias between arbitrary POD types, conforming to ISO-C++11, explicitly considering 3.10/10 and 3.11 of n3242 or later. There are a lot of questions about strict aliasing here, most of them regarding C and not C++. I found a "solution" for C which uses unions, probably using this section:

        union type that includes one of the aforementioned types among its elements or nonstatic data members

    From that I built this:

        #include <iostream>

        template <typename T, typename U>
        T& access_as(U* p)
        {
            union dummy_union
            {
                U dummy;
                T destination;
            };

            dummy_union* u = (dummy_union*)p;
            return u->destination;
        }

        struct test
        {
            short s;
            int i;
        };

        int main()
        {
            int buf[2];

            static_assert(sizeof(buf) >= sizeof(double), "");
            static_assert(sizeof(buf) >= sizeof(test), "");

            access_as<double>(buf) = 42.1337;
            std::cout << access_as<double>(buf) << '\n';

            access_as<test>(buf).s = 42;
            access_as<test>(buf).i = 1234;
            std::cout << access_as<test>(buf).s << '\n';
            std::cout << access_as<test>(buf).i << '\n';
        }

    My question is, just to be sure, is this program legal according to the standard?* It doesn't give any warnings whatsoever and works fine when compiling with MinGW/GCC 4.6.2 using:

        g++ -std=c++0x -Wall -Wextra -O3 -fstrict-aliasing -o alias.exe alias.cpp

    * Edit: And if not, how could one modify this to be legal?
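
    For comparison, the commonly cited portable alternative is to copy the object representation with memcpy rather than read it through an incompatible lvalue; compilers typically optimize the copy away. This is a sketch under that approach, not a verdict on the union version above; alignas is C++11 (older compilers can use a union or aligned storage to get the alignment instead).

        #include <cstring>
        #include <iostream>

        // Copy the bytes instead of aliasing. Well-defined for trivially
        // copyable types, provided the buffer is large enough and aligned.
        template <typename T, typename U>
        T read_as(const U* p)
        {
            T out;
            std::memcpy(&out, p, sizeof(T));
            return out;
        }

        template <typename T, typename U>
        void write_as(U* p, const T& value)
        {
            std::memcpy(p, &value, sizeof(T));
        }

        int main()
        {
            alignas(double) unsigned char buf[sizeof(double)];
            write_as(buf, 42.1337);
            std::cout << read_as<double>(buf) << '\n';
        }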

    Read the article

  • Problems with reading into buffer using boost::asio::async_read

    - by Max
    Good day. I have a Types.hpp file in my project. And within it i have: .... namespace RC { ..... ..... struct ViewSettings { .... }; ..... } In the Server.cpp file I'm including this Types.hpp file, and i have there: class Session { ..... RC::ViewSettings tmp; boost::asio::async_read(socket_, boost::asio::buffer(&tmp, sizeof(RC::ViewSettings)), boost::bind(&Session::Finish_Reading_Data, shared_from_this(), boost::asio::placeholders::error)); ..... } And during the compilation i have an errors: error C2825: 'F': must be a class or namespace when followed by '::' : see reference to class template instantiation 'boost::_bi::result_traits<R,F>' being compiled with [ R=boost::_bi::unspecified, F=void (__thiscall Session::* )(void) ] : see reference to class template instantiation 'boost::_bi::bind_t<R,F,L>' being compiled with [ R=boost::_bi::unspecified, F=void (__thiscall Session::* )(void), L=boost::_bi::list2<boost::_bi::value<boost::shared_ptr<Session>>,boost::arg<1>> ] error C2039: 'result_type' : is not a member of '`global namespace'' And the code like this works in proper way: int w; boost::asio::async_read(socket_, boost::asio::buffer(&w, sizeof(int)), boost::bind(&Session::Handle_Read_Width, shared_from_this(), boost::asio::placeholders::error)); Please, help. What's the problem here? Thanks in advance.
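
    One detail visible in the error dump itself: the bound member function type is reported as void (__thiscall Session::*)(void), i.e. a handler taking no parameters, while the bind supplies boost::asio::placeholders::error. If that is indeed the cause, giving Finish_Reading_Data the usual error-code parameter should resolve it. A sketch under that assumption, reading the POD struct the same way the working int read does (ViewSettings is a stand-in here):

        #include <boost/asio.hpp>
        #include <boost/bind.hpp>
        #include <boost/enable_shared_from_this.hpp>
        #include <boost/shared_ptr.hpp>

        namespace RC { struct ViewSettings { int width; int height; }; } // stand-in

        class Session : public boost::enable_shared_from_this<Session> {
        public:
            explicit Session(boost::asio::io_service& io) : socket_(io) {}

            void Start() {
                boost::asio::async_read(
                    socket_,
                    boost::asio::buffer(&tmp_, sizeof(RC::ViewSettings)),
                    boost::bind(&Session::Finish_Reading_Data, shared_from_this(),
                                boost::asio::placeholders::error));
            }

        private:
            // The handler must accept the error code that boost::bind binds to it.
            void Finish_Reading_Data(const boost::system::error_code& error) {
                if (!error) { /* tmp_ now holds the received bytes */ }
            }

            boost::asio::ip::tcp::socket socket_;
            RC::ViewSettings tmp_;  // must stay alive until the handler runs
        };

    Note that reading a struct as raw bytes like this also assumes both ends agree on layout, padding and endianness.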

    Read the article

  • How to determine which source files are required for an Eclipse run configuration

    - by isme
    When writing code in an Eclipse project, I'm usually quite messy and undisciplined in how I create and organize my classes, at least in the early hacky and experimental stages. In particular, I create more than one class with a main method for testing different ideas that share most of the same classes. If I come up with something like a useful app, I can export it to a runnable jar so I can share it with friends. But this simply packs up the whole project, which can become several megabytes big if I'm relying on large library such as httpclient. Also, if I decide to refactor my lump of code into several projects once I work out what works, and I can't remember which source files are used in a particular run configuration, all I can do it copy the main class to a new project and then keep copying missing types till the new project compiles. Is there a way in Eclipse to determine which classes are actually used in a particular run configuration? EDIT: Here's an example. Say I'm experimenting with web scraping, and so far I've tried to scrape the search-result pages of both youtube.com and wrzuta.pl. I have a bunch of classes that implement scraping in general, a few that are specific to each of youtube and wrzuta. On top of this I have a basic gui common to both scrapers, but a few wrzuta- and youtube-specific buttons and options. The WrzutaGuiMain and YoutubeGuiMain classes each contain a main method to configure and show the gui for each respective website. Can Eclipse look at each of these to determine which types are referenced?

    Read the article

  • Error when calling SQL SP via LINQ

    - by PaulC
    Newbie problem: I have a SQL SP with ten parameters (eight input, two output), but when I attempt to call it via LINQ from code I get the following error message: "The best overloaded method match for 'DataClassesDataContext.ST_CR_CREATE_CASE_BASIS(string, string, string, string, System.DateTime?, string, string, string, ref int?, ref int?)' has some invalid arguments". The params with ? appear to be unrecognized, but I'm baffled: the data types match the SQL types, the number of parameters matches, and the other parameters don't exhibit the same behaviour. Can anyone tell me what's going on? Thanks in advance.

    The SQL SP:

        create procedure ST_CR_CREATE_CASE_BASIS
            @p_Pers_No nvarchar (50),
            @p_Subject nvarchar (255),
            @p_RQ_XML nvarchar(max),
            @p_RQ_XSL nvarchar(max),
            @p_Date_Submit smalldatetime,
            @p_User_ID_Submit nvarchar (255),
            @p_RQ_Status nvarchar (50),
            @p_User_ID_OnBehalf nvarchar (255),
            @p_Case_Number int output,
            @p_RQ_ID int output
        as
        begin
            -- ... etc.; the SP works fine when called from SSMS

    The code-behind proc from the aspx page looks like this:

        protected void cmdSubmit_Click(object sender, EventArgs e)
        {
            using (DataClassesDataContext vDataCont = new DataClassesDataContext())
            {
                Int32 vNewCaseNr;
                Int32 vNewReqNr;
                DateTime vNow = System.DateTime.Now;
                vDataCont.ST_CR_CREATE_CASE_BASIS("101", "Test Subject Late Wed", null, null, vNow, "101", "1", "101", ref vNewCaseNr, vNewReqNr);
            }
        }
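
    The generated signature quoted in the error ends with "ref int?, ref int?", so both OUTPUT arguments must be nullable ints and both must be passed with ref. A sketch assuming that is the only mismatch:

        protected void cmdSubmit_Click(object sender, EventArgs e)
        {
            using (DataClassesDataContext vDataCont = new DataClassesDataContext())
            {
                // Nullable ints, passed by ref, to match the generated
                // ST_CR_CREATE_CASE_BASIS(..., ref int?, ref int?) method.
                int? vNewCaseNr = null;
                int? vNewReqNr = null;
                DateTime vNow = DateTime.Now;

                vDataCont.ST_CR_CREATE_CASE_BASIS(
                    "101", "Test Subject Late Wed", null, null, vNow,
                    "101", "1", "101",
                    ref vNewCaseNr, ref vNewReqNr);
            }
        }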

    Read the article

  • Is there a definitive list for the differences between the current version of SQL Azure and SQL Server 2008?

    - by Aim Kai
    I am a relative newbie when it comes to SQL Azure!! I was wondering if there is a definitive list somewhere of what is and is not supported by SQL Azure in comparison to SQL Server 2008. I have had a look through Google, but I've noticed some of the blog posts are missing things which I have found through my own testing. For example, quite a lot is summarised in this blog entry, http://www.keepitsimpleandfast.com/2009/12/main-differences-between-sql-azure-and.html:

        Common Language Runtime (CLR)
        Database file placement
        Database mirroring
        Distributed queries
        Distributed transactions
        Filegroup management
        Global temporary tables
        Spatial data and indexes
        SQL Server configuration options
        SQL Server Service Broker
        System tables
        Trace Flags

    which is a repeat of the MSDN page http://msdn.microsoft.com/en-us/library/ff394115.aspx. I've noticed from my own testing that the following seem to have issues when migrating from SQL Server 2008 to Azure:

        XML types (MSDN does mention large custom types - I guess that may include this, even if the data schema is really small?)
        Multi-part views

    I've been using SQL Azure Migration Wizard v3.1.8 to migrate local databases into the cloud. I was wondering if anyone could point me to a list, or give me any information on when these features are likely to be included in SQL Azure.

    Read the article

  • How does the last integer promotion rule ever get applied in C?

    - by SiegeX
    6.3.1.8p1:

        Otherwise, the integer promotions are performed on both operands. Then the following rules are applied to the promoted operands:

        If both operands have the same type, then no further conversion is needed.

        Otherwise, if both operands have signed integer types or both have unsigned integer types, the operand with the type of lesser integer conversion rank is converted to the type of the operand with greater rank.

        Otherwise, if the operand that has unsigned integer type has rank greater or equal to the rank of the type of the other operand, then the operand with signed integer type is converted to the type of the operand with unsigned integer type.

        Otherwise, if the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, then the operand with unsigned integer type is converted to the type of the operand with signed integer type.

        Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type.

    For the last rule quoted above to be applied, it would seem to imply you need an unsigned integer type whose rank is less than that of the signed integer type, and a signed integer type that cannot hold all the values of the unsigned integer type. Is there a real-world example of such a case, or is this statement serving as a catch-all to cover all possible permutations?
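
    There is a classic concrete case, on any target where int and long are both 32 bits (e.g. 32-bit x86): an unsigned int operand against a long operand. The long has greater rank, so the third rule fails; a 32-bit long cannot represent every unsigned int value, so the fourth rule fails; the last rule then converts both operands to unsigned long. A small C illustration (the outcome is platform-dependent, as noted in the comments):

        #include <stdio.h>

        int main(void)
        {
            /* Assumes int and long are both 32 bits.  Then:
               - unsigned int does not have rank >= long  (third rule fails)
               - long cannot hold every unsigned int value (fourth rule fails)
               so the last rule applies: both become unsigned long. */
            unsigned int u = 0xFFFFFFFFu;
            long l = -1L;

            if (u > l)   /* -1L converts to 0xFFFFFFFF as unsigned long */
                printf("u > l\n");
            else
                printf("u <= l (both operands converted to unsigned long)\n");
            return 0;
        }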

    Read the article

  • How do I do a select input for an STI column in a Rails model?

    - by James A. Rosen
    I have a model with single-table inheritance on the type column:

        class Pet < ActiveRecord::Base
          TYPES = [Dog, Cat, Hamster]
          validates_presence_of :name
        end

    I want to offer a <select> dropdown on the new and edit pages:

        <% form_for @model do |f| %>
          <%= f.label :name %>
          <%= f.text_input :name %>
          <%= f.label :type %>
          <%= f.select :type, Pet::TYPES.map { |t| [t.human_name, t.to_s] } %>
        <% end %>

    That gives me the following error:

        ActionView::TemplateError (wrong argument type String (expected Module))

    I read a suggestion to use an alias for the field #type, since Ruby considers that a reserved word that's the same as #class. I tried both

        class Pet < ActiveRecord::Base
          ...
          alias_attribute :klass, :type
        end

    and

        class Pet < ActiveRecord::Base
          ...
          def klass
            self.type
          end

          def klass=(k)
            self.type = k
          end
        end

    Neither worked. Any suggestions? Oddly, it works fine on my machine (MRI 1.8.6 on RVM), but fails on the staging server (MRI 1.8.7, not on RVM).

    Read the article

  • How to parse XML file to get specific data effectively

    - by Carlos_Liu
    I have an XML file that looks like this:

        <?xml version="1.0" encoding="utf-8" ?>
        <PathMasks>
          <Mask desc="Masks_X1">
            <config id="01" mask="88" />
            <config id="03" mask="80" />
            <config id="51" mask="85" />
          </Mask>
          <Mask desc="Masks_X2">
            <config id="70" mask="1" />
            <config id="73" mask="6" />
          </Mask>
          <Types>
            <path id="01" desc="TC->PP1" />
            <path id="02" desc="TC->PP2" />
            <path id="03" desc="TC->PPn" />
          </Types>
        </PathMasks>

    How do I parse the file and get all the data of Masks_X1, as follows?

        id  value
        =========
        01, 88
        03, 80
        51, 85
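
    The question doesn't name a language; assuming C#/.NET, LINQ to XML can select the Mask element by its desc attribute and project the id/mask pairs (the file name below is hypothetical):

        using System;
        using System.Linq;
        using System.Xml.Linq;

        class Program
        {
            static void Main()
            {
                XDocument doc = XDocument.Load("PathMasks.xml"); // hypothetical path

                var pairs =
                    from mask in doc.Root.Elements("Mask")
                    where (string)mask.Attribute("desc") == "Masks_X1"
                    from config in mask.Elements("config")
                    select new
                    {
                        Id = (string)config.Attribute("id"),
                        Value = (string)config.Attribute("mask")
                    };

                foreach (var p in pairs)
                    Console.WriteLine("{0}, {1}", p.Id, p.Value);   // 01, 88 / 03, 80 / 51, 85
            }
        }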

    Read the article

  • git doesn't show where code was removed.

    - by Andrew Myers
    So I was tasked at replacing some dummy code that our project requires for historical compatibility reasons but has mysteriously dropped out sometime since the last release. Since disappearing code makes me nervous about what else might have gone missing but un-noticed I've been digging through the logs trying to find in what commit this handful of lines was removed. I've tried a number of things including "git log -S'add-visit-resource-pcf'", git blame, and even git bisect with a script that simply checks for the existence of the line but have been unable to pinpoint exactly where these lines were removed. I find this very perplexing, particularly since the last log entry (obtained by the above command) before my re-introduction of this code was someone else adding the code as well. commit 0b0556fa87ff80d0ffcc2b451cca1581289bbc3c Author: Andrew Date: Thu May 13 10:55:32 2010 -0400 Re-introduced add-visit-resource-pcf, see PR-65034. diff --git a/spike/hst/scheduler/defpackage.lisp b/spike/hst/scheduler/defpackage.lisp index f8e692d..a6f8d38 100644 --- a/spike/hst/scheduler/defpackage.lisp +++ b/spike/hst/scheduler/defpackage.lisp @@ -115,6 +115,7 @@ #:add-to-current-resource-pcf #:add-user-package-nickname #:add-value-criteria + #:add-visit-resource-pcf #:add-window-to-gs-params #:adjust-derived-resources #:adjust-links-candidate-criteria-types commit 9fb10e25572c537076284a248be1fbf757c1a6e1 Author: Bob Date: Sun Jan 17 18:35:16 2010 -0500 update-defpackage for Spike 33.1 Delivery diff --git a/spike/hst/scheduler/defpackage.lisp b/spike/hst/scheduler/defpackage.lisp index 983666d..47f1a9a 100644 --- a/spike/hst/scheduler/defpackage.lisp +++ b/spike/hst/scheduler/defpackage.lisp @@ -118,6 +118,7 @@ #:add-user-package-nickname #:add-value-criteria #:add-vars-from-proposal + #:add-visit-resource-pcf #:add-window-to-gs-params #:adjust-derived-resources #:adjust-links-candidate-criteria-types This is for one of our package definition files, but the relevant source file reflects something similar. Does anyone know what could be going on here and how I could find the information I want? It's not really that important but this kind of things makes me a bit nervous.
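
    A few standard git options that are worth trying here, since by default git log skips the diffs of merge commits and only walks the current branch; the removal may be hiding in a merge or on another ref. This is only a list of things to check, not a diagnosis:

        # Search every branch/tag, not just the current one
        git log --all -S'add-visit-resource-pcf' -- spike/hst/scheduler/defpackage.lisp

        # Also show changes made by merge commits (diff against each parent)
        git log -m -p -S'add-visit-resource-pcf' -- spike/hst/scheduler/defpackage.lisp

        # Keep commits that history simplification would normally drop
        git log --full-history -p -S'add-visit-resource-pcf' -- spike/hst/scheduler/defpackage.lisp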

    Read the article

  • Embedding Lua functions as member variables in Java

    - by Zarion
    Although the program I'm working on is in Java, answering this from a C perspective is also fine, considering that most of this is either language-agnostic, or happens on the Lua side of things. In the outline I have for the architecture of a game I'm programming, individual types of game objects within a particular class (eg: creatures, items, spells, etc.) are loaded from a data file. Most of their properties are simple data types, but I'd like a few of these members to actually contain simple scripts that define, for example, what an item does when it's used. The scripts will be extremely simple, since all fundamental game actions will be exposed through an API from Java. The Lua is simply responsible for stringing a couple of these basic functions together, and setting arguments. The question is largely about the best way to store a reference to a specific Lua function as a member of a Java class. I understand that if I store the Lua code as a string and call lua_dostring, Lua will compile the code fresh every time it's called. So the function needs to be defined somehow, and a reference to this specific function wrapped in a Java function object. One possibility that I've considered is, during the data loading process, when the loader encounters a script definition in a data file, it extracts this string, decorates the function name using the associated object's unique ID, calls lua_dostring on the string containing a full function definition, and then wraps the generated function name in a Java function object. A function declared in script run with lua_dostring should still be added to the global function table, correct? I'm just wondering if there's a better way of going about this. I admit that my knowledge of Lua at this point is rather superficial and theoretical, so it's possible that I'm overlooking something obvious.
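
    From the C API side, the usual alternative to decorated global names is the Lua registry: compile each script once, keep the resulting function with luaL_ref, and store that small integer in the host object; lua_rawgeti retrieves the function for each call. A minimal sketch assuming a Lua 5.1-style API and that each data-file script is written to return its function (e.g. "return function(x) ... end"):

        #include <lua.h>
        #include <lauxlib.h>
        #include <lualib.h>

        /* Compile the chunk once and keep a registry reference to the
           function it returns. */
        static int store_script(lua_State *L, const char *src)
        {
            if (luaL_loadstring(L, src) != 0 ||   /* compile */
                lua_pcall(L, 0, 1, 0) != 0)       /* run; must leave a function */
                return LUA_NOREF;
            return luaL_ref(L, LUA_REGISTRYINDEX);  /* pops it, returns an int key */
        }

        /* Later: call the stored function with one integer argument. */
        static void call_script(lua_State *L, int ref, int arg)
        {
            lua_rawgeti(L, LUA_REGISTRYINDEX, ref); /* push the function back */
            lua_pushinteger(L, arg);
            lua_pcall(L, 1, 0, 0);
        }

    On the Java side the member variable then only has to hold the integer reference (plus a handle to the Lua state), rather than a function name string.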

    Read the article

  • Dynamically populated NSPopUpButtonCell menu in an NSOutlineView

    - by Mo
    I’m working with an NSOutlineView which has two columns. My dataSource supplies the outline view with a tree of items of a custom class which represents file types (that is, you initialise it with a UTI). The first column is the display name of the file type (e.g., “Source code”, “Interface Builder NIB document”, etc.). The second column is an NSPopUpButtonCell which is supposed to allow the user to pick a handler for the given document type (think of Xcode’s “File Types” preference pane, and you’re pretty much there). I can generate an NSMenu for a given item in the tree, populated with options based upon the Launch Services database entries for the UTI, complete with the relevant application icon and and so on. In fact, the menu itself works wonderfully, populated by way of NSPopUpButtonCellWillPopUpNotification. The problem is, try as I might, the cell when the menu isn’t popped up always contains precisely one of two things: either an empty string, or the default text for the cell, the former if the result of -handlerName on the item (the attribute assigned to the column) is non-nil, the latter otherwise. Moreover, I’m manually calling -selectItem: on the NSPopUpButtonCell instance, which just seems Wrong. In contrast, the left-hand column, which is just an NSTextFieldCell, everything just works (although granted, all it’s got to do is read the value from the item and present it). (Disclaimer: I’m fairly new at Cocoa UI stuff; I know Objective-C, and lots of other programming languages, but I’ve not a huge amount of experience of building Mac OS X UIs, so be gentle).

    Read the article

  • How to program critical section for reader-writer systems?

    - by Srinivas Nayak
    Hi,

    Let's say I have a reader-writer system where a reader and a writer are running concurrently. 'a' and 'b' are two shared variables which are related to each other, so modifying them needs to be an atomic operation. A reader-writer system can be of the following types:

        1. rr
        2. ww
        3. r-w
        4. r-ww
        5. rr-w
        6. rr-ww

    where r = single reader, rr = multiple readers, w = single writer, ww = multiple writers.

    Now, we can have a read method for a reader and a write method for a writer as follows. I have written them per system type.

    rr:

        read_method  { read a; read b; }

    ww:

        write_method { lock(m); write a; write b; unlock(m); }

    r-w, r-ww, rr-w, rr-ww:

        read_method  { lock(m); read a; read b; unlock(m); }
        write_method { lock(m); write a; write b; unlock(m); }

    For a multiple-reader-only system, shared variable access doesn't need to be atomic. For a multiple-writer system, shared variable access needs to be atomic, so it is locked with 'm'. But for system types 3 to 6, are my read_method and write_method correct? How can I improve them?

    Sincerely,
    Srinivas Nayak
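
    One common refinement for types 3 to 6: the mutex version is correct but serializes readers against each other unnecessarily. A reader-writer lock lets any number of readers hold the lock together while writers get exclusive access. A sketch with POSIX rwlocks:

        #include <pthread.h>

        static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
        static int a, b;                  /* the related shared variables */

        void read_method(int *out_a, int *out_b)
        {
            pthread_rwlock_rdlock(&rw);   /* many readers may hold this at once */
            *out_a = a;
            *out_b = b;
            pthread_rwlock_unlock(&rw);
        }

        void write_method(int new_a, int new_b)
        {
            pthread_rwlock_wrlock(&rw);   /* writers get exclusive access */
            a = new_a;
            b = new_b;
            pthread_rwlock_unlock(&rw);
        }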

    Read the article

  • Using generics in Unity ... InvalidCastException

    - by Sunny D
    Hi, my interface definition is:

        public interface IInterface<T> where T : UserControl

    My class definition is:

        public partial class App1Control : UserControl, IInterface

    The unity section of my app.config looks as below:

        <unity>
          <typeAliases>
            <typeAlias alias="singleton" type="Microsoft.Practices.Unity.ContainerControlledLifetimeManager, Microsoft.Practices.Unity" />
            <typeAlias alias="myInterface" type="MyApplication.IInterface`1, MyApplication" />
            <typeAlias alias="App1" type="MyApplication.App1Control, MyApplication" />
          </typeAliases>
          <containers>
            <container>
              <types>
                <type type="myInterface" mapTo="App1" name="Application 1">
                  <lifetime type="singleton"/>
                </type>
              </types>
            </container>
          </containers>
        </unity>

    The app runs fine, but the following code throws an InvalidCastException:

        container.Resolve<IInterface<UserControl>>("Application 1");

    The error message is:

        Unable to cast object of type 'MyApplication.App1Control' to type 'MyApplication.IInterface`1[System.Windows.Forms.UserControl]'

    I believe there is a minor mistake in my code, but I am not able to figure out what. Any thoughts?
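
    The generic arguments in the post above have been eaten by the HTML, but the exception itself is the generic-invariance rule: a class that implements IInterface<App1Control> is not an IInterface<UserControl> unless either the class implements the closed IInterface<UserControl> directly or the interface's type parameter is declared covariant. A tiny illustration with a hypothetical member, assuming the class was declared against IInterface<App1Control>:

        using System.Windows.Forms;

        // T appears only in output positions, so it can be declared covariant.
        public interface IInterface<out T> where T : UserControl
        {
            T View { get; }
        }

        public class App1Control : UserControl, IInterface<App1Control>
        {
            public App1Control View { get { return this; } }
        }

        class Demo
        {
            static void Main()
            {
                // With 'out T' this conversion is allowed; without it, the same
                // line is an InvalidCastException, which is what
                // Resolve<IInterface<UserControl>>("Application 1") runs into.
                IInterface<UserControl> asBase = new App1Control();
                System.Console.WriteLine(asBase.View.GetType().Name);
            }
        }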

    Read the article

  • Regular expression either/or not matching everything

    - by dwatransit
    I'm trying to parse an HTTP GET request to determine if the URL contains any of a number of file types. If it does, I want to capture the entire request. There is something I don't understand about ORing. The following regular expression only captures part of it, and only if .flv is the first in the list of ORd values. (I've obscured the URLs with spaces because Stack Overflow limits hyperlinks.)

        regex:        GET .*?(\.flv)|(\.mp4)|(\.avi).*?
        test text:    GET http: // foo.server.com/download/0/37/3000016511/.flv?mt=video/xy
        match output: GET http: // foo.server.com/download/0/37/3000016511/.flv

    I don't understand why the .*? at the end of the regex isn't allowing it to capture the entire text. If I get rid of the ORing of file types, then it works. Here is the test code, in case my explanation doesn't make sense:

        public static void main(String[] args) {
            // TODO Auto-generated method stub
            String sourcestring = "GET http: // foo.server.com/download/0/37/3000016511/.flv?mt=video/xy";

            Pattern re = Pattern.compile("GET .*?\\.flv.*");  // this works
            // output:
            // [0][0] = GET http :// foo.server.com/download/0/37/3000016511/.flv?mt=video/xy

            // The match from the following ends with the ".flv", not the entire url.
            // Also it only works if .flv is the first of the 3 ORd options.
            //Pattern re = Pattern.compile("GET .*?(\\.flv)|(\\.mp4)|(\\.avi).*?");
            // output:
            // [0][0] = GET http: // foo.server.com/download/0/37/3000016511/.flv
            // [0][1] = .flv
            // [0][2] = null
            // [0][3] = null

            Matcher m = re.matcher(sourcestring);
            int mIdx = 0;
            while (m.find()){
                for( int groupIdx = 0; groupIdx < m.groupCount()+1; groupIdx++ ){
                    System.out.println( "[" + mIdx + "][" + groupIdx + "] = " + m.group(groupIdx));
                }
                mIdx++;
            }
        }
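
    For what it's worth, alternation binds very loosely, so GET .*?\.flv|\.mp4|\.avi.*? is parsed as three separate alternatives and only the first one carries the "GET .*?" prefix; the lazy .*? at the end also matches as little as possible, i.e. nothing. Grouping the extensions and using a greedy tail keeps the prefix and suffix applying to all three. A hedged Java sketch:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class GetRequestMatcher {
            public static void main(String[] args) {
                String source = "GET http://foo.server.com/download/0/37/3000016511/.flv?mt=video/xy";

                // Group the alternatives so "GET ..." and the trailing "\S*"
                // apply to every extension, not just the first.
                Pattern re = Pattern.compile("GET .*?\\.(?:flv|mp4|avi)\\S*");

                Matcher m = re.matcher(source);
                while (m.find()) {
                    System.out.println(m.group());   // the full request
                }
            }
        }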

    Read the article

  • WPF - Why doesn't Microsoft supply a decent set of most-used controls?

    - by IUsedToBeAPygmy
    I've been playing with WPF for some months now, and I quite like it. But one of the things I don't get is why MS doesn't put a little more effort in helping developers by supplying basic controls, and I need to get this off my chest :) For example, I figure most applications somewhere will need to let you edit some properties - for configuration or whatever. What would be the most used types in a proprety-grid editor ? text numbers (byte, float/double, int, etc) colors ....etc. So why isn't there even something as simple as a control to edit numbers ? Like a generic NumericUpDown control that allows you to type in numbers (no text, no pasting invalid input) or spin them up/down according to some given rules (decimal, floating point, min/maxvalue) ? Why isn't there a generic colorpicker, so people get the same user-experience in every application ? Why isn't there a standard implementation of a SearchTextBox, a BreadCrumb-control, or all these other standard control types users have gotten accustomed to the last 10 years ? (..but at least they DID have the time to implement a generic splashscreen - because everyone knows that greatly increases user-productivity....) The well-known ideal is always to give people the same user-experience over different applications. So even if some of those controls would be easy to make - it would be preferred to have one version over different applications. I see people all over the internet trying to do the same stuff over and over again. Okay, so MS started a WPF Toolkit project on Codeplex that tries to implement some controls, but only did so half-heartedly and is completely dead by now (last update of the roadmap dates back to Mar 21 2009). The result of this is that a lot of people starting a WPF-project end up spending a lot of time on trying to figure out how to create some generic controls and get really frustrated. Wasn't the mantra "Developers, developers, developers!" ..? /Rant

    Read the article

  • error detection/correction/recovery in serial protocols

    - by Jason S
    I have some designing to do for a serial protocol and am running into some questions that I figure must have been considered elsewhere. So I'm wondering if there are some recommendations for best practices in designing serial protocols. (Please either state a fact that is easily verifiable, or cite a reputable source if you make a claim.) General recommendations for websites/books are also welcome. In particular I have to deal with issues like:

        - parsing a stream of bytes into packets
        - verifying a packet is correct (easy with a CRC, for instance; a CRC sketch follows below)
        - identifying reasonable types of errors that can occur (e.g. in a point-to-point serial stream, sporadic single-bit errors and dropped series of bytes are both likely, but extra phantom bytes are unlikely; whereas with a record stored in flash memory or on a disk drive the types of errors that predominate are different)
        - error correction or recovery (if I detect an error in a packet, can I correct it? If not, can I resync to the boundary of the next packet?)
        - how to make variable-length packets robust to error correction / recovery

    Any suggestions?
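
    For the CRC point, a worked example: CRC-16-CCITT (polynomial 0x1021, init 0xFFFF) is a common choice for short serial frames; the sender appends the CRC to the payload and the receiver recomputes it and compares. Framing and resync policy (e.g. a start-of-frame byte plus byte stuffing) is a separate decision not shown here.

        #include <stdint.h>
        #include <stdio.h>

        /* Bitwise CRC-16-CCITT: polynomial 0x1021, initial value 0xFFFF. */
        static uint16_t crc16_ccitt(const uint8_t *data, size_t len)
        {
            uint16_t crc = 0xFFFF;
            for (size_t i = 0; i < len; i++) {
                crc ^= (uint16_t)data[i] << 8;
                for (int bit = 0; bit < 8; bit++)
                    crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                         : (uint16_t)(crc << 1);
            }
            return crc;
        }

        int main(void)
        {
            uint8_t frame[16] = { 0x01, 0x02, 0x03, 0x04 };   /* payload */
            size_t payload_len = 4;

            /* Sender: append the CRC big-endian after the payload. */
            uint16_t crc = crc16_ccitt(frame, payload_len);
            frame[payload_len]     = (uint8_t)(crc >> 8);
            frame[payload_len + 1] = (uint8_t)(crc & 0xFF);

            /* Receiver: recompute over the payload and compare. */
            uint16_t check = crc16_ccitt(frame, payload_len);
            int ok = frame[payload_len]     == (uint8_t)(check >> 8) &&
                     frame[payload_len + 1] == (uint8_t)(check & 0xFF);
            printf("frame %s\n", ok ? "accepted" : "rejected");
            return 0;
        }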

    Read the article

  • Inner join and outer join options in Entity Framework 4.0

    - by bigb
    I am using EF 4.0 and I need to implement query with one inner join and with N outer joins I started to implement this using different approaches but get into trouble at some point. Here is two examples how I started of doing this using ObjectQuery<'T' and Linq to Entity 1)Using ObjectQuery<'T' I implement flexible outer join but I don't know how to perform inner join with entity Rules in that case (by default Include("Rules") doing outer join, but i need to inner join by Id). public static IEnumerable<Race> GetRace(List<string> includes, DateTime date) { IRepository repository = new Repository(new BEntities()); ObjectQuery<Race> result = (ObjectQuery<Race>)repository.AsQueryable<Race>(); //perform outer joins with related entities if (includes != null) foreach (string include in includes) result = result.Include(include); //here i need inner join insteard of default outer join result = result.Include("Rules"); return result.ToList(); } 2)Using Linq To Entity I need to have kind of outer join(somethin like in GetRace()) where i may pass a List with entities to include) and also i need to perform correct inner join with entity Rules public static IEnumerable<Race> GetRace2(List<string> includes, DateTime date) { IRepository repository = new Repository(new BEntities()); IEnumerable<Race> result = from o in repository.AsQueryable<Race>() from b in o.RaceBetRules select new { o }); //I need here: // 1. to perform the same way inner joins with related entities like with ObjectQuery above //here i getting List<AnonymousType> which i cant cast to //IEnumerable<Race> when i did try to cast like //(IEnumerable<Race>)result.ToList(); i did get error: //Unable to cast object of type //'System.Collections.Generic.List`1[<>f__AnonymousType0`1[BetsTipster.Entity.Tip.Types.Race]]' //to type //'System.Collections.Generic.IEnumerable`1[BetsTipster.Entity.Tip.Types.Race]'. return result.ToList(); } May be someone have some ideas about that.

    Read the article

  • Enforce strong type checking in C (type strictness for typedefs)

    - by quinmars
    Is there a way to enforce an explicit cast between typedefs of the same type? I have to deal with UTF-8, and I sometimes get confused between the indices for the character count and the byte count. So it would be nice to have some typedefs:

        typedef unsigned int char_idx_t;
        typedef unsigned int byte_idx_t;

    with the addition that you need an explicit cast between them:

        char_idx_t a = 0;
        byte_idx_t b;

        b = a;              // compile warning
        b = (byte_idx_t) a; // ok

    I know that such a feature doesn't exist in C, but maybe you know a trick or a compiler extension (preferably gcc) that does it.

    EDIT: I still don't really like Hungarian notation in general. I couldn't use it for this problem because of project coding conventions, but I have now used it in another similar case, where the types are also the same and the meanings are very similar. And I have to admit: it helps. I would never go and declare every integer with a starting "i", but as in Joel's example of overlapping types, it can be life saving.
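
    The usual C trick for this is to wrap each index in a single-member struct: structs with different tags are never assignment-compatible, so mixing them up becomes a hard compile error rather than a warning, normally at no runtime cost. A minimal sketch:

        #include <stdio.h>

        typedef struct { unsigned int v; } char_idx_t;   /* character count */
        typedef struct { unsigned int v; } byte_idx_t;   /* byte count */

        static byte_idx_t to_byte_idx(char_idx_t c)      /* the only sanctioned "cast" */
        {
            byte_idx_t b = { c.v };                      /* real conversion logic goes here */
            return b;
        }

        int main(void)
        {
            char_idx_t a = { 0 };
            byte_idx_t b;

            /* b = a;            error: incompatible types, which is the goal */
            b = to_byte_idx(a);  /* explicit, greppable conversion */
            printf("%u\n", b.v);
            return 0;
        }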

    Read the article

  • What's the difference between an option type and a nullable type?

    - by Peter Olson
    In F# mantra there seems to be a visceral avoidance of null, Nullable<T> and its ilk. In exchange, we are supposed to instead use option types. To be honest, I don't really see the difference. My understanding of the F# option type is that it allows you to specify a type which can contain any of its normal values, or None. For example, an Option<int> allows all of the values that an int can have, in addition to None. My understanding of the C# nullable types is that it allows you to specify a type which can contain any of its normal values, or null. For example, a Nullable<int> a.k.a int? allows all of the values that an int can have, in addition to null. What's the difference? Do some vocabulary replacement with Nullable and Option, null and None, and you basically have the same thing. What's all the fuss over null about?
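
    A few of the practical differences, shown as a small F# illustration (the C# restrictions are noted in the comments): option is an ordinary generic union type, so it works for reference types, nests, and pattern-matches, whereas Nullable<T> is limited to non-nullable value types and cannot nest, and null still inhabits every reference type regardless.

        // option works for any type and composes freely
        let name      : string option     = Some "Peter"   // reference type: fine
        let maybeMore : int option option = Some None      // nesting: fine

        let described =
            match maybeMore with
            | Some (Some n) -> sprintf "got %d" n
            | Some None     -> "inner value missing"
            | None          -> "nothing at all"

        printfn "%s" described

        // In C#, by contrast:
        //   Nullable<string>          // compile error: T must be a non-nullable value type
        //   Nullable<Nullable<int>>   // compile error: no nesting
        // and a plain F# int simply has no null/None value to check for,
        // so "maybe-ness" only appears where the type says option.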

    Read the article

  • Trying to convert existing production database table columns from enum to VARCHAR (Rails)

    - by dchua
    Hi everyone, I have a problem that requires me to convert my existing live production table column types from enums to strings (I've duplicated the schema on my local development box, don't worry :)).

    Background: basically, a previous developer left my codebase in absolute shit, the migration versions are extremely out of date, and apparently he never used migrations after a certain point in development. Now that I'm tasked with migrating a Rails 1.2.6 app to 2.3.5, I can't get the tests to run properly on 2.3.5, because my table columns have ENUM column types and they convert to :string, :limit => 0 in my schema.rb, which creates the problem of an invalid default value when doing rake db:test:prepare, as in:

        Mysql::Error: Invalid default value for 'own_vehicle': CREATE TABLE `lifestyles` (
          `id` int(11) DEFAULT NULL auto_increment PRIMARY KEY,
          `member_id` int(11) DEFAULT 0 NOT NULL,
          `own_vehicle` varchar(0) DEFAULT 'Y' NOT NULL,
          `hobbies` text,
          `sports` text,
          `AStar_activities` text,
          `how_know_IRC` varchar(100),
          `IRC_referral` varchar(200),
          `IRC_others` varchar(100),
          `IRC_rdrive` varchar(30)
        ) ENGINE=InnoDB

    I'm thinking of writing a migration task that looks through all the database tables for columns with ENUM types and replaces them with VARCHAR, and I'm wondering if this is the right way to approach the problem. I'm also not very sure how to write it such that it loops through my database tables and replaces all ENUM column types with VARCHAR.

    References
    [1] https://rails.lighthouseapp.com/projects/8994/tickets/997-dbschemadump-saves-enum-columns-as-varchar0-on-mysql
    [2] http://dev.rubyonrails.org/ticket/2832
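
    One possible shape for such a migration, walking every table's columns through the connection and changing any whose reported SQL type is an ENUM to a string while preserving the existing default and NULL setting. The :limit of 255 is an assumption; check it against the longest enum member actually stored before running this on real data:

        class ConvertEnumsToStrings < ActiveRecord::Migration
          def self.up
            connection = ActiveRecord::Base.connection
            connection.tables.each do |table|
              connection.columns(table).each do |column|
                next unless column.sql_type =~ /^enum/i
                # 255 is an assumed width; size it to the real enum values.
                change_column table, column.name, :string,
                              :limit => 255,
                              :null => column.null,
                              :default => column.default
              end
            end
          end

          def self.down
            raise ActiveRecord::IrreversibleMigration
          end
        end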

    Read the article

  • Duplicating custom nodes in JavaFX

    - by Elazar Leibovich
    As far as I understand, duplicating nodes in JavaFX should be done with the Duplicator.duplicate function. It works fine when duplicating nodes whose types are included in JavaFX library, for example def dup = Duplicator.duplicate(Rectangle{x:30 y:30 width:100 height:100}); dup.translateX = 10; insert dup into content; would insert a black rectangle to the scene. However if I define a new class in the following way: class MyRect extends Rectangle {} Or class MyRect extends CustomNode { override function create() {Rectangle{x:30 y:30 width:10 height:10}} } It gives me the following runtime error Type 'javafxapplication1.NumberGrid$MyRect' not found. Where of course javafxapplication1.NumberGrid are the package and file the MyRect class is in. This guy at Sun's forums had the same problem, but I don't see any answer in there. Or maybe I'm doing that the wrong way, and there's a better approach for duplicating custom nodes? update: Trying to duplicate Group worked, but trying to duplicate Stack yields the same error. According to the documentation, it's supposed to support all types supported in FXD including Node, but maybe it supports only some of Node's descendants?

    Read the article

  • WCF ServiceContract and svcutil issue

    - by Valko
    Hi, I have a public interface auto-generated by svcutil:

        [System.ServiceModel.ServiceContractAttribute(Namespace="...", ConfigurationName="...")]
        public interface MyInterface

    Then I have an asmx web service inheriting it and working fine. I am trying to convert it to WCF, but I have instrumented the service (in the asmx.cs code-behind) with ServiceContract:

        [ServiceContract(Namespace = "...")]
        public class MyService : MyInterface

    I have also created a .svc file and added the system.serviceModel settings to the config file. The goal is to migrate the asmx service to a WCF service. I get this error:

        The service class of type MyService both defines a ServiceContract and inherits a ServiceContract from type MyInterface. Contract inheritance can only be used among interface types. If a class is marked with ServiceContractAttribute, it must be the only type in the hierarchy with ServiceContractAttribute.

    The asmx service is still working fine; only the .svc is giving me issues. My question is how to fix that. MyInterface is an interface, so I do not see what the problem is or why I get the error at all. Note that I do not want to change MyInterface, because it is auto-generated by svcutil from my WSDL schema and I do not want this interface to be edited manually. The whole idea is to have the service types automatically generated from WSDL and to save my team the effort of manual editing. Any help is appreciated.
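
    The error message is stating the rule directly: when the contract is the svcutil-generated interface, the implementing class must not carry its own [ServiceContract]; it only implements the interface, and service-level settings go on [ServiceBehavior] instead. A sketch using the names from the snippets above (the generated interface stays untouched):

        using System.ServiceModel;

        // [ServiceContract(Namespace = "...")]   <-- remove this from the class;
        // the contract attribute already lives on the generated MyInterface.
        [ServiceBehavior]                          // optional: service-level behaviors
        public class MyService : MyInterface
        {
            // implement the operations declared by MyInterface here
        }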

    Read the article
