Search Results

Search found 587 results on 24 pages for 'mapper'.

Page 17/24 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • AutoMapper How to map nested object from an ObjectId

    - by RayMartinsHair
    I am trying to map the ReferralContract.AssessmentId property to Referral.Assessment.Id. The code below works, but I am sure there is a cleaner way to do it. Please tell me this is so ;-)

        // Destination classes
        public class Referral
        {
            public Referral() { Assessment = new Assessment(); }
            public int Id { get; set; }
            public Assessment Assessment { get; set; }
        }

        public class Assessment
        {
            public int Id { get; set; }
        }

        // Source class
        public class ReferralContract
        {
            public int Id { get; set; }
            public int AssessmentId { get; set; }
        }

    The AutoMapper mapping I am using is:

        Mapper.CreateMap<ReferralContract, Referral>()
            .ForMember(x => x.Assessment, opt => opt.MapFrom(scr => new Assessment { Id = scr.AssessmentId }));
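    One commonly suggested cleaner alternative, sketched here against the classic static Mapper API used in the question (untested against the poster's project): map the contract to Assessment once, then let the nested member reuse that map instead of newing up an Assessment inline.

        // Teach AutoMapper how to turn a ReferralContract into an Assessment...
        Mapper.CreateMap<ReferralContract, Assessment>()
            .ForMember(dest => dest.Id, opt => opt.MapFrom(src => src.AssessmentId));

        // ...then point the nested member at the source object itself so the map above is reused.
        Mapper.CreateMap<ReferralContract, Referral>()
            .ForMember(dest => dest.Assessment, opt => opt.MapFrom(src => src));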

    Read the article

  • SQL Alchemy related Objects Error

    - by alex
        from sqlalchemy.orm import relation, backref
        from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey, Date, Sequence
        from sqlalchemy.ext.declarative import declarative_base

        Base = declarative_base()

        class GUI_SCENARIO(Base):
            __tablename__ = 'GUI_SCENARIO'
            Scenario_ID = Column(Integer, primary_key=True)
            Definition_Date = Column(Date)
            guiScenarioDefinition = relation('GUI_SCENARIO_DEFINITION',
                order_by='GUI_SCENARIO_DEFINITION.Scenario_Definition_ID', backref='guiScenario')

            def __init__(self, Scenario_ID=None, Definition_Date=None):
                self.Scenario_ID = Scenario_ID
                self.Definition_Date = Definition_Date

        class GUI_SCENARIO_DEFINITION(Base):
            __tablename__ = 'GUI_SCENARIO_DEFINITION'
            Scenario_Definition_ID = Column(Integer, Sequence('Scenario_Definition_ID_SEQ'), primary_key=True)
            Scenario_FK = Column(Integer, ForeignKey('GUI_SCENARIO.Scenario_ID'))
            Definition_Date = Column(Date)
            guiScenario = relation(GUI_SCENARIO, backref=backref('guiScenarioDefinition', order_by=Scenario_Definition_ID))

            def __init__(self, Scenario_FK, Definition_Date):
                self.Scenario_FK = Scenario_FK
                self.Definition_Date = Definition_Date

            guiScenario = relation(GUI_SCENARIO, backref=backref('guiScenarioDefinition', order_by=Scenario_Definition_ID))

        tableNameScenario = "GUI_SCENARIO"
        scenarioClass = getattr(MappingTablesScenario, tableNameScenario)
        tableScenario = Table(tableNameScenario, meta, autoload=True)
        mapper(scenarioClass, tableScenario)

        scenarioName = scenarioDefinition.name
        scenarioDefinitionDate = datetime.today()
        newScenario = MappingTablesScenario.GUI_SCENARIO(scenarioName, scenarioDefinitionDate)
        print newScenario.guiScenarioDefinition

    If I try to get the objects related to a scenario object, I always get this error:

        AttributeError: 'GUI_SCENARIO' object has no attribute 'guiScenarioDefinition'

    Does anyone know why I get this error?

    Read the article

  • Approaches to use Repositories (w/ StructureMap) with AutoMapper?

    - by jacko
    Any idea how I can tell AutoMapper to resolve a TypeConverter constructor argument using StructureMap? I.e. we have this:

        private class GuidToContentProviderConverter : TypeConverter<Guid, ContentProvider>
        {
            private readonly IContentProviderRepository _repository;

            public GuidToContentProviderConverter(IContentProviderRepository repository)
            {
                _repository = repository;
            }

            protected override ContentProvider ConvertCore(Guid contentProviderId)
            {
                return _repository.Get(contentProviderId);
            }
        }

    And in the AutoMapper registration:

        Mapper.CreateMap<Guid, ContentProvider>().ConstructUsing<GuidToContentProviderConverter>();

    However, this generates a compile-time error about constructor arguments. Any ideas? Or ideas for other approaches to using AutoMapper to hydrate domain objects from view model IDs using a repository?
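    One approach worth sketching (assuming the classic static AutoMapper API and StructureMap's ObjectFactory): register the class as a type converter with ConvertUsing rather than ConstructUsing, and tell AutoMapper to build its services through the container so the repository gets injected.

        // Let AutoMapper ask StructureMap for converter instances, then register the converter by type.
        Mapper.Initialize(cfg =>
        {
            cfg.ConstructServicesUsing(type => ObjectFactory.GetInstance(type));
            cfg.CreateMap<Guid, ContentProvider>().ConvertUsing<GuidToContentProviderConverter>();
        });

    Note the converter would also need to be visible to the container, so the private nested class in the snippet above may have to become public.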

    Read the article

  • Hadoop on windows server

    - by Luca Martinetti
    Hello, I'm thinking about using Hadoop to process large text files on my existing Windows 2003 servers (about 10 quad-core machines with 16 GB of RAM). The questions are: Is there any good tutorial on how to configure a Hadoop cluster on Windows? What are the requirements? Java + Cygwin + sshd? Anything else? Does HDFS play nice on Windows? I'd like to use Hadoop in streaming mode. Any advice, tool or trick for developing my own mappers/reducers in C#? What do you use for submitting and monitoring the jobs? Thanks
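    On the C# streaming point: Hadoop Streaming talks to mappers and reducers over stdin/stdout, so any console executable works. A minimal sketch of what a C# streaming mapper could look like (word count is used purely as a placeholder job):

        using System;

        class StreamingMapper
        {
            static void Main()
            {
                string line;
                // Hadoop Streaming feeds input records on stdin, one per line.
                while ((line = Console.ReadLine()) != null)
                {
                    var words = line.Split(new[] { ' ', '\t' }, StringSplitOptions.RemoveEmptyEntries);
                    foreach (var word in words)
                    {
                        // Output format expected by streaming: key<TAB>value
                        Console.WriteLine("{0}\t1", word);
                    }
                }
            }
        }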

    Read the article

  • Multiple lines of text to a single map

    - by steven
    I've been trying to use Hadoop to send N lines at a time to a single map call. I don't require the lines to be split already. I've tried NLineInputFormat, however that sends N lines of text from the data to each mapper one line at a time [giving up after the Nth line]. I have tried to set the option, and it still only takes N lines of input, sending them one line at a time to each map:

        job.setInt("mapred.line.input.format.linespermap", 10);

    I've found a mailing list recommending that I override LineRecordReader::next, however that is not so simple, as the internal data members are all private. I've just checked the source for NLineInputFormat and it hard-codes LineReader, so overriding will not help. Also, by the way, I'm using Hadoop 0.18 for compatibility with the Amazon EC2 MapReduce.

    Read the article

  • Having difficulty understanding MapReduce

    - by mahesh
    Hi all, I have seen the link below, which is about getting started with MapReduce in Python: http://code.google.com/p/appengine-mapreduce/wiki/GettingStartedInPython But I am still not able to understand how it works. I am executing the code below but cannot understand what exactly is happening.

    mapreduce.yaml:

        mapreduce:
        - name: Testmapper
          mapper:
            input_reader: mapreduce.input_readers.DatastoreInputReader
            handler: main.process
            params:
            - name: entity_kind
              default: main.userDetail

    mapreduce/main.py:

        #some code
        class userDetail(db.Model):
            name = db.StringProperty()

        #some code
        def process(u):
            u.name = "mahesh"
            yield op.db.Put(u)

    I am executing this and it gives me status = success on the status page, but I am not able to understand what happened. The main thing I want to do with MapReduce is to search or count records from an entity. Can anyone please help me? Thanks in advance

    Read the article

  • What's the best way to count unique visitors with Hadoop?

    - by beagleguy
    Hey all, just getting started on Hadoop and curious what the best way in MapReduce would be to count unique visitors if your log files looked like this...

        DATE        siteID  action    username
        05-05-2010  siteA   pageview  jim
        05-05-2010  siteB   pageview  tom
        05-05-2010  siteA   pageview  jim
        05-05-2010  siteB   pageview  bob
        05-05-2010  siteA   pageview  mike

    ...and you wanted to find the unique visitors for each site? I was thinking the mapper would emit siteID \t username, and the reducer would keep a set() of the unique usernames per key and then emit the length of that set. However, that could mean storing millions of usernames in memory, which doesn't seem right. Anyone have a better way? I'm using Python streaming, by the way. Thanks

    Read the article

  • Is there a dictionary about common programming vocabulary?

    - by _simon_
    When I need a name for a new class that extends the behaviour of an existing class, I usually have a hard time coming up with one. For example, if I have a class MyClass, then the new class could be named something like MyClassAdapter, MyClassCalculator, MyClassDispatcher, MyClassParser,... The new name should of course represent the behaviour of the class and would ideally be the same as the design pattern in which it is used (Adapter, Decorator, Factory,...). But since we don't overuse design patterns, this is not always the solution :) So, do you know of a dictionary or a list of common words that we can use to represent the behaviour of a class, containing a short description of the expected behaviour? Some examples: replicator, shadow, token, acceptor, worker, mapper, driver, bucket, socket, validator, wrapper, parser, verifier,... You could also look at this list as a cheat sheet of metaphors with which you can better understand your problem domain.

    Read the article

  • How to perform ant path mapping in war task?

    - by eljenso
    I have several JAR file pattern sets, like:

        <patternset id="common.jars">
            <include name="external/castor-1.1.jar" />
            <include name="external/commons-logging-1.2.6.jar" />
            <include name="external/itext-2.0.4.jar" />
            ...
        </patternset>

    I also have a war task containing a lib element: ... Like this, however, I end up with a WEB-INF/lib containing the subdirectories from my patterns:

        WEB-INF/lib/external/castor-1.1.jar
        WEB-INF/lib/external/...

    Is there any way to flatten this, so the JAR files appear top-level under WEB-INF/lib, regardless of the directories specified in the patterns? I looked at mapper, but it seems you cannot use one inside lib.

    Read the article

  • SQLAlchemy - how to map against a read-only (or calculated) property

    - by Jeff Peck
    I'm trying to figure out how to map against a simple read-only property and have that property fire when I save to the database. A contrived example should make this more clear. First, a simple table:

        meta = MetaData()
        foo_table = Table('foo', meta,
            Column('id', String(3), primary_key=True),
            Column('description', String(64), nullable=False),
            Column('calculated_value', Integer, nullable=False),
        )

    What I want to do is set up a class with a read-only property that will insert into the calculated_value column for me when I call session.commit()...

        import datetime

        class Foo(object):
            def __init__(self, id, description):
                self.id = id
                self.description = description

            @property
            def calculated_value(self):
                self._calculated_value = datetime.datetime.now().second + 10
                return self._calculated_value

    According to the SQLAlchemy docs, I think I am supposed to map this like so:

        mapper(Foo, foo_table, properties={
            'calculated_value': synonym('_calculated_value', map_column=True)
        })

    The problem with this is that _calculated_value is None until you access the calculated_value property. It appears that SQLAlchemy is not calling the property on insertion into the database, so I'm getting a None value instead. What is the correct way to map this so that the result of the "calculated_value" property is inserted into the foo table's "calculated_value" column?

    Read the article

  • ASP.NET MVC does not add ModelError when invoking from unit test

    - by Tomas Lycken
    I have a model item

        public class EntryInputModel
        {
            ...

            [Required(ErrorMessage = "Description is required.", AllowEmptyStrings = false)]
            public virtual string Description { get; set; }
        }

    and a controller action

        public ActionResult Add([Bind(Exclude = "Id")] EntryInputModel newEntry)
        {
            if (ModelState.IsValid)
            {
                var entry = Mapper.Map<EntryInputModel, Entry>(newEntry);
                repository.Add(entry);
                unitOfWork.SaveChanges();
                return RedirectToAction("Details", new { id = entry.Id });
            }
            return RedirectToAction("Create");
        }

    When I create an EntryInputModel in a unit test, set the Description property to null and pass it to the action method, I still get ModelState.IsValid == true, even though I have debugged and verified that newEntry.Description == null. Why doesn't this work?
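    A likely explanation, with a hedged sketch of one way to test around it: DataAnnotations validation runs during model binding, and calling the action method directly in a unit test bypasses binding entirely, so ModelState is never populated. If that is the cause, the test can run validation itself before invoking the action (the controller and variable names below are illustrative):

        // Requires System.ComponentModel.DataAnnotations, System.Collections.Generic and System.Linq.
        var newEntry = new EntryInputModel { Description = null };

        var results = new List<ValidationResult>();
        Validator.TryValidateObject(newEntry, new ValidationContext(newEntry, null, null), results, true);
        foreach (var validationResult in results)
        {
            var key = validationResult.MemberNames.FirstOrDefault() ?? string.Empty;
            controller.ModelState.AddModelError(key, validationResult.ErrorMessage);
        }

        var result = controller.Add(newEntry);   // ModelState.IsValid should now be false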

    Read the article

  • PIG doesn't read my custom InputFormat

    - by Simon Guo
    I have a custom MyInputFormat that is supposed to deal with the record boundary problem for multi-line inputs. But when I put MyInputFormat into my UDF load function, as follows:

        public class EccUDFLogLoader extends LoadFunc {
            @Override
            public InputFormat getInputFormat() {
                System.out.println("I am in getInputFormat function");
                return new MyInputFormat();
            }
        }

        public class MyInputFormat extends TextInputFormat {
            public RecordReader createRecordReader(InputSplit inputSplit, JobConf jobConf) throws IOException {
                System.out.println("I am in createRecordReader");
                // MyRecordReader is supposed to handle the record boundary
                return new MyRecordReader((FileSplit)inputSplit, jobConf);
            }
        }

    For each mapper, it prints out "I am in getInputFormat function" but not "I am in createRecordReader". I am wondering if anyone can provide a hint on how to hook up my custom MyInputFormat to Pig's UDF loader? Many thanks. I am using Pig on Amazon EMR.

    Read the article

  • How to configure AutoMapper in a non-ASP.NET application?

    - by alex
    I'm using AutoMapper in a number of projects within my solution. These projects may be deployed independently, across multiple servers. The AutoMapper documentation says: "If you're using the static Mapper method, configuration only needs to happen once per AppDomain. That means the best place to put the configuration code is in application startup, such as the Global.asax file for ASP.NET applications." Whilst some of the projects will be ASP.NET, most of them are class libraries / Windows services. Where should I be configuring my mappings in this case?
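    A sketch of the usual answer (class and method names here are illustrative): "once per AppDomain" means each deployed host, not each library, triggers the configuration. Put the CreateMap calls in a single bootstrapper in a shared assembly and call it from every entry point: Main() for a console app, OnStart() for a Windows service, Application_Start() for the web projects.

        public static class MappingBootstrapper
        {
            private static bool _configured;

            // Safe to call from every host's startup path; the work only happens once.
            public static void Configure()
            {
                if (_configured) return;
                _configured = true;

                Mapper.CreateMap<SourceDto, DomainType>();   // placeholder maps
                // ... remaining CreateMap calls ...
            }
        }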

    Read the article

  • Same table NHibernate mapping

    - by mircea .
    How can I go about defining a same-table relation mapping (mapping by code) using NHibernate? For instance, let's say I have a class:

        public class Structure
        {
            public int structureId;
            public string structureName;
            public Structure rootStructure;
        }

    that references the same class as rootStructure.

        mapper.Class<Structure>(m =>
        {
            m.Lazy(true);
            m.Id(u => u.structureId, map => { map.Generator(Generators.Identity); });
            m.Property(c => c.structureName);
            m.? // Same table mapping
        });

    Thanks
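    A sketch of the missing piece, assuming the standard NHibernate mapping-by-code API (the column name is made up): a self-reference is simply a many-to-one back onto the same entity, pointing at a parent-key column of the same table.

        mapper.Class<Structure>(m =>
        {
            m.Lazy(true);
            m.Id(u => u.structureId, map => { map.Generator(Generators.Identity); });
            m.Property(c => c.structureName);

            // Same-table relation: many-to-one from a Structure to its root Structure.
            m.ManyToOne(c => c.rootStructure, map =>
            {
                map.Column("RootStructureId");   // FK column on the Structure table (illustrative name)
                map.Lazy(LazyRelation.Proxy);
            });
        });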

    Read the article

  • Can I automap a tree hierarchy with Fluent NHibernate?

    - by NakChak
    Is it possible to auto map a simple nested object structure? Something like this:

        public class Employee : Entity
        {
            public Employee()
            {
                this.Manages = new List<Employee>();
            }

            public virtual string FirstName { get; set; }
            public virtual string LastName { get; set; }
            public virtual bool IsLineManager { get; set; }
            public virtual Employee Manager { get; set; }
            public virtual IList<Employee> Manages { get; set; }
        }

    It causes the following error at run time:

        Repeated column in mapping for collection: SharpKtulu.Core.Employee.Manages column: EmployeeFk

    Is it possible to automap this sort of structure, or do I have to override the auto mapper for this sort of structure?

    Read the article

  • Optional attribute values in MappedField

    - by David Brooks
    I'm new to Scala and Lift, coming from a slightly odd background in PLT Scheme. I've done a quick search on this topic and found lots of questions but no answers. I'm probably looking in the wrong place. I've been working my way through tutorials on using Mapper to create database-backed objects, and I've hit a stumbling block: what types should be used to store optional attribute values. For example, a simple ToDo object might comprise a title and an optional deadline (e.g. http://rememberthemilk.com). The former would be a MappedString, but the latter could not be a MappedDateTime since the type constraints on the field require, say, defaultValue to return a Date (rather than a Date or null/false/???). Is an underlying NULL handled by the MappedField subclasses? Or are there optional equivalents to things like MappedInt, MappedString, MappedDateTime that allow the value to be NULL in the database? Or am I approaching this in the wrong way?

    Read the article

  • Can I Automap a tree hierarchy with Fluent NHibernate?

    - by NakChak
    Is it possible to auto map a simple nested object structure? Something like this:

        public class Employee : Entity
        {
            public Employee()
            {
                this.Manages = new List<Employee>();
            }

            public virtual string FirstName { get; set; }
            public virtual string LastName { get; set; }
            public virtual bool IsLineManager { get; set; }
            public virtual Employee Manager { get; set; }
            public virtual IList<Employee> Manages { get; set; }
        }

    It causes the following error at run time:

        Repeated column in mapping for collection: SharpKtulu.Core.Employee.Manages column: EmployeeFk

    Is it possible to automap this sort of structure, or do I have to override the auto mapper for this sort of structure?
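    A sketch of one common fix, assuming Fluent NHibernate's automapping override hook (the column name is illustrative): both ends of the parent/child relationship are being mapped to their own foreign-key column, so an override can pin them to a single column and mark the collection as the inverse side.

        public class EmployeeMappingOverride : IAutoMappingOverride<Employee>
        {
            public void Override(AutoMapping<Employee> mapping)
            {
                mapping.References(x => x.Manager).Column("ManagerId");
                mapping.HasMany(x => x.Manages)
                       .KeyColumn("ManagerId")
                       .Inverse();              // the Manager reference owns the foreign key
            }
        }

    The override is picked up when building the automapping model, e.g. with AutoMap.AssemblyOf<Employee>(...).UseOverridesFromAssemblyOf<EmployeeMappingOverride>().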

    Read the article

  • SQLAlchemy declarative syntax with autoload in Pylons

    - by Juliusz Gonera
    I would like to use autoload to use an existing database. I know how to do it without declarative syntax (model/__init__.py):

        def init_model(engine):
            """Call me before using any of the tables or classes in the model"""
            t_events = Table('events', Base.metadata, schema='events', autoload=True, autoload_with=engine)
            orm.mapper(Event, t_events)
            Session.configure(bind=engine)

        class Event(object):
            pass

    This works fine, but I would like to use declarative syntax:

        class Event(Base):
            __tablename__ = 'events'
            __table_args__ = {'schema': 'events', 'autoload': True}

    Unfortunately, this way I get:

        sqlalchemy.exc.UnboundExecutionError: No engine is bound to this Table's MetaData.
        Pass an engine to the Table via autoload_with=<someengine>, or associate the MetaData
        with an engine via metadata.bind=<someengine>

    The problem here is that I don't know where to get the engine from (to use it in autoload_with) at the stage of importing the model (it's available in init_model()). I tried adding meta.Base.metadata.bind(engine) to environment.py, but it doesn't work. Has anyone found an elegant solution?

    Read the article

  • Webservice and ORM Framework?

    - by Sebastian
    Does anybody know a good web framework that includes an ORM mapper and allows straightforward implementation of web services? I'm looking for a framework written in PHP or C++, with the following features (not all of them required; some will do nicely):

        - data definition in one place, used by both the database and the web service
        - WSDL generation
        - XML output / JSON output
        - boilerplate code generation

    So what I would like is a framework that lets me specify the objects and the web service functions on those objects, and then generate everything that is required, leaving me to fill in the business logic (connecting the database to the web service). Anything like that out there? Background information on why I need this: I'm looking into creating a web project. The client is a rich web application that fetches all its data using AJAX; it will be completely custom made, using only a low-level JavaScript library. The server back end is supposed to serve static content and JavaScript (basically the rich web application) and to provide a RESTful web service API (which I would like to implement using the aforementioned framework).

    Read the article

  • OOP App Architecture: Which layer does a lazy loader sit in?

    - by JW
    I am planning the implementation of an Inheritance Mapper pattern for an application component (http://martinfowler.com/eaaCatalog/inheritanceMappers.html). One feature it needs is for a domain object to reference a large list of aggregated items (10,000 other domain objects), so I need some kind of lazy-loading collection to be passed out of the aggregate root domain object to other domain objects. To keep my (PHP) model scripts organised I am storing them in two folders:

        MyComponent\
            controllers\
            models\
                domain\   <- domain objects, DDD repository, DDD factory
                daccess\  <- PoEAA data mappers, SQL queries etc
            views\

    But now I am racking my brains wondering where my lazy-loading collection sits. Any suggestions / justifications for putting it in one place over another? (DDD = Domain-Driven Design, Eric Evans' book; PoEAA = Patterns of Enterprise Application Architecture, Martin Fowler's book.)

    Read the article

  • Can I configure AutoMapper to read from custom column names when mapping from IDataReader?

    - by rohancragg
    Pseudo-code for the mapping configuration (as below) is not possible, since the lambda only lets us access the type IDataReader, whereas when actually mapping, AutoMapper will reach into each "cell" of each IDataRecord while IDataReader.Read() == true:

        var mappingConfig = Mapper.CreateMap<IDataReader, IEnumerable<MyDTO>>();
        mappingConfig.ForMember(
            destination => destination.???,
            options => options.MapFrom(source => source.???));

    Can anyone think of a way to do this using AutoMapper configuration at runtime, or just some other dynamic approach that meets the requirement below? The requirement is to support any incoming IDataReader which may have column names that don't match the property names of MyDTO, and there is no naming convention I can rely on. Instead, we'll ask the user at runtime to cross-reference the expected column names with the actual column names found in the IDataReader via IDataReader.GetSchemaTable().
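    For the "some other dynamic approach" angle, a hedged sketch (not AutoMapper; method and variable names are illustrative): drive a small reflection-based mapper from the user-supplied cross-reference of property name to actual column name.

        // Requires System, System.Collections.Generic, System.Data, System.Linq and System.Reflection.
        public static IEnumerable<MyDTO> MapReader(IDataReader reader, IDictionary<string, string> columnMap)
        {
            var properties = typeof(MyDTO).GetProperties().Where(p => p.CanWrite).ToList();
            while (reader.Read())
            {
                var dto = new MyDTO();
                foreach (var property in properties)
                {
                    string columnName;
                    if (!columnMap.TryGetValue(property.Name, out columnName)) continue;   // property not cross-referenced

                    var value = reader[columnName];
                    if (value != DBNull.Value)
                        property.SetValue(dto, Convert.ChangeType(value, property.PropertyType), null);
                }
                yield return dto;
            }
        }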

    Read the article

  • AutoMapper and Linq expression.

    - by Raffaeu
    I am exposing the DTOs generated by AutoMapper to my WCF services. I would like to offer something like this from WCF:

        IList<PersonDto> GetPersonByQuery(Expression<Func<PersonDto, bool>> predicate);

    Unfortunately I need back an expression tree over Person, as my DAL doesn't know the DTO. I am trying this without success:

        var func = new Func<Person, bool>(x => x.FirstName.Contains("John"));
        var funcDto = Mapper.Map<Func<Person, bool>, Func<PersonDto, bool>>(func);
        Console.WriteLine(func.ToString());
        Console.WriteLine(funcDto.ToString());

    The error that I get is:

        ----> System.ArgumentException : Type 'System.Func`2[TestAutoMapper.PersonDto,System.Boolean]' does not have a default constructor

    Do you have any suggestions?
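    A hedged sketch of one direction (hand-rolled, not an AutoMapper feature): Mapper.Map treats the delegate as an object to construct, which is why it complains about a default constructor. Since the DAL needs an Expression<Func<Person, bool>>, the DTO predicate has to be rewritten as an expression tree, for example with an ExpressionVisitor that swaps the parameter type and re-binds members by name (this assumes the property names on PersonDto and Person match).

        using System.Linq.Expressions;

        class DtoToEntityRebinder : ExpressionVisitor
        {
            private readonly ParameterExpression _entityParameter;

            public DtoToEntityRebinder(ParameterExpression entityParameter)
            {
                _entityParameter = entityParameter;
            }

            protected override Expression VisitParameter(ParameterExpression node)
            {
                return _entityParameter;   // replace the PersonDto parameter with the Person one
            }

            protected override Expression VisitMember(MemberExpression node)
            {
                if (node.Expression is ParameterExpression)
                    return Expression.Property(_entityParameter, node.Member.Name);   // same-named property on Person
                return base.VisitMember(node);
            }
        }

        // usage sketch:
        Expression<Func<PersonDto, bool>> dtoPredicate = x => x.FirstName.Contains("John");
        var parameter = Expression.Parameter(typeof(Person), "p");
        var body = new DtoToEntityRebinder(parameter).Visit(dtoPredicate.Body);
        var entityPredicate = Expression.Lambda<Func<Person, bool>>(body, parameter);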

    Read the article

  • Updating Many-to-Many relationship with LinqToSQL

    - by Noffie
    If I had, for example, a many-to-many mapping table called "RolesToUsers" between a Users and a Roles table, here is how I do it:

        // DataContext is db, usr is a User entity
        // newUserRolesMappings is a collection with the desired new mappings, probably
        // derived by looking at selections in a checkbox list of Roles on a User Edit page
        db.RolesToUsers.DeleteAllOnSubmit(usr.RolesToUsers);
        usr.RolesToUsers.Clear();
        usr.RolesToUsers.AddRange(newUserRolesMappings);

    I used the SQL profiler once, and this seems to generate very intelligent SQL: it will only drop the rows which are no longer in the mapping relationship, and only add rows which did not already exist in the relationship. It doesn't blindly do a complete clearing and reconstruction of the relationship, as I thought it would. The internet is surprisingly quiet on the subject, and the query "LinqToSQL many-to-many" mostly just turns up articles about how the LinqToSQL data mapper doesn't "support" it very well. How does everyone else update many-to-many relationships with LinqToSQL?

    Read the article

  • Pass property to access using .NET

    - by BitFiddler
    I'm fairly sure this is possible, but what I want to do is have a generic method where I can pass in an object along with an Expression that tells the method which property to use in its logic. Can anyone get me started on the syntax for something like this? Essentially what I would like to write is something like:

        Dim firstNameMapper As IColumnMapper = New ColumnMapper(Of Author)(Function(x) x.FirstName)
        Dim someAuthorObject As New Author()
        firstNameMapper.Map("Richard", someAuthorObject)

    Now the mapper object would know to set the FirstName property to "Richard". Using a Function here won't work, I know; I'm just trying to give an idea of what I'm working towards. Thanks for any help!
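    A sketch (in C#, since the idea translates directly to VB.NET; the class shape mirrors the question and is otherwise made up) of how such a mapper could pull the property out of an expression and set it later:

        using System;
        using System.Linq.Expressions;
        using System.Reflection;

        public class ColumnMapper<T, TProperty>
        {
            private readonly PropertyInfo _property;

            public ColumnMapper(Expression<Func<T, TProperty>> propertySelector)
            {
                // For a selector like x => x.FirstName the body is a MemberExpression naming the property.
                var member = (MemberExpression)propertySelector.Body;
                _property = (PropertyInfo)member.Member;
            }

            public void Map(TProperty value, T target)
            {
                _property.SetValue(target, value, null);
            }
        }

        // usage sketch:
        // var firstNameMapper = new ColumnMapper<Author, string>(x => x.FirstName);
        // firstNameMapper.Map("Richard", someAuthorObject);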

    Read the article

  • Many-to-many relationship on same table with association object

    - by Nicholas Knight
    Related (for the no-association-object use case): http://stackoverflow.com/questions/1889251/sqlalchemy-many-to-many-relationship-on-a-single-table

    Building a many-to-many relationship is easy. Building a many-to-many relationship on the same table is almost as easy, as documented in the above question. Building a many-to-many relationship with an association object is also easy. What I can't seem to find is the right way to combine association objects and many-to-many relationships with the left and right sides being the same table. So, starting from the simple, naïve, and clearly wrong version that I've spent forever trying to massage into the right version:

        t_groups = Table('groups', metadata,
            Column('id', Integer, primary_key=True),
        )

        t_group_groups = Table('group_groups', metadata,
            Column('parent_group_id', Integer, ForeignKey('groups.id'), primary_key=True, nullable=False),
            Column('child_group_id', Integer, ForeignKey('groups.id'), primary_key=True, nullable=False),
            Column('expires', DateTime),
        )

        mapper(Group_To_Group, t_group_groups, properties={
            'parent_group': relationship(Group),
            'child_group': relationship(Group),
        })

    What's the right way to map this relationship?

    Read the article
