Search Results

Search found 3534 results on 142 pages for 'uv mapping'.


  • Routing WCF Traffic Based on URI Domain Requested

    - by Ian Patrick Hughes
    Is there a way to route traffic to a target WCF service file based on the URL domain requested? Basically, I have a single WCF RESTful services project with 3 service files offering different endpoints. It's hosted on a single IIS6 site looking for multiple host header values on port 80. I want to route traffic to different service files depending on whether the requester is asking for www.site1.com, www.site2.com, or www.site3.com. It seems like the sort of thing I would use a Global.asax or an HTTP handler for, but I am not sure, since this is a regular WCF Service Application. Even though I am on IIS6 for this project, I don't mind using a URL rewriter and wildcard mapping if I have to. I have admin rights on the balanced servers where this will reside; I just want to know if there is a common/best practice before I start hacking my way around this.
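
    For illustration, a minimal sketch of the HTTP-module approach, assuming the three endpoints live in Site1.svc, Site2.svc, and Site3.svc at the application root (the module and file names are placeholders, not from the question):

        using System;
        using System.Web;

        // Sketch: pick a WCF service file by inspecting the host header
        // and rewriting the request path before it reaches the handler.
        public class HostRoutingModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += delegate(object sender, EventArgs e)
                {
                    HttpContext ctx = ((HttpApplication)sender).Context;
                    string host = ctx.Request.Url.Host.ToLowerInvariant();

                    if (host.Contains("site1")) ctx.RewritePath("/Site1.svc");
                    else if (host.Contains("site2")) ctx.RewritePath("/Site2.svc");
                    else if (host.Contains("site3")) ctx.RewritePath("/Site3.svc");
                };
            }

            public void Dispose() { }
        }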

    Read the article

  • @OneToOne and @JoinColumn, auto delete null entity, doable?

    - by smallufo
    I have two entities with the following JPA annotations:

        @Entity
        @Table(name = "Owner")
        public class Owner implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name = "id")
            private long id;

            @OneToOne(fetch = FetchType.EAGER, cascade = CascadeType.ALL)
            @JoinColumn(name = "Data_id")
            private Data data;
        }

        @Entity
        @Table(name = "Data")
        public class Data implements Serializable {
            @Id
            private long id;
        }

    Owner and Data have a one-to-one mapping; the owning side is Owner. The problem occurs when I execute:

        owner.setData(null);
        ownerDao.update(owner);

    The Owner table's Data_id becomes null; that's correct. But the Data row is not deleted automatically. I have to write another DataDao, and another service layer to wrap the two actions (ownerDao.update(owner); dataDao.delete(data);). Is it possible to have the Data row deleted automatically when the owning Owner sets it to null?

    Read the article

  • Table-per-type inheritance insert problem

    - by gzak
    I followed this article on making a table-per-type inheritance model for my entities, but I get the following error when I try to add an instance of a subclass to the database. Here is how I create the subtype:

        var cust = db.Users.CreateObject<Customer>(); // Customer inherits User
        db.Users.AddObject(cust);
        db.SaveChanges();

    That last call fails with:

        "A value shared across entities or associations is generated in more than one location. Check that mapping does not split an EntityKey to multiple store-generated columns."

    with the following inner exception:

        "An item with the same key has already been added."

    Any ideas on what I could be missing?

    Read the article

  • rails g migration error

    - by user1506183
    I don't know what to do. When I try to run the command

        $ rails g migration vacancy

    it gives me this error:

        invoke  active_record
        /home/proger/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/psych.rb:203:in `parse': (<unknown>): mapping values are not allowed in this context at line 21 column 11 (Psych::SyntaxError)
            from /home/proger/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/psych.rb:203:in `parse_stream'
            from /home/proger/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/psych.rb:151:in `parse'
        ...

    There are many more lines in the stack trace. I don't know how to fix this. Thanks

    Read the article

  • Mixing JPA annotations and XML configuration

    - by HDave
    I have a fairly large (new) project in which we have annotated many domain classes with JPA mappings. Now it is time to implement many named queries -- some entities may have as many as 15-20 named queries. I am thinking that writing these named queries in annotations will clutter the source files, and therefore am considering putting them in XML mapping files. Is this possible? More importantly, is this reasonable? Are there better approaches? How is this done?

    Read the article

  • Unnecessary 'else' statement

    - by Vitalii Fedorenko
    As you know, in Eclipse you can turn on the "Unnecessary 'else' statement" check, which triggers on an if-then-else with an early return. From my experience, there are two common situations where such a statement is used:

    1) Pre-check:

        if (validate(arg1)) {
            return false;
        }
        doLotOfStuff();

    2) Post-check:

        doLotOfStuff();
        if (condition) {
            return foo;
        } else {
            return bar;
        }

    In the second case, if the check is on, Eclipse will suggest you change the code to:

        doLotOfStuff();
        if (condition) {
            return foo;
        }
        return bar;

    However, I think that the return with an else statement is more readable, as it maps directly onto the business logic. So I am curious whether this "Unnecessary 'else' statement" convention is widespread, or whether the explicit else is preferable.

    Read the article

  • Removing the Default Wrap Character from All Records

    - by aceinthehole
    I am using BizTalk 2009 and I have a flat file that is similar to the following:

        "0162892172","TIM ","LastName ","760 "," ","COMANCHE ","LN "
        "0143248282","GEORGE ","LastName ","625 "," ","ENID ","AVE "

    When I parse it and start mapping it, I need to get rid of the quotation marks. I have marked the Wrap Character attribute for the schema as a quotation mark, but it doesn't remove it when BizTalk is parsing the file. Is there an easy way to specify the removal of a wrap character, or am I going to have to run it through a scripting functoid every time? Also, I would like to be able to remove the trailing spaces as well, if at all possible.
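
    For what it's worth, if a scripting functoid does turn out to be necessary, the cleanup logic itself is small. A hedged C# sketch of a method body for such a functoid (the method name is an assumption, not BizTalk API):

        // Strips the wrap character and any leading/trailing spaces
        // from a single field value.
        public static string CleanField(string value)
        {
            if (string.IsNullOrEmpty(value))
            {
                return value;
            }
            return value.Trim().Trim('"').Trim();
        }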

    Read the article

  • Spring-Hibernate: How to submit a form when the object has one-to-many relations?

    - by Czar
    Hi, I have a form that changes the properties of my object CUSTOMER. Each customer has related ORDERS. The ORDERS table has a column customer_id which is used for the mapping. All works so far; I can read customers without any problem. But when I, for example, change the name of the CUSTOMER in the form (which does NOT show the orders), after saving the name is updated, but all relations in the ORDERS table are set to NULL (the customer_id for the items is set to NULL). How can I keep the relationship working? Thanks

    Read the article

  • Making OR/M loosely coupled and abstracted away from other layers.

    - by Genuine
    Hi all. In an n-tier architecture, the best place to put object-relational mapping (OR/M) code is in the data access layer. For example, database queries and updates can be delegated to a tool like NHibernate. Yet I'd like to keep all references to NHibernate within the data access layer and abstract dependencies away from the layers below or above it. That way, I can swap or plug in another OR/M tool (e.g. Entity Framework) or some other approach (e.g. plain vanilla stored procedure calls, mock objects) without causing compile-time errors or a major overhaul of the entire application. Testability is an added bonus. Could someone please suggest a wrapper (i.e. an interface or base class) or approach that would keep the OR/M loosely coupled and contained in one layer? Or point me to resources that would help? Thanks.
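
    For illustration, one common shape for such a wrapper is a repository interface defined outside the data access layer, with the NHibernate-specific implementation kept inside it. A minimal sketch (names are illustrative, not a full unit-of-work design):

        using System.Collections.Generic;
        using NHibernate;

        // Lives in a core/domain assembly; knows nothing about NHibernate.
        public interface IRepository<T> where T : class
        {
            T GetById(object id);
            IList<T> GetAll();
            void Add(T entity);
            void Remove(T entity);
        }

        // Lives in the data access layer; the only place ISession appears.
        public class NHibernateRepository<T> : IRepository<T> where T : class
        {
            private readonly ISession _session;

            public NHibernateRepository(ISession session)
            {
                _session = session;
            }

            public T GetById(object id) { return _session.Get<T>(id); }
            public IList<T> GetAll() { return _session.CreateCriteria<T>().List<T>(); }
            public void Add(T entity) { _session.Save(entity); }
            public void Remove(T entity) { _session.Delete(entity); }
        }

    An Entity Framework or stored-procedure implementation can then be swapped in behind the same interface, and tests can substitute an in-memory fake.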

    Read the article

  • Fluent nHibernate - How to map a non-key column on an association table?

    - by The Matt
    Taking an example that is provided on the Fluent NHibernate website, I need to extend it slightly: I need to add a 'Quantity' column to the StoreProduct table. How would I map this using NHibernate? An example mapping is provided for the given scenario, but I'm not sure how I would get the Quantity column to map:

        public class StoreMap : ClassMap<Store>
        {
            public StoreMap()
            {
                Id(x => x.Id);
                Map(x => x.Name);
                HasMany(x => x.Employee)
                    .Inverse()
                    .Cascade.All();
                HasManyToMany(x => x.Products)
                    .Cascade.All()
                    .Table("StoreProduct");
            }
        }
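
    For reference, the way this is commonly handled: a many-to-many with a payload column (like Quantity) stops being a pure HasManyToMany and is instead modeled as an entity in its own right. A hedged sketch, with class and property names assumed:

        // The join table becomes a first-class entity carrying Quantity.
        public class StoreProduct
        {
            public virtual int Id { get; set; }
            public virtual Store Store { get; set; }
            public virtual Product Product { get; set; }
            public virtual int Quantity { get; set; }
        }

        public class StoreProductMap : ClassMap<StoreProduct>
        {
            public StoreProductMap()
            {
                Table("StoreProduct");
                Id(x => x.Id);
                References(x => x.Store);
                References(x => x.Product);
                Map(x => x.Quantity);
            }
        }

    StoreMap would then map a HasMany of StoreProduct instead of the HasManyToMany of Products shown above.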

    Read the article

  • C++ library for Coordinate Transformation Matrices (CTM)?

    - by BastiBense
    I'm looking for a C++ library which allows for easy integration of Coordinate Transformation Matrices (CTM) in my application. You might know CTMs from PDF or PostScript. For one project we are using C++/Qt4 as a framework, which offers a QTransform class providing methods like .translate(double x, double y) or .rotate(double degrees). After doing some transformations, it would allow me to get all 6 CTM values, which I could feed into a PDF library or use as a transformation matrix in export files. Qt's API also allows for arbitrary mapping of polygons (QPolygon), rectangles (QRect), and other primitive data structures into transformed coordinate systems. So basically I'm looking for something similar to what Qt provides, but without needing Qt. I know I could do the matrix multiplications myself, but I'm not really interested in doing so, as I'm quite sure that someone has already solved this problem, so please no links to books or other guides on how to multiply matrices. Thanks!

    Read the article

  • SaaS Multi-tenancy Applications: How is data import/export/backup being implemented?

    - by Mark Redman
    How are applications providing import/export (or backups) of data in SaaS-based multi-tenancy applications, particularly single-database designs? Imports: keeping things simple, I think basic imports are useful, i.e. CSV to a spec (or a way of providing a mapping between CSV columns and fields in the database). Exports: in single-database designs I have seen XML exports and HTML (basic generated site) exports of data. I would assume that XML is the better option? How does one cater for relational data? Would you reference various things within the XML and provide documentation of the relationships, or let users figure this out? Are vendors providing an export/backup that can be imported back in/restored? Your comments are appreciated.
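
    On the relational-data question, one option often seen is embedding child rows inside their parent element, so the export carries the relationships structurally instead of by foreign-key reference. A minimal C# sketch (the Customer/Order types and property names are invented for illustration):

        using System.Collections.Generic;
        using System.Linq;
        using System.Xml.Linq;

        // Invented minimal types, for illustration only.
        class Order
        {
            public int Id;
            public decimal Total;
        }

        class Customer
        {
            public int Id;
            public string Name;
            public List<Order> Orders = new List<Order>();
        }

        static class Exporter
        {
            // Each customer element nests its own orders, so the relationship
            // travels with the data instead of via cross-referenced IDs.
            public static XElement ExportCustomers(IEnumerable<Customer> customers)
            {
                return new XElement("customers",
                    from c in customers
                    select new XElement("customer",
                        new XAttribute("id", c.Id),
                        new XElement("name", c.Name),
                        new XElement("orders",
                            from o in c.Orders
                            select new XElement("order",
                                new XAttribute("id", o.Id),
                                new XElement("total", o.Total)))));
            }
        }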

    Read the article

  • Adding a subdomain to my google app engine project?

    - by user246114
    Hi, I created a Google App Engine project. I just successfully mapped it to a new domain. The name of my project is "grape". So by default, it is published at http://www.grape.appspot.com. I mapped it to http://www.grape.com, which is terrific. Now I'd like to create a new App Engine project and have it mapped to http://api.grape.com. How do I go about doing this? I think it is possible; I'm just not sure where I would do this mapping. Since I own grape.com, I am hoping I can map a new project to it. The basic idea is to have one project responsible for the UI stuff, then a second project responsible just for a public API, which would be great. Thanks

    Read the article

  • Error 3032 during EF 4.0 validation

    - by Mohammadreza
    Hi guys. I have a table with a column called "rowguid" whose default value is newguid(). After I delete the generated property in the Entity Framework 4.0 designer, whenever I try to build or validate the model I receive the following error message:

        Error 3023: Problem in mapping fragments starting at line 1460: Column People.rowguid in table People must be mapped: It has no default value and is not nullable.

    It's an obvious error message, but the problem is that the column HAS a default value. Does anyone know what the problem is? Thanks.

    Read the article

  • How to access the backing field of an inherited class using fluent nhibernate

    - by Akk
    How do I set the access strategy in the mapping class to point to the inherited _photos field?

        public class Content
        {
            private IList<Photo> _photos;

            public Content()
            {
                _photos = new List<Photo>();
            }

            public virtual IEnumerable<Photo> Photos
            {
                get { return _photos; }
            }

            public virtual void AddPhoto() { ... }
        }

        public class Article : Content
        {
            public string Body { get; set; }
        }

    I am currently using the following to try to locate the backing field, but an exception is thrown because it cannot be found:

        public class ArticleMap : ClassMap<Article>
        {
            public ArticleMap()
            {
                HasManyToMany(x => x.Photos)
                    .Access.CamelCaseField(Prefix.Underscore); // _photos
                // ...
            }
        }

    I tried moving the backing field _photos directly into the Article class, and the access works. So how can I access the backing field of an inherited class?

    Read the article

  • fluent nhibernate not caching queries in asp.net mvc

    - by AWC
    I'm using Fluent NHibernate with ASP.NET MVC and I'm not seeing anything being cached when making queries against the database. I'm not currently using an L2 cache implementation. Should I see queries being cached without configuring an out-of-process L2 cache? Mappings are like this:

        Table("ApplicationCategories");
        Not.LazyLoad();
        Cache.ReadWrite().IncludeAll();
        Id(x => x.Id);
        Map(x => x.Name).Not.Nullable();
        Map(x => x.Description).Nullable();

    Example criteria:

        return session
            .CreateCriteria<ApplicationCategory>()
            .Add(Restrictions.Eq("Name", _name))
            .SetCacheable(true);

    Every time I make a request for an application category by name, it hits the database. Is this expected behaviour?
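
    For context: NHibernate's first-level cache is per-session, so separate web requests (separate sessions) will each hit the database, and SetCacheable(true) only takes effect once the second-level and query caches are enabled with a provider. A hedged Fluent NHibernate configuration sketch (connection string and mapping assembly are placeholders):

        using FluentNHibernate.Cfg;
        using FluentNHibernate.Cfg.Db;
        using NHibernate;
        using NHibernate.Cache;

        static class NHibernateBootstrap
        {
            public static ISessionFactory CreateSessionFactory()
            {
                return Fluently.Configure()
                    .Database(MsSqlConfiguration.MsSql2008
                        .ConnectionString("...")) // placeholder
                    .Cache(c => c.UseSecondLevelCache()
                                 .UseQueryCache()
                                 .ProviderClass<HashtableCacheProvider>()) // in-memory, test-only provider
                    .Mappings(m => m.FluentMappings.AddFromAssemblyOf<ApplicationCategory>())
                    .BuildSessionFactory();
            }
        }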

    Read the article

  • Call a statement from resultMap->result iBatis

    - by Vinay
    Hi all, please tell me whether the configuration given below is correct. If there are mistakes, please reply.

        select * from PAYMENT where ORDER_ID = #ordId# and CUST_ID = #ordCustId#

        select * from PRODORDER where ord_id = #value#

    I am getting exceptions:

        com.ibatis.common.jdbc.exception.NestedSQLException:
        --- The error occurred in conf/sql-map.xml.
        --- The error occurred while applying a result map.
        --- Check the employee.orderResult.
        --- Check the result mapping for the 'payments' property.
        --- Cause: com.ibatis.sqlmap.client.SqlMapException: There is no statement named getOrderPayments in this SqlMap.

    Read the article

  • Optimizing python code performance when importing zipped csv to a mongo collection

    - by mark
    I need to import a zipped CSV into a mongo collection, but there is a catch - every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude, latitude) pair found in the same record. The code looks like so:

        def read_csv_zip(path, timezones):
            with ZipFile(path) as z, z.open(z.namelist()[0]) as input:
                csv_rows = csv.reader(input)
                header = csv_rows.next()
                check, converters = get_aux_stuff(header)
                for csv_row in csv_rows:
                    if check(csv_row):
                        row = {
                            converter[0]: converter[1](value)
                            for converter, value in zip(converters, csv_row)
                            if allow_field(converter)
                        }
                        ts = row['ts']
                        lng, lat = row['loc']
                        found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [[lng - tz_lookup_radius, lat - tz_lookup_radius], [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
                        if found_tz_entry:
                            tz_name = found_tz_entry['tz']
                            local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None)
                            row['tz'] = tz_name
                        else:
                            local_ts = (ts.astimezone(utc) + timedelta(hours=int(lng / 15))).replace(tzinfo=None)
                        row['local_ts'] = local_ts
                        yield row

        def insert_documents(collection, source, batch_size):
            while True:
                items = list(itertools.islice(source, batch_size))
                if len(items) == 0:
                    break
                try:
                    collection.insert(items)
                except:
                    for item in items:
                        try:
                            collection.insert(item)
                        except Exception as exc:
                            print("Failed to insert record {0} - {1}".format(item['_id'], exc))

        def main(zip_path):
            with Connection() as connection:
                data = connection.mydb.data
                timezones = connection.timezones.data
                insert_documents(data, read_csv_zip(zip_path, timezones), 1000)

    The code proceeds as follows:

    1. Every record read from the CSV is checked and converted to a dictionary, where some fields may be skipped, some titles renamed (from those appearing in the CSV header), and some values converted (to datetime, to integers, to floats, etc.).
    2. For each record read from the CSV, a lookup is made into the timezones collection to map the record location to the respective time zone.
    3. If the mapping is successful, that time zone is used to convert the record timestamp (Pacific Time) to the respective local timestamp. If no mapping is found, a rough approximation is calculated.

    The timezones collection is appropriately indexed, of course - calling explain() confirms it. The process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advice on how to improve it. Thanks.

    EDIT

    The timezones collection contains 8176040 records, each containing four values:

        > db.data.findOne()
        { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" }

    EDIT2

    OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I created an rtree dat/idx pair of files corresponding to my timezones collection. So, instead of calling collection.find_one I call index.intersection. Surprisingly, not only is there no improvement, it actually works even more slowly now! Maybe rtree could be fine-tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do it. Until then, it is not an alternative. In general, I think the solution should involve parallelization of the task.

    EDIT3

    Profile output when using collection.find_one:

        >>> p.sort_stats('cumulative').print_stats(10)
        Tue Apr 10 14:28:39 2012    ImportDataIntoMongo.profile

                 64549590 function calls (64549180 primitive calls) in 1231.257 seconds

           Ordered by: cumulative time
           List reduced from 730 to 10 due to restriction <10>

           ncalls  tottime  percall  cumtime  percall filename:lineno(function)
                1    0.012    0.012 1231.257 1231.257 ImportDataIntoMongo.py:1(<module>)
                1    0.001    0.001 1230.959 1230.959 ImportDataIntoMongo.py:187(main)
                1  853.558  853.558  853.558  853.558 {raw_input}
                1    0.598    0.598  370.510  370.510 ImportDataIntoMongo.py:165(insert_documents)
           343407    9.965    0.000  359.034    0.001 ImportDataIntoMongo.py:137(read_csv_zip)
           343408    2.927    0.000  287.035    0.001 c:\python27\lib\site-packages\pymongo\collection.py:489(find_one)
           343408    1.842    0.000  274.803    0.001 c:\python27\lib\site-packages\pymongo\cursor.py:699(next)
           343408    2.542    0.000  271.212    0.001 c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh)
           343408    4.512    0.000  253.673    0.001 c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message)
           343408    0.971    0.000  242.078    0.001 c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response)

    Profile output when using index.intersection:

        >>> p.sort_stats('cumulative').print_stats(10)
        Wed Apr 11 16:21:31 2012    ImportDataIntoMongo.profile

                 41542960 function calls (41542536 primitive calls) in 2889.164 seconds

           Ordered by: cumulative time
           List reduced from 778 to 10 due to restriction <10>

           ncalls  tottime  percall  cumtime  percall filename:lineno(function)
                1    0.028    0.028 2889.164 2889.164 ImportDataIntoMongo.py:1(<module>)
                1    0.017    0.017 2888.679 2888.679 ImportDataIntoMongo.py:202(main)
                1 2365.526 2365.526 2365.526 2365.526 {raw_input}
                1    0.766    0.766  502.817  502.817 ImportDataIntoMongo.py:180(insert_documents)
           343407    9.147    0.000  491.433    0.001 ImportDataIntoMongo.py:152(read_csv_zip)
           343406    0.571    0.000  391.394    0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection)
           343406  379.957    0.001  390.824    0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj)
           686513   22.616    0.000   38.705    0.000 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects)
           343406    6.134    0.000   33.326    0.000 ImportDataIntoMongo.py:162(<dictcomp>)
              346    0.396    0.001   30.665    0.089 c:\python27\lib\site-packages\pymongo\collection.py:240(insert)

    EDIT4

    I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.

    Read the article

  • How can I plot a time series graph with Perl?

    - by Jazz
    I have some data from a database (SQLite), mapping a value (an integer) to a date. A date is a string with this format: YYYY-MM-DD hh:mm. The dates are not uniformly distributed. I want to draw a line graph with the dates on X and the values on Y. What is the easiest way to do this with Perl? I tried DBIx::Chart but I could not make it recognize my dates. I also tried GD::Graph, but as the documentation says:

        GD::Graph does not support numerical x axis the way it should. Data for X axes should be equally spaced.

    Read the article

  • Random access view in boost::multi_array

    - by linai
    Here is a Boost example:

        typedef boost::multi_array<double, 1> array_type;
        typedef array_type::index index;

        array_type A(boost::extents[100]);
        for (index i = 0; i != A.size(); ++i) {
            A[i] = (double)i;
        }

        // creating a view
        array_type::index_gen indices;
        typedef boost::multi_array_types::index_range range;
        array_type::array_view<1>::type myview = A[ indices[range(0, 50)] ];

    What this code does is create a subarray, or view, mapping onto the original array. This view is contiguous and covers the 0th through 50th elements of the original array. What if I need to explicitly define the elements I'd like to see in the view? How can I create a view with indices like [1, 5, 35, 23]? Any ideas?

    Read the article

  • How can I "override" deepcopy in Python?

    - by Az
    Hi there, I'd like to override __deepcopy__ for a given SQLAlchemy-mapped class such that it ignores any SQLAlchemy attributes but deep-copies everything else that's part of the class. I'm not particularly familiar with overriding Python's built-in methods, but I've got some idea of what I want. Let's make a very simple class User that's mapped using SQLAlchemy:

        class User(object):
            def __init__(self, user_id, name):
                self.user_id = user_id
                self.name = name

    I've used dir() to see, before and after mapping, what SQLAlchemy-specific attributes there are, and I've found _sa_class_manager and _sa_instance_state. Provided those are the only ones, how would I ignore them when defining __deepcopy__? Also, are there any other attributes that SQLAlchemy injects into the mapped object? (I asked this in a previous question, as an edit a few days after I selected an answer to the main question, but I think I missed the train there. Apologies for that.)

    Read the article

  • Matlab: How to find the right-most point of a white line within the imrect?

    - by mchlfchr
    I've got a question regarding the imrect() function, which is part of the Image Processing Toolbox in MATLAB. I'd like to find a starting point within an image. I use the imrect function to set a region that limits and specifies the lookup area, but I can't get the point where the ROI mask maps back to the original size of the image. As you can see in the image, there is a specified rectangle (cyan-colored) which I want to inspect for the white line, especially the point nearest to the right edge of the rectangle. I experimented with looking up only the last column of the rectangle, but as I mentioned before, the re-mapping onto the global image coordinates failed. So in this example, the white point I'd like to get would be around (98, 302). The original (x, y) coordinates are relevant, so cropping the image to the rectangle is not acceptable. So, do you have any ideas? Thanks for any helpful comments. Kind regards

    Read the article

  • IIS browse directory problem on a virtual directory

    - by user335518
    I have two different virtual directories mapping to the same directory on the OS. In one of these virtual directories I need directory browsing disabled, and in the other one I need it enabled. The problem is that when I change one of them, the other changes as well. I think this is because both virtual directories point to the same folder in the OS, but in IIS6 I had this same configuration without a problem. Any ideas for a workaround? Thanks!

    Read the article

  • Handling changes in column order when importing CSV files

    - by Scott
    I have a CSV file. The first row will always contain column headers. Depending on a variety of factors, the order of columns may change and, in rare circumstances, some columns may not be present. These changes are beyond my control. My thoughts so far on how to address this: I'll read the first row of the file and use the values to generate a list of columns contained in the source file. The destination file will use the same column names as the source. This should be as simple as searching for identical names in the source and destination, then just mapping the column index values, right? What are your recommendations for handling this?
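
    For illustration, a minimal C# sketch of the header-driven mapping described above (the file name and column names are placeholders; quoted fields are not handled):

        using System;
        using System.Collections.Generic;
        using System.IO;

        class CsvColumnMapper
        {
            static void Main()
            {
                using (StreamReader reader = new StreamReader("input.csv"))
                {
                    // Build a column-name -> index map from the header row.
                    string[] header = reader.ReadLine().Split(',');
                    Dictionary<string, int> index = new Dictionary<string, int>();
                    for (int i = 0; i < header.Length; i++)
                    {
                        index[header[i].Trim()] = i;
                    }

                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        string[] fields = line.Split(',');

                        // Look fields up by name, tolerating missing columns.
                        int cityIdx;
                        string city = index.TryGetValue("City", out cityIdx)
                            ? fields[cityIdx]
                            : string.Empty;
                        Console.WriteLine(city);
                    }
                }
            }
        }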

    Read the article

  • Usual Hibernate performance pitfalls

    - by Antoine Claval
    Hi, we have just finished profiling our application (it's starting to get slow). The problem seems to be in Hibernate. It's a legacy mapping, which works and does its job, and the relational schema behind it is OK too. But some requests are slow as hell. So we would appreciate any input on common mistakes made with Hibernate that end up causing slow responses. Example: eager fetching in place of lazy fetching can dramatically change the response time.

    Read the article
