Search Results

Search found 14448 results on 578 pages for 'schema org'.


  • KendoUI Mobile switch and datasource

    - by OnaBai
    I'm trying to have a list of items displayed using a listview, something like:

        <div data-role="view" data-model="my_model">
            <ul data-role="listview" data-bind="source: ds" data-template="list-tmpl"></ul>
        </div>

    Here I have a view using a model called my_model, and a listview whose source is bound to ds. My model is something like:

        var my_model = kendo.observable({
            ds: new kendo.data.DataSource({
                transport: {
                    read: readData,
                    update: updateData,
                    create: updateData,
                    remove: updateData
                },
                pageSize: 10,
                schema: {
                    model: {
                        id: "id",
                        fields: {
                            id: { type: "number" },
                            name: { type: "string" },
                            active: { type: "boolean" }
                        }
                    }
                }
            })
        });

    Each item includes an id, a name (a string) and a boolean named active. The template used to render each element is:

        <script id="list-tmpl" type="text/kendo-tmpl">
            <span>#= name # : #= active #</span>
            <input data-role="switch" data-bind="checked: active"/>
        </script>

    It displays the name and (for debugging) the value of active, and renders a switch bound to active. The problems observed are:

    1. If you toggle a switch, the value of active next to the name changes (as expected), but if you then toggle another switch, its value is not updated (neither next to the name nor in the DataSource), even though the switch itself toggles correctly.
    2. The update handler in the DataSource is never invoked (not even for the first toggled switch, despite the DataSource being updated for that one).

    You can check it in JSFiddle: http://jsfiddle.net/OnaBai/K7wEC/

    How do I make the DataSource get updated and the update handler get invoked?
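
    For debugging, one direction worth trying is to flush pending edits through the transport explicitly. A minimal sketch, assuming the checked binding does mark the model dirty and only the flush is missing (sync() and view() are standard DataSource API; where exactly to hook these calls is the open question):

        // flush dirty items through the transport; sync() is what ends up
        // invoking the update handler for records that have changed
        my_model.ds.sync();

        // or inspect what the DataSource currently holds, to verify the binding
        $.each(my_model.ds.view(), function (i, item) {
            console.log(item.id, item.name, item.active, item.dirty);
        });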

  • Flexible design - customizable entity model, UI and workflow

    - by Ngm
    Hi All, I want to achieve the following in the software I am building:

    1. Customizable entity model
    2. Customizable UI
    3. Customizable workflow

    I have thought about an approach to achieve this; please review it and make suggestions:

    - Entity objects should be plain objects and will hold just data.
    - Separate the entity model and DB schema by using a framework (like NHibernate?). This will allow easy modification of entity objects.
    - Business logic to fetch/modify entities has to be granular enough that it can be invoked as part of the workflow.
    - Business objects should not hold any state, and hence will contain only static methods.
    - The workflow will decide, depending upon the "state" of an entity/entities, which methods on which business object/objects to invoke.
    - The workflow should obtain the results of the processing and then pass the business objects on to the appropriate UI screen.
    - The UI screen has to contain instructions about how to display a given entity/entities. Possibly the UI has to be generated dynamically based on a set of UI instructions (like XUL).

    What do you think about this approach? Suggest which existing frameworks (like NHibernate, Windows Workflow) fit into this model, so that I will not spend time coding these frameworks myself. Also, is there any ASP.NET framework that can generate dynamic ASP.NET AJAX pages based on a set of UI instructions (like Mozilla XUL)?

    I have recently been exploring Apache OFBiz and was impressed by its ability to customize most areas of the application: UI, workflow, entities. Is there any similar application (not necessarily an ERP system) developed in C#/.NET which offers a similar level of customization? I am looking for examples of applications developed in C# that are highly customizable in terms of UI, workflow and entity model.

  • Mapping Cassandra Super Columns

    - by Laubstein
    Hello dudes, I guess everybody that played with Cassandra has already read this article. I am trying to create my schema in cassandra-cli, but I am having a lot of problems; can someone point me in the right direction? I am trying to create a structure similar to the Comments column family from the article. In the cassandra-cli terminal I type:

        create column family posts with column_type = 'Super' and comparator = 'AsciiType' and subcomparator = 'TimeUUIDType';

    It works fine. There is no documentation telling me whether, if I add a column_metadata attribute, it will apply to the subcolumns (since my column family is of type Super); I can't find out if that is true, so:

        create column family posts with column_type = 'Super' and comparator = 'AsciiType' and subcomparator = 'TimeUUIDType' and column_metadata = [{column_name: 'body'}];

    I am trying to create the same thing as the Comments column family of the article, but when I try to populate it:

        set posts['post1'][timeuuid()][body] = 'Hello I am Goku!';

    I get:

        Invalid UUID string: body

    I guess this is because I chose the subcomparator to be of type TimeUUIDType, so 'body' is expected to be a TimeUUID. So HOW can my columns inside the super column (whose names are of type TimeUUID) hold columns with string-typed names, the way the comments in the article are created? Thanks

  • How do I create and use a junction table in Rails?

    - by Thierry Lam
    I have the following data:

    - A post called Hello has category greet
    - Another post called Hola has categories greet, international

    My schema is:

        create_table "posts", :force => true do |t|
          t.string   "name"
          t.datetime "created_at"
          t.datetime "updated_at"
        end

        create_table "categories", :force => true do |t|
          t.string   "name"
          t.datetime "created_at"
          t.datetime "updated_at"
        end

        create_table "posts_categories", :force => true do |t|
          t.integer  "post_id"
          t.integer  "category_id"
          t.datetime "created_at"
          t.datetime "updated_at"
        end

    After reading the Rails guide, the most suitable relationship for the above seems to be:

        class Post < ActiveRecord::Base
          has_and_belongs_to_many :categories
        end

        class Category < ActiveRecord::Base
          has_and_belongs_to_many :posts
        end

    My junction table also has a primary key, which I think I need to get rid of.

    1. What's the initial migration command to generate a junction table in Rails?
    2. What's the best course of action: should I drop posts_categories and re-create it, or just drop the primary key column?
    3. Does the junction table have a corresponding model? I used scaffold to generate the junction table code; should I get rid of the extra code?
    4. Assuming all the above has been fixed and is working properly, how do I query all posts and display them along with their named categories in the view? For example:

        Post #1 - hello, categories: greet
        Post #2 - hola, categories: greet, international
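
    For reference, a habtm join table is conventionally created without a surrogate primary key; a minimal migration sketch (Rails 2.x-era syntax, class name hypothetical). Note that has_and_belongs_to_many expects the join table to be named categories_posts (the two table names in lexical order) unless you pass :join_table to the association:

        class CreateCategoriesPosts < ActiveRecord::Migration
          def self.up
            # :id => false omits the surrogate primary key entirely
            create_table :categories_posts, :id => false do |t|
              t.integer :post_id
              t.integer :category_id
            end
          end

          def self.down
            drop_table :categories_posts
          end
        end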

  • Need some advice on Core Data modeling strategy

    - by Andy
    I'm working on an iPhone app and need a little advice on modeling the Core Data schema. My idea is a utility that allows the user to speed-dial their contacts using user-created rules based on the time of day. In other words, I would tell the app that my wife is commuting from 6am to 7am, at work from 7am to 4pm, commuting from 4pm to 5pm, and home from 5pm to 6am, Monday through Friday. Then, when I tap her name in my app, it would select the number to dial based on the current day and time.

    I have the user interface nearly complete (thanks in no small part to help I've received here), but now I've got some questions regarding the persistent store. The user can select start- and stop-times in 5-minute increments. This means there are 2,016 possible "time slots" in a week (7 days * 24 hours * 12 5-minute intervals per hour). I see a few options for setting this up.

    Option #1: One array of time slots, with 2,016 entries. Each entry would be a dictionary containing a contact identifier and an associated phone number to dial. I think this means I'd need a "Contact" entity to store the contact information, and a "TimeSlot" entity for each of the 2,016 possible time slots.

    Option #2: Each Contact has its own array of time slots, each with 2,016 entries. Each array entry would simply be a string indicating which phone number to dial.

    Option #3: Each Contact has a dictionary of time slots. An entry would only be added to the dictionary for time slots with an active rule. If a search for, say, time slot 1,299 (Friday 12:15pm) didn't find a key @"1299" in the dictionary, then a default number would be dialed instead.

    I'm not sure any of these is the "right" way or the "best" way. I'm not even sure I need to use Core Data to manage it; maybe just saving arrays would be simpler. Any input you can offer would be appreciated.

  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database, and I'm looking for ways to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED
            (
                [ContainerId] ASC,
                [BinId] ASC,
                [Sequence] ASC
            ))

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted based on the primary key. The %Disk Time performance counter spends a lot of time at 100%, so it is clear that disk I/O is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I:

    1. Drop the primary key while I am doing the inserting and recreate it later?
    2. Do inserts into a temporary table with the same schema and periodically transfer them into the main table, to keep the size of the table where insertions are happening small?
    3. Anything else?

    Based on the responses I have gotten, let me clarify a little bit:

    Portman: I'm using a clustered index because when the data is all imported I will need to access data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts, as opposed to dropping the constraint entirely for import?

    Chopeen: The data is being generated remotely on many other machines (my SQL server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine, because it would then have to process 50 times as much input data to generate the output.

    Jason: I am not doing any concurrent queries against the table during the import process; I will try dropping the primary key and see if that helps.

    ~ Andrew
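
    A minimal sketch of the drop-and-recreate experiment from point 1, assuming the load can run in a window with no readers (table and constraint names taken from the question):

        -- drop the clustered PK before the bulk load
        ALTER TABLE [BulkData] DROP CONSTRAINT [PKBulkData];

        -- ... run the SqlBulkCopy batches here ...

        -- recreate it afterwards; pre-sorted input keeps the rebuild cheaper
        ALTER TABLE [BulkData] ADD CONSTRAINT [PKBulkData]
            PRIMARY KEY CLUSTERED ([ContainerId] ASC, [BinId] ASC, [Sequence] ASC);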

  • Is there a problem with this MySQL Query?

    - by ThinkingInBits
    http://pastie.org/954073

    The echoes at the top all display valid data that fit the database schema, and there are no connection errors. Any ideas? Thanks in advance! Here is the echoed query:

        INSERT INTO equipment (cat_id, name, year, manufacturer, model, price, location, condition, stock_num, information, description, created, modified)
        VALUES (1, 'r', 1, 'sdf', 'sdf', '2', 'd', 'd', '3', 'asdfasdfdf', 'df', '10 May 10', '10 May 10')

    MySQL is giving:

        #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'condition, stock_num, information, description, created, modified) VALUES (1, 'r' at line 1

    The table structure is:

        Field         Type               Null  Key  Default  Extra
        id            int(11) unsigned   NO    PRI  NULL     auto_increment
        cat_id        int(11) unsigned   NO         NULL
        prod_name     varchar(255)       YES        NULL
        prod_year     varchar(10)        YES        NULL
        manufacturer  varchar(255)       YES        NULL
        model         varchar(255)       YES        NULL
        price         varchar(10)        YES        NULL
        location      varchar(255)       YES        NULL
        condition     varchar(25)        YES        NULL
        stock_num     varchar(128)       YES        NULL
        information   text               YES        NULL
        description   text               YES        NULL
        created       varchar(20)        YES        NULL
        modified      varchar(20)        YES        NULL

    Query:

        INSERT INTO equipment (cat_id, prod_name, prod_year, manufacturer, model, price, location, condition, stock_num, information, description, created, modified)
        VALUES (1, 'asdf', '234', 'adf', 'asdf', '34', 'asdf', 'asdf', '234', 'asdf', 'asdf', '10 May 10', '10 May 10')
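
    One thing that stands out: CONDITION is a reserved word in MySQL, so using it as a column name requires backtick-quoting. A sketch of the quoted form (values copied from the question, purely illustrative):

        INSERT INTO equipment (cat_id, prod_name, prod_year, manufacturer, model,
                               price, location, `condition`, stock_num, information,
                               description, created, modified)
        VALUES (1, 'asdf', '234', 'adf', 'asdf', '34', 'asdf', 'asdf', '234',
                'asdf', 'asdf', '10 May 10', '10 May 10')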

  • using a JOIN in an UPDATE in SQL

    - by SDLFunTimes
    Hi, I'm having trouble formulating a legal statement to double the statuses of the suppliers (s) who have shipped (sp) more than 500 units. I've been trying:

        update s
        set s.status = s.status * 2
        from s join sp on (sp.sno = s.sno)
        group by sno
        having sum(qty) > 500;

    however I'm getting this error from MySQL:

        ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'from s join sp on (sp.sno = s.sno) group by sno having sum(qty) > 500' at line 1

    Does anyone have any ideas about what is wrong with this query? Here's my schema:

        create table s (
            sno char(5) not null,
            sname char(20) not null,
            status smallint,
            city char(15),
            primary key (sno)
        );

        create table p (
            pno char(6) not null,
            pname char(20) not null,
            color char(6),
            weight smallint,
            city char(15),
            primary key (pno)
        );

        create table sp (
            sno char(5) not null,
            pno char(6) not null,
            qty integer not null,
            primary key (sno, pno)
        );
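
    For context, MySQL does not accept the SQL Server-style UPDATE ... FROM form, nor GROUP BY/HAVING inside an UPDATE; the usual workaround is MySQL's multi-table UPDATE joined to a derived table. A sketch against the schema above (derived-table alias is invented):

        UPDATE s
        JOIN (
            SELECT sno
            FROM sp
            GROUP BY sno
            HAVING SUM(qty) > 500
        ) AS big_shippers ON big_shippers.sno = s.sno
        SET s.status = s.status * 2;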

  • Building a News-feed that comprises posts "created by user's connections" && "on the topics user is following"

    - by aklin81
    I am working on a Questions & Answers website that allows a user to follow questions on certain topics from his network. A user's news-feed wall comprises only those questions that have been posted by his connections and tagged with the topics that he is following (his expertise topics).

    I am unsure which database data model would be the best fit for such an application. The project needs to consider future provisions for scalability and high performance. I have been looking at Cassandra and MySQL solutions as of now.

    After studying Cassandra, I realized that a simple news-feed design that shows all the posts from a user's network would be easy to build with Cassandra, by executing fast writes to all followers of a user about that user's post. But for my kind of application, where there is an additional filter of followed topics (i.e., the user receives posts "created by his network" && "on topics the user is following"), I could not convince myself of a good schema design in Cassandra. Perhaps I missed something because of my short understanding of Cassandra; can you please help me out with your suggestions of how this news-feed could be implemented in Cassandra? Looking for a great project with Cassandra!

    Edit: There will be a maximum of 5 tags allowed per question (i.e., at most 5 topics can be tagged on a question).

  • C++ Serialization Clean XML Similar to XSTREAM

    - by disown
    I need to write a Linux C++ app which saves its settings in XML format (for easy hand editing) and also communicates with existing apps through XML messages over sockets and HTTP. The problem is that I haven't been able to find any intelligent libs to help me; I don't particularly feel like writing DOM or SAX code just to read and write some very simple messages.

    Boost Serialization was almost a match, but it adds a lot of Boost-specific data to the XML it generates. This obviously doesn't work well for interchange formats. I'm wondering if it is possible to make Boost Serialization or some other C++ serialization library generate clean XML. I don't mind if there are some required extra attributes - like a version attribute - but I'd really like to be able to control their naming and also get rid of 'features' that I don't use - tracking_level and class_id, for instance. Ideally I would just like to have something similar to XStream in Java. I am aware of the fact that C++ lacks introspection and that it is therefore necessary to do some manual coding - but it would be nice if there was a clean solution to just read and write simple XML without kludges!

    If this cannot be done, I am also interested in tools where the XML schema is the canonical resource (contract first) - a good JAXB alternative for C++. So far I have only found commercial solutions like CodeSynthesis XSD; I would prefer open source solutions. I have tried gSOAP - but it generates really ugly code and it is also SOAP-specific.

    In desperation I also started looking at alternative serialization formats for protocol buffers. An XML format exists - but only for Java! It really surprises me that protocol buffers seem to be a better supported data interchange format than XML. I'm going mad just finding libs for this app and I really need some new ideas. Anyone?
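
    For what it's worth, Boost Serialization can be made to emit noticeably cleaner XML by turning off per-class versioning and object tracking; a minimal sketch, assuming a hypothetical Settings struct (whether this strips enough of the wrapper for true interchange is worth verifying against your peers' parsers):

        #include <fstream>
        #include <string>
        #include <boost/archive/xml_oarchive.hpp>
        #include <boost/serialization/nvp.hpp>
        #include <boost/serialization/level.hpp>
        #include <boost/serialization/tracking.hpp>

        struct Settings {
            std::string host;
            int port;
            template <class Archive>
            void serialize(Archive& ar, const unsigned int /*version*/) {
                ar & BOOST_SERIALIZATION_NVP(host);   // NVP controls the element name
                ar & BOOST_SERIALIZATION_NVP(port);
            }
        };

        // suppress the class_id/version attributes and object tracking
        BOOST_CLASS_IMPLEMENTATION(Settings, boost::serialization::object_serializable)
        BOOST_CLASS_TRACKING(Settings, boost::serialization::track_never)

        int main() {
            std::ofstream out("settings.xml");
            // no_header drops the archive preamble
            boost::archive::xml_oarchive ar(out, boost::archive::no_header);
            Settings s{"localhost", 8080};
            ar << BOOST_SERIALIZATION_NVP(s);
        }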

  • Android - Dealing with a Dialog on Screen Orientation change

    - by Donal Rafferty
    I am overriding the onCreateDialog and onPrepareDialog methods of my Activity. I have followed the example from Reto Meier's Professional Android Application Development book, Chapter 5, to pull some XML data and then use a dialog to display the info. I have basically followed it exactly, but changed the variables to suit my own XML schema, as follows:

        @Override
        public Dialog onCreateDialog(int id) {
            switch (id) {
                case (SETTINGS_DIALOG):
                    LayoutInflater li = LayoutInflater.from(this);
                    View settingsDetailsView = li.inflate(R.layout.details, null);
                    AlertDialog.Builder settingsDialog = new AlertDialog.Builder(this);
                    settingsDialog.setTitle("Provisioned Settings");
                    settingsDialog.setView(settingsDetailsView);
                    return settingsDialog.create();
            }
            return null;
        }

        @Override
        public void onPrepareDialog(int id, Dialog dialog) {
            switch (id) {
                case (SETTINGS_DIALOG):
                    String afpunText = " ";
                    if (setting.getAddForPublicUserNames() == 1) {
                        afpunText = "Yes";
                    } else {
                        afpunText = "No";
                    }
                    String Text = "Login Settings: " + "\n"
                            + "Password: " + setting.getPassword() + "\n"
                            + "Server: " + setting.getServerAddress() + "\n";
                    AlertDialog settingsDialog = (AlertDialog) dialog;
                    settingsDialog.setTitle(setting.getUserName());
                    tv = (TextView) settingsDialog.findViewById(R.id.detailsTextView);
                    if (tv != null) tv.setText(Text);
                    break;
            }
        }

    It works fine until I try changing the screen orientation. When I do this, onPrepareDialog gets called, but I get null pointer exceptions on all my variables. The error still occurs even when I tell my activity to ignore screen orientation in the manifest. So I presume something has been left out of the example in the book; do I need to override another method to save my variables in, or something?
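
    One direction worth trying is persisting the fields that back the dialog across the configuration change, since the Activity is destroyed and recreated on rotation. A minimal sketch, assuming setting can be rebuilt from a few strings (the Setting constructor and bundle keys here are hypothetical):

        @Override
        protected void onSaveInstanceState(Bundle outState) {
            super.onSaveInstanceState(outState);
            // stash the values the dialog needs; plain fields don't survive rotation
            outState.putString("userName", setting.getUserName());
            outState.putString("password", setting.getPassword());
            outState.putString("server", setting.getServerAddress());
        }

        @Override
        protected void onRestoreInstanceState(Bundle savedInstanceState) {
            super.onRestoreInstanceState(savedInstanceState);
            // rebuild 'setting' before the framework re-shows the dialog
            setting = new Setting(
                    savedInstanceState.getString("userName"),
                    savedInstanceState.getString("password"),
                    savedInstanceState.getString("server"));
        }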

  • getting proxies of the correct type in nhibernate

    - by Nir
    I have a problem with uninitialized proxies in NHibernate.

    The Domain Model

    Let's say I have two parallel class hierarchies: Animal, Dog, Cat and AnimalOwner, DogOwner, CatOwner, where Dog and Cat both inherit from Animal, and DogOwner and CatOwner both inherit from AnimalOwner. AnimalOwner has a reference of type Animal called OwnedAnimal. Here are the classes in the example:

        public abstract class Animal
        {
            // some properties
        }

        public class Dog : Animal
        {
            // some more properties
        }

        public class Cat : Animal
        {
            // some more properties
        }

        public class AnimalOwner
        {
            public virtual Animal OwnedAnimal { get; set; }
            // more properties...
        }

        public class DogOwner : AnimalOwner
        {
            // even more properties
        }

        public class CatOwner : AnimalOwner
        {
            // even more properties
        }

    The classes have proper NHibernate mappings, all properties are persistent, and everything that can be lazy loaded is lazy loaded. The application business logic only lets you set a Dog in a DogOwner and a Cat in a CatOwner.

    The Problem

    I have code like this:

        public void ProcessDogOwner(DogOwner owner)
        {
            Dog dog = (Dog)owner.OwnedAnimal;
            ....
        }

    This method can be called by many different methods. In most cases the dog is already in memory and everything is OK, but occasionally it isn't - in that case I get an NHibernate "uninitialized proxy", and the cast throws an exception because NHibernate generates a proxy for Animal and not for Dog. I understand that this is how NHibernate works, but I need to know the type without loading the object - or, more correctly, I need the uninitialized proxy to be a proxy of Cat or Dog, not a proxy of Animal.

    Constraints

    I can't change the domain model. The model is handed to me by another department; I tried to get them to change it and failed. The actual model is much more complicated than the example, and the classes have many references between them; using eager loading or adding joins to the queries is out of the question for performance reasons. I have full control of the source code, the hbm mappings and the database schema, and I can change them any way I want (as long as I don't change the relationships between the model classes). I have many methods like the one in the example and I don't want to modify all of them.

    Thanks, Nir

  • How to insert an Array/Object into SQL (best practice)

    - by Jason
    I need to store three items as an array in a single column and be able to quickly/easily modify that data in later functions.

    [--- YOU CAN SKIP THIS PART IF YOU TRUST ME ---]

    To be clear, I love and use x_ref tables all the time, but an x_ref doesn't work here because this is not a one-to-many relationship. I am making a project management tool that, among other things, assigns a user to a project and assigns hours to that project on a weekly basis, per user, sometimes many weeks into the future. Of course there are many projects, a project can have many team members, and a team member can be involved with many projects at one time, BUT it's not one-to-many, because a team member can work many weeks on the same project with different hours for different weeks. In other words, each object really is unique. Also/finally, this data can be changed at any time by any team member - hence it needs to be easy to manipulate.

    Basically, I need to handle three values (the team member, the week we're talking about, and how many hours) dropped into a project row in the projects table (under the column for project team members) and treated as one item - a team member - that will actually be part of a larger array of all the team members involved on the project.

    [--- END SKIP, START READING HERE :) ---]

    So, assuming that the application's general schema and relation tables aren't total crap, and that we are in fact up against a wall in this one case to use an array/object as a value for this column, is there a best practice for that? Like a particular SQL data type? A particular object/array format? CSV? JSON? XML? Most of the app is in C#, but (for very odd reasons that I won't explain) we could really use any environment if there is a particular one that handles this well. For the moment, I am thinking either (web service + JS/JSON) or PHP unserialize/serialize (but I am a bit sketched out by the PHP solution because it seems a bit cumbersome when using AJAX?). Thoughts, anyone?
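
    If the JSON route wins, a minimal C# sketch of round-tripping the column value with the built-in JavaScriptSerializer (.NET 3.5+, System.Web.Extensions.dll); the Assignment shape is invented for illustration:

        using System.Collections.Generic;
        using System.Web.Script.Serialization;

        public class Assignment
        {
            public int UserId { get; set; }
            public string Week { get; set; }    // e.g. "2010-W19"
            public decimal Hours { get; set; }
        }

        public static class AssignmentColumn
        {
            // serialize before writing the column value
            public static string ToJson(List<Assignment> items)
            {
                return new JavaScriptSerializer().Serialize(items);
            }

            // deserialize after reading the column value back
            public static List<Assignment> FromJson(string columnValue)
            {
                return new JavaScriptSerializer().Deserialize<List<Assignment>>(columnValue);
            }
        }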

  • How to switch from Core Data automatic lightweight migration to manual?

    - by Jaanus
    My situation is similar to this question. I am using lightweight migration with the following code, fairly vanilla from the Apple docs and other SO threads. It runs upon app startup, when initializing the Core Data stack.

        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption,
            [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption, nil];

        NSError *error = nil;
        NSString *storeType = nil;
        if (USE_SQLITE) { // app configuration
            storeType = NSSQLiteStoreType;
        } else {
            storeType = NSBinaryStoreType;
        }

        persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc]
            initWithManagedObjectModel:[self managedObjectModel]];

        // the following line sometimes crashes on app startup
        if (![persistentStoreCoordinator addPersistentStoreWithType:storeType
                                                      configuration:nil
                                                                URL:[self persistentStoreURL]
                                                            options:options
                                                              error:&error]) {
            // handle the error
        }

    For some users, especially those with slower devices, I have crashes confirmed by logs at the indicated line. I understand that a fix is to switch this to manual mapping and migration. What is the recipe for doing that? The long way for me would be to go through all the Apple docs, but I don't recall there being good examples and tutorials specifically for schema migration.
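
    A rough sketch of the manual path, assuming a mapping model has been added to the project (the class and method names are from NSMigrationManager/NSMappingModel; tempStoreURL is a scratch location you choose, and the URL shuffling afterwards is omitted):

        NSError *error = nil;

        // figure out which model the existing store was created with
        NSDictionary *meta = [NSPersistentStoreCoordinator
            metadataForPersistentStoreOfType:NSSQLiteStoreType
                                         URL:[self persistentStoreURL]
                                       error:&error];
        NSManagedObjectModel *sourceModel = [NSManagedObjectModel
            mergedModelFromBundles:nil forStoreMetadata:meta];
        NSManagedObjectModel *destModel = [self managedObjectModel];

        // look up the hand-made (or Xcode-generated) mapping model in the bundle
        NSMappingModel *mapping = [NSMappingModel
            mappingModelFromBundles:nil
                     forSourceModel:sourceModel
                   destinationModel:destModel];

        // migrate into a temporary file, then swap it in place of the old store
        NSMigrationManager *manager = [[NSMigrationManager alloc]
            initWithSourceModel:sourceModel destinationModel:destModel];
        BOOL ok = [manager migrateStoreFromURL:[self persistentStoreURL]
                                          type:NSSQLiteStoreType
                                       options:nil
                              withMappingModel:mapping
                              toDestinationURL:tempStoreURL
                               destinationType:NSSQLiteStoreType
                            destinationOptions:nil
                                         error:&error];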

  • SQL Server Index cost

    - by yellowstar
    I have read that one of the tradeoffs for adding table indexes in SQL Server is the increased cost of insert/update/delete queries, to benefit the performance of select queries. I can conceptually understand what happens in the case of an insert, because SQL Server has to write entries into each index matching the new rows, but update and delete are a little more murky to me, because I can't quite wrap my head around what the database engine has to do.

    Let's take DELETE as an example and assume I have the following schema (pardon the pseudo-SQL):

        TABLE Foo
            col1 int
            ,col2 int
            ,col3 int
            ,col4 int
            PRIMARY KEY (col1, col2)

        INDEX IX_1
            col3
            INCLUDE col4

    Now, if I issue the statement

        DELETE FROM Foo WHERE col1 = 12 AND col2 > 34

    I understand what the engine must do to update the table (or clustered index, if you prefer): the index is set up to make it easy to find the range of rows to be removed, and to remove them. However, at this point it also needs to update IX_1, and the query I gave it suggests no obvious efficient way for the database engine to find the index rows to update. Is it forced to do a full index scan at this point? Does the engine read the rows from the clustered index first and generate a smarter internal delete against the index?

    It might help me to wrap my head around this if I understood better what is going on under the hood, but I guess my real question is this: I have a database that is spending a significant amount of time in deletes and I'm trying to figure out what I can do about it. When I display the execution plan for the deletion, it just shows an entry for "Clustered Index Delete" on table Foo, which lists in the details section the other indexes that need to be updated, but I don't get any indication of the relative cost of these other indexes. Are they all equal in this case? Is there some way that I can estimate the impact of removing one or more of these indexes without having to actually try it?
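
    One low-risk way to measure a nonclustered index's contribution to delete cost is to disable it, rerun the workload, and rebuild it; a sketch (SQL Server 2005+; a disabled nonclustered index stops being maintained but keeps its definition):

        ALTER INDEX IX_1 ON Foo DISABLE;

        -- rerun the DELETE workload here and compare duration / execution plan

        ALTER INDEX IX_1 ON Foo REBUILD;  -- re-enables and repopulates the index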

  • How should I manage my many-to-many relationships?

    - by wes
    Hello all, I have a database containing a couple of tables: files and users. This relationship is many-to-many, so I also have a table called users_files_ref which holds foreign keys to both of the above tables. Here's the schema of each table:

        files           - file_id, file_name
        users           - user_id, user_name
        users_files_ref - user_file_ref_id, user_id, file_id

    I'm using CodeIgniter to build a file host application, and I'm right in the middle of adding the functionality that enables users to upload files. This is where I'm running into my problem. Once I add a file to the files table, I will need that new file's id to update the users_files_ref table. Right now I'm adding the record to the files table, and then I imagined I'd run a query to grab the last file added, so that I can get the ID, and then use that ID to insert the new users_files_ref record.

    I know this will work on a small scale, but I imagine there is a better way of managing these records, especially in a heavy-traffic scenario. I am new to relational database stuff but have been around PHP for a while, so please bear with me here :-) I have primary and foreign keys set up correctly for the files, users, and users_files_ref tables; I'm just wondering how to manage the adding of file records for this scenario. Thanks for any help provided, it's much appreciated. -Wes
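
    For reference, "select the last row added" is race-prone under concurrency, whereas MySQL's LAST_INSERT_ID() (exposed in CodeIgniter as $this->db->insert_id()) is scoped per connection and safe. A sketch, using the column names from the schema above:

        // inside a CodeIgniter model method
        $this->db->insert('files', array('file_name' => $file_name));

        // insert_id() wraps LAST_INSERT_ID(), which is per-connection,
        // so concurrent uploads cannot interleave here
        $file_id = $this->db->insert_id();

        $this->db->insert('users_files_ref', array(
            'user_id' => $user_id,
            'file_id' => $file_id,
        ));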

  • c# web extracting programming, which libraries, examples samples please

    - by user287745
    I have just started programming and have made a few small applications in C and C#. My understanding is that programming for the web, and things related to the web, is nowadays a very easy task. Please note this is for personal learning, not for Rent a Coder or any money making.

    The application should run on any Windows platform, even Windows 98. It should start automatically at a scheduled time and do the following:

    1. Connect to a site which displays a stock price summary (high, low, current, open).
    2. Capture the data (excluding the other things on the site).
    3. Save it to disk (an SQL database).

    Please note:

    - An Internet connection is assumed to always be there.
    - I do not want to know how to make a database schema or a database.
    - The stock exchange has no law prohibiting the use of the data provided on its site, but I do not want to mention the name in case I am wrong; it's for personal private use only.
    - The summary pricing data is arranged in a table such that, when copied and pasted into MS Excel, it automatically forms a table.

    I need steps, guidance, examples and libraries, please.
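
    A minimal C# sketch of the fetch step, using WebClient from System.Net (the URL is a placeholder; parsing the table into high/low/current/open values would be the next step, e.g. with a regex or an HTML parser):

        using System;
        using System.IO;
        using System.Net;

        class StockFetcher
        {
            static void Main()
            {
                using (var client = new WebClient())
                {
                    // placeholder URL - substitute the exchange's summary page
                    string html = client.DownloadString("http://example.com/stock-summary");

                    // save the raw page first; parse the price table afterwards
                    string name = "summary-" + DateTime.Now.ToString("yyyyMMdd-HHmm") + ".html";
                    File.WriteAllText(name, html);
                }
            }
        }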

  • Clarification needed: How does .NET runtime resolve assembly references from parent folder?

    - by aoven
    I have the following output structure of executables in my solution:

        %ProgramFiles%
        |
        +-[MyAppName]
          |
          +-[Client]
          | |
          | +-(EXE & several DLL assemblies)
          |
          +-[Common]
          | |
          | +-[Schema Assemblies]
          | | |
          | | +-(several DLL assemblies)
          | |
          | +-(several DLL assemblies)
          |
          +-[Server]
            |
            +-(EXE & several DLL assemblies)

    Each project in the solution references different DLL assemblies, some of which are outputs from other projects in the solution, while others are plain 3rd-party assemblies. For example, the [Client] EXE might reference an assembly in [Common], which is in a different directory branch. All references have "Copy Local" set to false, to mirror the layout of the files in the final installed application.

    Now, if I look at the reference properties in the Visual Studio IDE, I see that the "Path" of every reference is absolute and corresponds to the actual output location of the assembly. That's understandable and correct. As expected, the solution compiles and runs just fine.

    What I don't understand is why everything seems to work even when I close the IDE, rename the [MyAppName] directory and run the [Client] EXE manually. How does the runtime find the assemblies if the reference paths aren't the same as they were at the time of linking?

    To be clear - this is actually exactly what I'm after: a semi-dispersed set of application files that run fine regardless of where the [MyAppName] directory is located or even what it's named. I'd just like to know how and why this works without any specific path resolution on my part. I've read the answers to this similar question, but I still don't get it. Help much appreciated!
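
    For context, the compile-time path is not baked into the binary: references are recorded as assembly identities (name, version, culture, public key token), and at run time the loader probes the application base and configured subdirectories for a matching identity. A sketch of an app.config that makes subdirectory probing explicit (the probing element is the standard mechanism; these particular paths are illustrative, and privatePath entries must live under the EXE's base directory):

        <configuration>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <!-- probe these subfolders of the application base -->
              <probing privatePath="Common;Common\Schema Assemblies" />
            </assemblyBinding>
          </runtime>
        </configuration>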

  • Rails: constraint violation on create but not on update

    - by justinbach
    Note: This is a "railsier" (and more succinct) version of this question, which was getting a little long.

    I'm getting Rails behavior on a production server that I can't replicate on the development server. The codebases are identical save for credentials and caching settings, and both are powered by Oracle 10g databases with identical schemas (but different data).

    My Rails application contains a user model, which has_one registration; a registration in turn has_and_belongs_to_many company_ownerships through a registration_ownerships table. Upon registering, users fill out data pertinent to all three models, including a series of checkboxes indicating which registration_ownerships might apply to their account.

    On the dev server, the registration process is seamless, no matter what data is entered. On production, however, if users check off any of the company ownership fields before submitting their registration, Oracle complains about a constraint violation on the primary key of the company_ownerships table (which is a two-field key based on company_ownership_id and registration_id), and users get the standard Rails 500 error screen. In every case, I've verified that no conflicting record on these two fields exists in the production database, so I don't know why the constraint is getting violated.

    To further confuse things, if a user registers without listing any ownerships and later goes back and modifies their account to reflect ownership data (which is done through the same interface), the application happily complies with their request and Oracle is well-behaved (this is true on both production and dev).

    I've spent the past couple of days trying to figure out what might be causing this problem and am reaching the end of my wits. Any advice would be greatly appreciated!

  • Embedded non-relational (nosql) data store

    - by Igor Brejc
    I'm thinking about using/implementing some kind of embedded key-value (or document) store for my Windows desktop application. I want to be able to store various types of data (GPS tracks would be one example) and of course be able to query this data. The amount of data would be such that it couldn't all be loaded into memory at the same time.

    I'm thinking about using SQLite as a storage engine for a key-value store, something like y-serial, but written in .NET. I've also read about FriendFeed's usage of MySQL to store schema-less data, which is a good pointer on how to use an RDBMS for non-relational data. SQLite seems to be a good option because of its simplicity, portability and library size.

    My question is whether there are any other options for an embedded non-relational store. It doesn't need to be distributable and it doesn't have to support transactions, but it does have to be accessible from .NET and it should have a small download size.

    UPDATE: I've found an article titled "SQLite as a Key-Value Database" which compares SQLite with Berkeley DB, an embedded key-value store library.
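
    For reference, the SQLite-as-key-value idea needs very little schema; a minimal sketch (the key naming convention is invented for illustration):

        CREATE TABLE kv (
            key   TEXT PRIMARY KEY,   -- e.g. 'gps:track:42'
            value BLOB                -- serialized payload (JSON, protobuf, ...)
        );

        -- upsert: INSERT OR REPLACE is SQLite's native way to overwrite by key
        INSERT OR REPLACE INTO kv (key, value) VALUES ('gps:track:42', @payload);

        -- point lookup, plus simple prefix 'queries' over the key space
        SELECT value FROM kv WHERE key = 'gps:track:42';
        SELECT key FROM kv WHERE key LIKE 'gps:track:%';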

  • How to make a mapped field inherited from a superclass transient in JPA?

    - by Russ Hayward
    I have a legacy schema that cannot be changed. I am using a base class for the common features, and it contains an embedded object. There is a field that is normally mapped in the embedded object, but it needs to be part of the persistence id for only one (of many) subclasses. I have made a new id class that includes it, but then I get the error that the field is mapped twice. Here is some example code, much simplified to maintain the sanity of the reader:

        @MappedSuperclass
        class BaseClass {
            @Embedded
            private Data data;
        }

        @Entity
        class SubClass extends BaseClass {
            @EmbeddedId
            private SubClassId id;
        }

        @Embeddable
        class Data {
            private int location;
            private String name;
        }

        @Embeddable
        class SubClassId {
            private int thingy;
            private int location;
        }

    I have tried @AttributeOverride, but I can only get it to rename the field. I have also tried setting it to updatable = false, insertable = false, but this did not seem to work when used in the @AttributeOverride annotation. See the answer below for the solution to this issue.

    I realise I could change the base class, but I really do not want to split up the embedded object to separate out the shared field, as it would make the surrounding code more complex and require some ugly wrapping code. I could also redesign the whole system for this corner case, but I would really rather not. I am using Hibernate as my JPA provider.
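
    For concreteness, the read-only-override attempt mentioned above would look roughly like this (a sketch using the classes from the question; whether Hibernate honors insertable/updatable inside an @AttributeOverride here is exactly what the question reports as problematic):

        @Entity
        @AttributeOverride(name = "data.location",
                column = @Column(name = "LOCATION",
                                 insertable = false, updatable = false))
        class SubClass extends BaseClass {
            @EmbeddedId
            private SubClassId id;   // maps 'location' a second time, writable
        }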

  • GUI for generating XML

    - by Kenston
    Hello. Does anybody know of GUIs for generating XML? By this I mean that the user would not be presented with an IDE with XML support in which to type XML code. This would be helpful for the non-technical people using the system.

    I know this sounds easy, given the many libraries that can help in generating XML. The issue here is that the schema is really flexible, rather than being straightforward like representing books in a library with their properties. Imagine HTML, where we can create font tags inside a body, a table, a div, or even nested within themselves. The usual solution is a WYSIWYG application that allows the user to generate HTML code (XML). However, that works for XML applied to webpages, since those involve visual aspects and design. My application of XML would focus on representing conceptual and computational definitions, much like an SQL-like syntax, but more than that.

    I'm actually after the approach, or previous work done or tried here, although having a library/working framework would be better. By the way, I'm using Java for this project. Currently, I'm just thinking of presenting element tags which the user will be able to drag, drop and nest within each other, and perhaps assisting them through forms for inputting the values of XML attributes. I'm still hoping for better ideas from the community. Thank you.

  • Entity Framework 4 / POCO - Where to start?

    - by Basiclife
    Hi, I've been programming for a while and have used LINQ-to-SQL and LINQ-to-Entities before (although when using Entities it has been with an Entity/Table 1:1 relationship - i.e. not much different from L2SQL).

    I've been doing a lot of reading about Inversion of Control, Unit of Work, POCO and repository patterns, and would like to use this methodology in my new applications. Where I'm struggling is finding a clear, concise beginners' guide for EF4 which doesn't assume knowledge of EF1. The specific questions I need answered are:

    1. Code first / model first? Pros/cons with regard to EF4 (i.e. what happens if I go code-first, change the code at a later date and need to regenerate my DB model - does the data get preserved and transformed, or dropped?)

    2. Assuming I'm going code-first (I'd like to see how EF4 converts that to a DB schema), how do I actually get started? Quite often I've seen articles with entity diagrams stating "So this is my entity model, now I'm going to ..." - unfortunately, I'm unclear whether they created the model in the designer, saved it to generate code, and then stopped any further auto code generation - or - they coded (POCO?) classes and somehow imported them into the designer view?

    3. I suppose what I really need is an understanding of where the "magic" comes from, and how to add it myself if I'm not just generating an EF model directly from a DB.

    I'm aware the question is a little vague, but I don't know what I don't know - so any input / correction / clarification is appreciated. Needless to say, I don't expect anyone to sit here and teach me EF - I'd just like some good tutorials/forums/blogs/etc. for complete Entity newbies. Many thanks in advance

  • Doctrine 1.2: How do I prevent a constraint from being assigned to both sides of a one-to-many relationship?

    - by prodigitalson
    Is there a way to prevent Doctrine from assigning a constraint to both sides of a one-to-one relationship? I've tried moving the definition from one side to the other, and using owning side, but it still places a constraint on both tables, when I only want the parent table to have a constraint - i.e. it should be possible for the parent to have no associated child. Essentially, I want the following SQL schema:

        CREATE TABLE `parent_table` (
            `child_id` varchar(50) NOT NULL,
            `id` integer UNSIGNED NOT NULL auto_increment,
            PRIMARY KEY (`id`)
        );

        CREATE TABLE `child_table` (
            `id` integer UNSIGNED NOT NULL auto_increment,
            `child_id` varchar(50) NOT NULL,
            PRIMARY KEY (`id`),
            UNIQUE KEY (`child_id`),
            CONSTRAINT `parent_table_child_id_FK_child_table_child_id`
                FOREIGN KEY (`child_id`) REFERENCES `parent_table` (`child_id`)
        );

    However, I'm getting something like this:

        CREATE TABLE `parent_table` (
            `child_id` varchar(50) NOT NULL,
            `id` integer UNSIGNED NOT NULL auto_increment,
            PRIMARY KEY (`id`),
            CONSTRAINT `child_table_child_id_FK_parent_table_child_id`
                FOREIGN KEY (`child_id`) REFERENCES `child_table` (`child_id`)
        );

        CREATE TABLE `child_table` (
            `id` integer UNSIGNED NOT NULL auto_increment,
            `child_id` varchar(50) NOT NULL,
            PRIMARY KEY (`id`),
            UNIQUE KEY (`child_id`),
            CONSTRAINT `parent_table_child_id_FK_child_table_child_id`
                FOREIGN KEY (`child_id`) REFERENCES `parent_table` (`child_id`)
        );

    I could just remove the constraint manually, or modify my accessors to return/set a single entity in the collection (using a one-to-many), but it seems like there should be a built-in way to handle this. Also, I'm using Symfony 1.4.4 (PEAR installation ATM) - in case it's an sfDoctrinePlugin issue and not necessarily Doctrine itself.

  • Updating nullability of columns in SQL 2008

    - by Shaul
    I have a very wide table containing lots and lots of bit fields. These bit fields were originally set up as nullable. Now we've just decided that it doesn't make sense to have them nullable; the value is either Yes or No, default No. In other words, the schema should change from:

        create table MyTable(
            ID bigint not null,
            Name varchar(100) not null,
            BitField1 bit null,
            BitField2 bit null,
            ...
            BitFieldN bit null
        )

    to:

        create table MyTable(
            ID bigint not null,
            Name varchar(100) not null,
            BitField1 bit not null,
            BitField2 bit not null,
            ...
            BitFieldN bit not null
        )

        alter table MyTable add constraint DF_BitField1 default 0 for BitField1
        alter table MyTable add constraint DF_BitField2 default 0 for BitField2
        alter table MyTable add constraint DF_BitField3 default 0 for BitField3

    So I've gone in through SQL Server Management Studio, updating all these fields to non-nullable with default value 0. And guess what - when I try to save the change, Management Studio internally recreates the table and then tries to reinsert all the data into the new table... including the null values! Which of course generates an error, because it's explicitly trying to insert a null value into a non-nullable column. Aaargh!

    Obviously I could run N update statements of the form:

        update MyTable set BitField1 = 0 where BitField1 is null
        update MyTable set BitField2 = 0 where BitField2 is null

    but as I said before, there are N fields out there, and what's more, this change has to propagate out to several identical databases. Very painful to implement manually. Is there any way to make the table modification just ignore the null values and let the default rule kick in when it attempts to insert a null value?
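
    Short of changing Management Studio's behavior, the N statements can at least be generated from the catalog views; a sketch for SQL Server 2008 (run it per database, paste and execute the output, or wrap it in EXEC; illustrative only):

        SELECT 'UPDATE MyTable SET ' + QUOTENAME(c.name) + ' = 0 WHERE '
               + QUOTENAME(c.name) + ' IS NULL;'
        FROM sys.columns AS c
        WHERE c.object_id = OBJECT_ID('dbo.MyTable')
          AND c.system_type_id = TYPE_ID('bit')
          AND c.is_nullable = 1;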
