Search Results

Search found 17914 results on 717 pages for 'xml schema'.

Page 504/717 | < Previous Page | 500 501 502 503 504 505 506 507 508 509 510 511  | Next Page >

  • Database warehouse design: fact tables and dimension tables

    - by morpheous
    I am building a poor man's data warehouse using an RDBMS. I have identified the key 'attributes' to be recorded as:

    - sex (true/false)
    - demographic classification (A, B, C etc)
    - place of birth
    - date of birth
    - weight (recorded daily): the fact that is being recorded

    My requirements are to be able to run 'OLAP' queries that allow me to 'slice and dice' and 'drill up/down' the data, and generally to view the data from different perspectives. After reading up on this topic area, the general consensus seems to be that this is best implemented using dimension tables rather than normalized tables. Assuming that this assertion is true (i.e. the solution is best implemented using fact and dimension tables), I would like some help with the design of these tables.

    'Natural' (or obvious) dimensions, which have hierarchical attributes, are:

    - Date dimension
    - Geographical location

    However, I am struggling with how to model the following fields:

    - sex (true/false)
    - demographic classification (A, B, C etc)

    The reason I am struggling with these fields is that:

    - They have no obvious hierarchical attributes which will aid aggregation (AFAIA), which suggests they should be in a fact table.
    - They are mostly static or very rarely change, which suggests they should be in a dimension table.

    Maybe the heuristic I am using above is too crude? I will give some examples of the type of analysis I would like to carry out on the data warehouse; hopefully that will clarify things further. I would like to aggregate and analyze the data by sex and demographic classification, e.g. answer questions like:

    - How do male and female weights compare across different demographic classifications?
    - Which demographic classification (male AND female) shows the greatest increase in weight this quarter?

    Can anyone clarify whether sex and demographic classification belong in the fact table, or whether (as I suspect) they are dimension tables? Also, assuming they are dimension tables, could someone elaborate on the table structures (i.e. the fields)? The 'obvious' schema:

    CREATE TABLE sex_type (is_male int);
    CREATE TABLE demographic_category (id int, name varchar(4));

    may not be the correct one.
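    For illustration, a minimal star-schema sketch of the dimension-table route (a hedged sketch only: every table and column name below is hypothetical, with dim_date and dim_location standing in for the date and geography dimensions already identified). Low-cardinality attributes such as sex are conventionally modeled as tiny dimension tables rather than fact columns, since the fact table should hold only keys and measures:

    -- Hypothetical dimension tables for the two problematic attributes
    CREATE TABLE dim_sex (
        sex_id      int PRIMARY KEY,
        description varchar(10) NOT NULL      -- 'male' / 'female'
    );

    CREATE TABLE dim_demographic (
        demographic_id int PRIMARY KEY,
        classification varchar(4) NOT NULL    -- 'A', 'B', 'C', ...
    );

    -- Hypothetical fact table: one row per person per daily weighing
    CREATE TABLE fact_weight (
        date_id        int NOT NULL,          -- FK to dim_date
        location_id    int NOT NULL,          -- FK to dim_location
        sex_id         int NOT NULL,          -- FK to dim_sex
        demographic_id int NOT NULL,          -- FK to dim_demographic
        weight_kg      decimal(5,2) NOT NULL  -- the measure
    );

    -- "How do male and female weights compare across demographic classifications?"
    SELECT s.description, d.classification, AVG(f.weight_kg) AS avg_weight
    FROM fact_weight f
    JOIN dim_sex s         ON s.sex_id = f.sex_id
    JOIN dim_demographic d ON d.demographic_id = f.demographic_id
    GROUP BY s.description, d.classification;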

  • C#, create virtual directory on remote system

    - by sankar
    The following code creates a virtual directory only on the local system, but I need to create one on a remote system. Please help. Thanks, Sankar

    DirectoryEntry iisServer;
    string VirDirSchemaName = "IIsWebVirtualDir";

    public DirectoryEntry Connect()
    {
        try
        {
            if (txtPath.Text.ToLower().Trim() == "localhost")
                iisServer = new DirectoryEntry("IIS://" + txtPath.Text.Trim() + "/W3SVC/1/Root");
            else
                iisServer = new DirectoryEntry("IIS://" + txtPath.Text + "/Schema/AppIsolated", "XYZ", "xyz");
            iisServer.Dispose();
        }
        catch (Exception e)
        {
            throw new Exception("Could not connect to: " + txtPath.Text.Trim(), e);
        }
        return iisServer;
    }

    public void CreateVirtualDirectory(DirectoryEntry iisServer)
    {
        DirectoryEntry folderRoot = new DirectoryEntry("IIS://" + txtPath.Text + "/W3SVC/1/Root", "XYZ", "xyz");
        folderRoot.RefreshCache();
        folderRoot.CommitChanges();
        try
        {
            DirectoryEntry newVirDir = folderRoot.Children.Add(txtName.Text, VirDirSchemaName);
            newVirDir.CommitChanges();
            newVirDir.Properties["AccessRead"].Add(true);
            newVirDir.Properties["Path"].Add(@"\\abc\abc");
            newVirDir.Invoke("AppCreate", true);
            newVirDir.CommitChanges();
            folderRoot.CommitChanges();
            newVirDir.Close();
            folderRoot.CommitChanges();
        }
        catch (Exception e)
        {
            throw new Exception("Error! Virtual Directory Not Created", e);
        }
    }

    protected void btnCreate_Click(object sender, EventArgs e)
    {
        try
        {
            CreateVirtualDirectory(Connect());
        }
        catch (Exception ex)
        {
            Response.Write(ex.Message);
        }
    }

    protected void Page_Load(object sender, EventArgs e)
    {
    }

  • What AOP tools exist for doing aspect-oriented programming at the assembly language level against x86?

    - by JohnnySoftware
    Looking for a tool I can use to do aspect-oriented programming at the assembly language level. For experimentation purposes, I would like the code weaver to operate on native application-level executables and dynamic-link libraries. I have already done object-oriented AOP. I know assembly language for x86 and so forth.

    I would like to be able to do logging and other sorts of things using the familiar before/after/around constructs. Since assembly/machine language is not exactly the most semantically rich computer language on the planet, I would like to be able to specify certain instructions, or sequences/patterns of consecutive instructions, as the target of a pointcut. If debugger and linker symbols are available, naturally, I would like to be able to use them to identify subroutines' entry points, branch/call/jump target addresses, symbolic data addresses, etc.

    I would like the ability to send notifications out to other diagnostic tools. Thus, support for sending data through connection-oriented sockets and datagrams is highly desirable. So is normal logging to files, UI, etc. This can be done by using the action part of an aspect to make a function call, but then there are portability issues, so the tool needs to support a well-abstracted logging/notifying mechanism with a clean, simple, yet flexible interface. The goal is rapid QA.

    The idea is to be able to share aspect source code broadly within communities as well as publicly, so there needs to be a declarative security policy file that users can share. This ensures that nothing untoward hidden directly or indirectly in an aspect source file slips by the execution manager. The policy file format needs to be simple to read, write, modify, understand, type in, edit, and generate - sort of like Java .policy files. Think the exact opposite of anything resembling XML Schema files and you get the idea.

    Is there such a tool in existence already?

  • VSDBCMD deployment for additions to third party databases

    - by Sam
    We have some custom objects (stored procedures etc.) in a SQL Server 2005 database belonging to an ERP system. The custom objects are in different schemas to the ERP objects. We're using Database Edition .dbproj projects and vsdbcmd deployment for all our custom application databases, and would like to manage our custom objects in the ERP database the same way. It's not clear how this can be done without either:

    - Importing all ERP objects (~4000 tables) into the .dbproj and manually keeping them in sync with ERP development. Visual Studio fell over the only time I tried importing these, so I've no idea whether it can actually handle a project of this size.
    - Somehow excluding the ERP schemas (there are two) from the diff process to ensure they don't get dropped by vsdbcmd. I haven't found any documentation which suggests this is possible. I'm aware of the IgnoreDefaultSchema setting, but there are two schemas I need to ignore, and I'm not comfortable with the 'default schema' approach - deployment by different users could be disastrous.

    Has anyone managed to successfully use .dbproj and vsdbcmd for custom additions to a third-party database? If not, how do you manage SQL source control and deployment?

  • SharePoint 2007 and SiteMinder

    - by pborovik
    Here is a question regarding some details of how SiteMinder secures access to SharePoint 2007. I've read a bunch of materials on this and have some picture of SharePoint 2010 FBA claims-based + SiteMinder security (I can be wrong here, of course):

    - SiteMinder is registered as a trusted identity provider for SharePoint. It means (to my mind) that SharePoint has no need to go into user directories like AD, an RDBMS or whatever to create a record for a user being granted access to SharePoint - instead it consumes a claims-based id supplied by SiteMinder.
    - SiteMinder checks all requests to SharePoint resources and starts a login sequence via SiteMinder if it does not find the required headers in the request (SMSESSION, etc.).
    - SiteMinder creates a GenericIdentity with the user login name if the headers are OK, so SharePoint recognizes the user as authenticated.

    But in the case of SharePoint 2007 with FBA + SiteMinder, I cannot find answers to questions like:

    - Does SharePoint need to go to user directories like AD to know something about users (as SiteMinder is not in charge of providing user info like claims-based ids)? So the SharePoint admin should configure SharePoint FBA to talk to these sources?
    - Let's say I'm talking to a web service of SharePoint protected by SiteMinder. Shall I make an Authentication.asmx Login call to create an authentication ticket, or is this schema somehow changed by SiteMinder? If such a call is needed, do I also need a SiteMinder authentication sequence?
    - What prevents me from rewriting request headers (say, manually in Fiddler) before posting a request to the SharePoint protected by SiteMinder, to override its defence?

    Pity, but I do not have access to a deployed SiteMinder + SharePoint, so I need to investigate some questions blindly. Thanks.

  • Why can't Doctrine retrieve my model data?

    - by scottm
    So, I'm trying to use Doctrine to retrieve some data. I have some basic code like this:

    $conn = Doctrine_Manager::connection(CONNECTION_STRING);
    $site = Doctrine_Core::getTable('Site')->find('00024');
    echo $site->SiteName;

    However, this keeps throwing a SQL error that 'column siteid does not exist'. When I look at the exception, the SQL query is this (you can see the problem: inside inner_tbl, siteid is aliased to s__siteid, so the outer query's reference to inner_tbl.siteid is what's broken):

    SELECT TOP 1 [inner_tbl].[siteid] AS [s__siteid]
    FROM (
        SELECT TOP 1 [s].[siteid] AS [s__siteid], [s].[name] AS [s__name],
               [s].[address] AS [s__address], [s].[city] AS [s__city],
               [s].[zip] AS [s__zip], [s].[state] AS [s__state],
               [s].[region] AS [s__region], [s].[callprocessor] AS [s__callprocessor],
               [s].[active] AS [s__active], [s].[dateadded] AS [s__dateadded]
        FROM [Sites] [s]
        WHERE ([s].[siteid] = '00024')
    ) AS [inner_tbl]

    Why is the query being generated this way? Could it be the way the YAML schema is laid out?

    Site:
      connection: 0
      tableName: Sites
      columns:
        siteid:        { type: string(5),   fixed: true,  unsigned: false, primary: true,  autoincrement: false }
        name:          { type: string(300), fixed: false, unsigned: false, notnull: true,  primary: false, autoincrement: false }
        address:       { type: string(100), fixed: false, unsigned: false, notnull: false, primary: false, autoincrement: false }
        city:          { type: string(100), fixed: false, unsigned: false, notnull: false, primary: false, autoincrement: false }
        zip:           { type: string(5),   fixed: false, unsigned: false, notnull: false, primary: false, autoincrement: false }
        state:         { type: string(2),   fixed: true,  unsigned: false, notnull: true,  primary: false, autoincrement: false }
        region:        { type: integer(4),  fixed: false, unsigned: false, notnull: true,  default: (5), primary: false, autoincrement: false }
        callprocessor: { type: integer(4),  fixed: false, unsigned: false, notnull: true,  primary: false, autoincrement: false }
        active:        { type: integer(1),  fixed: false, unsigned: false, notnull: true,  primary: false, autoincrement: false }
        dateadded:     { type: timestamp(16), fixed: false, unsigned: false, notnull: true, default: (getdate()), primary: false, autoincrement: false }

  • Calling SubmitChanges on DataContext does not update database.

    - by drasto
    In a C# ASP.NET MVC application I use LINQ to SQL to provide data for my application. I have got a simple database schema like this. In my controller class I reference this data context, called Model (as you can see on the right side of the picture, in properties), like this:

    private Model model = new Model();

    I've got a table (List) of Series rendered on my page. It renders properly, and I was able to add delete functionality for Series like this:

    public ActionResult Delete(int id)
    {
        model.Series.DeleteOnSubmit(model.Series.SingleOrDefault(s => s.ID == id));
        model.SubmitChanges();
        return RedirectToAction("Index");
    }

    where the appropriate action link looks like this:

    <%: Html.ActionLink("Delete", "Delete", new { id=item.ID })%>

    Also create (implemented in a similar way) works fine. However, edit does not work. My edit looks like this:

    public ActionResult Edit(int id)
    {
        return View(model.Series.SingleOrDefault(s => s.ID == id));
    }

    [HttpPost]
    public ActionResult Edit(Series series)
    {
        if (ModelState.IsValid)
        {
            UpdateModel(series);
            series.Title = series.Title + " some string to ensure title has changed";
            model.SubmitChanges();
            return RedirectToAction("Index");
        }
    }

    I have verified that my database has a primary key set up correctly. I debugged my application and found out that everything works as expected until the line with model.SubmitChanges();. This command does not apply the changes of the Title property (or any other) to the database. Please help.

  • Turning off hibernate logging console output

    - by Jared
    I'm using Hibernate 3 and want to stop it from dumping all the startup messages to the console. I tried commenting out the stdout lines in log4j.properties, but no luck. I've pasted my log config below. Also, I'm using Eclipse with the standard project structure and have a copy of log4j.properties in both the root of the project folder and the bin folder.

    ### direct log messages to stdout ###
    #log4j.appender.stdout=org.apache.log4j.ConsoleAppender
    #log4j.appender.stdout.Target=System.out
    #log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
    #log4j.appender.stdout.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n

    ### direct messages to file hibernate.log ###
    log4j.appender.file=org.apache.log4j.FileAppender
    log4j.appender.file.File=hibernate.log
    log4j.appender.file.layout=org.apache.log4j.PatternLayout
    log4j.appender.file.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n

    ### set log levels - for more verbose logging change 'info' to 'debug' ###
    log4j.rootLogger=warn, stdout

    #log4j.logger.org.hibernate=info
    log4j.logger.org.hibernate=debug

    ### log HQL query parser activity
    #log4j.logger.org.hibernate.hql.ast.AST=debug

    ### log just the SQL
    #log4j.logger.org.hibernate.SQL=debug

    ### log JDBC bind parameters ###
    log4j.logger.org.hibernate.type=info
    #log4j.logger.org.hibernate.type=debug

    ### log schema export/update ###
    log4j.logger.org.hibernate.tool.hbm2ddl=debug

    ### log HQL parse trees
    #log4j.logger.org.hibernate.hql=debug

    ### log cache activity ###
    #log4j.logger.org.hibernate.cache=debug

    ### log transaction activity
    #log4j.logger.org.hibernate.transaction=debug

    ### log JDBC resource acquisition
    #log4j.logger.org.hibernate.jdbc=debug

    ### enable the following line if you want to track down connection ###
    ### leakages when using DriverManagerConnectionProvider ###
    #log4j.logger.org.hibernate.connection.DriverManagerConnectionProvider=trac5

  • The "first past the post election" query problem

    - by MPelletier
    This problem may seem like school work, but it isn't. At best it is self-imposed school work. I encourage any teachers to take it as an example if they wish.

    "First past the post" elections are single-round, meaning that whoever gets the most votes wins, with no second round. Suppose a table for an election:

    CREATE TABLE ElectionResults (
        DistrictHnd   INTEGER NOT NULL,
        PartyHnd      INTEGER NOT NULL,
        CandidateName VARCHAR2(100) NOT NULL,
        TotalVotes    INTEGER NOT NULL,
        PRIMARY KEY (DistrictHnd, PartyHnd));

    The table has two foreign keys: DistrictHnd points to a District table (lists all the different electoral districts) and PartyHnd points to a Party table (lists all the different political parties). I won't bother with the other tables here; joining them is trivial. This is just a wee bit of context.

    The question: what SQL query will return a table listing the DistrictHnd, PartyHnd, CandidateName and TotalVotes of the winners (max votes) in each district? This does not suppose any particular database system. If you wish to stick to a particular implementation of SQL, go the way of SQLite and MySQL. If you can devise a better schema (or an easier one), that is acceptable too. Criteria: simplicity, portability to other databases.
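    One portable sketch of an answer (offered tentatively, not as the definitive query): join the table to its own per-district maximum. Note that ties yield multiple rows per district; breaking them needs a rule of its own.

    SELECT r.DistrictHnd, r.PartyHnd, r.CandidateName, r.TotalVotes
    FROM ElectionResults r
    JOIN (SELECT DistrictHnd, MAX(TotalVotes) AS MaxVotes
          FROM ElectionResults
          GROUP BY DistrictHnd) m
      ON m.DistrictHnd = r.DistrictHnd
     AND m.MaxVotes    = r.TotalVotes;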

  • SQL Stored Queries - use result of query as boolean based on existence of records

    - by Christian Mann
    Just getting into SQL stored queries right now... anyway, here's my database schema (simplified for YOUR convenience):

    member
    ------
    id INT PK

    board
    ------
    id INT PK

    officer
    ------
    id INT PK

    If you're into OOP, Officer inherits Board inherits Member. In other words, if someone is listed in the officer table, s/he is listed in the board table and the member table. I want to find out the highest privilege level someone has. So far my SP looks like this:

    DELIMITER //
    CREATE PROCEDURE GetAuthLevel(IN targetID MEDIUMINT)
    BEGIN
        IF SELECT `id` FROM `member` WHERE `id` = targetID; THEN
            IF SELECT `id` FROM `board` WHERE `id` = targetID; THEN
                IF SELECT `id` FROM `officer` WHERE `id` = targetID; THEN
                    RETURN 3; /*officer*/
                ELSE
                    RETURN 2; /*board member*/
            ELSE
                RETURN 1; /*general member*/
        ELSE
            RETURN 0; /*not a member*/
    END //
    DELIMITER ;

    The exact text of the error is:

    #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'SELECT id FROM member WHERE id = targetID; THEN IF SEL' at line 4

    I suspect the issue is in the arguments for the IF blocks. What I want to do is return true if the result set has at least one row - i.e. the id was found in the table. Do any of you guys see anything to fix here, or should I reconsider my database design into this?

    person
    ------
    id INT PK
    level SMALLINT
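    For what it's worth, a hedged sketch of one possible fix (assuming MySQL): IF wants a boolean expression, so each lookup can be wrapped in EXISTS(...); every IF needs a matching END IF; and RETURN is only legal in a stored FUNCTION, not a PROCEDURE. Since every officer is also on the board and a member, checking from the most privileged level downward removes the nesting entirely:

    DELIMITER //
    CREATE FUNCTION GetAuthLevel(targetID MEDIUMINT) RETURNS INT
        READS SQL DATA
    BEGIN
        IF EXISTS (SELECT 1 FROM `officer` WHERE `id` = targetID) THEN
            RETURN 3; /*officer*/
        ELSEIF EXISTS (SELECT 1 FROM `board` WHERE `id` = targetID) THEN
            RETURN 2; /*board member*/
        ELSEIF EXISTS (SELECT 1 FROM `member` WHERE `id` = targetID) THEN
            RETURN 1; /*general member*/
        ELSE
            RETURN 0; /*not a member*/
        END IF;
    END //
    DELIMITER ;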

  • How to use SQLAlchemy to dump an SQL file from query expressions to bulk-insert into a DBMS?

    - by Mahmoud Abdelkader
    Please bear with me as I explain the problem and how I tried to solve it; my question on how to improve it is at the end.

    I have a 100,000-line CSV file from an offline batch job and I needed to insert it into the database as its proper models. Ordinarily, if this is a fairly straightforward load, it can be trivially loaded by just munging the CSV file to fit a schema, but I had to do some external processing that requires querying, and it's just much more convenient to use SQLAlchemy to generate the data I want. The data I want here is 3 models that represent 3 pre-existing tables in the database, and each subsequent model depends on the previous model. For example:

    Model C --> Foreign Key --> Model B --> Foreign Key --> Model A

    So, the models must be inserted in the order A, B, and C. I came up with a producer/consumer approach:

    - instantiate a multiprocessing.Process which contains a threadpool of 50 persister threads that have a threadlocal connection to a database
    - read a line from the file using the csv DictReader
    - enqueue the dictionary to the process, where each thread creates the appropriate models by querying the right values and each thread persists the models in the appropriate order

    This was faster than a non-threaded read/persist, but it is way slower than bulk-loading a file into the database. The job finished persisting after about 45 minutes. For fun, I decided to write it in SQL statements; it took 5 minutes. Writing the SQL statements took me a couple of hours, though.

    So my question is, could I have used a faster method to insert rows using SQLAlchemy? As I understand it, SQLAlchemy is not designed for bulk insert operations, so this is less than ideal. This leads to my question: is there a way to generate the SQL statements using SQLAlchemy, throw them in a file, and then just use a bulk load into the database? I know about str(model_object), but it does not show the interpolated values. I would appreciate any guidance on how to do this faster. Thanks!

  • How do I cast <T> to varbinary and still be able to perform a CONVERT on the SQL side? Implications?

    - by Biff MaGriff
    Hello, I'm writing an application that will allow a user to define custom quizzes and then allow another user to respond to the questions. Each question of the quiz has a corresponding datatype. All the responses to all the questions are stored vertically in my [Response] table. I currently use 2 fields to store the response.

    //Response schema
    ResponseID   int
    QuizPersonID int
    QuestionID   int
    ChoiceID     int             //maps to Choice table, used for drop down lists
    ChoiceValue  varbinary(MAX)  //used to store a user entered value

    I'm using .NET 3.5, C# and SQL Server 2008. I'm thinking that I would want to store different datatypes in the same field and then, in my SQL report proc, CONVERT to the proper datatype. I'm thinking this is ideal because I only have to check one field. I'm also thinking it might be more trouble than it is worth. I think my other options are to store the data as strings in the db (yuck), or to have a column for each datatype I might use.

    So what I would like to know is: how would I format my datatypes in C# so that they can be converted properly in SQL? What is the performance hit for converting in SQL? Should I just make a whole wack of columns, one for each datatype?
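    On the SQL side, a hedged sketch of what reading the values back might look like. The DataType column here is hypothetical (some record of what was stored is needed), and this only works if the bytes written from C# match SQL Server's own binary layouts, which is exactly the fragility being asked about; SQL Server's sql_variant type is an alternative worth weighing:

    -- Reinterpret the varbinary bytes according to the recorded type
    SELECT r.ResponseID,
           CASE r.DataType
               WHEN 'int'      THEN CAST(CAST(r.ChoiceValue AS int) AS varchar(max))
               WHEN 'datetime' THEN CONVERT(varchar(max), CAST(r.ChoiceValue AS datetime), 120)
               WHEN 'string'   THEN CAST(r.ChoiceValue AS varchar(max))
           END AS TypedValue
    FROM Response r;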

  • how to specify OpenID realm in openid4java 0.9.5

    - by Salvin Francis
    My URL @ development: http://192.168.0.1:8888/com.company.MyEntryPoint/MyEntrypoint.html
    My URL @ live env: http://www.example.com/com.company.MyEntryPoint/MyEntrypoint.html

    I need users to authenticate using OpenID, and this is how I want my realm to be: *.company.MyEntryPoint

    I wrote some simple code to specify the realm:

    AuthRequest authReq = manager.authenticate(discovered, returnToUrl, "*.company.MyEntryPoint");

    It does not work. Exception:

    org.openid4java.message.MessageException: 0x301: Realm verification failed (2) for: *.company.MyEntryPoint
        at org.openid4java.message.AuthRequest.validate(AuthRequest.java:354)
        at org.openid4java.message.AuthRequest.createAuthRequest(AuthRequest.java:101)
        at org.openid4java.consumer.ConsumerManager.authenticate(ConsumerManager.java:1073)

    Of all the combinations I tried, curiously, the following worked:

    AuthRequest authReq = manager.authenticate(discovered, returnToUrl, "http://localhost:8888/com.capgent.MyEntryPoint");

    This does not solve my issue but rather complicates it :) According to Google and the OpenID spec, it should have worked. Complete code snippet:

    List discoveries = manager.discover(clientUrl);
    DiscoveryInformation discovered = manager.associate(discoveries);
    AuthRequest authReq = manager.authenticate(discovered, returnToUrl, "*.company.MyEntryPoint");
    FetchRequest fetch = FetchRequest.createFetchRequest();
    fetch.addAttribute("email", "http://schema.openid.net/contact/email", true);
    fetch.addAttribute("country", "http://axschema.org/contact/country/home", true);
    fetch.addAttribute("firstname", "http://axschema.org/namePerson/first", true);
    fetch.addAttribute("lastname", "http://axschema.org/namePerson/last", true);
    fetch.addAttribute("language", "http://axschema.org/pref/language", true);
    authReq.addExtension(fetch);
    String returnStr;
    if (!discovered.isVersion2()) {
        returnStr = authReq.getDestinationUrl(true);
    } else {
        returnStr = authReq.getDestinationUrl(false);
    }

    What am I doing wrong over here?

  • calling a stored proc over a dblink

    - by neesh
    I am trying to call a stored procedure over a database link. The code looks something like this:

    declare
        symbol_cursor package_name.record_cursor;
        symbol_record package_name.record_name;
    begin
        symbol_cursor := package_name.function_name('argument');
        loop
            fetch symbol_cursor into symbol_record;
            exit when symbol_cursor%notfound;
            -- Do something with each record here, e.g.:
            dbms_output.put_line(symbol_record.field_a);
        end loop;
        close symbol_cursor;
    end;

    When I run this from the same DB instance and schema that package_name belongs to, it runs fine. However, when I run this over a database link (with the required modification to the stored proc name, etc.), I get an Oracle error: ORA-24338: statement handle not executed. The modified version of this code over a dblink looks like this:

    declare
        symbol_cursor package_name.record_cursor@db_link_name;
        symbol_record package_name.record_name@db_link_name;
    begin
        symbol_cursor := package_name.function_name@db_link_name('argument');
        loop
            fetch symbol_cursor into symbol_record;
            exit when symbol_cursor%notfound;
            -- Do something with each record here, e.g.:
            dbms_output.put_line(symbol_record.field_a);
        end loop;
        close symbol_cursor;
    end;
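    For context: one common cause of ORA-24338 in this exact situation is that ref cursors cannot be fetched across a database link; the cursor is opened on the remote site but the FETCH runs locally. A hedged workaround sketch (the wrapper name is hypothetical): move the open/fetch/close loop into a procedure on the remote database, so the cursor lives and dies on one site, and call that procedure over the link.

    -- On the remote database (where package_name lives):
    create or replace procedure remote_wrapper(p_arg in varchar2) as
        symbol_cursor package_name.record_cursor;
        symbol_record package_name.record_name;
    begin
        symbol_cursor := package_name.function_name(p_arg);
        loop
            fetch symbol_cursor into symbol_record;
            exit when symbol_cursor%notfound;
            dbms_output.put_line(symbol_record.field_a);
        end loop;
        close symbol_cursor;
    end;
    /

    -- From the local database:
    begin
        remote_wrapper@db_link_name('argument');
    end;
    /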

  • Attributes of AttributeValue element in SAML 2 Attribute Statement

    - by AJ
    I am building a web service that receives a SAML attribute query and responds with an attribute statement. I know I can return one or multiple values of a SAML attribute. I have some values that are dependent on other attribute values, and I need to show that relationship.

    Let us say the query is for the subject Dave, and the return values are his company and job title. Dave can work at multiple companies, with a job title at each company. I have two options for sending this data back:

    1. Send this as a complex type, by defining an attribute "company" and returning XML within that attribute:

    <saml:Attribute name="company">
        <saml:AttributeValue>
            <company name="company1" jobtitle="CIO"/>
            <company name="company2" jobtitle="VP"/>
        </saml:AttributeValue>
    </saml:Attribute>

    2. Try to send multiple values of attributes, somehow sending a reference in the AttributeValue element:

    <saml:Attribute name="company">
        <saml:AttributeValue>company1</saml:AttributeValue>
        <saml:AttributeValue>company2</saml:AttributeValue>
    </saml:Attribute>
    <saml:Attribute name="jobTitle">
        <saml:AttributeValue company="company1">CIO</saml:AttributeValue>
        <saml:AttributeValue company="company2">VP</saml:AttributeValue>
    </saml:Attribute>

    Which approach would you prefer? Why? I am biased towards the second approach, as it does not require the client to know about any schema. It does require them to know about the non-standard attribute "company" in the attribute value.

  • Full-text search on App Engine with Whoosh

    - by Martin
    I need to do full-text searching with Google App Engine. I found the project Whoosh and it works really well, as long as I use the App Engine development environment... When I upload my application to App Engine, I get the following traceback. For my tests, I am using the example application provided in this project. Any idea what I am doing wrong?

    <type 'exceptions.ImportError'>: cannot import name loads

    Traceback (most recent call last):
      File "/base/data/home/apps/myapp/1.334374478538362709/hello.py", line 6, in <module>
        from whoosh import store
      File "/base/data/home/apps/myapp/1.334374478538362709/whoosh/__init__.py", line 17, in <module>
        from whoosh.index import open_dir, create_in
      File "/base/data/home/apps/myapp/1.334374478538362709/whoosh/index.py", line 31, in <module>
        from whoosh import fields, store
      File "/base/data/home/apps/myapp/1.334374478538362709/whoosh/store.py", line 27, in <module>
        from whoosh import tables
      File "/base/data/home/apps/myapp/1.334374478538362709/whoosh/tables.py", line 43, in <module>
        from marshal import loads

    Here are the imports I have in my Python file:

    # Whoosh ----------------------------------------------------------------------
    sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..', 'utils')))
    from whoosh.fields import Schema, STORED, ID, KEYWORD, TEXT
    from whoosh.index import getdatastoreindex
    from whoosh.qparser import QueryParser, MultifieldParser

    Thank you in advance for your help!

  • Need help choosing database server

    - by The Pretender
    Good day everyone. Recently I was given a task to develop an application to automate some aspects of stock trading. While working on the initial architecture, a database dilemma emerged. What I need is a fast database engine which can process huge amounts of data coming in very fast. I'm fairly experienced in general programming, but I have never faced the task of developing a high-load database architecture. I developed a simple MSSQL database schema with several many-to-many relationships during one of my projects, but that's it.

    What I'm looking for is some advice on choosing the most suitable database engine, and some pointers to manuals or books which describe high-load database development. Specifics of the project are as follows:

    - OS: Windows NT family (Server 2008 / 7)
    - Primary platform: .NET with C#
    - Database structure: one table to hold primary items, and two or three tables with foreign keys to the first table to hold additional information
    - Database SELECT requirements: need super-fast selection by foreign keys and by a combination of a foreign key and one of the columns (presumably DATETIME); see the index sketch after this list
    - Database INSERT requirements: the faster the better :)

    If there will be a significant performance gain, some parts can be written in C++ with managed interfaces to the rest of the system. So once again: given all that stuff I just typed, please give me some advice on what the best database for my project is. Links or references to manuals and books on the subject are also greatly appreciated.

    EDIT: I'll need to insert 3-5 rows into 2 tables approximately once every 30-50 milliseconds, and I'll need to run SELECTs with 0-2 WHERE clauses at a similar rate.
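    Whichever engine is chosen, the two SELECT patterns described map naturally onto a composite index. A tentative T-SQL sketch (all table and column names are made up for illustration):

    CREATE TABLE PrimaryItem (
        Id INT IDENTITY PRIMARY KEY
    );

    CREATE TABLE ItemDetail (
        Id            INT IDENTITY PRIMARY KEY,
        PrimaryItemId INT NOT NULL REFERENCES PrimaryItem(Id),
        RecordedAt    DATETIME NOT NULL
    );

    -- One composite index serves both lookups: by foreign key alone
    -- (leftmost prefix) and by foreign key plus the DATETIME column.
    CREATE INDEX IX_ItemDetail_Primary_Time
        ON ItemDetail (PrimaryItemId, RecordedAt);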

  • Symfony fk issue on insertion

    - by Daniel Hertz
    Hi, I posted a similar problem before, but it could not be resolved. I created a relational database of users and groups, but for some reason I cannot insert test data with fixtures properly. Here is a sample of the schema:

    User:
      actAs: { Timestampable: ~ }
      columns:
        name:     { type: string(255), notnull: true }
        email:    { type: string(255), notnull: true, unique: true }
        nickname: { type: string(255), unique: true }
        password: { type: string(300), notnull: true }
        image:    { type: string(255) }

    Group:
      actAs: { Timestampable: ~ }
      columns:
        name:  { type: string(500), notnull: true }
        image: { type: string(255) }
        type:  { type: string(255), notnull: true }
        created_by_id: { type: integer }
      relations:
        User: { onDelete: SET NULL, class: User, local: created_by_id, foreign: id, foreignAlias: groups_created }

    FanOf:
      actAs: { Timestampable: ~ }
      columns:
        user_id:  { type: integer, primary: true }
        group_id: { type: integer, primary: true }
      relations:
        User:  { onDelete: CASCADE, local: user_id, foreign: id, foreignAlias: fanhood }
        Group: { onDelete: CASCADE, local: group_id, foreign: id, foreignAlias: fanhood }

    And this is the data I try to input:

    User:
      user1:
        name: Danny
        email: [email protected]
        nickname: danny
        password: f05050400c5e586fa6629ef497be

    Group:
      group1:
        name: Mets
        type: sports

    FanOf:
      fans1:
        user_id: user1
        group_id: group1

    I keep getting this error:

    SQLSTATE[23000]: Integrity constraint violation: 1452 Cannot add or update a child row: a foreign key constraint fails (`krowdd`.`fan_of`, CONSTRAINT `fan_of_user_id_user_id` FOREIGN KEY (`user_id`) REFERENCES `user` (`id`) ON DELETE CASCADE)

    The users and groups are clearly being created before the "fanhood" is, so why am I getting this error? Thanks!

  • Why use SQL database?

    - by martinthenext
    I'm not quite sure Stack Overflow is a place for such a general question, but let's give it a try.

    Faced with the need to store application data somewhere, I've always used MySQL or SQLite, just because it's always done like that, and it seems like the whole world is using these databases in most software products, frameworks, etc. It is rather hard for a beginning developer like me to ask the question: why?

    Ok, say we have some object-oriented logic in our application, and objects are related to each other somehow. We need to map this logic to the storage logic, so we need relations between database objects too. This leads us to using a relational database, and I'm OK with that - to put it simply, our database rows will sometimes need to have references to other tables' rows.

    But why use the SQL language for interaction with such a database? An SQL query is a text message. I can understand that this is cool for actually understanding what it does, but isn't it silly to use text table and column names for a part of the application that no one ever sees after deployment?

    If you had to write a data storage layer from scratch, you would never have used this kind of solution. Personally, I would have used some 'compiled db query' bytecode that would be assembled once inside a client application and passed to the database. And it would surely name tables and columns by id numbers, not ASCII strings. In the case of changes in table structure, those bytecode queries could be recompiled according to the new db schema, stored in XML or something like that.

    What are the problems with my idea? Is there any reason for me not to write it myself and to use an SQL database instead?

  • Flexible design - customizable entity model, UI and workflow

    - by Ngm
    Hi All, I want to achieve the following aspects in the software I am building:

    1. Customizable entity model
    2. Customizable UI
    3. Customizable workflow

    I have thought about an approach to achieve this; I want you to review it and make suggestions:

    - Entity objects should be plain objects and will hold just data.
    - Separate the entity model and DB schema by using a framework (like NHibernate?). This will allow easy modification of entity objects.
    - Business logic to fetch/modify entities has to be granular enough so that it can be invoked as part of the workflow.
    - Business objects should not hold any state, and hence will contain only static methods.
    - The workflow will decide, depending upon the "state" of an entity/entities, which methods on business object/objects to invoke.
    - The workflow should obtain the results of the processing and then pass on the business objects to the appropriate UI screen.
    - The UI screen has to contain instructions about how to display a given entity/entities. Possibly the UI has to be generated dynamically based on a set of UI instructions (like XUL).

    What do you think about this approach? Suggest which existing frameworks (like NHibernate, Windows Workflow) fit into this model, so that I will not spend time coding these frameworks. Also, is there any ASP.NET framework that can generate dynamic ASP.NET AJAX pages based on a set of UI instructions (like Mozilla XUL)?

    I have recently been exploring Apache OFBiz and was impressed by its ability to customize most areas of the application: UI, workflow, entities. Is there any similar application (not necessarily an ERP system) developed in C#/.NET which offers a similar level of customization? I am looking for examples of applications developed in C# that are highly customizable in terms of UI, workflow and entity model.

  • Abort SAX parsing mid-document?

    - by CSharperWithJava
    I'm parsing a very simple XML schema with a SAX parser in Android. An example file would be:

    <Lists>
        <List name="foo">
            <Note title="note 1" .../>
            <Note title="note 2" .../>
        </List>
        <List name="bar">
            <Note title="note 3" .../>
        </List>
    </Lists>

    The ... represents more note data as attributes that aren't important to the question. I use a SAX parser to parse the document and only implement the startElement and endElement methods of the HandlerBase to handle Note and List nodes. However, in some cases the files can be very large and take some time to process. I'd like to be able to abort the parsing process at any time (i.e. when the user presses a cancel button). The best way I've come up with is to throw an exception from my startElement method when certain conditions are met (i.e. a boolean stopParsing is true). Is there a better way to do this? I've always used DOM-style parsers, so I don't fully understand the SAX parser.

    One final note: I'm running this on Android, so the parser will run on a worker thread to keep the UI responsive. If you know how I can kill the thread safely while the parser is running, that would answer my question as well.

  • KendoUI Mobile switch and datasource

    - by OnaBai
    I'm trying to have a list of items displayed using a listview, something like:

    <div data-role="view" data-model="my_model">
        <ul data-role="listview" data-bind="source: ds" data-template="list-tmpl"></ul>
    </div>

    where I have a view using a model called my_model, and a listview whose source is bound to ds. My model is something like:

    var my_model = kendo.observable({
        ds: new kendo.data.DataSource({
            transport: {
                read: readData,
                update: updateData,
                create: updateData,
                remove: updateData
            },
            pageSize: 10,
            schema: {
                model: {
                    id: "id",
                    fields: {
                        id: { type: "number" },
                        name: { type: "string" },
                        active: { type: "boolean" }
                    }
                }
            }
        })
    });

    Each item includes an id, a name (a string) and a boolean named active. The template used to render each item is:

    <script id="list-tmpl" type="text/kendo-tmpl">
        <span>#= name # : #= active #</span>
        <input data-role="switch" data-bind="checked: active"/>
    </script>

    where I display the name and (for debugging) the value of active. In addition, I render a switch bound to active. You should see something like:

    The problems observed are:

    1. If you click on a switch, you will see that the value of active next to the name changes (as expected), but if you then pick another switch, the value (neither next to the name nor in the DataSource) is not updated, despite the value of the switch itself being correctly updated.
    2. The update handler in the DataSource is never invoked (not even for the first chosen switch, even though the DataSource for that first toggled switch is updated).

    You can check it in JSFiddle: http://jsfiddle.net/OnaBai/K7wEC/

    How do I make the DataSource get updated and the update handler invoked?

  • git stash blunder:

    - by Chirag Patel
    I did a git stash pop and ended up with merge conflicts. I removed the files from the file system and did a git checkout as shown below, but it thinks the files are still unmerged. I then tried replacing the files and doing a git checkout again: same result. I even tried forcing it with the -f flag. Any help would be appreciated!

    chirag-patels-macbook-pro:haloror patelc75$ git status
    app/views/layouts/_choose_patient.html.erb: needs merge
    app/views/layouts/_links.html.erb: needs merge
    # On branch prod-temp
    # Changes to be committed:
    #   (use "git reset HEAD <file>..." to unstage)
    #
    #       modified:   db/schema.rb
    #
    # Changed but not updated:
    #   (use "git add <file>..." to update what will be committed)
    #   (use "git checkout -- <file>..." to discard changes in working directory)
    #
    #       unmerged:   app/views/layouts/_choose_patient.html.erb
    #       unmerged:   app/views/layouts/_links.html.erb

    chirag-patels-macbook-pro:haloror patelc75$ git checkout app/views/layouts/_choose_patient.html.erb
    error: path 'app/views/layouts/_choose_patient.html.erb' is unmerged

    chirag-patels-macbook-pro:haloror patelc75$ git checkout -f app/views/layouts/_choose_patient.html.erb
    warning: path 'app/views/layouts/_choose_patient.html.erb' is unmerged

  • implementing WS-Security within WCF proxy

    - by harrisonmeister
    Hi, I have imported an Axis-based WSDL into a VS 2008 project as a service reference. I need to be able to pass security details such as username/password and nonce values to call the Axis-based service. I have looked into doing it with WSE, which I understand the world hates (no issues there). I have very little experience of WCF, but have now worked out how to physically call the endpoint, thanks to SO. However, I have no idea how to set up the SOAP headers, whose schema is shown below:

    <S:Envelope xmlns:S="http://www.w3.org/2001/12/soap-envelope"
                xmlns:ws="http://schemas.xmlsoap.org/ws/2002/04/secext">
        <S:Header>
            <ws:Security>
                <ws:UsernameToken>
                    <ws:Username>aarons</ws:Username>
                    <ws:Password>snoraa</ws:Password>
                </ws:UsernameToken>
            </ws:Security>
            •••
        </S:Header>
        •••
    </S:Envelope>

    Any help much appreciated. Thanks, Mark

  • How do I create and use a junction table in Rails?

    - by Thierry Lam
    I have the following data:

    - A post called Hello has category greet
    - Another post called Hola has categories greet and international

    My schema is:

    create_table "posts", :force => true do |t|
      t.string   "name"
      t.datetime "created_at"
      t.datetime "updated_at"
    end

    create_table "categories", :force => true do |t|
      t.string   "name"
      t.datetime "created_at"
      t.datetime "updated_at"
    end

    create_table "posts_categories", :force => true do |t|
      t.integer  "post_id"
      t.integer  "category_id"
      t.datetime "created_at"
      t.datetime "updated_at"
    end

    After reading the Rails guide, the most suitable relationship for the above seems to be:

    class Post < ActiveRecord::Base
      has_and_belongs_to_many :categories
    end

    class Category < ActiveRecord::Base
      has_and_belongs_to_many :posts
    end

    My junction table also seems to have a primary key; I think I need to get rid of it.

    1. What's the initial migration command to generate a junction table in Rails?
    2. What's the best course of action: should I drop posts_categories and re-create it, or just drop the primary key column?
    3. Does the junction table have a corresponding model? I used scaffold to generate the junction table code - should I get rid of the extra code?
    4. Assuming all the above has been fixed and is working properly, how do I query all posts and display them along with their named categories in the view? For example:

    Post #1 - hello, categories: greet
    Post #2 - hola, categories: greet, international
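    As a point of reference (hedged; conventions can vary by Rails version), the join table has_and_belongs_to_many expects has no surrogate id column and takes its name from both tables in alphabetical order, so the target shape in raw SQL is roughly:

    CREATE TABLE categories_posts (
        post_id     INTEGER NOT NULL,
        category_id INTEGER NOT NULL,
        PRIMARY KEY (post_id, category_id)
    );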
