Search Results

Search found 3163 results on 127 pages for 'schema'.

  • How to use SQLAlchemy to dump an SQL file from query expressions to bulk-insert into a DBMS?

    - by Mahmoud Abdelkader
    Please bear with me as I explain the problem and how I tried to solve it; my question on how to improve it is at the end. I have a 100,000-line CSV file from an offline batch job and I needed to insert it into the database as its proper models. Ordinarily, if this were a fairly straightforward load, it could be loaded trivially by just munging the CSV file to fit a schema, but I had to do some external processing that requires querying, and it's just much more convenient to use SQLAlchemy to generate the data I want. The data I want here is 3 models that represent 3 pre-existing tables in the database, and each subsequent model depends on the previous one. For example:

        Model C --> Foreign Key --> Model B --> Foreign Key --> Model A

    So the models must be inserted in the order A, B, and C. I came up with a producer/consumer approach:

    - instantiate a multiprocessing.Process which contains a threadpool of 50 persister threads that have a threadlocal connection to the database
    - read a line from the file using the csv DictReader
    - enqueue the dictionary to the process, where each thread creates the appropriate models by querying the right values and persists them in the appropriate order

    This was faster than a non-threaded read/persist, but it is way slower than bulk-loading a file into the database: the job finished persisting after about 45 minutes. For fun, I decided to write it as SQL statements, and it took 5 minutes; writing the SQL statements took me a couple of hours, though. So my question is: could I have used a faster method to insert rows using SQLAlchemy? As I understand it, SQLAlchemy is not designed for bulk insert operations, so this is less than ideal. That leads to my second question: is there a way to generate the SQL statements using SQLAlchemy, throw them in a file, and then just bulk-load that into the database? I know about str(model_object), but it does not show the interpolated values. I would appreciate any guidance on how to do this faster. Thanks!
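
    The snippet below is a minimal sketch (not from the question) of one way to get interpolated SQL out of SQLAlchemy: compile a Core insert with literal_binds so the rendered statement can be written to a file and bulk-loaded. The table and dialect are assumptions for illustration, and literal_binds may not render every column type.

        # Hypothetical example: render an INSERT with its values inlined.
        from sqlalchemy import Table, Column, Integer, String, MetaData
        from sqlalchemy.dialects import mysql

        metadata = MetaData()
        model_a = Table(
            "model_a", metadata,                      # assumed table, not the asker's schema
            Column("id", Integer, primary_key=True),
            Column("name", String(50)),
        )

        stmt = model_a.insert().values(id=1, name="example")
        sql = str(stmt.compile(dialect=mysql.dialect(),
                               compile_kwargs={"literal_binds": True}))
        print(sql)  # INSERT INTO model_a (id, name) VALUES (1, 'example')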

  • calling a stored proc over a dblink

    - by neesh
    I am trying to call a stored procedure over a database link. The code looks something like this:

        declare
          symbol_cursor package_name.record_cursor;
          symbol_record package_name.record_name;
        begin
          symbol_cursor := package_name.function_name('argument');
          loop
            fetch symbol_cursor into symbol_record;
            exit when symbol_cursor%notfound;
            -- Do something with each record here, e.g.:
            dbms_output.put_line(symbol_record.field_a);
          end loop;
          CLOSE symbol_cursor;

    When I run this from the same DB instance and schema that package_name belongs to, I am able to run it fine. However, when I run this over a database link (with the required modification to the stored proc name, etc.), I get an Oracle error: ORA-24338: statement handle not executed. The modified version of this code over a dblink looks like this:

        declare
          symbol_cursor package_name.record_cursor@db_link_name;
          symbol_record package_name.record_name@db_link_name;
        begin
          symbol_cursor := package_name.function_name@db_link_name('argument');
          loop
            fetch symbol_cursor into symbol_record;
            exit when symbol_cursor%notfound;
            -- Do something with each record here, e.g.:
            dbms_output.put_line(symbol_record.field_a);
          end loop;
          CLOSE symbol_cursor;

  • how to specify open id realm in openid4java 0.9.5

    - by Salvin Francis
    My URL in development: http://192.168.0.1:8888/com.company.MyEntryPoint/MyEntrypoint.html
    My URL in the live environment: http://www.example.com/com.company.MyEntryPoint/MyEntrypoint.html

    I need users to authenticate using OpenID, and this is how I want my realm to be: *.company.MyEntryPoint. I wrote a simple piece of code to specify the realm:

        AuthRequest authReq = manager.authenticate(discovered, returnToUrl, "*.company.MyEntryPoint");

    It does not work. Exception:

        org.openid4java.message.MessageException: 0x301: Realm verification failed (2) for: *.company.MyEntryPoint
            at org.openid4java.message.AuthRequest.validate(AuthRequest.java:354)
            at org.openid4java.message.AuthRequest.createAuthRequest(AuthRequest.java:101)
            at org.openid4java.consumer.ConsumerManager.authenticate(ConsumerManager.java:1073)

    Of all the combinations I tried, curiously, the following worked:

        AuthRequest authReq = manager.authenticate(discovered, returnToUrl, "http://localhost:8888/com.capgent.MyEntryPoint");

    This does not solve my issue but rather complicates it :) According to Google and the OpenID spec it should have worked. Complete code snippet:

        List discoveries = manager.discover(clientUrl);
        DiscoveryInformation discovered = manager.associate(discoveries);
        AuthRequest authReq = manager.authenticate(discovered, returnToUrl, "*.company.MyEntryPoint");
        FetchRequest fetch = FetchRequest.createFetchRequest();
        fetch.addAttribute("email", "http://schema.openid.net/contact/email", true);
        fetch.addAttribute("country", "http://axschema.org/contact/country/home", true);
        fetch.addAttribute("firstname", "http://axschema.org/namePerson/first", true);
        fetch.addAttribute("lastname", "http://axschema.org/namePerson/last", true);
        fetch.addAttribute("language", "http://axschema.org/pref/language", true);
        authReq.addExtension(fetch);
        String returnStr;
        if (!discovered.isVersion2()) {
            returnStr = authReq.getDestinationUrl(true);
        } else {
            returnStr = authReq.getDestinationUrl(false);
        }

    What am I doing wrong over here?

  • Attributes of attributevalue element in SAML 2 Attribute Statement

    - by AJ
    I am building a web service that receives a SAML attribute query and responds with an attribute statement. I know I can return one or multiple values of a SAML attribute. I have some values that are dependent on other attribute values, and I need to show that relationship. Let us say the query is for the subject Dave and the return values are his company and job title. Dave can work at multiple companies, with a job title at each company. I have two options for sending this data back:

    1. Send this as a complex type by defining an attribute organization and returning XML within that attribute:

        <saml:Attribute name="company">
          <saml:AttributeValue>
            <company name="company1" jobtitle="CIO"/>
            <company name="company2" jobtitle="VP"/>
          </saml:AttributeValue>
        </saml:Attribute>

    2. Try to send multiple values of the attributes, somehow sending a reference in the attributevalue element:

        <saml:Attribute name="company">
          <attributeValue>company1</attributeValue>
          <attributeValue>company2</attributeValue>
        </saml:Attribute>
        <saml:Attribute name="jobTitle">
          <attributeValue company="company1">CIO</attributeValue>
          <attributeValue company="company2">VP</attributeValue>
        </saml:Attribute>

    Which approach would you prefer? Why? I am biased towards the second approach, as it does not require the client to know about any schema. It does require them to know about the non-standard attribute company in the attribute value.

  • Symfony fk issue on insertion

    - by Daniel Hertz
    Hi, I posted a similar problem but it could not be resolved. I am creating a relational database of users and groups, but for some reason I cannot insert test data with fixtures properly. Here is a sample of the schema:

        User:
          actAs: { Timestampable: ~ }
          columns:
            name:     { type: string(255), notnull: true }
            email:    { type: string(255), notnull: true, unique: true }
            nickname: { type: string(255), unique: true }
            password: { type: string(300), notnull: true }
            image:    { type: string(255) }

        Group:
          actAs: { Timestampable: ~ }
          columns:
            name:          { type: string(500), notnull: true }
            image:         { type: string(255) }
            type:          { type: string(255), notnull: true }
            created_by_id: { type: integer }
          relations:
            User: { onDelete: SET NULL, class: User, local: created_by_id, foreign: id, foreignAlias: groups_created }

        FanOf:
          actAs: { Timestampable: ~ }
          columns:
            user_id:  { type: integer, primary: true }
            group_id: { type: integer, primary: true }
          relations:
            User:  { onDelete: CASCADE, local: user_id, foreign: id, foreignAlias: fanhood }
            Group: { onDelete: CASCADE, local: group_id, foreign: id, foreignAlias: fanhood }

    And this is the data I try to input:

        User:
          user1:
            name: Danny
            email: [email protected]
            nickname: danny
            password: f05050400c5e586fa6629ef497be

        Group:
          group1:
            name: Mets
            type: sports

        FanOf:
          fans1:
            user_id: user1
            group_id: group1

    I keep getting this error:

        SQLSTATE[23000]: Integrity constraint violation: 1452 Cannot add or update a child row:
        a foreign key constraint fails (`krowdd`.`fan_of`, CONSTRAINT `fan_of_user_id_user_id`
        FOREIGN KEY (`user_id`) REFERENCES `user` (`id`) ON DELETE CASCADE)

    The users and groups are clearly being created before the "fanhood" is, so why am I getting this error?? Thanks!

  • Why use SQL database?

    - by martinthenext
    I'm not quite sure stackoverflow is a place for such a general question, but let's give it a try.

    Being exposed to the need of storing application data somewhere, I've always used MySQL or sqlite, just because that's how it's always done: it seems like the whole world is using these databases, as do most software products, frameworks, etc. It is rather hard for a beginning developer like me to ask the question: why?

    Ok, say we have some object-oriented logic in our application, and objects are related to each other somehow. We need to map this logic to the storage logic, so we need relations between database objects too. This leads us to using a relational database, and I'm ok with that: to put it simply, our database rows sometimes need to have references to other tables' rows. But why use the SQL language for interaction with such a database? An SQL query is a text message. I can understand that this is cool for actually understanding what it does, but isn't it silly to use text table and column names for a part of the application that no one ever sees after deployment?

    If you had to write a data storage from scratch, you would never have used this kind of solution. Personally, I would have used some 'compiled db query' bytecode that would be assembled once inside the client application and passed to the database. And it would surely name tables and columns by id numbers, not ASCII strings. In the case of changes in the table structure, those byte queries could be recompiled according to the new db schema, stored in XML or something like that. What are the problems with my idea? Is there any reason for me not to write it myself and to use an SQL database instead?

  • Abort SAX parsing mid-document?

    - by CSharperWithJava
    I'm parsing a very simple XML schema with a SAX parser in Android. An example file would be:

        <Lists>
          <List name="foo">
            <Note title="note 1" .../>
            <Note title="note 2" .../>
          </List>
          <List name="bar">
            <Note title="note 3" .../>
          </List>
        </Lists>

    The ... represents more note data as attributes that aren't important to the question. I use a SAX parser to parse the document and only implement the startElement and endElement methods of the HandlerBase to handle Note and List nodes. However, in some cases the files can be very large and take some time to process. I'd like to be able to abort the parsing process at any time (i.e. the user presses a cancel button). The best way I've come up with is to throw an exception from my startElement method when certain conditions are met (i.e. a boolean stopParsing is true). Is there a better way to do this? I've always used DOM-style parsers, so I don't fully understand the SAX parser. One final note: I'm running this on Android, so I will have the parser running on a worker thread to keep the UI responsive. If you know how I can kill the thread safely while the parser is running, that would answer my question as well.
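
    For what it's worth, the abort-by-exception pattern described above is sketched below in Python's xml.sax purely as a language-neutral illustration (the question itself targets the Java SAX API on Android); the file name and cancel flag are made up for the example.

        # Illustrative sketch: stop a SAX parse early by raising a custom exception.
        import xml.sax

        class StopParsing(Exception):
            """Raised from a handler callback to abort the parse."""

        class NoteHandler(xml.sax.ContentHandler):
            def __init__(self):
                super().__init__()
                self.cancelled = False   # would be flipped by a cancel button / another thread

            def startElement(self, name, attrs):
                if self.cancelled:
                    raise StopParsing()
                if name == "Note":
                    print(attrs.get("title"))

        try:
            xml.sax.parse("lists.xml", NoteHandler())   # hypothetical input file
        except StopParsing:
            pass  # parse aborted cleanly; clean up partial results here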

  • Problem using NSXMLParser with NOAA data on iPhone

    - by Amagrammer
    Can anyone help me see why NSXMLParser is not causing these methods

        parser:didStartElement:namespaceURI:qualifiedName:attributes:
        parser:didEndElement:namespaceURI:qualifiedName:attributes:

    to fire for part of the following data?

        <?xml version="1.0" encoding="ISO-8859-1"?>
        <SOAP-ENV:Envelope SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
            xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
            xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/">
          <SOAP-ENV:Body>
            <ns1:NDFDgenResponse xmlns:ns1="">
              <dwmlOut xsi:type="xsd:string"><?xml version="1.0"?>
                <dwml version="1.0" xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                    xsi:noNamespaceSchemaLocation="http://www.nws.noaa.gov/forecasts/xml/DWMLgen/schema/DWML.xsd">
                  (body excluded)
                </dwml>
              </dwmlOut>
            </ns1:NDFDgenResponse>
          </SOAP-ENV:Body>
        </SOAP-ENV:Envelope>

    I'm not an XML expert, but to me that part looks like just a regular element, to be parsed just like the parts before it. I do get two parser:parseErrorOccurred: errors, #200 and #201, but they occur during the parsing of the <SOAP-ENV:Body> element, not that element, so I'm not sure if they are relevant. Thanks for any help you can give me.

  • git stash blunder:

    - by Chirag Patel
    I did a git stash pop and ended up with merge conflicts. I removed the files from the file system and did a git checkout as shown below, but it thinks the files are still unmerged. I then tried replacing the files and doing a git checkout again, with the same result. I even tried forcing it with the -f flag. Any help would be appreciated!

        chirag-patels-macbook-pro:haloror patelc75$ git status
        app/views/layouts/_choose_patient.html.erb: needs merge
        app/views/layouts/_links.html.erb: needs merge
        # On branch prod-temp
        # Changes to be committed:
        #   (use "git reset HEAD <file>..." to unstage)
        #
        #       modified:   db/schema.rb
        #
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       unmerged:   app/views/layouts/_choose_patient.html.erb
        #       unmerged:   app/views/layouts/_links.html.erb

        chirag-patels-macbook-pro:haloror patelc75$ git checkout app/views/layouts/_choose_patient.html.erb
        error: path 'app/views/layouts/_choose_patient.html.erb' is unmerged

        chirag-patels-macbook-pro:haloror patelc75$ git checkout -f app/views/layouts/_choose_patient.html.erb
        warning: path 'app/views/layouts/_choose_patient.html.erb' is unmerged

  • KendoUI Mobile switch and datasource

    - by OnaBai
    I'm trying to have a list of items displayed using a listview, something like:

        <div data-role="view" data-model="my_model">
          <ul data-role="listview" data-bind="source: ds" data-template="list-tmpl"></ul>
        </div>

    where I have a view using a model called my_model and a listview whose source is bound to ds. My model is something like:

        var my_model = kendo.observable({
            ds: new kendo.data.DataSource({
                transport: {
                    read: readData,
                    update: updateData,
                    create: updateData,
                    remove: updateData
                },
                pageSize: 10,
                schema: {
                    model: {
                        id: "id",
                        fields: {
                            id: { type: "number" },
                            name: { type: "string" },
                            active: { type: "boolean" }
                        }
                    }
                }
            })
        });

    Each item includes an id, a name (a string), and a boolean named active. The template used to render each element is:

        <script id="list-tmpl" type="text/kendo-tmpl">
          <span>#= name # : #= active #</span>
          <input data-role="switch" data-bind="checked: active"/>
        </script>

    where I display the name and (for debugging) the value of active. In addition, I render a switch bound to active, so each row shows the name, the active value, and a switch.

    The problems observed are:

    1. If you click on a switch, you will see that the value of active next to the name changes (as expected), but if you then pick another switch, its value (neither next to the name nor in the DataSource) is not updated, despite the switch itself being correctly toggled.
    2. The update handler in the DataSource is never invoked (not even for the first chosen switch, despite the DataSource for that first toggled switch being updated).

    You can check it in JSFiddle: http://jsfiddle.net/OnaBai/K7wEC/

    How do I make the DataSource get updated and the update handler invoked?

  • Full-text search on App Engine with Whoosh

    - by Martin
    I need to do full-text searching with Google App Engine. I found the Whoosh project and it works really well, as long as I use the App Engine development environment... When I upload my application to App Engine, I get the following traceback. For my tests, I am using the example application provided in this project. Any idea what I am doing wrong?

        <type 'exceptions.ImportError'>: cannot import name loads
        Traceback (most recent call last):
          File "/base/data/home/apps/myapp/1.334374478538362709/hello.py", line 6, in <module>
            from whoosh import store
          File "/base/data/home/apps/myapp/1.334374478538362709/whoosh/__init__.py", line 17, in <module>
            from whoosh.index import open_dir, create_in
          File "/base/data/home/apps/myapp/1.334374478538362709/whoosh/index.py", line 31, in <module>
            from whoosh import fields, store
          File "/base/data/home/apps/myapp/1.334374478538362709/whoosh/store.py", line 27, in <module>
            from whoosh import tables
          File "/base/data/home/apps/myapp/1.334374478538362709/whoosh/tables.py", line 43, in <module>
            from marshal import loads

    Here are the imports I have in my Python file:

        # Whoosh ----------------------------------------------------------------------
        sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..', 'utils')))
        from whoosh.fields import Schema, STORED, ID, KEYWORD, TEXT
        from whoosh.index import getdatastoreindex
        from whoosh.qparser import QueryParser, MultifieldParser

    Thank you in advance for your help!
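
    Purely as a hedged sketch (not a verified fix): the traceback suggests the App Engine sandbox does not expose marshal.loads, so one experiment would be to alias the pickle module in marshal's place before whoosh is imported. Whether whoosh's index format actually survives this substitution is an open assumption.

        # Speculative workaround: make "from marshal import loads" resolve to pickle.loads.
        import sys
        import pickle

        sys.modules["marshal"] = pickle   # assumption: whoosh only needs loads/dumps from marshal

        from whoosh.fields import Schema, STORED, ID, KEYWORD, TEXT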

  • Need help choosing database server

    - by The Pretender
    Good day everyone. Recently I was given a task to develop an application to automate some aspects of stock trading. While working on the initial architecture, the database dilemma emerged. What I need is a fast database engine that can process huge amounts of data coming in very fast. I'm fairly experienced in general programming, but I have never faced the task of developing a high-load database architecture. I developed a simple MSSQL database schema with several many-to-many relationships during one of my projects, but that's it. What I'm looking for is some advice on choosing the most suitable database engine, and some pointers to manuals or books that describe high-load database development.

    Specifics of the project are as follows:

    - OS: Windows NT family (Server 2008 / 7)
    - Primary platform: .NET with C#
    - Database structure: one table to hold primary items and two or three tables with foreign keys to the first table to hold additional information
    - Database SELECT requirements: need super-fast selection by foreign keys and by a combination of a foreign key and one of the columns (presumably DATETIME)
    - Database INSERT requirements: the faster the better :)
    - If there'll be significant performance gain, some parts can be written in C++ with managed interfaces to the rest of the system.

    So once again: given all that stuff I just typed, please give me some advice on what the best database for my project is. Links or references to manuals and books on the subject are also greatly appreciated.

    EDIT: I'll need to insert 3-5 rows in 2 tables approximately once every 30-50 milliseconds, and I'll need to do SELECT queries with 0-2 WHERE clauses at a similar rate.

  • Flexible design - customizable entity model, UI and workflow

    - by Ngm
    Hi All, I want to achieve the following aspects in the software I am building:

    1. Customizable entity model
    2. Customizable UI
    3. Customizable workflow

    I have thought about an approach to achieve this; I want you to review it and make suggestions:

    - Entity objects should be plain objects and will hold just data.
    - Separate the entity model and DB schema by using a framework (like NHibernate?). This will allow easy modification of entity objects.
    - Business logic to fetch/modify entities has to be granular enough so that it can be invoked as part of the workflow.
    - Business objects should not hold any state, and hence will contain only static methods.
    - The workflow will decide, depending upon the "state" of an entity/entities, which methods on which business object/objects to invoke.
    - The workflow should obtain the results of the processing and then pass on the business objects to the appropriate UI screen.
    - The UI screen has to contain instructions about how to display a given entity/entities. Possibly the UI has to be generated dynamically based on a set of UI instructions (like XUL).

    What do you think about this approach? Suggest which existing frameworks (like NHibernate, Windows Workflow) fit into this model, so that I will not spend time coding these frameworks. Also, is there any ASP.NET framework that can generate dynamic ASP.NET AJAX pages based on a set of UI instructions (like Mozilla XUL)?

    I have recently been exploring Apache OFBiz and was impressed by its ability to customize most areas of the application: UI, workflow, entities. Is there any similar (not necessarily an ERP system) application developed in C#/.NET which offers a similar level of customization? I am looking for examples of applications developed in C# that are highly customizable in terms of UI, workflow and entity model.

  • Mapping Cassandra Super Columns

    - by Laubstein
    Hello dudes, I guess everybody that has played with Cassandra has already read this article. I am trying to create my schema in cassandra-cli, but I am having a lot of problems; can someone guide me in the right direction? I am trying to create a structure similar to the Comments column family from the article. In the cassandra-cli terminal I type:

        create column family posts with column_type = 'Super' and comparator = 'AsciiType' and subcomparator = TimeUUIDType;

    It works fine. There is no doc telling me whether, if I add a column_metadata attribute, those will be for the super columns because my column family is of type Super; I can't find out if that is true, so:

        create column family posts with column_type = 'Super' and comparator = 'AsciiType' and subcomparator = 'TimeUUIDType' and column_metadata = [{column_name:'body'}];

    I am trying to create the same as the Comments column family of the article, but when I try to populate it:

        set posts['post1'][timeuuid()][body] = 'Hello I am Goku!';

    I get:

        Invalid UUID string: body

    I guess that is because I chose the subcomparator to be of type timeuuid and body is a string; it should be a timeuuid. So HOW could my columns inside the super column (which is of type timeuuid) hold columns with string-type names, the way the comments in the article are created? Thanks

  • implementing Ws-security within WCF proxy

    - by harrisonmeister
    Hi, I have imported an Axis-based WSDL into a VS 2008 project as a service reference. I need to be able to pass security details such as username/password and nonce values to call the Axis-based service. I have looked into doing it with WSE, which I understand the world hates (no issues there). I have very little experience of WCF, but I have now worked out how to physically call the endpoint, thanks to SO; I just have no idea how to set up the SOAP headers as the schema below shows:

        <S:Envelope xmlns:S="http://www.w3.org/2001/12/soap-envelope"
                    xmlns:ws="http://schemas.xmlsoap.org/ws/2002/04/secext">
          <S:Header>
            <ws:Security>
              <ws:UsernameToken>
                <ws:Username>aarons</ws:Username>
                <ws:Password>snoraa</ws:Password>
              </ws:UsernameToken>
            </wsse:Security>
            •••
          </S:Header>
          •••
        </S:Envelope>

    Any help much appreciated. Thanks, Mark

  • Need some advice on Core Data modeling strategy

    - by Andy
    I'm working on an iPhone app and need a little advice on modeling the Core Data schema. My idea is a utility that allows the user to speed-dial their contacts using user-created rules based on the time of day. In other words, I would tell the app that my wife is commuting from 6am to 7am, at work from 7am to 4pm, commuting from 4pm to 5pm, and home from 5pm to 6am, Monday through Friday. Then, when I tap her name in my app, it would select the number to dial based on the current day and time.

    I have the user interface nearly complete (thanks in no small part to help I've received here), but now I've got some questions regarding the persistent store. The user can select start- and stop-times in 5-minute increments. This means there are 2,016 possible "time slots" in a week (7 days * 24 hours * 12 5-minute intervals per hour). I see a few options for setting this up.

    Option #1: One array of time slots, with 2,016 entries. Each entry would be a dictionary containing a contact identifier and an associated phone number to dial. I think this means I'd need a "Contact" entity to store the contact information, and a "TimeSlot" entity for each of the 2,016 possible time slots.

    Option #2: Each Contact has its own array of time slots, each with 2,016 entries. Each array entry would simply be a string indicating which phone number to dial.

    Option #3: Each Contact has a dictionary of time slots. An entry would only be added to the dictionary for time slots with an active rule. If a search for, say, time slot 1,299 (Friday 12:15pm) didn't find a key @"1299" in the dictionary, then a default number would be dialed instead.

    I'm not sure any of these is the "right" way or the "best" way. I'm not even sure I need to use Core Data to manage it; maybe just saving arrays would be simpler. Any input you can offer would be appreciated.
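
    As a quick sanity check on the numbers above, here is a small sketch (illustrative only, not Core Data code) of the slot-index arithmetic implied by the 2,016 figure, assuming 5-minute slots and a zero-based week starting on Monday:

        SLOTS_PER_HOUR = 60 // 5               # 12
        SLOTS_PER_DAY = 24 * SLOTS_PER_HOUR    # 288
        SLOTS_PER_WEEK = 7 * SLOTS_PER_DAY     # 2016

        def slot_index(day, hour, minute):
            """day: 0 = Monday .. 6 = Sunday; returns an index in 0..2015."""
            return day * SLOTS_PER_DAY + hour * SLOTS_PER_HOUR + minute // 5

        print(SLOTS_PER_WEEK)            # 2016
        print(slot_index(4, 12, 15))     # Friday 12:15pm -> 1299, matching the example above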

  • How do I create and use a junction table in Rails?

    - by Thierry Lam
    I have the following data: a post called Hello has the category greet; another post called Hola has the categories greet and international. My schema is:

        create_table "posts", :force => true do |t|
          t.string   "name"
          t.datetime "created_at"
          t.datetime "updated_at"
        end

        create_table "categories", :force => true do |t|
          t.string   "name"
          t.datetime "created_at"
          t.datetime "updated_at"
        end

        create_table "posts_categories", :force => true do |t|
          t.integer  "post_id"
          t.integer  "category_id"
          t.datetime "created_at"
          t.datetime "updated_at"
        end

    After reading the Rails guide, the most suitable relationship for the above seems to be:

        class Post < ActiveRecord::Base
          has_and_belongs_to_many :categories
        end

        class Category < ActiveRecord::Base
          has_and_belongs_to_many :posts
        end

    My junction table also seems to have a primary key; I think I need to get rid of it.

    - What's the initial migration command to generate a junction table in Rails?
    - What's the best course of action: should I drop posts_categories and re-create it, or just drop the primary key column?
    - Does the junction table have a corresponding model? I have used scaffold to generate the junction table code; should I get rid of the extra code?
    - Assuming all the above has been fixed and is working properly, how do I query all posts and display them along with their named categories in the view? For example:

        Post #1 - hello, categories: greet
        Post #2 - hola, categories: greet, international

  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database, and I'm looking for ways to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED
            (
                [ContainerId] ASC,
                [BinId] ASC,
                [Sequence] ASC
            ))

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted based on the primary key. The %Disk time performance counter spends a lot of time at 100%, so it is clear that disk IO is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I:

    - Drop the primary key while I am doing the inserting and recreate it later?
    - Do inserts into a temporary table with the same schema and periodically transfer them into the main table, to keep the size of the table where insertions are happening small?
    - Anything else?

    Based on the responses I have gotten, let me clarify a little bit:

    Portman: I'm using a clustered index because when the data is all imported I will need to access the data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts, as opposed to dropping the constraint entirely for the import?

    Chopeen: The data is being generated remotely on many other machines (my SQL server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine, because it would then have to process 50 times as much input data to generate the output.

    Jason: I am not doing any concurrent queries against the table during the import process; I will try dropping the primary key and see if that helps. ~ Andrew

  • using a JOIN in an UPDATE in SQL

    - by SDLFunTimes
    Hi, I'm having trouble formulating a legal statement to double the statuses of the suppliers (s) who have shipped (sp) more than 500 units. I've been trying:

        update s
        set s.status = s.status * 2
        from s join sp on (sp.sno = s.sno)
        group by sno
        having sum(qty) > 500;

    However, I'm getting this error from MySQL:

        ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that
        corresponds to your MySQL server version for the right syntax to use near
        'from s join sp on (sp.sno = s.sno) group by sno having sum(qty) > 500' at line 1

    Does anyone have any ideas about what is wrong with this query? Here's my schema:

        create table s (
          sno char(5) not null,
          sname char(20) not null,
          status smallint,
          city char(15),
          primary key (sno)
        );

        create table p (
          pno char(6) not null,
          pname char(20) not null,
          color char(6),
          weight smallint,
          city char(15),
          primary key (pno)
        );

        create table sp (
          sno char(5) not null,
          pno char(6) not null,
          qty integer not null,
          primary key (sno, pno)
        );

  • Is there a problem with this MySQL Query?

    - by ThinkingInBits
    http://pastie.org/954073

    The echos at the top all display valid data that fit the database schema. There are no connection errors. Any ideas? Thanks in advance! Here is the echoed query:

        INSERT INTO equipment (cat_id, name, year, manufacturer, model, price, location, condition, stock_num, information, description, created, modified)
        VALUES (1, 'r', 1, 'sdf', 'sdf', '2', 'd', 'd', '3', 'asdfasdfdf', 'df', '10 May 10', '10 May 10')

    MySQL is giving:

        #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version
        for the right syntax to use near 'condition, stock_num, information, description, created, modified) VALUES (1, 'r' at line 1

    The table structure is:

        id            int(11) unsigned   NO   PRI   NULL   auto_increment
        cat_id        int(11) unsigned   NO         NULL
        prod_name     varchar(255)       YES        NULL
        prod_year     varchar(10)        YES        NULL
        manufacturer  varchar(255)       YES        NULL
        model         varchar(255)       YES        NULL
        price         varchar(10)        YES        NULL
        location      varchar(255)       YES        NULL
        condition     varchar(25)        YES        NULL
        stock_num     varchar(128)       YES        NULL
        information   text               YES        NULL
        description   text               YES        NULL
        created       varchar(20)        YES        NULL
        modified      varchar(20)        YES        NULL

    Query:

        INSERT INTO equipment (cat_id, prod_name, prod_year, manufacturer, model, price, location, condition, stock_num, information, description, created, modified)
        VALUES (1, 'asdf', '234', 'adf', 'asdf', '34', 'asdf', 'asdf', '234', 'asdf', 'asdf', '10 May 10', '10 May 10')

  • Calling webservice via server causes java.net.MalformedURLException: no protocol

    - by Thomas
    I am writing a web-service which parses an XML file. In the client, I read the whole content of the XML into a String and then I give it to the web-service. If I run my web-service with main as a Java application (for tests) there is no problem, no error messages. However, when I try to call it via the server, I get the following error: java.net.MalformedURLException: no protocol. I use the same XML file and the same code (without main), and I just cannot figure out what the cause of the error can be. Here is my code:

        DOMParser parser = new DOMParser();
        try {
            parser.setFeature("http://xml.org/sax/features/validation", true);
            parser.setFeature("http://apache.org/xml/features/validation/schema", true);
            parser.setFeature("http://apache.org/xml/features/validation/dynamic", true);
            parser.setErrorHandler(new myErrorHandler());
            parser.parse(new InputSource(new StringReader(xmlFile)));
            document = parser.getDocument();

    xmlFile is constructed in the client like this:

        String myFile = "C:/test.xml";
        File file = new File(myFile);
        String myString = "";
        FileInputStream fis = new FileInputStream(file);
        BufferedInputStream bis = new BufferedInputStream(fis);
        DataInputStream dis = new DataInputStream(bis);
        while (dis.available() != 0) {
            myString = myString + dis.readLine();
        }
        fis.close();
        bis.close();
        dis.close();

    Any suggestions will be appreciated!

  • SQL Server Index cost

    - by yellowstar
    I have read that one of the tradeoffs for adding table indexes in SQL Server is the increased cost of insert/update/delete queries, to benefit the performance of select queries. I can conceptually understand what happens in the case of an insert, because SQL Server has to write entries into each index matching the new rows, but update and delete are a little more murky to me because I can't quite wrap my head around what the database engine has to do.

    Let's take DELETE as an example and assume I have the following schema (pardon the pseudo-SQL):

        TABLE Foo
            col1 int,
            col2 int,
            col3 int,
            col4 int
            PRIMARY KEY (col1, col2)

        INDEX IX_1
            col3
            INCLUDE col4

    Now, if I issue the statement DELETE FROM Foo WHERE col1=12 AND col2 > 34, I understand what the engine must do to update the table (or clustered index if you prefer). The index is set up to make it easy to find the range of rows to be removed and do so. However, at this point it also needs to update IX_1, and the query that I gave it gives no obvious efficient way for the database engine to find the rows to update. Is it forced to do a full index scan at this point? Does the engine read the rows from the clustered index first and generate a smarter internal delete against the index?

    It might help me to wrap my head around this if I understood better what is going on under the hood, but I guess my real question is this. I have a database that is spending a significant amount of time in delete and I'm trying to figure out what I can do about it. When I display the execution plan for the deletion, it just shows an entry for "Clustered Index Delete" on table Foo which lists in the details section the other indices that need to be updated, but I don't get any indication of the relative cost of these other indices. Are they all equal in this case? Is there some way that I can estimate the impact of removing one or more of these indices without having to actually try it?

  • C++ Serialization Clean XML Similar to XSTREAM

    - by disown
    I need to write a Linux C++ app which saves its settings in XML format (for easy hand editing) and also communicates with existing apps through XML messages over sockets and HTTP. The problem is that I haven't been able to find any intelligent libs to help me; I don't particularly feel like writing DOM or SAX code just to write and read some very simple messages.

    Boost Serialization was almost a match, but it adds a lot of Boost-specific data to the XML it generates. This obviously doesn't work well for interchange formats. I'm wondering if it is possible to make Boost Serialization or some other C++ serialization library generate clean XML. I don't mind if there are some required extra attributes - like a version attribute - but I'd really like to be able to control their naming and also get rid of 'features' that I don't use - tracking_level and class_id, for instance. Ideally I would just like to have something similar to XStream in Java. I am aware of the fact that C++ lacks introspection and that it is therefore necessary to do some manual coding - but it would be nice if there was a clean solution to just read and write simple XML without kludges!

    If this cannot be done, I am also interested in tools where the XML schema is the canonical resource (contract first) - a good JAXB alternative for C++. So far I have only found commercial solutions like CodeSynthesis XSD; I would prefer open source solutions. I have tried gSOAP - but it generates really ugly code and it is also SOAP-specific.

    In desperation I also started looking at alternative serialization formats for protocol buffers. This exists - but only for Java! It really surprises me that protocol buffers seem to be a better supported data interchange format than XML. I'm going mad just finding libs for this app and I really need some new ideas. Anyone?

  • Building a News-feed that comprises posts "created by user's connections" && "on the topics user is following"

    - by aklin81
    I am working on a project for a Questions & Answers website that allows a user to follow questions on certain topics from his network. A user's news-feed wall comprises only those questions that have been posted by his connections and tagged with topics that he is following (his expertise topics).

    I am confused about which database's data model would be the best fit for such an application. The project needs to consider future provisions for scalability and high-performance issues. I have been looking at Cassandra and MySQL solutions as of now. After my study of Cassandra, I realized that a simple news-feed design that shows all the posts from a user's network would be easy to design using Cassandra, by executing fast writes to all followers of a user about that user's post. But for my kind of application, where there is an additional filter of 'followed topics' (i.e., the user receives posts "created by his network" && "on topics the user is following"), I could not convince myself that I had a good schema design in Cassandra. I hope I have simply missed something because of my limited understanding of Cassandra; perhaps you can help me out with your suggestions of how this news-feed could be implemented in Cassandra? Looking for a great project with Cassandra!

    Edit: There are going to be a maximum of 5 tags allowed for tagging a question (i.e., a maximum of 5 topics can be tagged on a question).

  • How to insert an Array/Objet into SQL (bestpractice)

    - by Jason
    I need to store three items as an array in a single column and be able to quickly/easily modify that data in later functions.

    [---YOU CAN SKIP THIS PART IF YOU TRUST ME--]

    To be clear, I love and use x_ref tables all the time, but an x_ref doesn't work here because this is not a one-to-many relationship. I am making a project management tool that, among other things, assigns a user to a project and assigns hours to that project on a weekly basis, per user, sometimes for many weeks into the future. Of course there are many projects, a project can have many team members, and a team member can be involved with many projects at one time, BUT it's not one-to-many, because a team member can work many weeks on the same project with different hours for different weeks. In other words, each object really is unique. Also/finally, this data can be changed at any time by any team member - hence it needs to be easy to manipulate.

    Basically, I need to handle three values (the team member, the week we're talking about, and how many hours) dropped into a project row in the projects table (under the column for project team members) and treated as one item - a team member - that will actually be part of a larger array of all the team members involved on the project.

    [--END SKIP, START READING HERE :) --]

    So assuming that the application's general schema and relation tables aren't total crap, and that we are in fact up against a wall in this one case to use an array/object as a value for this column, is there a best practice for that? Like a particular SQL data type? A particular object/array format? CSV? JSON? XML? Most of the app is in C#, but (for very odd reasons that I won't explain) we could really use any environment if there is a particular one that handles this well. For the moment, I am thinking either (webservice + JS/JSON) or PHP unserialize/serialize (but I am a bit sketched out by the PHP solution because it seems a bit cumbersome when using Ajax?) Thoughts, anyone?
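
    As one illustration of the JSON route mentioned above (sketched in Python only for brevity; the same shape works from C#, PHP, or JS), the three values could be serialized as a list of small objects and round-tripped whenever edits are needed. The field names here are assumptions, not the project's real schema:

        import json

        assignments = [
            {"member_id": 17, "week": "2010-05-10", "hours": 12},  # hypothetical values
            {"member_id": 23, "week": "2010-05-10", "hours": 8},
        ]

        column_value = json.dumps(assignments)   # store this string in the single column
        restored = json.loads(column_value)      # read it back later
        restored[0]["hours"] = 16                # modify one team member's hours
        column_value = json.dumps(restored)      # write the updated string back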
