Search Results

Search found 14448 results on 578 pages for 'schema org'.


  • Sharepoint: Integrity of lookup fields after a list import

    - by driAn
    Hi there. I have a question about the behavior of lookup fields when importing data: how do lookup fields behave when the list they point to is replaced by an import? To explain the issue, assume we have these two SharePoint lists:

        Product Types
        -------------
        + Type Name
        + Code Nr
        + etc.

        Products
        --------
        + Product Name
        + Product Type (lookup field to the "Product Types" list)
        + etc.

    In my scenario, the Products list contains production data on the production SharePoint platform and is filled with data by the business users, whereas the Product Types list contains rather static data and is maintained by the developer. After a development cycle, the developer wants to deploy his new web parts and his new data (the Product Types list), so he performs the following procedure:

        1. On the dev machine: export the "Product Types" list using stsadm.
        2. On the production machine: delete all items in the "Product Types" list.
        3. On the production machine: import the "Product Types" list using stsadm.

    This basically replaces the "Product Types" list on the production server while keeping the Products list as it is. Now the questions: Is this safe? Will the lookup references break under certain circumstances? Are there any downsides to this import/export procedure? What happens if someone accesses a product during the import? Will the (now invalid) reference clear its own content (become a null value)? What happens if the schema of the "Product Types" list changes (a new column) - will that cause any trouble? Thanks for all feedback and suggestions!

    Read the article

  • Can't get MySQL source query to work using Python mysqldb module

    - by Chris
    I have the following lines of code:

        sql = "source C:\\My Dropbox\\workspace\\projects\\hosted_inv\\create_site_db.sql"
        cursor.execute(sql)

    When I execute my program, I get the following error:

        Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your
        MySQL server version for the right syntax to use near
        'source C:\My Dropbox\workspace\projects\hosted_inv\create_site_db.sql' at line 1

    However, I can copy and paste the following into mysql as a query, and it works perfectly:

        source C:\\My Dropbox\\workspace\\projects\\hosted_inv\\create_site_db.sql

    When I check the query log for the query executed by my script, it shows that my query was exactly that source line. Yet when I paste it in manually and execute it, the entire create_site_db.sql gets expanded in the query log and all the SQL queries in that file show up. Am I missing something about how mysqldb runs queries? Am I running into a limitation? My goal is to run an SQL script to create the schema structure, but I don't want to have to call mysql in a shell process to source the sql file. Any thoughts? Thanks!
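
    For context, "source" is a built-in command of the mysql command-line client, not an SQL statement, so the server rejects it when it arrives through MySQLdb. A common workaround is to read the file in Python and execute its statements one at a time. The sketch below assumes the .sql file contains only plain, semicolon-terminated statements (no DELIMITER blocks) and uses made-up connection credentials:

        import MySQLdb

        # Run each statement from the .sql file ourselves, since the client-side
        # "source" command is not understood by the MySQL server.
        # Naive split on ";": assumes no DELIMITER blocks and no literal
        # semicolons inside string values.
        conn = MySQLdb.connect(host="localhost", user="root", passwd="secret")  # hypothetical credentials
        cursor = conn.cursor()

        with open(r"C:\My Dropbox\workspace\projects\hosted_inv\create_site_db.sql") as f:
            script = f.read()

        for statement in script.split(";"):
            statement = statement.strip()
            if statement:                 # skip empty fragments
                cursor.execute(statement)

        conn.commit()
        cursor.close()
        conn.close()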

    Read the article

  • Relational vs. Dimensional Databases, what's the difference?

    - by grautur
    I'm trying to learn about OLAP and data warehousing, and I'm confused about the difference between relational and dimensional modeling. Is dimensional modeling basically relational modeling, but allowing for redundant/un-normalized data? For example, let's say I have historical sales data on (product, city, # sales). I understand that the following would be a relational point-of-view:

        Product | City          | # Sales
        Apples  | San Francisco | 400
        Apples  | Boston        | 700
        Apples  | Seattle       | 600
        Oranges | San Francisco | 550
        Oranges | Boston        | 500
        Oranges | Seattle       | 600

    While the following is a more dimensional point-of-view:

        Product | San Francisco | Boston | Seattle
        Apples  | 400           | 700    | 600
        Oranges | 550           | 500    | 600

    But it seems like both points of view would nonetheless be implemented in an identical star schema:

        Fact table:        Product ID, Region ID, # Sales
        Product dimension: Product ID, Product Name
        City dimension:    City ID, City Name

    And it's not until you start adding some additional details to each dimension that the differences start popping up. For instance, if you wanted to track regions as well, a relational database would tend to have a separate region table, in order to keep everything normalized:

        City dimension:   City ID, City Name, Region ID
        Region dimension: Region ID, Region Name, Region Manager, # Regional Stores

    While a dimensional database would allow for denormalization to keep the region data inside the city dimension, in order to make it easier to slice the data:

        City dimension: City ID, City Name, Region Name, Region Manager, # Regional Stores

    Is this correct?
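
    The two tabulations above carry the same information; the dimensional cross-tab is simply a pivot of the relational rows. As a quick illustration of that equivalence (pandas is not part of the question and is used here only to show the reshaping):

        import pandas as pd

        # Relational view: one row per (product, city) with the sales measure.
        sales = pd.DataFrame(
            [("Apples",  "San Francisco", 400),
             ("Apples",  "Boston",        700),
             ("Apples",  "Seattle",       600),
             ("Oranges", "San Francisco", 550),
             ("Oranges", "Boston",        500),
             ("Oranges", "Seattle",       600)],
            columns=["product", "city", "sales"],
        )

        # Dimensional (cross-tab) view: products down the side, cities across the top.
        print(sales.pivot_table(index="product", columns="city", values="sales"))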

    Read the article

  • Maven doesn't compile target/hibernate3/generated-sources

    - by mmm
    Can someone tell me how to configure Maven so that it also compiles sources from the target/hibernate3/generated-sources directory? I have already read this and other posts, but they don't seem to solve my problem (which indeed seems trivial). I used the bottom-up Hibernate configuration approach for cfg.xml, hbm.xml and POJO generation (i.e. I auto-generated the complete Hibernate configuration out of an existing database schema), and I'm only using the standard Maven and hibernate3-plugin directory layouts. Yet when I execute mvn compile on the command line, with my sources in src/main/java and the generated sources in target/hibernate3/generated-sources, only the ones from src/main/java get compiled and copied into target/classes. I don't want to generate sources into src/main/java, because I'd like mvn clean to clean them. I'd like to solve the problem using the command line, plugins and pom.xml only. Is there a way to configure maven-compiler-plugin to do so? Or is there another way? Regards and thanks for any help.

    Read the article

  • How to force ADO.NET to use only the System.String DataType in the reader's TableSchema

    - by Keith Sirmons
    Howdy, I am using an OleDbConnection to query an Excel 2007 spreadsheet. I want to force the OleDbDataReader to use only string as the column datatype. The system looks at the first 8 rows of data and infers the data type to be Double. The problem is that on row 9 I have a string in that column, and the OleDbDataReader returns a null value since it could not be cast to a Double. I have used these connection strings:

        Provider=Microsoft.ACE.OLEDB.12.0;Data Source="ExcelFile.xlsx";Persist Security Info=False;Extended Properties="Excel 12.0;IMEX=1;HDR=No"
        Provider=Microsoft.Jet.OLEDB.4.0;Data Source="ExcelFile.xlsx";Persist Security Info=False;Extended Properties="Excel 8.0;HDR=No;IMEX=1"

    Looking at reader.GetSchemaTable().Rows[7].ItemArray[5], its DataType is Double. Row 7 in this schema corresponds to the specific column in Excel I am having issues with, and ItemArray[5] is its DataType column. Is it possible to create a custom TableSchema for the reader so that, when accessing Excel files, I can treat all cells as text instead of letting the system attempt to infer the datatype? I found some good info on this page, "Tips for reading Excel spreadsheets using ADO.NET":

        The main quirk about the ADO.NET interface is how datatypes are handled. (You'll notice I've been carefully avoiding the question of which datatypes are returned when reading the spreadsheet.) Are you ready for this? ADO.NET scans the first 8 rows of data, and based on that guesses the datatype for each column. Then it attempts to coerce all data from that column to that datatype, returning NULL whenever the coercion fails!

    Thank you, Keith

    Here is a reduced version of my code:

        using (OleDbConnection connection = new OleDbConnection(BuildConnectionString(dataMapper).ToString())) {
            connection.Open();
            using (OleDbCommand cmd = new OleDbCommand()) {
                cmd.Connection = connection;
                cmd.CommandText = "SELECT * FROM [Sheet1$]";
                using (OleDbDataReader reader = cmd.ExecuteReader()) {
                    using (DataTable dataTable = new DataTable("TestTable")) {
                        dataTable.Load(reader);
                        base.SourceDataSet.Tables.Add(dataTable);
                    }
                }
            }
        }

    Read the article

  • Database warehouse design: fact tables and dimension tables

    - by morpheous
    I am building a poor man's data warehouse using an RDBMS. I have identified the key 'attributes' to be recorded as:

        - sex (true/false)
        - demographic classification (A, B, C, etc.)
        - place of birth
        - date of birth
        - weight (recorded daily): the fact that is being recorded

    My requirements are to be able to run 'OLAP' queries that allow me to 'slice and dice', 'drill up/down' the data and, generally, view the data from different perspectives. After reading up on this topic area, the general consensus seems to be that this is best implemented using dimension tables rather than normalized tables. Assuming that this assertion is true (i.e. the solution is best implemented using fact and dimension tables), I would like some help with the design of these tables.

    'Natural' (or obvious) dimensions, which have hierarchical attributes, are:

        - date dimension
        - geographical location

    However, I am struggling with how to model the following fields:

        - sex (true/false)
        - demographic classification (A, B, C, etc.)

    The reason I am struggling with these fields is that:

        - They have no obvious hierarchical attributes which will aid aggregation (AFAIA), which suggests they should be in a fact table.
        - They are mostly static or change very rarely, which suggests they should be in a dimension table.

    Maybe the heuristic I am using above is too crude? I will give some examples of the type of analysis I would like to carry out on the data warehouse; hopefully that will clarify things further. I would like to aggregate and analyze the data by sex and demographic classification, e.g. answer questions like:

        - How do male and female weights compare across different demographic classifications?
        - Which demographic classification (male AND female) shows the greatest increase in weight this quarter?

    Can anyone clarify whether sex and demographic classification belong in the fact table, or whether they are (as I suspect) dimension tables? Also, assuming they are dimension tables, could someone elaborate on the table structures (i.e. the fields)? The 'obvious' schema:

        CREATE TABLE sex_type (is_male int);
        CREATE TABLE demographic_category (id int, name varchar(4));

    may not be the correct one.

    Read the article

  • C#, create virtual directory on remote system

    - by sankar
    The following code creates a virtual directory only on the local system, but I need to create one on a remote system. Please help. Thanks, Sankar

        DirectoryEntry iisServer;
        string VirDirSchemaName = "IIsWebVirtualDir";

        public DirectoryEntry Connect() {
            try {
                if (txtPath.Text.ToLower().Trim() == "localhost")
                    iisServer = new DirectoryEntry("IIS://" + txtPath.Text.Trim() + "/W3SVC/1/Root");
                else
                    iisServer = new DirectoryEntry("IIS://" + txtPath.Text + "/Schema/AppIsolated", "XYZ", "xyz");
                iisServer.Dispose();
            }
            catch (Exception e) {
                throw new Exception("Could not connect to: " + txtPath.Text.Trim(), e);
            }
            return iisServer;
        }

        public void CreateVirtualDirectory(DirectoryEntry iisServer) {
            DirectoryEntry folderRoot = new DirectoryEntry("IIS://" + txtPath.Text + "/W3SVC/1/Root", "XYZ", "xyz");
            folderRoot.RefreshCache();
            folderRoot.CommitChanges();
            try {
                DirectoryEntry newVirDir = folderRoot.Children.Add(txtName.Text, VirDirSchemaName);
                newVirDir.CommitChanges();
                newVirDir.Properties["AccessRead"].Add(true);
                newVirDir.Properties["Path"].Add(@"\\abc\abc");
                newVirDir.Invoke("AppCreate", true);
                newVirDir.CommitChanges();
                folderRoot.CommitChanges();
                newVirDir.Close();
                folderRoot.CommitChanges();
            }
            catch (Exception e) {
                throw new Exception("Error! Virtual Directory Not Created", e);
            }
        }

        protected void btnCreate_Click(object sender, EventArgs e) {
            try {
                CreateVirtualDirectory(Connect());
            }
            catch (Exception ex) {
                Response.Write(ex.Message);
            }
        }

        protected void Page_Load(object sender, EventArgs e) {
        }

    Read the article

  • What AOP tools exist for doing aspect-oriented programming at the assembly language level against x86?

    - by JohnnySoftware
    I'm looking for a tool I can use to do aspect-oriented programming at the assembly language level. For experimentation purposes, I would like the code weaver to operate on native application-level executables and dynamic link libraries. I have already done object-oriented AOP, and I know assembly language for x86 and so forth. I would like to be able to do logging and other sorts of things using the familiar before/after/around constructs, and to specify certain instructions or sequences/patterns of consecutive instructions as the thing to define a pointcut on, since assembly/machine language is not exactly the most semantically rich computer language on the planet. If debugger and linker symbols are available, I would naturally like to be able to use them to identify subroutines' entry points, branch/call/jump target addresses, symbolic data addresses, etc.

    I would also like the ability to send notifications out to other diagnostic tools, so support for sending data through connection-oriented sockets and datagrams is highly desirable, as is normal logging to files, UI, etc. This can be done by using the action part of an aspect to make a function call, but then there are portability issues, so the tool needs to support a well-abstracted logging/notifying mechanism with a clean, simple yet flexible interface. The goal is rapid QA.

    The idea is to be able to share aspect source code broadly within communities as well as publicly, so there needs to be a declarative security policy file that users can share. This ensures that nothing untoward hidden directly or indirectly in an aspect source file slips by the execution manager. The policy file format needs to be simple to read, write, modify, understand, type in, edit, and generate - sort of like Java .policy files. Think the exact opposite of anything resembling XML Schema files and you get the idea. Is there such a tool in existence already?

    Read the article

  • Flex / Flash Builder: no data returned when using the database

    - by Tristan
    Hello, I'm following some Flex tutorials and everything's working as expected except for one thing: when I use my function getServerByBrand($brand), no data is returned into my datagrid, and I don't know why, because it uses the same schema as getAllserver(), which is working. I don't know whether it's caused by the function itself or by the configuration in Flash Builder:

        protected function RechercheGSP_clickHandler(event:MouseEvent):void
        {
            getServerByBrandResult.token = dbClass.getServerByBrand(SearchInput.text);
        }

    Here's what I've got in Data/Services:

        getServerByBrand(brand : string) : Object

    And finally the function:

        public function getServerByBrand($brand) {
            $stmt = mysqli_prepare($this->connection, "SELECT DISTINCT * FROM $this->tablename where GSP_nom=? ");
            $this->throwExceptionOnError();
            mysqli_stmt_execute($stmt);
            $this->throwExceptionOnError();
            $rows = array();
            mysqli_stmt_bind_result($stmt, $row->idServ, $row->GSP_nom, $row->IPserv, $row->port, $row->tickrate, $row->membre, $row->nomPays, $row->finContrat, $row->actif, $row->timestamp, $row->type, $row->jeux, $row->slot, $row->ipClient, $row->essai, $row->reussite, $row->echec, $row->valide, $row->email);
            while (mysqli_stmt_fetch($stmt)) {
                $row->timestamp = new DateTime($row->timestamp);
                $rows[] = $row;
                $row = new stdClass();
                mysqli_stmt_bind_result($stmt, $row->idServ, $row->GSP_nom, $row->IPserv, $row->port, $row->tickrate, $row->membre, $row->nomPays, $row->finContrat, $row->actif, $row->timestamp, $row->type, $row->jeux, $row->slot, $row->ipClient, $row->essai, $row->reussite, $row->echec, $row->valide, $row->email);
            }
            mysqli_stmt_free_result($stmt);
            mysqli_close($this->connection);
            return $rows;
        }

    I tested the settings with "configure return type" and it tells me: "the operation returned a primitive 'object'". Test settings: Parameters (brand) / Input type (String) / Value (woop). To conclude, there is no returned object at all. Do you see the problem? Thanks

    Read the article

  • SharePoint 2007 and SiteMinder

    - by pborovik
    Here is a question regarding the details of how SiteMinder secures access to SharePoint 2007. I've read a bunch of material on this and have some picture of SharePoint 2010 FBA claims-based + SiteMinder security (I can be wrong here, of course):

        - SiteMinder is registered as a trusted identity provider for SharePoint. This means (to my mind) that SharePoint has no need to go into all those user directories like AD, an RDBMS or whatever to create a record for a user being granted access to SharePoint; instead it consumes a claims-based ID supplied by SiteMinder.
        - SiteMinder checks all requests to SharePoint resources and starts a login sequence if it does not find the required headers in the request (SMSESSION, etc.).
        - SiteMinder creates a GenericIdentity with the user login name if the headers are OK, so SharePoint recognizes the user as authenticated.

    But in the case of SharePoint 2007 with FBA + SiteMinder, I cannot find answers to questions like:

        - Does SharePoint need to go to all those user directories like AD to know something about users (since SiteMinder is not in charge of providing user info like claims-based IDs)? So should a SharePoint admin configure SharePoint FBA to talk to these sources?
        - Let's say I'm talking to a SharePoint web service protected by SiteMinder. Should I make an Authentication.asmx Login call to create an authentication ticket, or is this scheme somehow changed by SiteMinder? If such a call is needed, do I also need a SiteMinder authentication sequence?
        - What prevents me from rewriting request headers (say, manually in Fiddler) before posting a request to the SharePoint instance protected by SiteMinder, to get around its defence?

    It's a pity, but I do not have access to a deployed SiteMinder + SharePoint environment, so I need to investigate some questions blindly. Thanks.

    Read the article

  • VSDBCMD deployment for additions to third party databases

    - by Sam
    We have some custom objects (stored procedures, etc.) in an SQL Server 2005 database belonging to an ERP system. The custom objects are in different schemas from the ERP objects. We're using Database Edition .dbproj projects and vsdbcmd deployment for all our custom application databases and would like to manage our custom objects in the ERP database the same way. It's not clear how this can be done without either:

        - Importing all ERP objects (~4000 tables) into the .dbproj and manually keeping them in sync with ERP development. Visual Studio fell over the only time I tried importing these, so I've no idea whether it can actually handle a project of this size.
        - Somehow excluding the ERP schemas (there are two) from the diff process to ensure they don't get dropped by vsdbcmd. I haven't found any documentation suggesting this is possible.

    I'm aware of the IgnoreDefaultSchema setting, but there are two schemas I need to ignore, and I'm not comfortable with the 'default schema' approach - deployment by different users could be disastrous. Has anyone managed to successfully use .dbproj & vsdbcmd for custom additions to a third-party database? If not, how do you manage SQL source control & deployment?

    Read the article

  • Calling SubmitChanges on DataContext does not update database.

    - by drasto
    In a C# ASP.NET MVC application I use LINQ to SQL to provide data for my application. I have a simple database schema (the designer diagram is not reproduced here). In my controller class I reference this data context, called Model (as seen in the Properties pane of that diagram), like this:

        private Model model = new Model();

    I've got a table (list) of Series rendered on my page. It renders properly, and I was able to add delete functionality for Series like this:

        public ActionResult Delete(int id) {
            model.Series.DeleteOnSubmit(model.Series.SingleOrDefault(s => s.ID == id));
            model.SubmitChanges();
            return RedirectToAction("Index");
        }

    where the corresponding action link looks like this:

        <%: Html.ActionLink("Delete", "Delete", new { id = item.ID }) %>

    Create (implemented in a similar way) also works fine. However, edit does not work. My edit looks like this:

        public ActionResult Edit(int id) {
            return View(model.Series.SingleOrDefault(s => s.ID == id));
        }

        [HttpPost]
        public ActionResult Edit(Series series) {
            if (ModelState.IsValid) {
                UpdateModel(series);
                series.Title = series.Title + " some string to ensure title has changed";
                model.SubmitChanges();
                return RedirectToAction("Index");
            }

    I have confirmed that my database has a primary key set up correctly. I debugged my application and found that everything works as expected until the line with model.SubmitChanges(); that call does not apply the changes to the Title property (or any other) in the database. Please help.

    Read the article

  • Why can't Doctrine retrieve my model data?

    - by scottm
    So, I'm trying to use Doctrine to retrieve some data. I have some basic code like this:

        $conn = Doctrine_Manager::connection(CONNECTION_STRING);
        $site = Doctrine_Core::getTable('Site')->find('00024');
        echo $site->SiteName;

    However, this keeps throwing a SQL error that 'column siteid does not exist'. When I look at the exception, the SQL query is this (you can see the problem: the inner_tbl alias for siteid is set to s__siteid, so querying inner_tbl.siteid is what's broken):

        SELECT TOP 1 [inner_tbl].[siteid] AS [s__siteid]
        FROM (SELECT TOP 1 [s].[siteid] AS [s__siteid], [s].[name] AS [s__name],
                     [s].[address] AS [s__address], [s].[city] AS [s__city],
                     [s].[zip] AS [s__zip], [s].[state] AS [s__state],
                     [s].[region] AS [s__region], [s].[callprocessor] AS [s__callprocessor],
                     [s].[active] AS [s__active], [s].[dateadded] AS [s__dateadded]
              FROM [Sites] [s]
              WHERE ([s].[siteid] = '00024')) AS [inner_tbl]

    Why is the query being generated this way? Could it be the way the YAML schema is laid out?

        Site:
          connection: 0
          tableName: Sites
          columns:
            siteid:        { type: string(5), fixed: true, unsigned: false, primary: true, autoincrement: false }
            name:          { type: string(300), fixed: false, unsigned: false, notnull: true, primary: false, autoincrement: false }
            address:       { type: string(100), fixed: false, unsigned: false, notnull: false, primary: false, autoincrement: false }
            city:          { type: string(100), fixed: false, unsigned: false, notnull: false, primary: false, autoincrement: false }
            zip:           { type: string(5), fixed: false, unsigned: false, notnull: false, primary: false, autoincrement: false }
            state:         { type: string(2), fixed: true, unsigned: false, notnull: true, primary: false, autoincrement: false }
            region:        { type: integer(4), fixed: false, unsigned: false, notnull: true, default: (5), primary: false, autoincrement: false }
            callprocessor: { type: integer(4), fixed: false, unsigned: false, notnull: true, primary: false, autoincrement: false }
            active:        { type: integer(1), fixed: false, unsigned: false, notnull: true, primary: false, autoincrement: false }
            dateadded:     { type: timestamp(16), fixed: false, unsigned: false, notnull: true, default: (getdate()), primary: false, autoincrement: false }

    Read the article

  • How do I cast <T> to varbinary and still be able to perform a CONVERT on the SQL side? Implications?

    - by Biff MaGriff
    Hello, I'm writing an application that will allow a user to define custom quizzes and then allow another user to respond to the questions. Each question of the quiz has a corresponding datatype. All the responses to all the questions are stored vertically in my [Response] table, and I currently use 2 fields to store the response:

        //Response schema
        ResponseID   int
        QuizPersonID int
        QuestionID   int
        ChoiceID     int            //maps to Choice table, used for drop-down lists
        ChoiceValue  varbinary(MAX) //used to store a user-entered value

    I'm using .NET 3.5, C# and SQL Server 2008. I'm thinking that I would want to store different datatypes in the same field and then, in my SQL report proc, CONVERT to the proper datatype. I'm thinking this is ideal because I only have to check one field; I'm also thinking it might be more trouble than it is worth. I think my other options are to store the data as strings in the db (yuck), or to have a column for each datatype I might use. So what I would like to know is: how would I format my datatypes in C# so that they can be converted properly in SQL? What is the performance hit for converting in SQL? Should I just make a whole whack of columns, one for each datatype?

    Read the article

  • SQL Stored Queries - use result of query as boolean based on existence of records

    - by Christian Mann
    Just getting into SQL stored queries right now... anyway, here's my database schema (simplified for YOUR convenience):

        member
        ------
        id INT PK

        board
        -----
        id INT PK

        officer
        -------
        id INT PK

    If you're into OOP, Officer inherits Board inherits Member. In other words, if someone is listed in the officer table, s/he is listed in the board table and the member table. I want to find out the highest privilege level someone has. So far my SP looks like this:

        DELIMITER //
        CREATE PROCEDURE GetAuthLevel(IN targetID MEDIUMINT)
        BEGIN
            IF SELECT `id` FROM `member` WHERE `id` = targetID;
            THEN
                IF SELECT `id` FROM `board` WHERE `id` = targetID;
                THEN
                    IF SELECT `id` FROM `officer` WHERE `id` = targetID;
                    THEN
                        RETURN 3; /*officer*/
                    ELSE
                        RETURN 2; /*board member*/
                ELSE
                    RETURN 1; /*general member*/
            ELSE
                RETURN 0; /*not a member*/
        END //
        DELIMITER ;

    The exact text of the error is:

        #1064 - You have an error in your SQL syntax; check the manual that corresponds to your
        MySQL server version for the right syntax to use near
        'SELECT id FROM member WHERE id = targetID; THEN IF SEL' at line 4

    I suspect the issue is in the arguments to the IF blocks. What I want to do is return true if the result set has at least one row, i.e. the id was found in the table. Do any of you see anything to fix here, or should I reconsider my database design into this?

        person
        ------
        id INT PK
        level SMALLINT

    Read the article

  • The "first past the post election" query problem

    - by MPelletier
    This problem may seem like school work, but it isn't; at best it is self-imposed school work. I encourage any teachers to take it as an example if they wish. "First past the post" elections are single-round, meaning that whoever gets the most votes wins - no second rounds. Suppose a table for an election:

        CREATE TABLE ElectionResults (
            DistrictHnd   INTEGER NOT NULL,
            PartyHnd      INTEGER NOT NULL,
            CandidateName VARCHAR2(100) NOT NULL,
            TotalVotes    INTEGER NOT NULL,
            PRIMARY KEY (DistrictHnd, PartyHnd));

    The table has two foreign keys: DistrictHnd points to a District table (listing all the different electoral districts) and PartyHnd points to a Party table (listing all the different political parties). I won't bother with the other tables here; joining them is trivial. This is just a wee bit of context. The question: what SQL query will return a table listing the DistrictHnd, PartyHnd, CandidateName and TotalVotes of the winner (max votes) in each district? This does not presuppose any particular database system. If you wish to stick to a particular implementation of SQL, go the way of SQLite or MySQL. If you can devise a better schema (or an easier one), that is acceptable too. Criteria: simplicity, portability to other databases.
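
    One common way to express "the winner in each district" is to join the results to a per-district maximum vote count. Below is a minimal, self-contained sketch of that pattern using Python's built-in sqlite3 module; the sample rows are invented, and ties within a district are not broken (a tie simply yields two rows for that district):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE ElectionResults (
                DistrictHnd   INTEGER NOT NULL,
                PartyHnd      INTEGER NOT NULL,
                CandidateName VARCHAR(100) NOT NULL,
                TotalVotes    INTEGER NOT NULL,
                PRIMARY KEY (DistrictHnd, PartyHnd));
            INSERT INTO ElectionResults VALUES
                (1, 10, 'Alice', 500), (1, 20, 'Bob', 700),
                (2, 10, 'Carol', 900), (2, 20, 'Dan', 300);
        """)

        # Join each row to the maximum vote count of its district.
        winners = conn.execute("""
            SELECT r.DistrictHnd, r.PartyHnd, r.CandidateName, r.TotalVotes
            FROM ElectionResults r
            JOIN (SELECT DistrictHnd, MAX(TotalVotes) AS MaxVotes
                  FROM ElectionResults
                  GROUP BY DistrictHnd) m
              ON m.DistrictHnd = r.DistrictHnd AND m.MaxVotes = r.TotalVotes
        """).fetchall()

        for row in winners:
            print(row)   # (1, 20, 'Bob', 700) and (2, 10, 'Carol', 900)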

    Read the article

  • How to use SQLAlchemy to dump an SQL file from query expressions to bulk-insert into a DBMS?

    - by Mahmoud Abdelkader
    Please bear with me as I explain the problem and how I tried to solve it; my question on how to improve it is at the end. I have a 100,000-line CSV file from an offline batch job and I needed to insert it into the database as its proper models. Ordinarily, if this were a fairly straightforward load, it could be loaded trivially by just munging the CSV file to fit the schema, but I had to do some external processing that requires querying, and it's just much more convenient to use SQLAlchemy to generate the data I want. The data I want here is 3 models that represent 3 pre-existing tables in the database, and each subsequent model depends on the previous one. For example:

        Model C --> Foreign Key --> Model B --> Foreign Key --> Model A

    So the models must be inserted in the order A, B, C. I came up with a producer/consumer approach:

        - instantiate a multiprocessing.Process which contains a thread pool of 50 persister threads that have a thread-local connection to the database
        - read a line from the file using csv.DictReader
        - enqueue the dictionary to the process, where each thread creates the appropriate models by querying the right values, and each thread persists the models in the appropriate order

    This was faster than a non-threaded read/persist, but it is way slower than bulk-loading a file into the database: the job finished persisting after about 45 minutes. For fun, I decided to write it as SQL statements, and it took 5 minutes - although writing the SQL statements took me a couple of hours. So my question is: could I have used a faster method to insert rows using SQLAlchemy? As I understand it, SQLAlchemy is not designed for bulk insert operations, so this is less than ideal. This leads to my follow-up question: is there a way to generate the SQL statements using SQLAlchemy, throw them into a file, and then just bulk-load that into the database? I know about str(model_object), but it does not show the interpolated values. I would appreciate any guidance on how to do this faster. Thanks!
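
    For reference, here are two hedged sketches of what this can look like with SQLAlchemy's Core layer. A made-up single-column table stands in for the real models, and the engine URL and row contents are placeholders. The first sketch uses an "executemany"-style insert, which is usually much faster than persisting ORM objects one at a time; the second renders an INSERT with its literal values (available in recent SQLAlchemy releases via literal_binds), which is one way to dump statements into a .sql file for a separate bulk load:

        from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine

        # Hypothetical table standing in for "Model A"; the real schema comes from the app.
        metadata = MetaData()
        model_a = Table(
            "model_a", metadata,
            Column("id", Integer, primary_key=True),
            Column("name", String(100)),
        )

        engine = create_engine("sqlite:///:memory:")  # placeholder connection URL
        metadata.create_all(engine)

        rows = [{"name": "row-%d" % i} for i in range(1000)]  # stand-in for the CSV-derived dicts

        # 1) Core "executemany": one prepared INSERT, many parameter sets.
        with engine.begin() as conn:
            conn.execute(model_a.insert(), rows)

        # 2) Render an INSERT with literal values, e.g. to append to a .sql file.
        stmt = model_a.insert().values(name="Alice")
        print(stmt.compile(dialect=engine.dialect, compile_kwargs={"literal_binds": True}))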

    Read the article

  • calling a stored proc over a dblink

    - by neesh
    I am trying to call a stored procedure over a database link. The code looks something like this:

        declare
            symbol_cursor package_name.record_cursor;
            symbol_record package_name.record_name;
        begin
            symbol_cursor := package_name.function_name('argument');
            loop
                fetch symbol_cursor into symbol_record;
                exit when symbol_cursor%notfound;
                -- Do something with each record here, e.g.:
                dbms_output.put_line( symbol_record.field_a );
            end loop;
            CLOSE symbol_cursor;

    When I run this from the same DB instance and schema that package_name belongs to, it runs fine. However, when I run it over a database link (with the required modification to the stored proc name, etc.) I get an Oracle error: ORA-24338: statement handle not executed. The modified version of this code over a dblink looks like this:

        declare
            symbol_cursor package_name.record_cursor@db_link_name;
            symbol_record package_name.record_name@db_link_name;
        begin
            symbol_cursor := package_name.function_name@db_link_name('argument');
            loop
                fetch symbol_cursor into symbol_record;
                exit when symbol_cursor%notfound;
                -- Do something with each record here, e.g.:
                dbms_output.put_line( symbol_record.field_a );
            end loop;
            CLOSE symbol_cursor;

    Read the article

  • Attributes of attributevalue element in SAML 2 Attribute Statement

    - by AJ
    I am building a web service that receives a SAML attribute query and responds with an attribute statement. I know I can return one or multiple values of a SAML attribute. I have some values that depend on other attribute values, and I need to show that relationship. Let us say the query is for the subject Dave, and the return values are his company and job title; Dave can work at multiple companies, with a job title at each company. I have two options for sending this data back:

    1. Send this as a complex type by defining an attribute "organization" and returning XML within that attribute value:

        <saml:Attribute name="company">
          <saml:AttributeValue>
            <company name="company1" jobtitle="CIO"/>
            <company name="company2" jobtitle="VP"/>
          </saml:AttributeValue>
        </saml:Attribute>

    2. Send multiple values per attribute, with a reference carried in the AttributeValue element:

        <saml:Attribute name="company">
          <attributeValue>company1</attributeValue>
          <attributeValue>company2</attributeValue>
        </saml:Attribute>
        <saml:Attribute name="jobTitle">
          <attributeValue company="company1">CIO</attributeValue>
          <attributeValue company="company2">VP</attributeValue>
        </saml:Attribute>

    Which approach would you prefer, and why? I am biased towards the second approach, as it does not require the client to know about any schema. It does require them to know about the non-standard attribute "company" in the attribute value.

    Read the article

  • Symfony fk issue on insertion

    - by Daniel Hertz
    Hi, I posted a similar problem but it could not be resolved. I am creating a relational database of users and groups, but for some reason I cannot insert test data with fixtures properly. Here is a sample of the schema:

        User:
          actAs: { Timestampable: ~ }
          columns:
            name:     { type: string(255), notnull: true }
            email:    { type: string(255), notnull: true, unique: true }
            nickname: { type: string(255), unique: true }
            password: { type: string(300), notnull: true }
            image:    { type: string(255) }

        Group:
          actAs: { Timestampable: ~ }
          columns:
            name:          { type: string(500), notnull: true }
            image:         { type: string(255) }
            type:          { type: string(255), notnull: true }
            created_by_id: { type: integer }
          relations:
            User: { onDelete: SET NULL, class: User, local: created_by_id, foreign: id, foreignAlias: groups_created }

        FanOf:
          actAs: { Timestampable: ~ }
          columns:
            user_id:  { type: integer, primary: true }
            group_id: { type: integer, primary: true }
          relations:
            User:  { onDelete: CASCADE, local: user_id, foreign: id, foreignAlias: fanhood }
            Group: { onDelete: CASCADE, local: group_id, foreign: id, foreignAlias: fanhood }

    And this is the data I try to load:

        User:
          user1:
            name: Danny
            email: [email protected]
            nickname: danny
            password: f05050400c5e586fa6629ef497be

        Group:
          group1:
            name: Mets
            type: sports

        FanOf:
          fans1:
            user_id: user1
            group_id: group1

    I keep getting this error:

        SQLSTATE[23000]: Integrity constraint violation: 1452 Cannot add or update a child row:
        a foreign key constraint fails (`krowdd`.`fan_of`, CONSTRAINT `fan_of_user_id_user_id`
        FOREIGN KEY (`user_id`) REFERENCES `user` (`id`) ON DELETE CASCADE)

    The users and groups are clearly being created before the "fanhood" is, so why am I getting this error?? Thanks!

    Read the article

  • Need help choosing database server

    - by The Pretender
    Good day everyone. Recently I was given a task to develop an application to automate some aspects of stock trading. While working on the initial architecture, a database dilemma emerged. What I need is a fast database engine that can process huge amounts of data coming in very fast. I'm fairly experienced in general programming, but I have never faced the task of developing a high-load database architecture. I developed a simple MSSQL database schema with several many-to-many relationships during one of my projects, but that's it. What I'm looking for is some advice on choosing the most suitable database engine, plus pointers to manuals or books that describe high-load database development. Specifics of the project are as follows:

        - OS: Windows NT family (Server 2008 / 7)
        - Primary platform: .NET with C#
        - Database structure: one table to hold the primary items, and two or three tables with foreign keys to the first table to hold additional information
        - Database SELECT requirements: super-fast selection by foreign key, and by a combination of foreign key and one of the columns (presumably DATETIME)
        - Database INSERT requirements: the faster the better :)

    If there will be a significant performance gain, some parts can be written in C++ with managed interfaces to the rest of the system. So once again: given all that stuff I just typed, please give me some advice on what the best database for my project is. Links or references to manuals and books on the subject are also greatly appreciated.

    EDIT: I'll need to insert 3-5 rows into 2 tables approximately once every 30-50 milliseconds, and I'll need to run SELECT queries with 0-2 WHERE clauses at a similar rate.

    Read the article

  • Full-text search on App Engine with Whoosh

    - by Martin
    I need to do full-text searching with Google App Engine. I found the Whoosh project and it works really well, as long as I use the App Engine development environment... When I upload my application to App Engine, I get the following traceback. For my tests, I am using the example application provided with this project. Any idea what I am doing wrong?

        <type 'exceptions.ImportError'>: cannot import name loads
        Traceback (most recent call last):
          File "/base/data/home/apps/myapp/1.334374478538362709/hello.py", line 6, in <module>
            from whoosh import store
          File "/base/data/home/apps/myapp/1.334374478538362709/whoosh/__init__.py", line 17, in <module>
            from whoosh.index import open_dir, create_in
          File "/base/data/home/apps/myapp/1.334374478538362709/whoosh/index.py", line 31, in <module>
            from whoosh import fields, store
          File "/base/data/home/apps/myapp/1.334374478538362709/whoosh/store.py", line 27, in <module>
            from whoosh import tables
          File "/base/data/home/apps/myapp/1.334374478538362709/whoosh/tables.py", line 43, in <module>
            from marshal import loads

    Here are the imports I have in my Python file:

        # Whoosh ----------------------------------------------------------------------
        sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..', 'utils')))
        from whoosh.fields import Schema, STORED, ID, KEYWORD, TEXT
        from whoosh.index import getdatastoreindex
        from whoosh.qparser import QueryParser, MultifieldParser

    Thank you in advance for your help!
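
    For context, standard Whoosh usage against a local filesystem index looks roughly like the minimal sketch below (the index directory and field names are made up). The getdatastoreindex import in the snippet above appears to come from the App Engine adaptation of Whoosh, which swaps this filesystem-backed index for a datastore-backed one:

        import os

        from whoosh.fields import ID, TEXT, Schema
        from whoosh.index import create_in
        from whoosh.qparser import QueryParser

        # Build a small filesystem index; "indexdir" and the fields are illustrative only.
        schema = Schema(path=ID(stored=True), content=TEXT)
        if not os.path.exists("indexdir"):
            os.mkdir("indexdir")
        ix = create_in("indexdir", schema)

        writer = ix.writer()
        writer.add_document(path=u"/a", content=u"full text searching on app engine")
        writer.commit()

        with ix.searcher() as searcher:
            query = QueryParser("content", ix.schema).parse(u"searching")
            for hit in searcher.search(query):
                print(hit["path"])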

    Read the article

  • Why use SQL database?

    - by martinthenext
    I'm not quite sure Stack Overflow is the place for such a general question, but let's give it a try. Faced with the need to store application data somewhere, I've always used MySQL or SQLite, just because it's always done like that - it seems like the whole world is using these databases, as do most software products, frameworks, etc. It is rather hard for a beginning developer like me to ask: why?

    OK, say we have some object-oriented logic in our application, and objects are related to each other somehow. We need to map this logic to the storage logic, so we need relations between database objects too. This leads us to using a relational database, and I'm OK with that - to put it simply, our database rows will sometimes need to have references to other tables' rows. But why use the SQL language for interacting with such a database? An SQL query is a text message. I can understand this is nice for actually understanding what it does, but isn't it silly to use textual table and column names for a part of the application that no one ever sees after deployment?

    If you had to write data storage from scratch, you would never have used this kind of solution. Personally, I would have used some 'compiled db query' bytecode that would be assembled once inside the client application and passed to the database. And it would surely name tables and columns by ID numbers, not ASCII strings. In case of changes to the table structure, those bytecode queries could be recompiled according to the new DB schema, stored in XML or something like that. What are the problems with my idea? Is there any reason for me not to write it myself and to use an SQL database instead?

    Read the article

  • Abort SAX parsing mid-document?

    - by CSharperWithJava
    I'm parsing a very simple XML schema with a SAX parser on Android. An example file would be:

        <Lists>
          <List name="foo">
            <Note title="note 1" .../>
            <Note title="note 2" .../>
          </List>
          <List name="bar">
            <Note title="note 3" .../>
          </List>
        </Lists>

    The ... represents more note data as attributes that aren't important to the question. I use a SAX parser to parse the document and only implement the startElement and endElement methods of the HandlerBase to handle Note and List nodes. However, in some cases the files can be very large and take some time to process. I'd like to be able to abort the parsing process at any time (i.e. when the user presses a cancel button). The best way I've come up with is to throw an exception from my startElement method when certain conditions are met (i.e. a boolean stopParsing is true). Is there a better way to do this? I've always used DOM-style parsers, so I don't fully understand the SAX parser. One final note: I'm running this on Android, so the parser will run on a worker thread to keep the UI responsive. If you know how I can kill that thread safely while the parser is running, that would answer my question as well.
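
    For what it's worth, the throw-an-exception approach is compact enough to sketch in a few lines. The example below uses Python's xml.sax only to stay short and self-contained; the idea (set a flag, raise from startElement, catch around the parse call) carries over to a SAX handler on Android:

        import xml.sax

        class StopParsing(Exception):
            """Raised from the handler to abandon the parse early."""

        class ListHandler(xml.sax.ContentHandler):
            def __init__(self):
                xml.sax.ContentHandler.__init__(self)
                self.cancelled = False   # a cancel button would set this flag
                self.notes = 0

            def startElement(self, name, attrs):
                if self.cancelled:
                    raise StopParsing()  # unwinds out of parseString()
                if name == "Note":
                    self.notes += 1

        doc = '<Lists><List name="foo"><Note title="note 1"/></List></Lists>'
        handler = ListHandler()
        try:
            xml.sax.parseString(doc.encode("utf-8"), handler)
        except StopParsing:
            pass  # the parse was cancelled part-way through
        print(handler.notes)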

    Read the article

  • git stash blunder:

    - by Chirag Patel
    I did a git stash pop and ended up with merge conflicts. I removed the files from the file system and did a git checkout as shown below, but git thinks the files are still unmerged. I then tried replacing the files and doing a git checkout again, with the same result. I even tried forcing it with the -f flag. Any help would be appreciated!

        chirag-patels-macbook-pro:haloror patelc75$ git status
        app/views/layouts/_choose_patient.html.erb: needs merge
        app/views/layouts/_links.html.erb: needs merge
        # On branch prod-temp
        # Changes to be committed:
        #   (use "git reset HEAD <file>..." to unstage)
        #
        #       modified:   db/schema.rb
        #
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       unmerged:   app/views/layouts/_choose_patient.html.erb
        #       unmerged:   app/views/layouts/_links.html.erb

        chirag-patels-macbook-pro:haloror patelc75$ git checkout app/views/layouts/_choose_patient.html.erb
        error: path 'app/views/layouts/_choose_patient.html.erb' is unmerged

        chirag-patels-macbook-pro:haloror patelc75$ git checkout -f app/views/layouts/_choose_patient.html.erb
        warning: path 'app/views/layouts/_choose_patient.html.erb' is unmerged

    Read the article
