Search Results

Search found 32538 results on 1302 pages for 'restore database'.


  • search & replace on 3000 row, 25 column spreadsheet

    - by Deca
    I'm attempting to clean up data in this (old) spreadsheet and need to remove things like single and double quotes, HTML tags and so on. Trouble is, it's a 3000 row file with 25 columns and every spreadsheet app I've tried (NeoOffice, MS Excel, Apple Numbers) chokes on it. Hard. Any ideas on how else I can clean this thing up for import to MySQL? Clearly I could go through each record manually, row by row, but would like to avoid that if at all possible. Likewise, I could write a PHP script to handle it on import, but don't want to put the server into a death spiral either.
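
    (A possible route that avoids both the spreadsheet apps and a server-side PHP pass: bulk-load the raw file into a MySQL staging table and clean it with a few UPDATEs. Table and column names below are hypothetical, and REPLACE() only handles fixed strings such as quotes; stripping arbitrary HTML tags would still need a one-off script, since MySQL of this era has no regex replace.)

        -- load the raw CSV into a scratch table first
        LOAD DATA LOCAL INFILE 'spreadsheet.csv'
        INTO TABLE staging
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\n';

        -- strip single and double quotes from one column; repeat per column
        UPDATE staging
        SET col1 = REPLACE(REPLACE(col1, '''', ''), '"', '');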

    Read the article

  • Are Parameters really enough to prevent Sql injections?

    - by Rune Grimstad
    I've been preaching both to my colleagues and here on SO about the goodness of using parameters in SQL queries, especially in .NET applications. I've even gone so far as to promise them immunity against SQL injection attacks. But I'm starting to wonder if this really is true. Are there any known SQL injection attacks that will be successful against a parameterized query? Can you, for example, send a string that causes a buffer overflow on the server? There are of course other considerations to make to ensure that a web application is safe (like sanitizing user input and all that stuff), but right now I am thinking of SQL injection. I'm especially interested in attacks against MsSQL 2005 and 2008 since they are my primary databases, but all databases are interesting. Edit: To clarify what I mean by parameters and parameterized queries: by using parameters I mean using "variables" instead of building the SQL query in a string. So instead of doing this:

        SELECT * FROM Table WHERE Name = 'a name'

    we do this:

        SELECT * FROM Table WHERE Name = @Name

    and then set the value of the @Name parameter on the query/command object.
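
    (For reference, a parameterized statement can be exercised straight from T-SQL with sp_executesql, which is essentially what SQL Server receives from a parameterized ADO.NET command; a minimal sketch, where the nvarchar(100) type is an arbitrary choice:)

        EXEC sp_executesql
            N'SELECT * FROM [Table] WHERE Name = @Name',
            N'@Name nvarchar(100)',
            @Name = N'a name';  -- the value travels as data, never as SQL text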

    Read the article

  • please help me to interpret the naive bayes result in weka..

    - by resmi
    Could anybody please help me interpret the following result generated in Weka for classification using Naive Bayes? Please explain clearly what Normal Distribution, Mean, StandardDev, WeightSum and Precision are. I am new to Weka.

        Naive Bayes Classifier
        Class Normal: Prior probability = 0.5
        1374195_at: Normal Distribution. Mean = 218.06  StandardDev = 6.0572  WeightSum = 3  Precision = 36.34333334
        1373315_at: Normal Distribution. Mean = 1142.58  StandardDev = 21.1589  WeightSum = 3  Precision = 126.95333339999999

    Read the article

  • MySQL slow query

    - by andrhamm
        SELECT items.item_id, items.category_id, items.title, items.description,
               items.quality, items.type, items.status, items.price, items.posted,
               items.modified, zip_code.state_prefix, zip_code.city, books.isbn13,
               books.isbn10, books.authors, books.publisher
        FROM ((items
               LEFT JOIN bookitems ON items.item_id = bookitems.item_id)
               LEFT JOIN books ON books.isbn13 = bookitems.isbn13)
               LEFT JOIN zip_code ON zip_code.zip_code = items.item_zip
        WHERE items.rid = $rid

    I am running this query to get the list of a user's items and their location. The zip_code table has over 40k records, and this might be the issue. It currently takes up to 15 seconds to return a list of about 20 items! What can I do to make this query more efficient?
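
    (With a query shaped like this, the usual first step is to make sure every join and filter column is indexed; a hedged sketch with hypothetical index names, assuming these columns currently have no index. EXPLAIN will confirm whether they get used.)

        CREATE INDEX idx_items_rid      ON items (rid);
        CREATE INDEX idx_bookitems_item ON bookitems (item_id);
        CREATE INDEX idx_books_isbn13   ON books (isbn13);
        CREATE INDEX idx_zipcode_zip    ON zip_code (zip_code);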

    Read the article

  • What is the best scala-like persistence framework available right now?

    - by egervari
    What is the best Scala-like persistence framework available right now? Hibernate works, but it's not very Scala-like. It insists on annotations and no-arg constructors, doesn't work with anonymous class instances, doesn't work with Scala collections, has an outdated string-based query model, etc. I'm looking for something that really fits Scala. Does it exist? Or do I have to make it?

    Read the article

  • 1 oracle schema supporting large requests per day, is this safe?

    - by Hlex
    I'm a Java system designer. We have several large projects to deliver on a tight schedule, all of them Java APIs without web pages. I designed a general flow engine to support all of the projects. The idea uses one Oracle schema holding a general transaction table plus control/routing tables. It is all nearly complete. But the DBA team is concerned that maintaining very large request volumes against one schema will be painful. One reason: if there is a problem in some table, the DBA must take the tablespace offline to fix it, and that is a problem because all projects would be affected. I am trying to convince him by splitting each table's data into partitions by project_code and a "month number to delete". Example partitions:

        PROJ1_05  PROJ1_06  PROJ1_07
        PROJ2_05  PROJ2_06  PROJ2_07

    All transaction data would be stored in its own partition. So if there is a problem in any part of the tablespace, he could take some partitions offline while another project using the same table stays in service. Transaction volume should be around 10 million records per day. Is this a good idea? If I must use one schema, what strategy should I use? Do you have any comments?
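
    (For what the proposal could look like in DDL: a composite list-list partitioned table, assuming Oracle 11g, which introduced list-list composite partitioning. Column and partition names are illustrative only; on older releases, a single-level list partition on a combined key such as 'PROJ1_05' achieves a similar effect.)

        CREATE TABLE general_transaction (
            project_code VARCHAR2(10) NOT NULL,
            purge_month  NUMBER(2)    NOT NULL,  -- the "month number to delete"
            payload      CLOB
        )
        PARTITION BY LIST (project_code)
        SUBPARTITION BY LIST (purge_month) (
            PARTITION proj1 VALUES ('PROJ1') (
                SUBPARTITION proj1_05 VALUES (5),
                SUBPARTITION proj1_06 VALUES (6),
                SUBPARTITION proj1_07 VALUES (7)
            ),
            PARTITION proj2 VALUES ('PROJ2') (
                SUBPARTITION proj2_05 VALUES (5),
                SUBPARTITION proj2_06 VALUES (6),
                SUBPARTITION proj2_07 VALUES (7)
            )
        );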

    Read the article

  • how to relate one table to another for future records

    - by Sinan
    I have a games table which holds the data about a game, and another table which holds the data about news. So far so good. First I created a junction table, game_news, so I could relate news to games. This works as intended when the game exists: whenever I insert a news item I can relate it to a game using the junction table. However, there are cases where there is news about a game but the game isn't published yet, so its record doesn't exist. So my question is: is there a way to relate this news to a particular game once the game record is created? What is the best way to do this? Any ideas?
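
    (One common pattern, sketched here with hypothetical column names and MySQL's LAST_INSERT_ID(): insert a stub game row flagged as unpublished, so the junction table always has a real game_id to point at, and flip the flag at launch.)

        INSERT INTO games (title, published) VALUES ('Unannounced game', 0);
        INSERT INTO game_news (game_id, news_id) VALUES (LAST_INSERT_ID(), 42);
        -- at launch, the stub becomes the real record:
        UPDATE games SET published = 1 WHERE game_id = 17;  -- 17 = the stub row's id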

    Read the article

  • PDO update with conditional?

    - by dmontain
    I have a PDO MySQL query that updates 3 fields:

        $update = $mypdo->prepare("UPDATE tablename
                                   SET field1=:field1, field2=:field2, field3=:field3
                                   WHERE key=:key");

    But I want field3 to be updated only when $update3 = true. Is this possible to accomplish with a single query? I could do it with 2 queries, where I update field1 and field2, then check the boolean and update field3 if needed in a separate query. But hopefully there is a way to accomplish this in 1 query?
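
    (One single-query trick, sketched under the assumption that NULL is never a legitimate new value for field3: route the binding through COALESCE and bind NULL from PHP when $update3 is false, so the column keeps its current value. Also note that key is a reserved word in MySQL, hence the backticks in the sketch.)

        UPDATE tablename
        SET field1 = :field1,
            field2 = :field2,
            field3 = COALESCE(:field3, field3)  -- NULL parameter = keep old value
        WHERE `key` = :key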

    Read the article

  • Concurrency problem with Isolation - read-committed

    - by Ratn Deo--Dev
    I have to write a simple demo of withdrawals from a joint bank account. Andy and Jen hold a joint bank account with number 123. Suppose they have 100$ in their account, and Jen and Andy are operating the account at the same time, both trying to withdraw 90$ at once. My transaction isolation is set to read-committed, and both are able to withdraw money, leaving the balance at -80$, although I have a constraint that the balance should never be less than 0. I am using Hibernate. Is versioning the only way to solve this problem, or should I go for another isolation level?
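
    (Independent of Hibernate versioning, the check can be made atomic in SQL itself; a sketch against a hypothetical account table. Even at read-committed, the second UPDATE blocks on the row lock and re-reads the committed balance, so it affects 0 rows and the application can report insufficient funds.)

        UPDATE account
        SET balance = balance - 90
        WHERE account_no = 123
          AND balance >= 90;
        -- affected-row count: 1 = withdrawal succeeded, 0 = rejected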

    Read the article

  • IntegrityError: foreign key violation upon delete

    - by Lukasz Korzybski
    I have Order and Shipment models. Shipment has a foreign key to Order:

        class Order(...):
            ...

        class Shipment(...):
            order = m.ForeignKey('Order')
            ...

    Now in one of my views I want to delete an order object along with all related objects, so I invoke order.delete(). I have Django 1.0.4 and PostgreSQL 8.4, and I use the transaction middleware, so the whole request is enclosed in a single transaction. The problem is that upon order.delete() I get:

        ...
        File "/usr/local/lib/python2.6/dist-packages/django/db/backends/__init__.py", line 28, in _commit
            return self.connection.commit()
        IntegrityError: update or delete on table "main_order" violates foreign key constraint "main_shipment_order_id_fkey" on table "main_shipment"
        DETAIL: Key (id)=(45) is still referenced from table "main_shipment".

    I checked in connection.queries that the proper queries are executed in the proper order. First the shipment is deleted, after that Django executes the delete on the order row:

        {'time': '0.000', 'sql': 'DELETE FROM "main_shipment" WHERE "id" IN (17)'},
        {'time': '0.000', 'sql': 'DELETE FROM "main_order" WHERE "id" IN (45)'}

    The foreign key has ON DELETE NO ACTION (the default) and is initially deferred. I don't know why I get the foreign key constraint violation. I also tried to register a pre_delete signal and manually delete the shipment objects before the delete on the order is called, but it resulted in the same error. I could change the ON DELETE behaviour for this key in Postgres, but it would be just a hack; I wonder if anyone has a better idea of what's going on here. There is also a small detail: my Order model inherits from a Cart model, so it actually doesn't have an id field but cart_ptr_id, and after the DELETE on order is executed there is also a DELETE on cart, but that seems unrelated to the shipment-order problem, so I simplified it in the example.
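
    (One diagnostic worth running, since the error surfaces only at connection.commit(): check what deferral settings the constraint actually has in the live schema. The catalog columns below exist in PostgreSQL 8.4; if condeferrable comes back false, the "initially deferred" setting never made it into the database.)

        SELECT conname, condeferrable, condeferred
        FROM pg_constraint
        WHERE conname = 'main_shipment_order_id_fkey';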

    Read the article

  • Strategy for Offline/Online data synchronization

    - by Adi
    My requirement: I have a server J2EE web application and a client J2EE web application. Sometimes the client can go offline. When the client comes online, it should be able to synchronize changes to and fro. I should also be able to control which rows/tables need to be synchronized based on some filters/rules. Are there any existing Java frameworks for doing this? If I need to implement it on my own, what are the different strategies that you can suggest? One solution in my mind is maintaining SQL logs and executing the same statements on the other side during synchronization. Do you see any problems with this strategy?
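
    (On the SQL-log idea: replaying literal statements breaks on non-deterministic expressions such as NOW() or auto-increment ids, so a common refinement is to log row-level changes and replay those instead. A hypothetical change-log table as a sketch:)

        CREATE TABLE sync_log (
            log_id     BIGINT AUTO_INCREMENT PRIMARY KEY,
            table_name VARCHAR(64) NOT NULL,
            row_pk     VARCHAR(64) NOT NULL,
            operation  CHAR(1)     NOT NULL,   -- 'I', 'U' or 'D'
            row_data   TEXT,                   -- serialized column values
            changed_at TIMESTAMP   NOT NULL,
            applied    TINYINT     NOT NULL DEFAULT 0
        );

    The table_name/row_pk pair also gives a natural hook for the row/table filter rules mentioned above.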

    Read the article

  • Complex SQL query... 3 tables and need the most popular in the last 24 hours using timestamps!

    - by Stefan
    Hey guys, I have 3 tables, each with a column that relates to one ID per row. I am looking for an SQL query which will check all 3 tables for any rows in the last 24 hours (86400 seconds); I have stored timestamps in each table under the column time. After I get this query I will be able to do the next step, which is to check how many of the IDs are recurring, so I can then sort by most popular in the array and limit it to the top 5... Any ideas welcome! :) Thanks in advance. Stefan
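
    (A sketch of the shape such a query could take, with hypothetical table names and assuming the time column holds Unix timestamps, as the 86400-second framing suggests; this also folds in the popularity step:)

        SELECT id, COUNT(*) AS hits
        FROM (SELECT id FROM table1 WHERE time > UNIX_TIMESTAMP() - 86400
              UNION ALL
              SELECT id FROM table2 WHERE time > UNIX_TIMESTAMP() - 86400
              UNION ALL
              SELECT id FROM table3 WHERE time > UNIX_TIMESTAMP() - 86400
             ) AS recent
        GROUP BY id
        ORDER BY hits DESC
        LIMIT 5;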

    Read the article

  • Autoincrement uniqueidentifier in C#

    - by drasto
    Basically I want to use uniqueidentifier in a similar way as identity. I don't want to insert values into it; it should just insert values automatically, a different value for each row. I'm not able to set autoincrement on columns of type uniqueidentifier (the property 'autoincrement' is set to false and is not editable).
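
    (The database can do this with a column default instead of the autoincrement property; a T-SQL sketch with hypothetical table/column names. NEWID() yields random GUIDs, while NEWSEQUENTIALID(), available since SQL Server 2005 and only usable as a column default, yields ordered ones that fragment a clustered index far less.)

        CREATE TABLE MyTable (
            Id   uniqueidentifier NOT NULL
                 CONSTRAINT DF_MyTable_Id DEFAULT NEWSEQUENTIALID()
                 CONSTRAINT PK_MyTable PRIMARY KEY,
            Name nvarchar(50) NOT NULL
        );

        INSERT INTO MyTable (Name) VALUES (N'example');  -- Id fills in by itself

    In the table designer, the same effect comes from setting the column's 'Default Value or Binding' to (newsequentialid()).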

    Read the article

  • PostgreSQL 8.3 data types: xml vs varchar

    - by Sejanus
    There's an xml data type in Postgres. I never used it before, so I'd like to hear opinions: downsides and upsides vs using a regular varchar (or text) column to store XML. The text I'm going to store is XML, well-formed, UTF-8. There is no need to search by it (I've read that searching by xml is slow). This XML is actually data prepared for PDF generation with Apache FOP. The XML can be generated dynamically from data found elsewhere (other Postgres tables); it's stored as-is only so that I won't need to generate it twice, kind of a backup #2 for already generated PDF documents. Anything else to know? Good practices, performance, maintenance, etc.?
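
    (A minimal sketch of the xml-typed variant, assuming a PostgreSQL build compiled with libxml support: the main practical upside over varchar/text is that well-formedness is enforced on every write, which matters for a document that will later be fed to FOP. Names are hypothetical.)

        CREATE TABLE fo_document (
            doc_id  integer PRIMARY KEY,
            fo_body xml NOT NULL  -- rejects non-well-formed input at insert time
        );

        INSERT INTO fo_document VALUES
            (1, XMLPARSE(DOCUMENT '<fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format"/>'));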

    Read the article

  • Non-Relational DBMS Design Resources

    - by Matt Luongo
    Hey guys, As a personal project, I'm looking to build a rudimentary DBMS. I've read the relevant sections in Elmasri & Navathe (5ed), but could use a more focused text. The rub is that I want to play with novel non-relational data models. While a lot of E&N was great- indexing implementation details in particular- the more advanced DBMS implementation was only targeted to a relational model. I could also use something a bit more practical and detail-oriented, with real-world recommendations. I'd like to defer staring at DBMS source for a while if I can. Any ideas?

    Read the article

  • SQL Select * from multiple tables

    - by zaf
    Using PHP/PDO, is it possible to use a wildcard for the columns when a select is done on multiple tables, with the returned array keys fully qualified to avoid column name clashes? For example:

        SELECT * FROM table1, table2;

    gives array keys 'table1.id', 'table2.id', 'table1.name', etc. I tried "SELECT table1.*, table2.* ..." but the returned array keys were not fully qualified, so columns with the same name clashed and were overwritten.
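
    (There is no SQL wildcard that keeps the table qualifier in the result keys, so the usual workaround is explicit aliasing; a sketch with hypothetical columns. The alias list can be generated from information_schema.columns rather than typed by hand.)

        SELECT t1.id   AS table1_id,
               t1.name AS table1_name,
               t2.id   AS table2_id
        FROM table1 t1, table2 t2;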

    Read the article

  • ASP.NET configure data source is not returning anything?

    - by Greg McNulty
    I'm selecting table data of the current user:

        SELECT [ConfidenceLevel], [LoveLevel], [HappinessLevel]
        FROM [UserData]
        WHERE ([UserProfileID] = @UserProfileID)

    I have set a control to the unique user ID, and it is getting the correct value. HTML:

        <asp:Label ID="userID" runat="server" Text="Labeluser"></asp:Label>

    C#:

        userID.Text = Membership.GetUser().ProviderUserKey.ToString();

    I then use it in the WHERE clause via the Configure Data Source window: unique ID = control, then ControlID = userID (it fills in .Text for me). I compile and run, but nothing shows up where the table should be. Any suggestions? Here is the code it has created:

        <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False"
                      DataSourceID="SqlDataSource1">
            <Columns>
                <asp:BoundField DataField="ConfidenceLevel" HeaderText="ConfidenceLevel"
                                SortExpression="ConfidenceLevel" />
                <asp:BoundField DataField="LoveLevel" HeaderText="LoveLevel"
                                SortExpression="LoveLevel" />
                <asp:BoundField DataField="HappinessLevel" HeaderText="HappinessLevel"
                                SortExpression="HappinessLevel" />
            </Columns>
        </asp:GridView>
        <asp:SqlDataSource ID="SqlDataSource1" runat="server"
                           ConnectionString="<%$ ConnectionStrings:ConnectionStringToDB %>"
                           SelectCommand="SELECT [ConfidenceLevel], [LoveLevel], [HappinessLevel]
                                          FROM [UserData] WHERE ([UserProfileID] = @UserProfileID)">
            <SelectParameters>
                <asp:ControlParameter ControlID="userID" Name="UserProfileID"
                                      PropertyName="Text" Type="Object" />
            </SelectParameters>
        </asp:SqlDataSource>

    Read the article

  • How can I write this query in Django? (datetime)

    - by alex
    Two datetime columns on the Tag model (from DESCRIBE):

        | time_before | datetime | YES | MUL | NULL | |
        | time_after  | datetime | YES | MUL | NULL | |

    My attempt so far:

        the_tag = Tag.objects.get(id=tag_id)
        Log.objects.filter(blah).extra(
            where=['last_updated >' + the_tag.time_before,
                   'last_updated' < the_tag.time_after])

    OK. Basically, I have an object called "the_tag". I want to select from Log where log.last_updated (which is a datetime field) is between the tag's two times. But I don't know how to write the last part of this Django query.

    Read the article

  • How to make this sub-sub-query work?

    - by Josh Weissbock
    I am trying to do this in one query. I asked a similar question a few days ago, but my personal requirements have changed. I have a game-type website where users can attend "classes". I am using MySQL. I have four tables:

        hl_classes         (int id, int professor, varchar class, text description)
        hl_classes_lessons (int id, int class_id, varchar lessonTitle, varchar lexiconLink, text lessonData)
        hl_classes_answers (int id, int lesson_id, int student, text submit_answer, int percent)
        hl_classes_terms   (the list of all terms; the current term has the field active = '1')

    hl_classes stores all of the classes on the website. The lessons are the individual lessons for each class; a class can have infinite lessons, and each lesson is available in a specific term. When a user submits their answers to a lesson, they are stored in hl_classes_answers. A user can only answer each lesson once, and lessons have to be answered sequentially. All users attend all classes. What I am trying to do is grab the next lesson for each user to do in each class. When users start they are in term 1. When they complete all 10 lessons in each class they move on to term 2; when they finish lesson 20 for each class they move on to term 3. Let's say we know the term the user is in from the PHP variable $term. This is the query I am currently trying to massage out, but it doesn't work, specifically because hC.id is unknown in the inner WHERE clause:

        SELECT hC.id, hC.class,
               (SELECT MIN(output.id) AS nextLessonID
                FROM (SELECT id, class_id
                      FROM hl_classes_lessons hL
                      WHERE hL.class_id = hC.id
                      ORDER BY hL.id
                      LIMIT $term, 10) AS output
                WHERE output.id NOT IN (SELECT lesson_id
                                        FROM hl_classes_answers
                                        WHERE student = $USER_ID)) AS nextLessonID
        FROM hl_classes hC

    My logic behind this query: for each class, select all of the lessons in the term the current user is in; from those, filter out the lessons the user has already done and grab the minimum id of the lessons yet to be done. That will be the lesson the user has to do. I hope I have made my question clear enough.
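
    (One way to sidestep the two-level correlation limit is to flatten the subqueries into joins; a sketch that leans on the stated rule that lessons are answered sequentially, so the lowest unanswered lesson id per class is the next one. It deliberately omits the per-term LIMIT $term,10 window, which would still need handling.)

        SELECT hC.id, hC.class, MIN(hL.id) AS nextLessonID
        FROM hl_classes hC
        JOIN hl_classes_lessons hL ON hL.class_id = hC.id
        LEFT JOIN hl_classes_answers hA
               ON hA.lesson_id = hL.id AND hA.student = $USER_ID
        WHERE hA.id IS NULL
        GROUP BY hC.id, hC.class;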

    Read the article

  • SQL Backup files, distinguish partial and full backup files

    - by ccook
    I have scheduled backups running through SQL Agent, with full backups nightly and differential backups hourly. Is there a way to determine which of the backup files is the full backup, and which is the latest differential? Intuitively, it would seem the largest backup within 24 hours is the full, and the latest smaller backup is the partial. However, this isn't robust. Is there a way to probe the backup file to check the backup type? (Preferably in C#)
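
    (The backup header records the type explicitly, so probing is possible; a T-SQL sketch with a hypothetical file path, runnable from C# through an ordinary SqlCommand. In the result set, BackupType is 1 for a full database backup, 5 for a differential, and 2 for a transaction log backup; BackupFinishDate orders the files.)

        RESTORE HEADERONLY FROM DISK = N'C:\Backups\nightly.bak';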

    Read the article

  • What are the benefits of left outer join vs nested aggregate selects to find the newest rows in a table?

    - by RenderIn
    I'm doing:

        SELECT *
        FROM mytable y
        WHERE y.year = (SELECT MAX(yi.year)
                        FROM mytable yi
                        WHERE yi.person = y.person)

    Is that better or worse, from a performance aspect, than:

        SELECT y.*
        FROM mytable y
        LEFT OUTER JOIN mytable y2
          ON y.year < y2.year AND y.person = y2.person
        WHERE y2.year IS NULL

    The explain plan/anecdotal evidence is inconclusive, so I am wondering if in general one is better than the other.
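
    (If the database supports window functions, which the explain-plan mention suggests may be the case, a third formulation scans the table once and is often worth comparing against both; a sketch:)

        SELECT *
        FROM (SELECT y.*,
                     MAX(y.year) OVER (PARTITION BY y.person) AS max_year
              FROM mytable y) t
        WHERE t.year = t.max_year;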

    Read the article

  • How to record different authentication types (username / password vs token based) in audit log

    - by RM
    I have two types of users for my system: normal human users with a username/password, and delegation-authorized accounts through OAuth (i.e. using a token identifier). The information that is stored for each is quite different, and they are managed by different subsystems. They do however interact with the same tables/data within the system, so I need to maintain the audit trail regardless of whether a human user or a token-based user modified the data. My solution at the moment is to have a table called something like AuditableIdentity, and then have the two types inheriting off that table (either in the single table, or as two separate tables with a 1-to-1 PK with AuditableIdentity). All operations would use the common AuditableIdentity PK for the CreatedBy, ModifiedBy etc. columns. There isn't any FK constraint on the audit columns, so any text can go in there, but I want an easy way to determine whether it was a human or the system that made the change, and joining to the one AuditableIdentity table seems like a clean way to do that? Is there a best practice for this scenario? Is this an appropriate way of approaching the problem, or would you not bother with the common table and just rely on joins (to the two separate, unrelated user/token tables) later to determine which user type matches which audit records?
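
    (A sketch of the supertype/subtype variant with hypothetical names: the discriminator column makes "human or system?" a single lookup, and because every audit column now points at one table, a real foreign key can replace free text.)

        CREATE TABLE auditable_identity (
            identity_id   INT PRIMARY KEY,
            identity_type CHAR(1) NOT NULL  -- 'H' = human, 'T' = OAuth token
        );

        CREATE TABLE human_user (
            identity_id INT PRIMARY KEY REFERENCES auditable_identity (identity_id),
            username    VARCHAR(50)  NOT NULL UNIQUE,
            pw_hash     VARCHAR(100) NOT NULL
        );

        CREATE TABLE token_account (
            identity_id INT PRIMARY KEY REFERENCES auditable_identity (identity_id),
            token_id    VARCHAR(100) NOT NULL UNIQUE
        );

        -- audited tables then carry, e.g.:
        --   modified_by INT NOT NULL REFERENCES auditable_identity (identity_id)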

    Read the article

  • KD-Trees and missing values (vector comparison)

    - by labratmatt
    I have a system that stores vectors and allows a user to find the n most similar vectors to the user's query vector. That is, a user submits a vector (I call it a query vector) and my system spits out "here are the n most similar vectors." I generate the similar vectors using a KD-Tree and everything works well, but I want to do more. I want to present a list of the n most similar vectors even if the user doesn't submit a complete vector (a vector with missing values). That is, if a user submits a vector with three dimensions, I still want to find the n nearest vectors (stored vectors are of 11 dimensions) I have stored. I have a couple of obvious solutions, but I'm not sure either one seems very good:

    1. Create multiple KD-Trees, each built using the most popular subset of dimensions a user will search for. That is, if a user submits a query vector of three dimensions, x, y, z, I match that query to my already-built KD-Tree which only contains vectors of three dimensions, x, y, z.

    2. Ignore KD-Trees when a user submits a query vector with missing values and compare the query vector to the vectors (stored in a table in a DB) one by one, using something like a dot product.

    This has to be a common problem. Any suggestions? Thanks for the help.

    Read the article
