Search Results

Search found 31931 results on 1278 pages for 'sql statement'.


  • Allowing BBCode but not JavaScript

    - by user1405062
    Hello, I'm trying to get the hang of using BBCode on my normal PHP site (not a forum or anything, just a normal site). I have seen a few posts like this one http://www.pixel2life.com/forums/index.php?/topic/10659-php-bbcode-parser/ which say I need a BBCode parser, and I was wondering whether anyone can show me how I would use one. I have a status box where users can update their status, and I would like to allow BBCode but not JavaScript. When inserting into the db I sanitize the status like so: $status = mysql_real_escape_string($_POST['status']); $status2 = strip_tags($status); That stops the JavaScript and tags from getting through, but I need the BBCode tags to come through while still blocking the JavaScript. Is there any way to do this? I then just echo it out: echo $status2; But only plain text shows, so I was wondering if anyone knows how to let BBCode through while stopping JavaScript, how to use the BBCode parser, and how to echo out the BBCode formatting.

    Read the article

  • Relating text fields to check boxes in Java

    - by Finzz
    This program requires the user to log in and request a database to access. The program then gets a connection object and searches through the database, storing the column names in a vector for later use. The problem comes with implementing text fields to allow the user to search for specific values within the database. I can get the check boxes and text fields to appear using a GridLayout and add them to a panel. How do I relate the text fields to their appropriate check box? I've tried adding them to a vector, but then they can't also be added to the panel. I've searched for a way to name the text fields as the loop cycles through the column names, but it seems impossible to do without having them declared ahead of time, and that can't be done either, as it's impossible to know in advance which attributes the user will request. I just need to be able to know the names of the text fields so I can test whether the user entered information and perform the necessary logic. Let me know if you need to see the rest of the code to give an answer, but hopefully you get the general idea of what I'm trying to accomplish. The relevant code: try { ResultSet r2 = con.getMetaData().getColumns("", "", rb.getText(), ""); colNames1 = new Vector<String>(); columns1 = new Vector<JCheckBox>(); while (r2.next()) { colNames1.add(r2.getString(4)); JCheckBox cb = new JCheckBox(r2.getString(4)); JTextField tf = new JTextField(10); columns1.add(cb); p3.add(cb); p3.add(tf); } }

    Read the article

  • Index for wildcard match of end of string

    - by Anders Abel
    I have a table of phone numbers, storing the phone number as varchar(20). I have a requirement to support searching on both entire numbers and on only the last part of the number, so a typical query will be: SELECT * FROM PhoneNumbers WHERE Number LIKE '%1234' How can I put an index on the Number column to make those searches efficient? Is there a way to create an index that sorts the records on the reversed string? Another option might be to reverse the numbers before storing them, which would give queries like: SELECT * FROM PhoneNumbers WHERE ReverseNumber LIKE '4321%' However, that would require all users of the database to always reverse the string. It could be solved by storing both the normal and the reversed number and having the reversed number updated by a trigger on insert/update, but that kind of solution is not very elegant. Any other suggestions?
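
    One possible approach, assuming MySQL 5.7+ (the DBMS isn't stated in the question): keep the reversed copy as a stored generated column so it can never get out of sync, index it, and rewrite the suffix search as a prefix search. This is only a sketch; the index name is illustrative.

        ALTER TABLE PhoneNumbers
          ADD COLUMN ReverseNumber VARCHAR(20)
            GENERATED ALWAYS AS (REVERSE(Number)) STORED,
          ADD INDEX ix_phonenumbers_reverse (ReverseNumber);

        -- '%1234' becomes an index-friendly prefix match on the reversed column
        SELECT *
        FROM PhoneNumbers
        WHERE ReverseNumber LIKE CONCAT(REVERSE('1234'), '%');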

    Read the article

  • EclipseLink and update trigger on multiple access to the database

    - by Raven
    Hi, in my project I have a database which many clients connect to. Concurrent access and writing work well. The problem now is how to always have the current state of the data without reloading it from the database every second. Does EclipseLink provide a trigger mechanism to (automatically?) reload the data when the database is changed? How would one use such a trigger? Thanks!

    Read the article

  • Date/time query from Access table (last month)

    - by chupeman
    Hello, I am using the query builder in Visual Studio 2008 to extract data from an Access mdb (2003), but I can't make it work with a datetime field. When I run it in a third-party query app it works fine, but I can't get it to work in Visual Studio. What is the correct way to extract last month's data? This is what I have: SELECT [Datos].[ID], [Datos].[E-mail Address], [Datos].[ZIP/Postal Code], [Datos].[Store], [Datos].[date], [Datos].[gender], [Datos].[age] FROM [Datos] WHERE ([Datos].[date] =<|Last month|>) Any help is appreciated. Thank you
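
    A sketch of one way to express "last month" in Access/Jet SQL, assuming the query runs through the Access database engine so Date() and DateSerial() are available; DateSerial rolls the month over correctly in January, so no special case is needed:

        SELECT [Datos].[ID], [Datos].[E-mail Address], [Datos].[ZIP/Postal Code],
               [Datos].[Store], [Datos].[date], [Datos].[gender], [Datos].[age]
        FROM [Datos]
        WHERE [Datos].[date] >= DateSerial(Year(Date()), Month(Date()) - 1, 1)
          AND [Datos].[date] <  DateSerial(Year(Date()), Month(Date()), 1)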

    Read the article

  • Selecting financial values from db stored as text

    - by Midhat
    I have some financial values stored as text in a MySQL db. The significance of "financial" is that negative numbers are stored enclosed in parentheses. Is there a way to automatically get the numeric value associated with that text? (For example, '5' should be returned as 5 and '(5)' should be returned as -5.)
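
    A sketch of one way to do the conversion in the query itself, assuming MySQL; the table and column names below are made up:

        SELECT amount_text,
               CASE
                 WHEN amount_text LIKE '(%)'
                   THEN -CAST(REPLACE(REPLACE(amount_text, '(', ''), ')', '') AS DECIMAL(15, 2))
                 ELSE CAST(amount_text AS DECIMAL(15, 2))
               END AS amount_numeric
        FROM financial_values;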

    Read the article

  • Select fields containing at least one non-space alphanumeric character

    - by zzapper
    (Sorry, I know this is an old chestnut; I have found similar answers here but not an exact one.) These are frequent hand-written queries from a console, so what I am looking for is the easiest thing to type: SELECT * FROM tbl_loyalty_card WHERE CUSTOMER_ID REGEXP "[0-9A-Z]"; or SELECT * FROM tbl_loyalty_card WHERE LENGTH(CUSTOMER_ID) > 0; -- could match spaces Do you have anything quicker to type, even if it's QAD?

    Read the article

  • How can I "merge" or "flatten" results from a query which returns multiple rows into a single result

    - by dsm
    I have a simple query over a table, which returns results like the following:

        id    id_type  id_ref
        2702  5        31
        2702  16       14
        2702  17       3
        2702  40       1
        2702  26       4

    And I would like to merge the results into a single row, for instance:

        id    concatenation
        2702  5,16,17,40,26:31,14,3,1,4

    Is there any way to do this within a trigger? NB: I know I can use a cursor, but I would really prefer not to unless there is no better way.
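
    If the target is MySQL, GROUP_CONCAT covers this without a cursor (on SQL Server, STRING_AGG or FOR XML PATH plays the same role). A sketch with a placeholder table name; both lists are ordered by id_type so the positions line up, whereas the sample output keeps the original row order, so swap in whatever column defines that order:

        SELECT id,
               CONCAT(
                   GROUP_CONCAT(id_type ORDER BY id_type SEPARATOR ','),
                   ':',
                   GROUP_CONCAT(id_ref  ORDER BY id_type SEPARATOR ',')
               ) AS concatenation
        FROM refs    -- placeholder for the real table name
        GROUP BY id;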

    Read the article

  • Query to find duplicate item in 2 table

    - by Rico
    I have this table:

        Antecedent  Consequent
        I1          I2
        I1          I1,I2,I3
        I1          I4,I1,I3,I4
        I1,I2       I1
        I1,I2       I1,I4
        I1,I2       I1,I3
        I1,I4       I3,I2
        I1,I2,I3    I1,I4
        I1,I3,I4    I4

    As you can see it's pretty messed up. Is there any way I can remove rows where an item in the consequent also exists in the antecedent (of the same row)? For example:

    INPUT:

        Antecedent  Consequent
        I1          I2
        I1          I1,I2,I3     <---- DELETE since I1 exists in antecedent
        I1          I4,I1,I3,I4  <---- DELETE since I1 exists in antecedent
        I1,I2       I1           <---- DELETE since I1 exists in antecedent
        I1,I2       I1,I4        <---- DELETE since I1 exists in antecedent
        I1,I2       I1,I3        <---- DELETE since I1 exists in antecedent
        I1,I4       I3,I2
        I1,I2,I3    I1,I4        <---- DELETE since I1 exists in antecedent
        I1,I3,I4    I4           <---- DELETE since I4 exists in antecedent

    OUTPUT:

        Antecedent  Consequent
        I1          I2
        I1,I4       I3,I2

    Is there any way I can do that with a query?
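
    With the items packed into comma lists there is no clean single query, but if the antecedent lists are split out into a helper table (one item per row), a multi-table DELETE with FIND_IN_SET works. This is only a sketch under that assumption, for MySQL, with made-up table names:

        -- rule_items(antecedent, item): each antecedent's comma list split into
        -- one item per row, e.g. ('I1,I2', 'I1') and ('I1,I2', 'I2').
        DELETE r
        FROM rules AS r
        JOIN rule_items AS ri
          ON ri.antecedent = r.Antecedent
        WHERE FIND_IN_SET(ri.item, r.Consequent) > 0;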

    Read the article

  • Maintaining stored procedures in source control

    - by dub
    How do you guys maintain your stored procedures? I'd like to keep versions of them for a few different reasons. I will also be setting up CruiseControl.NET and NAnt this weekend to automate builds. I was thinking about coding something that would generate the create scripts for all tables/sprocs/UDFs/XML schemas in my development database, and then take those scripts and update them in source control every couple of hours.... Ideally, I'd like to make this some sort of plugin/module for CruiseControl.NET. Any other ideas?
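
    One building block for that generator, assuming SQL Server: the procedure (and function/trigger) definitions are already available in sys.sql_modules, so a small query like the sketch below can feed whatever job writes the .sql files and commits them; the file-writing and commit steps are left to the build tool.

        SELECT s.name AS schema_name,
               o.name AS object_name,
               m.definition
        FROM sys.sql_modules AS m
        JOIN sys.objects     AS o ON o.object_id = m.object_id
        JOIN sys.schemas     AS s ON s.schema_id = o.schema_id
        WHERE o.type = 'P'           -- 'FN', 'TR', etc. for functions/triggers
        ORDER BY s.name, o.name;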

    Read the article

  • What's wrong with this MSSQL query?

    - by ClixNCash
    What's wrong with this MSSQL query? Protected Sub Button1_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles Button1.Click Dim SQLData As New System.Data.SqlClient.SqlConnection("Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\Database.mdf;Integrated Security=True;User Instance=True") Dim cmdSelect As New System.Data.SqlClient.SqlCommand("SELECT COUNT(*) FROM Table1 WHERE Name ='" + TextBox1.Text + "'", SQLData) SQLData.Open() If cmdSelect.ExecuteScalar > 0 Then Label1.Text = "You have already voted this service" Return End If Dim con As New SqlConnection Dim cmd As New SqlCommand con.Open() cmd.Connection = con cmd.CommandText = "INSERT INTO Tabel1 (Name) VALUES('" & Trim(Label1.Text) & "')" cmd.ExecuteNonQuery() Label1.Text = "Thank You !" SQLData.Close() End Sub

    Read the article

  • Using Linq, how to separate a list in to grouped objects by name?

    - by Dr. Zim
    I have a table where a record looks like this:

        varchar(255) Name
        varchar(255) Text
        varchar(255) Value

    Name is the DDL name, Text is what is displayed, and Value is returned upon selection. There are between one and twenty options for each Name. Without iterating through each option like a cursor, is there any way to pull out a list of objects, one for each unique DDL Name, using LINQ and C#? A sample of the data:

        Beds   '4 (10)'   4
        Beds   '5 (1)'    5
        Beds   '7 (1)'    7
        Baths  'NA (13)'  NULL
        Baths  '0 (1)'    0
        Baths  '1 (13)'   1

    I was thinking about doing an outer select to get the unique Names, then an inner select to get the list of options for each, and returning the set as a list of lists.

    Read the article

  • 100+ tables to be joined

    - by deian
    Hi guys, I was wondering if anyone ever had a chance to measure how 100 joined tables would perform. Each table would have an ID column with a primary index, and all tables are related 1:1. It is a common problem within many data-entry applications where we need to collect 1000+ data points. One solution would be to have one big table with 1000+ columns; the alternative would be to split them into multiple tables and join them when necessary. So perhaps the more realistic question is how 30 tables (30 columns each) would behave in a multi-table join. 500K-1M rows should be the expected size of the tables. Cheers
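
    For reference, the split design being compared looks roughly like the sketch below (all names invented): every chunk table shares the primary key, and each extra chunk costs one more 1:1 join.

        CREATE TABLE entry_part01 (id INT PRIMARY KEY, col001 INT, col002 INT /* ... ~30 columns */);
        CREATE TABLE entry_part02 (id INT PRIMARY KEY, col031 INT, col032 INT /* ... */);
        -- ... entry_part03 through entry_part30

        SELECT p1.id, p1.col001, p2.col031
        FROM entry_part01 AS p1
        JOIN entry_part02 AS p2 ON p2.id = p1.id
        -- JOIN entry_part03 AS p3 ON p3.id = p1.id, and so on
        WHERE p1.id = 12345;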

    Read the article

  • [C#] How to create a constructor of a class that returns a collection of instances of that class?

    - by codemonkie
    My program has the following class definition: public sealed class Subscriber { private subscription; public Subscriber(int id) { using (DataContext dc = new DataContext()) { this.subscription = dc._GetSubscription(id).SingleOrDefault(); } } } where _GetSubscription() is a sproc which returns a value of type ISingleResult<_GetSubscriptionResult>. Say I have a List<int> full of 1000 ids and I want to create a collection of subscribers of type List<Subscriber>. How can I do that without calling the constructor in a loop 1000 times? I am trying to avoid switching the DataContext on/off so frequently, as that may stress the database. TIA.

    Read the article

  • Generating a set of files containing dumps of individual tables in a way that guarantees database consistency

    - by intuited
    I'd like to dump a MySQL database in such a way that a file is created for the definition of each table, and another file is created for the data in each table. I'd like this to be done in a way that guarantees database integrity by locking the entire database for the duration of the dump. What is the best way to do this? Similarly, what's the best way to lock the database while restoring a set of these dump files? Edit: I can't assume that MySQL will have permission to write to files.

    Read the article

  • count(*) vs count(row-name) - which is more correct?

    - by bread
    Does it make a difference if you do count(*) vs count(row-name) as in these two examples? I have a tendency to always write count(*) because it seems to fit better in my mind with the notion of it being an aggregate function, if that makes sense. But I'm not sure if it's technically best as I tend to see example code written without the * more often than not. count(*): select customerid, count(*), sum(price) from items_ordered group by customerid having count(*) > 1; vs. count(row-name): SELECT customerid, count(customerid), sum(price) FROM items_ordered GROUP BY customerid HAVING count(customerid) > 1;
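
    The practical difference only shows up with NULLs: count(*) counts rows, while count(column) skips rows where that column is NULL. In the grouped query above the counted column is the grouping key, so both forms return the same number; with a nullable column they diverge, as in this sketch (the discount column is hypothetical, not from the question's schema):

        SELECT customerid,
               count(*)        AS all_rows,       -- every row in the group
               count(discount) AS with_discount   -- only rows where discount IS NOT NULL
        FROM items_ordered
        GROUP BY customerid;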

    Read the article

  • Update all table rows but top N in Mysql

    - by arthurprs
    I was trying to run the following query UPDATE blog_post SET `thumbnail_present`=0, `thumbnail_size`=0, `thumbnail_data`='' WHERE `blog_post` NOT IN ( SELECT `blog_post` FROM blog_post ORDER BY `blog_post` DESC LIMIT 10) But MySQL doesn't allow LIMIT in an IN subquery. I think I can make a select to count the table rows and then make an ordered update limited by 'COUNT - 10', but I was wondering if there is a better way. Thanks in advance.
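
    A common workaround is to push the LIMIT one level down into a derived table: MySQL rejects LIMIT directly inside IN/NOT IN but allows it there, and wrapping it this way makes MySQL materialize the subquery, which also avoids the restriction on selecting from the table being updated. A sketch:

        UPDATE blog_post
        SET thumbnail_present = 0,
            thumbnail_size    = 0,
            thumbnail_data    = ''
        WHERE blog_post NOT IN (
            SELECT blog_post FROM (
                SELECT blog_post
                FROM blog_post
                ORDER BY blog_post DESC
                LIMIT 10
            ) AS newest_ten
        );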

    Read the article

  • Change update value of property (LINQTOSQL)

    - by Dynde
    Hi... I've got an entity object, Customer, with a property called VATRate. This VATRate is in the form of a decimal (0.25). I wanted to be able to enter a percentage value and save it as the correct decimal value in the database, so I made this custom property: partial class Customer{ public decimal VatPercent { get{ ... //Get code works fine} set { this.VATRate = (value / 100); } } } Then I just bind this property instead of VATRate in my ASPX edit template (FormView). This actually works, at least once: when I debug an update, the value is set correctly the first time, and then right after it gets set back to the old value. I can't really see why it sets the value twice (and with the old value the second time). Can anyone shed some light on this?

    Read the article

  • How to store data with N columns

    - by iconiK
    I need a way to store an int for N columns. Basically what I have is this:

        Armies:
          ArmyID     - UINT
          UnitCount1 - UINT
          UnitCount2 - UINT
          UnitCount3 - UINT
          UnitCount4 - UINT
          ...

    I can't possibly add a column for each and every unit, so I need a fast way to store the number of each unit in an army (you might have guessed it's for a game by now). Using XML is not an option as it would be dead slow.
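
    A sketch of the usual normalized approach: turn the unit columns into rows, one row per (army, unit type). It assumes MySQL given the UINT types in the question; the table and column names are illustrative.

        CREATE TABLE ArmyUnits (
            ArmyID    INT UNSIGNED NOT NULL,
            UnitType  INT UNSIGNED NOT NULL,
            UnitCount INT UNSIGNED NOT NULL,
            PRIMARY KEY (ArmyID, UnitType)
        );

        -- All unit counts for one army, however many unit types exist:
        SELECT UnitType, UnitCount
        FROM ArmyUnits
        WHERE ArmyID = 1;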

    Read the article

  • Database design advice needed.

    - by user346271
    Hi all, I'm a lone developer for a telecoms company, and am after some database design advice from anyone with a bit of time to answer.

    I am inserting ~2 million rows each day into one table; these tables then get archived and compressed on a monthly basis. Each monthly table contains ~15,000,000 rows, and this is increasing month on month. For every insert above I combine the data from rows which belong together and create another "correlated" table. This table is currently not being archived, as I need to make sure I never miss an update to the correlated table (hope that makes sense), although in general this information should remain fairly static after a couple of days of processing. All of the above is working perfectly.

    However, my company now wishes to perform some stats against this data, and these tables are getting too large to provide the results in what would be deemed a reasonable time, even with the appropriate indexes set. So I guess after all the above my question is quite simple: should I write a script which groups the data from my correlated table into smaller tables, or should I store the query result sets in something like memcache? I'm already using MySQL's query cache, but due to having limited control over how long the data is stored for, it's not working ideally.

    The main advantages I can see of using something like memcache:
    - No blocking on my correlated table after the query has been cached.
    - Greater flexibility in sharing the collected data between the backend collector and the front-end processor (i.e. custom reports could be written in the backend and their results stored in the cache under a key which then gets shared with anyone who wants to see the data of that report).
    - Redundancy and scalability if we start sharing this data with a large number of customers.

    The main disadvantages of using something like memcache:
    - Data is not persistent if the machine is rebooted / the cache is flushed.

    The main advantages of using MySQL:
    - Persistent data.
    - Fewer code changes (although adding something like memcache is trivial anyway).

    The main disadvantages of using MySQL:
    - Have to define table templates every time I want to store a new set of grouped data.
    - Have to write a program which loops through the correlated data and fills these new tables.
    - Will potentially still grow slower as the data continues to be filled.

    Apologies for quite a long question. It's helped me to write down these thoughts here anyway, and any advice/help/experience with dealing with this sort of problem would be greatly appreciated. Many thanks. Alan
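
    A sketch of the "smaller grouped tables" option, assuming MySQL; every name below is made up rather than taken from the schema above. The idea is to pre-aggregate the correlated data on a schedule and point the reports at the summary table.

        CREATE TABLE correlated_daily_stats (
            stat_date          DATE        NOT NULL,
            category           VARCHAR(32) NOT NULL,
            row_count          INT         NOT NULL,
            total_duration_sec BIGINT      NOT NULL,
            PRIMARY KEY (stat_date, category)
        );

        -- Re-run periodically (e.g. from cron), only for the days that may still change:
        REPLACE INTO correlated_daily_stats (stat_date, category, row_count, total_duration_sec)
        SELECT DATE(event_time), category, COUNT(*), SUM(duration_sec)
        FROM correlated
        WHERE event_time >= CURRENT_DATE - INTERVAL 3 DAY
        GROUP BY DATE(event_time), category;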

    Read the article
