Search Results

Search found 18661 results on 747 pages for 'linq to mysql'.

Page 377 of 747

  • PasswordField char[] to String in Java MySQL connection?

    - by user1819551
    This is a JFrame that connects to the database, and this code is in the connect button. My issue is with the password field: NetBeans makes me use a char[], but .getConnection will not accept the char[] (ERROR: "no suitable method found for getConnection(String,String,char[])"). So I should change it to a String, right? But when I make that change and run the JFrame, it says access denied. When I do System.out.println(l) it gives me the right answer, like this: "Alex". But when I do System.out.println(password) it gives me the array reference and not the value, like this: jdbc:mysql://localhost/home inventory root [C@5be5ab68 <--- array reference. What am I doing wrong? try { Class.forName("org.gjt.mm.mysql.Driver"); //Load the driver String host = "jdbc:mysql://"+tServerHost.getText()+"/"+tSchema.getText(); String uName = tUsername.getText(); char[] l = pPassword.getPassword(); System.out.println(l); String password= l.toString(); System.out.println(host+uName+l); con = DriverManager.getConnection(host, uName, password); System.out.println(host+uName+password); } catch (SQLException | ClassNotFoundException ex) { JOptionPane.showMessageDialog(this, ex.getMessage()); } }
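    A minimal sketch of the conversion in question, reusing the host and uName variables from above (the key point is that new String(char[]) copies the characters, while calling toString() on a char[] only prints the array reference such as [C@5be5ab68):

      char[] pw = pPassword.getPassword();
      String password = new String(pw);                                      // or String.valueOf(pw)
      Connection con = DriverManager.getConnection(host, uName, password);   // java.sql.* as already imported
      java.util.Arrays.fill(pw, '\0');                                       // optional: clear the password chars afterwards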

    Read the article

  • ASPxGridView 'Specified method is not supported' problem

    - by shamim
    Below is my .aspx ASPxGridView markup: <dx:ASPxGridView ID="ASPxGridView1" runat="server" AutoGenerateColumns="False" KeyFieldName="intProductCode" onrowinserted="ASPxGridView1_RowInserted"> <Columns> <dx:GridViewCommandColumn VisibleIndex="0"> <EditButton Visible="True"> </EditButton> <NewButton Visible="True"> </NewButton> <DeleteButton Visible="True"> </DeleteButton> </dx:GridViewCommandColumn> <dx:GridViewDataTextColumn Caption="intProductCode" FieldName="intProductCode" VisibleIndex="1"> </dx:GridViewDataTextColumn> <dx:GridViewDataTextColumn Caption="strProductName" FieldName="strProductName" VisibleIndex="2"> </dx:GridViewDataTextColumn> <dx:GridViewDataTextColumn Caption="SKU" FieldName="SKU" VisibleIndex="3"> </dx:GridViewDataTextColumn> <dx:GridViewDataTextColumn Caption="PACK" FieldName="PACK" VisibleIndex="4"> </dx:GridViewDataTextColumn> <dx:GridViewDataTextColumn Caption="intQtyPerCase" FieldName="intQtyPerCase" VisibleIndex="5"> </dx:GridViewDataTextColumn> <dx:GridViewDataTextColumn Caption="mnyCasePrice" FieldName="mnyCasePrice" VisibleIndex="6"> </dx:GridViewDataTextColumn> <dx:GridViewDataTextColumn Caption="intTBQtyPerCase" FieldName="intTBQtyPerCase" VisibleIndex="7"> </dx:GridViewDataTextColumn> <dx:GridViewDataCheckColumn Caption="bIsActive" FieldName="bIsActive" VisibleIndex="8"> </dx:GridViewDataCheckColumn> <dx:GridViewDataTextColumn Caption="intSortingOrder" FieldName="intSortingOrder" VisibleIndex="9"> </dx:GridViewDataTextColumn> <dx:GridViewDataTextColumn Caption="strProductAccCode" FieldName="strProductAccCode" VisibleIndex="10"> </dx:GridViewDataTextColumn> </Columns> </dx:ASPxGridView> Below is my C# code-behind: protected void Page_Load(object sender, EventArgs e) { if (this.IsPostBack != true) { BindGridView(); } } private void BindGridView() { DB_OrderV2DataContext db = new DB_OrderV2DataContext(); var r = from p in db.tblProductInfos select p; ASPxGridView1.DataSource = r; ASPxGridView1.DataBind(); } protected void LinqServerModeDataSource1_Selecting(object sender, DevExpress.Data.Linq.LinqServerModeDataSourceSelectEventArgs e) { DB_OrderV2DataContext db = new DB_OrderV2DataContext(); var r= from p in db.tblProductInfos select p; e.QueryableSource = r; } protected void ASPxGridView1_RowInserted(object sender, DevExpress.Web.Data.ASPxDataInsertedEventArgs e) { DB_OrderV2DataContext db = new DB_OrderV2DataContext(); tblProductInfo otblProductInfo = new tblProductInfo (); otblProductInfo.intProductCode = (db.tblProductInfos.Max(p => (int?)p.intProductCode) ?? 0) + 1;//oProductController.GenerateProductCode(); otblProductInfo.strProductName = Convert.ToString(e.NewValues["strProductName"]); otblProductInfo.SKU = Convert.ToString(e.NewValues["SKU"]); otblProductInfo.PACK = Convert.ToString(e.NewValues["PACK"]); otblProductInfo.intQtyPerCase = Convert.ToInt32(e.NewValues["intQtyPerCase"]); otblProductInfo.mnyCasePrice = Convert.ToDecimal(e.NewValues["mnyCasePrice"]); otblProductInfo.intTBQtyPerCase = Convert.ToInt32(e.NewValues["intTBQtyPerCase"]); otblProductInfo.bIsActive = Convert.ToBoolean(e.NewValues["bIsActive"]); otblProductInfo.intSortingOrder = (db.tblProductInfos.Max(p => (int?)p.intSortingOrder) ?? 0) + 1;//oProductController.GenerateSortingOrder(); db.tblProductInfos.InsertOnSubmit(otblProductInfo);//the InsertOnSubmit method called in the preceding code was named Add and the DeleteOnSubmit method was named Remove. 
db.SubmitChanges(); BindGridView(); //oProductController.InsertAndSubmit(); // ASPxGridView1.DataBind(); } My SQL table definition: CREATE TABLE [dbo].[tblProductInfo]( [intProductCode] [int] NOT NULL, [strProductName] [varchar](100) NULL, [SKU] [varchar](50) NULL, [PACK] [varchar](50) NULL, [intQtyPerCase] [int] NULL, [mnyCasePrice] [money] NULL, [intTBQtyPerCase] [int] NULL, [bIsActive] [bit] NULL, [intSortingOrder] [int] NULL, [strProductAccCode] [varchar](max) NULL, CONSTRAINT [PK_tblProductInfo] PRIMARY KEY CLUSTERED ( [intProductCode] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] When I try to insert, it shows me the error message "Specified method is not supported". How do I solve it?
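    A common cause of "Specified method is not supported" with ASPxGridView is binding it straight to the IQueryable returned by a LINQ to SQL query; a minimal sketch of materializing the query first (same DataContext and grid as above; this is an assumption about the cause rather than a confirmed fix):

      private void BindGridView()
      {
          using (var db = new DB_OrderV2DataContext())
          {
              // Bind an in-memory list instead of an IQueryable the grid cannot translate.
              ASPxGridView1.DataSource = db.tblProductInfos.ToList();
              ASPxGridView1.DataBind();
          }
      }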

    Read the article

  • Generated LINQ to SQL query 5x slower than the exact same hand-written SQL

    - by JasonM
    I have a sql statement which is hardcoded in an existing VB6 app. I'm upgrading a new version in C# and using Linq To Sql. I was able to get LinqToSql to generate the same sql (before I start refactoring), but for some reason the Sql generated by LinqToSql is 5x slower than the original sql. This is running the generated Sql Directly in LinqPad. The only real difference my meager sql eyes can spot is the WITH (NOLOCK), which if I add into the LinqToSql generated sql, makes no difference. Can someone point out what I'm doing wrong here? Thanks! Existing Hard Coded Sql (5.0 Seconds) SELECT DISTINCT CH.ClaimNum, CH.AcnProvID, CH.AcnPatID, CH.TinNum, CH.Diag1, CH.GroupNum, CH.AllowedTotal FROM Claims.dbo.T_ClaimsHeader AS CH WITH (NOLOCK) WHERE CH.ContractID IN ('123A','123B','123C','123D','123E','123F','123G','123H') AND ( ( (CH.Transmited Is Null or CH.Transmited = '') AND CH.DateTransmit Is Null AND CH.EobDate Is Null AND CH.ProcessFlag IN ('Y','E') AND CH.DataSource NOT IN ('A','EC','EU') AND CH.AllowedTotal > 0 ) ) ORDER BY CH.AcnPatID, CH.ClaimNum Generated Sql from LinqToSql (27.6 Seconds) -- Region Parameters DECLARE @p0 NVarChar(4) SET @p0 = '123A' DECLARE @p1 NVarChar(4) SET @p1 = '123B' DECLARE @p2 NVarChar(4) SET @p2 = '123C' DECLARE @p3 NVarChar(4) SET @p3 = '123D' DECLARE @p4 NVarChar(4) SET @p4 = '123E' DECLARE @p5 NVarChar(4) SET @p5 = '123F' DECLARE @p6 NVarChar(4) SET @p6 = '123G' DECLARE @p7 NVarChar(4) SET @p7 = '123H' DECLARE @p8 VarChar(1) SET @p8 = '' DECLARE @p9 NVarChar(1) SET @p9 = 'Y' DECLARE @p10 NVarChar(1) SET @p10 = 'E' DECLARE @p11 NVarChar(1) SET @p11 = 'A' DECLARE @p12 NVarChar(2) SET @p12 = 'EC' DECLARE @p13 NVarChar(2) SET @p13 = 'EU' DECLARE @p14 Decimal(5,4) SET @p14 = 0 -- EndRegion SELECT DISTINCT [t0].[ClaimNum], [t0].[acnprovid] AS [AcnProvID], [t0].[acnpatid] AS [AcnPatID], [t0].[tinnum] AS [TinNum], [t0].[diag1] AS [Diag1], [t0].[GroupNum], [t0].[allowedtotal] AS [AllowedTotal] FROM [Claims].[dbo].[T_ClaimsHeader] AS [t0] WHERE ([t0].[contractid] IN (@p0, @p1, @p2, @p3, @p4, @p5, @p6, @p7)) AND (([t0].[Transmited] IS NULL) OR ([t0].[Transmited] = @p8)) AND ([t0].[DATETRANSMIT] IS NULL) AND ([t0].[EOBDATE] IS NULL) AND ([t0].[PROCESSFLAG] IN (@p9, @p10)) AND (NOT ([t0].[DataSource] IN (@p11, @p12, @p13))) AND ([t0].[allowedtotal] > @p14) ORDER BY [t0].[acnpatid], [t0].[ClaimNum] New LinqToSql Code (30+ seconds... Times out ) var contractIds = T_ContractDatas.Where(x => x.EdiSubmissionGroupID == "123-01").Select(x => x.CONTRACTID).ToList(); var processFlags = new List<string> {"Y","E"}; var dataSource = new List<string> {"A","EC","EU"}; var results = (from claims in T_ClaimsHeaders where contractIds.Contains(claims.contractid) && (claims.Transmited == null || claims.Transmited == string.Empty ) && claims.DATETRANSMIT == null && claims.EOBDATE == null && processFlags.Contains(claims.PROCESSFLAG) && !dataSource.Contains(claims.DataSource) && claims.allowedtotal > 0 select new { ClaimNum = claims.ClaimNum, AcnProvID = claims.acnprovid, AcnPatID = claims.acnpatid, TinNum = claims.tinnum, Diag1 = claims.diag1, GroupNum = claims.GroupNum, AllowedTotal = claims.allowedtotal }).OrderBy(x => x.ClaimNum).OrderBy(x => x.AcnPatID).Distinct(); I'm using the list of constants above to make LinqToSql Generate IN ('xxx','xxx',etc) Otherwise it uses subqueries which are just as slow...
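    One incidental issue in the LINQ version: chaining two OrderBy calls means the second ordering replaces the first, so the ORDER BY no longer matches the hand-written SQL. A sketch of the intended shape, reusing the question's names (the 5x gap itself may have more to do with the NVarChar parameters being compared against varchar columns, which can defeat indexes, than with the query shape):

      var results = (from claims in T_ClaimsHeaders
                     where contractIds.Contains(claims.contractid)
                           && (claims.Transmited == null || claims.Transmited == string.Empty)
                           && claims.DATETRANSMIT == null
                           && claims.EOBDATE == null
                           && processFlags.Contains(claims.PROCESSFLAG)
                           && !dataSource.Contains(claims.DataSource)
                           && claims.allowedtotal > 0
                     select new
                     {
                         claims.ClaimNum,
                         AcnProvID = claims.acnprovid,
                         AcnPatID = claims.acnpatid,
                         TinNum = claims.tinnum,
                         Diag1 = claims.diag1,
                         claims.GroupNum,
                         AllowedTotal = claims.allowedtotal
                     })
                    .Distinct()                       // DISTINCT first, as in the SQL
                    .OrderBy(x => x.AcnPatID)         // ORDER BY AcnPatID, ClaimNum
                    .ThenBy(x => x.ClaimNum)          // a second OrderBy would override the first
                    .ToList();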

    Read the article

  • Is it possible to create the following XML using XDocument and append the relevant portion on subsequent saves?

    - by Newbie
    <?xml version='1.0' encoding='UTF-8'?> <StockMarket> <StockDate Day = "02" Month="06" Year="2010"> <Stock> <Symbol>ABC</Symbol> <Amount>110.45</Amount> </Stock> <Stock> <Symbol>XYZ</Symbol> <Amount>366.25</Amount> </Stock> </StockDate> <StockDate Day = "03" Month="06" Year="2010"> <Stock> <Symbol>ABC</Symbol> <Amount>110.35</Amount> </Stock> <Stock> <Symbol>XYZ</Symbol> <Amount>369.70</Amount> </Stock> </StockDate> </StockMarket> My approach so far is XDocument doc = new XDocument( new XElement("StockMarket", new XElement("StockDate", new XAttribute("Day", "02"),new XAttribute("Month","06"),new XAttribute("Year","2010")), new XElement("Stock", new XElement("Symbol", "ABC"), new XElement("Amount", "110.45")) ) ); Since I am new to LINQ to XML, I am struggling a lot and hence seeking help. What I need is to generate the above XML programmatically (I mean with some kind of looping...). The values will be passed from textboxes and hence will be filled at runtime. What I have done so far is static; this entire thing has to be done at runtime. One more problem I am facing is at the time of saving the record for the second time. Suppose the first time I executed the code and saved it (using the program I have done). The next time I try to save, only <StockDate Day = "xx" Month="xx" Year="xxxx"> <Stock> <Symbol>ABC</Symbol> <Amount>110.45</Amount> </Stock> <Stock> <Symbol>XYZ</Symbol> <Amount>366.25</Amount> </Stock> </StockDate> should get saved (or rather appended), and not <StockMarket> .... </StockMarket>, because that will be created only the first time the XML is generated. I hope I am able to convey my problem properly. If you are having difficulty understanding my situation, kindly let me know. Using C# 3.0. Thanks
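    A minimal sketch of building the element at runtime and appending it on later saves (hypothetical path, day/month/year and stocks variables standing in for the textbox values; assumes System.Xml.Linq, System.IO and System.Collections.Generic):

      XDocument doc = File.Exists(path)
          ? XDocument.Load(path)                                // later runs: load and append
          : new XDocument(new XElement("StockMarket"));         // first run: create the root once

      var stockDate = new XElement("StockDate",
          new XAttribute("Day", day), new XAttribute("Month", month), new XAttribute("Year", year));

      foreach (var stock in stocks)                             // stocks: e.g. a list of (Symbol, Amount) entered in the form
      {
          stockDate.Add(new XElement("Stock",
              new XElement("Symbol", stock.Symbol),
              new XElement("Amount", stock.Amount)));
      }

      doc.Root.Add(stockDate);                                  // only the new StockDate is appended
      doc.Save(path);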

    Read the article

  • mysql2 Ruby gem will not install on Mac OS X 10.6

    - by Kish
    I know this has been asked several times, but I searched and I tried many different things and nothing worked. ERROR: Error installing mysql2: ERROR: Failed to build gem native extension. /Users/kishinmanglani/.rvm/rubies/ruby-1.9.2-p180/bin/ruby extconf.rb checking for rb_thread_blocking_region()... yes checking for mysql.h... yes checking for errmsg.h... yes checking for mysqld_error.h... yes creating Makefile make gcc -I. -I/Users/kishinmanglani/.rvm/rubies/ruby-1.9.2-p180/include/ruby-1.9.1/x86_64-darwin10.6.0 -I/Users/kishinmanglani/.rvm/rubies/ruby-1.9.2-p180/include/ruby-1.9.1/ruby/backward -I/Users/kishinmanglani/.rvm/rubies/ruby-1.9.2-p180/include/ruby-1.9.1 -I. -DHAVE_RB_THREAD_BLOCKING_REGION -DHAVE_MYSQL_H -DHAVE_ERRMSG_H -DHAVE_MYSQLD_ERROR_H -D_XOPEN_SOURCE -D_DARWIN_C_SOURCE -I/usr/local/mysql/include -Os -g -fno-common -fno-strict-aliasing -arch i386 -fno-common -O3 -ggdb -Wextra -Wno-unused-parameter -Wno-parentheses -Wpointer-arith -Wwrite-strings -Wno-missing-field-initializers -Wshorten-64-to-32 -Wno-long-long -fno-common -pipe -Wall -funroll-loops -o client.o -c client.c In file included from /Users/kishinmanglani/.rvm/rubies/ruby-1.9.2-p180/include/ruby-1.9.1/ruby.h:32, from ./mysql2_ext.h:4, from client.c:1: /Users/kishinmanglani/.rvm/rubies/ruby-1.9.2-p180/include/ruby-1.9.1/ruby/ruby.h:108: error: size of array ‘ruby_check_sizeof_long’ is negative /Users/kishinmanglani/.rvm/rubies/ruby-1.9.2-p180/include/ruby-1.9.1/ruby/ruby.h:112: error: size of array ‘ruby_check_sizeof_voidp’ is negative In file included from /Users/kishinmanglani/.rvm/rubies/ruby-1.9.2-p180/include/ruby-1.9.1/ruby/intern.h:29, from /Users/kishinmanglani/.rvm/rubies/ruby-1.9.2-p180/include/ruby-1.9.1/ruby/ruby.h:1327, from /Users/kishinmanglani/.rvm/rubies/ruby-1.9.2-p180/include/ruby-1.9.1/ruby.h:32, from ./mysql2_ext.h:4, from client.c:1: /Users/kishinmanglani/.rvm/rubies/ruby-1.9.2-p180/include/ruby-1.9.1/ruby/st.h:69: error: size of array ‘st_check_for_sizeof_st_index_t’ is negative make: *** [client.o] Error 1 Gem files will remain installed in /Users/kishinmanglani/.rvm/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/gems/mysql2-0.2.6 for inspection. Results logged to /Users/kishinmanglani/.rvm/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/gems/mysql2-0.2.6/ext/mysql2/gem_make.out
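    The compile line above mixes -arch i386 with a 64-bit Ruby, and those negative "size of array" errors are a classic symptom of that mismatch; a sketch of forcing a 64-bit build, assuming a 64-bit MySQL installed under /usr/local/mysql:

      sudo env ARCHFLAGS="-arch x86_64" gem install mysql2 -- \
          --with-mysql-config=/usr/local/mysql/bin/mysql_config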

    Read the article

  • Instantiating a list of parameterized types, making better use of Generics and Linq

    - by DanO
    I'm hashing a file with one or more hash algorithms. When I tried to parametrize which hash types I want, it got a lot messier than I was hoping. I think I'm missing a chance to make better use of generics or LINQ. I also don't like that I have to use a Type[] as the parameter instead of limiting it to a more specific set of type (HashAlgorithm descendants), I'd like to specify types as the parameter and let this method do the constructing, but maybe this would look better if I had the caller new-up instances of HashAlgorithm to pass in? public List<string> ComputeMultipleHashesOnFile(string filename, Type[] hashClassTypes) { var hashClassInstances = new List<HashAlgorithm>(); var cryptoStreams = new List<CryptoStream>(); FileStream fs = File.OpenRead(filename); Stream cryptoStream = fs; foreach (var hashClassType in hashClassTypes) { object obj = Activator.CreateInstance(hashClassType); var cs = new CryptoStream(cryptoStream, (HashAlgorithm)obj, CryptoStreamMode.Read); hashClassInstances.Add((HashAlgorithm)obj); cryptoStreams.Add(cs); cryptoStream = cs; } CryptoStream cs1 = cryptoStreams.Last(); byte[] scratch = new byte[1 << 16]; int bytesRead; do { bytesRead = cs1.Read(scratch, 0, scratch.Length); } while (bytesRead > 0); foreach (var stream in cryptoStreams) { stream.Close(); } foreach (var hashClassInstance in hashClassInstances) { Console.WriteLine("{0} hash = {1}", hashClassInstance.ToString(), HexStr(hashClassInstance.Hash).ToLower()); } }
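    A sketch of the alternative floated at the end, letting the caller new-up the algorithms so the parameter constrains itself to HashAlgorithm, and feeding each one via TransformBlock instead of chaining CryptoStreams (assumes System.Security.Cryptography, System.IO and System.Collections.Generic):

      public static List<string> ComputeMultipleHashesOnFile(string filename, params HashAlgorithm[] algorithms)
      {
          byte[] buffer = new byte[1 << 16];
          using (FileStream fs = File.OpenRead(filename))
          {
              int bytesRead;
              while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
                  foreach (var alg in algorithms)
                      alg.TransformBlock(buffer, 0, bytesRead, null, 0);   // feed the same block to every algorithm
          }
          var results = new List<string>();
          foreach (var alg in algorithms)
          {
              alg.TransformFinalBlock(new byte[0], 0, 0);                  // finalize each hash
              results.Add(BitConverter.ToString(alg.Hash).Replace("-", "").ToLower());
          }
          return results;
      }

      // usage: ComputeMultipleHashesOnFile(path, SHA1.Create(), MD5.Create());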

    Read the article

  • ASP.NET MVC 2 validation LINQ to SQL

    - by Chino
    Currently I have a DataModel object which contains my LINQ to SQL classes (a .dbml file). At the moment I use a partial class to validate the incoming input. For example public partial class User : IEntity { public NameValueCollection CheckModel() { return GetRuleViolations(); } /// <summary> /// Method validates incoming data, by given rules in the if statement. /// </summary> /// <returns>NameValueCollection</returns> private NameValueCollection GetRuleViolations() { NameValueCollection errors = new NameValueCollection(); if (string.IsNullOrEmpty(Username)) errors.Add("Username", "A username is required"); // and so on return errors; } } Now what I want to do is add validation attributes to the fields. For example, I want to add the Required attribute to the Username field instead of (or in addition to) the validation I currently have. My question is how I can achieve this, because the .dbml file is auto-generated. Or maybe it is not possible, and should I use a different approach?
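    Because the .dbml classes are regenerated, the usual pattern is to hang the attributes on a "buddy" metadata class instead of on the generated properties; a minimal sketch (assumes System.ComponentModel.DataAnnotations, which MVC 2's default model binder honours; the length value is hypothetical):

      [MetadataType(typeof(UserMetadata))]
      public partial class User
      {
      }

      public class UserMetadata
      {
          [Required(ErrorMessage = "A username is required")]
          [StringLength(45)]                     // match the column definition
          public string Username { get; set; }
      }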

    Read the article

  • Cannot turn off autocommit in a script using the Django ORM

    - by Wes
    I have a command line script that uses the Django ORM and MySQL backend. I want to turn off autocommit and commit manually. For the life of me, I cannot get this to work. Here is a pared down version of the script. A row is inserted into testtable every time I run this and I get this warning from MySQL: "Some non-transactional changed tables couldn't be rolled back". #!/usr/bin/python import os import sys django_dir = os.path.abspath(os.path.normpath(os.path.join(os.path.dirname(__file__), '..'))) sys.path.append(django_dir) os.environ['DJANGO_DIR'] = django_dir os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings' from django.core.management import setup_environ from myproject import settings setup_environ(settings) from django.db import transaction, connection cursor = connection.cursor() cursor.execute('SET autocommit = 0') cursor.execute('insert into testtable values (\'X\')') cursor.execute('rollback') I also tried placing the insert in a function and adding Django's commit_manually wrapper, like so: @transaction.commit_manually def myfunction(): cursor = connection.cursor() cursor.execute('SET autocommit = 0') cursor.execute('insert into westest values (\'X\')') cursor.execute('rollback') myfunction() I also tried setting DISABLE_TRANSACTION_MANAGEMENT = True in settings.py, with no further luck. I feel like I am missing something obvious. Any help you can give me is greatly appreciated. Thanks!
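    The warning about non-transactional tables usually means testtable is MyISAM, which ignores transactions no matter what autocommit is set to; a sketch assuming the table is switched to InnoDB and Django's own transaction management is used instead of the raw SET autocommit:

      # one-time, in MySQL: ALTER TABLE testtable ENGINE=InnoDB;
      from django.db import transaction, connection

      @transaction.commit_manually
      def myfunction():
          cursor = connection.cursor()
          cursor.execute("insert into testtable values ('X')")
          transaction.rollback()      # or transaction.commit() to keep the row

      myfunction()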

    Read the article

  • Call to undefined function query() - using PDO in PHP

    - by webworm
    I have written a script in PHP that reads data from a MySQL database. I am new to PHP and I have been using the PDO library. I have been developing on a Windows machine using the WAMP Server 2 and all has been working well. However, when I uploaded my script to the Linux server where it will be used, I get the following error when I run it. Fatal error: Call to undefined function query() This is the line where the error is occurring ... foreach($dbconn->query($sql) as $row) The variable $dbconn is first defined in my dblogin.php include file which I am listing below. <?php // Login info for the database $db_hostname = 'localhost'; $db_database = 'MY_DATABASE_NAME'; $db_username = 'MY_DATABASE_USER'; $db_password = 'MY_DATABASE_PASSWORD'; $dsn = 'mysql:host=' . $db_hostname . ';dbname=' . $db_database . ';'; try { $dbconn = new PDO($dsn, $db_username, $db_password); $dbconn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); } catch (PDOException $e) { echo 'Error connecting to database: ' . $e->getMessage(); } ?> Inside the function where the error occurs I have the database connection pulled into scope as a global, as such ... global $dbconn; I am a bit confused as to what is happening since all worked well on my development machine. I was wondering if the PDO library was not installed, but I thought it was installed by default as part of PHP v5. The script is running on an Ubuntu (5.0.51a-3ubuntu5.4) machine and the PHP version is 5.2.4. Thank you for any suggestions. I am really lost on this.
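    "Call to undefined function query()" on a method call usually means $dbconn is not a PDO object in that scope, for instance because the constructor threw and the catch block only echoed the error; a small diagnostic sketch (hypothetical, run on the Linux box alongside the existing dblogin.php):

      <?php
      // Is the MySQL PDO driver actually available on this server?
      var_dump(extension_loaded('pdo'), extension_loaded('pdo_mysql'), PDO::getAvailableDrivers());

      require 'dblogin.php';

      // Did the connection really get created?
      if (!isset($dbconn) || !($dbconn instanceof PDO)) {
          die('No PDO connection - check the pdo/pdo_mysql extensions and the credentials.');
      }
      ?>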

    Read the article

  • PHP's PDO Prepare Method Fails While in a Loop

    - by Andrew G. Johnson
    Given the following code: // Connect to MySQL up here $example_query = $database->prepare('SELECT * FROM table2'); if ($example_query === false) die('prepare failed'); $query = $database->prepare('SELECT * FROM table1'); $query->execute(); while ($results = $query->fetch()) { $example_query = $database->prepare('SELECT * FROM table2'); if ($example_query === false) die('prepare failed'); //it will die here } I obviously attempt to prepare statements to SELECT everything from table2 twice. The second one (the one in the WHILE loop) always fails and causes an error. This happens only on my production server; developing locally I have no issues, so it must be some kind of setting somewhere. My immediate thought is that MySQL has some kind of max_connections setting that is set to 1 and a connection is kept open until the WHILE loop is completed, so when I try to prepare a new query it says "nope too many connected already" and shits out. Any ideas? EDIT: Yes, I know there's no need to do it twice; in my actual code it only gets prepared in the WHILE loop, but like I said that fails on my production server, so after some testing I discovered that a simple SELECT * query fails in the WHILE loop but not out of it. The example code I gave is obviously very watered down to just illustrate the issue.
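    One likely culprit is the MySQL driver's unbuffered-query limitation: while the outer SELECT still has unfetched rows, some setups refuse to prepare or run another statement. A sketch of two common workarounds (enable buffered queries, or fetch the outer result completely before looping):

      // Option 1: turn on buffered queries for this connection
      $database->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, true);

      // Option 2: pull the whole outer result set into memory first
      $query = $database->prepare('SELECT * FROM table1');
      $query->execute();
      $rows = $query->fetchAll();
      $query->closeCursor();

      foreach ($rows as $results) {
          $example_query = $database->prepare('SELECT * FROM table2');
          if ($example_query === false) die('prepare failed');
          // ...
      }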

    Read the article

  • PHP File upload fails with blank screen when size exceeds 742 KB.

    - by user284839
    Hi friends, This is one of the most bizarre errors I've come across. So I've written a little file upload web app for my friend and it works fine for any file less than or equal to 742kB in size. Needless to say, I arrived at this precise number based on relentless testing. Weird part is that if the file size is just a few KB more, for example 743 or 750, I get an error saying "MySQL has gone away". But if it is 1MB or more, then I just get a blank screen. And it happens in less than 2 seconds after I hit the upload button. So it doesn't look like a time-out to me. I checked out the PHP.ini file for post size and upload size, they are all set to 5 MB or more. And the timeout is set to 60 seconds. The uploaded file sits in MySQL database in a field of datatype mediumblob. I tried changing that to longblob. But that didn't help either. Any help? Thanks for reading, Girish
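    "MySQL has gone away" on medium-sized blobs is very often the server's max_allowed_packet limit (the default was 1MB in that era, and escaping overhead can push a ~742KB upload over it); a sketch of the usual fix, assuming access to the server configuration:

      -- check the current limit
      SHOW VARIABLES LIKE 'max_allowed_packet';

      -- raise it permanently in my.cnf under [mysqld]:
      --   max_allowed_packet = 16M
      -- or, for the running server (needs SUPER; new connections pick it up):
      SET GLOBAL max_allowed_packet = 16 * 1024 * 1024;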

    Read the article

  • Request for the permission of type 'System.Data.Odbc.OdbcPermission.. help needed

    - by Matt
    I'm getting the following error when trying to connect to a remote MySQL server. Request for the permission of type 'System.Data.Odbc.OdbcPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed. I've installed the ODBC 5.1 driver, and can connect to the database using the Data Sources (ODBC) tool in Control Panel. However, when I try to run my C# script to connect, I get the above error. I've read it's something to do with trust levels? I don't quite understand what people were talking about though. I went to C:... Framework/v2.0.50727/CONFIG and added to the medium and high trust.config files, but that didn't help.. Can someone help me out here please? My connection string is MyConString = "DRIVER={MySQL ODBC 5.1 Driver};" + "SERVER=" + m_strHost + ";" + "PORT=3306;" + "DATABASE=" + m_strDatabase + ";" + "UID=" + m_strUserName + ";" + "PWD=" + m_strPassword + ";" + "OPTION=3;";
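    If the host allows it, one way to rule trust levels in or out is to run the application at full trust (otherwise the OdbcPermission has to be granted explicitly in the named trust policy file); a hypothetical web.config change:

      <system.web>
        <trust level="Full" />
      </system.web>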

    Read the article

  • What is the proper query to get all the children in a tree?

    - by Nathan Adams
    Let's say I have the following MySQL structure: CREATE TABLE `domains` ( `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT, `domain` CHAR(50) NOT NULL, `parent` INT(11) DEFAULT NULL, PRIMARY KEY (`id`) ) ENGINE=MYISAM AUTO_INCREMENT=10 DEFAULT CHARSET=latin1 insert into `domains`(`id`,`domain`,`parent`) values (1,'.com',0); insert into `domains`(`id`,`domain`,`parent`) values (2,'example.com',1); insert into `domains`(`id`,`domain`,`parent`) values (3,'sub1.example.com',2); insert into `domains`(`id`,`domain`,`parent`) values (4,'sub2.example.com',2); insert into `domains`(`id`,`domain`,`parent`) values (5,'s1.sub1.example.com',3); insert into `domains`(`id`,`domain`,`parent`) values (6,'s2.sub1.example.com',3); insert into `domains`(`id`,`domain`,`parent`) values (7,'sx1.s1.sub1.example.com',5); insert into `domains`(`id`,`domain`,`parent`) values (8,'sx2.s2.sub1.example.com',6); insert into `domains`(`id`,`domain`,`parent`) values (9,'x.sub2.example.com',4); In my mind that is enough to emulate a simple tree structure: .com | example / \ sub1 sub2 etc. My problem is that, given sub1.example.com, I want to know all the children of sub1.example.com without using multiple queries in my code. I have tried joining the table to itself and tried to use subqueries, but I can't think of anything that will reveal all the children. At work we are using MPTT to keep a list of domains/subdomains in hierarchical order; however, I feel that there is an easier way to do it. I did some digging and someone did something similar, but they required the use of a function in MySQL. I don't think for something simple like this we would need a whole function. Maybe I am just dumb and not seeing some sort of obvious solution. Also, feel free to alter the structure.
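    Because each row stores the full domain name, one way to get every descendant of sub1.example.com in a single query is a suffix match, with no recursion or stored function needed (a sketch against the table above):

      SELECT *
      FROM domains
      WHERE domain LIKE '%.sub1.example.com'   -- children, grandchildren, ...
         OR domain = 'sub1.example.com';       -- include the node itself, if wanted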

    Read the article

  • Querying a self referencing join with NHibernate Linq

    - by Ben
    In my application I have a Category domain object. Category has a property Parent (of type Category). So in my NHibernate mapping I have: <many-to-one name="Parent" column="ParentID"/> Before I switched to NHibernate I had the ParentId property on my domain model (mapped to the corresponding database column). This made it easy to query for, say, all top level categories (ParentID = 0): where(c => c.ParentId == 0) However, I have since removed the ParentId property from my domain model (because of NHibernate), so I now have to do the same query (using NHibernate.Linq) like so: public IList<Category> GetCategories(int parentId) { if (parentId == 0) return _catalogRepository.Categories.Where(x => x.Parent == null).ToList(); else return _catalogRepository.Categories.Where(x => x.Parent.Id == parentId).ToList(); } The real impact that I can see is in the SQL generated. Instead of a simple 'select x,y,z from categories where parentid = 0' NHibernate generates a left outer join: SELECT this_.CategoryId as CategoryId4_1_, this_.ParentID as ParentID4_1_, this_.Name as Name4_1_, this_.Slug as Slug4_1_, parent1_.CategoryId as CategoryId4_0_, parent1_.ParentID as ParentID4_0_, parent1_.Name as Name4_0_, parent1_.Slug as Slug4_0_ FROM Categories this_ left outer join Categories parent1_ on this_.ParentID = parent1_.CategoryId WHERE this_.ParentID is null which doesn't seem much less efficient than what I had before. Is there a better way of querying these self-referencing joins, as it's very tempting to drop the ParentID back onto my domain model for this reason? Thanks, Ben
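    If the join exists only to filter on the key, one option is to map the foreign-key column a second time as a read-only value property next to the association, so queries can compare ParentId directly without joining to Parent; a sketch of the mapping (hypothetical property name; insert/update disabled so the association stays authoritative):

      <many-to-one name="Parent" column="ParentID"/>
      <property name="ParentId" column="ParentID" insert="false" update="false"/>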

    Read the article

  • Does REPLACE INTO have a WHERE clause?

    - by Lajos Arpad
    I'm writing an application and I'm using MySQL as the DBMS; we are downloading property offers and there were some performance issues. The old architecture looked like this: A property is updated. If the number of affected rows is not 1, then the update is not considered successful; otherwise the update query solves our problem. If the update was not successful and the number of affected rows is more than 1, we have duplicates and we delete all of them. After deleting duplicates if needed, if the update was still not successful, an insert happens. This architecture was working well, but there were some speed issues, because properties are deleted if they were not updated for 15 days. Theoretically the main problem is deleting properties, because some properties are alive for months and the indexes are very far from each other (we are talking about 500,000+ properties). Our host told me to use replace into instead of deleting properties, and to consider all deprecated properties as DEAD. I've done this, but problems started to occur because of a syntax error, and I couldn't find anywhere an example of replace into with a where clause (I'd like to replace a DEAD property with the new property instead of deleting the old property and inserting a new one, to keep things optimized). My query looked like this: replace into table_name(column1, ..., columnn) values(value1, ..., valuen) where ID = idValue Of course, I've calculated idValue and handled everything, but I had a syntax error. I would like to know if I'm wrong and there is a where clause for replace into. I've found an alternative solution, which is even better than replace into (simply using an update query) because deletes happen behind the curtains with replace into, but I would like to know if I'm wrong when I say that replace into doesn't have a where clause. For more reference, see this link: http://dev.mysql.com/doc/refman/5.0/en/replace.html Thank you for your answers in advance, Lajos Árpád
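    For reference, REPLACE INTO has no WHERE clause; it matches rows only through the PRIMARY KEY or a UNIQUE index. A sketch of the two usual alternatives, assuming ID is the primary key:

      -- REPLACE matches on the key, so the "where" is implied by the ID value:
      REPLACE INTO table_name (ID, column1, columnn) VALUES (idValue, value1, valuen);

      -- or update the row in place, avoiding the hidden DELETE + INSERT:
      INSERT INTO table_name (ID, column1, columnn) VALUES (idValue, value1, valuen)
      ON DUPLICATE KEY UPDATE column1 = VALUES(column1), columnn = VALUES(columnn);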

    Read the article

  • Call Multiple Stored Procedures with the Zend Framework

    - by Brian Fisher
    I'm using Zend Framework 1.7.2, MySQL and the MySQLi PDO adapter. I would like to call multiple stored procedures during a given action. I've found that on Windows there is a problem calling multiple stored procedures. If you try it you get the following error message: SQLSTATE[HY000]: General error: 2014 Cannot execute queries while other unbuffered queries are active. Consider using PDOStatement::fetchAll(). Alternatively, if your code is only ever going to run against mysql, you may enable query buffering by setting the PDO::MYSQL_ATTR_USE_BUFFERED_QUERY attribute. I found that to work around this issue I could just close the connection to the database after each call to a stored procedure: if (strtoupper(substr(PHP_OS, 0, 3)) === 'WIN') { //If on windows close the connection $db->closeConnection(); } This has worked well for me, however, now I want to call multiple stored procedures wrapped in a transaction. Of course, closing the connection isn't an option in this situation, since it causes a rollback of the open transaction. Any ideas, how to fix this problem and/or work around the issue. More info about the work around Bug report about the problem
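    A workaround that often avoids dropping the whole connection is to drain and close each statement's cursor before issuing the next CALL, which frees the pending result set MySQL complains about; a rough sketch ($db standing in for the Zend_Db adapter, procedure names hypothetical):

      $stmt = $db->query('CALL first_procedure(?)', array($param1));
      $rows = $stmt->fetchAll();
      $stmt->closeCursor();          // free the result set so the next CALL can run

      $stmt = $db->query('CALL second_procedure(?)', array($param2));
      $moreRows = $stmt->fetchAll();
      $stmt->closeCursor();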

    Read the article

  • Why do Unicode characters show up properly in database, but as ? when printed in Java via Hibernate?

    - by lupefiasco
    I'm writing a webapp, and interfacing with MySQL using Hibernate 3.5. Using "?????? ?????????" as my test string, I can input the string and see that it is properly persisted into the database. However, when I later pull the value out of the database and print to the console as a String, I see "?????? ?????????". If I use new OutputStreamWriter(System.out,"UTF-8"); then I get "„Éá„Çp„ÇØ„Éà„ÉÉ„Éó ·Éò·Éú·Éí·Éö·Éò·É°·É£·É†·Éò"". Why don't I see the original string? These are my hibernate.cfg.xml settings: <property name="hibernate.connection.useUnicode"> true </property> <property name="hibernate.connection.characterEncoding"> UTF-8 </property> <property name="hibernate.connection.charSet"> UTF-8 </property> and this is my database connection string: hibernate.connection.url = jdbc:mysql://localhost/mydatabase?autoReconnect=true&amp;useUnicode=true&amp;characterEncoding=UTF-8

    Read the article

  • Is it possible to convert a 40-character SHA1 hash to a 20-character SHA1 hash?

    - by ewitch
    My problem is a bit hairy, and I may be asking the wrong questions, so please bear with me... I have a legacy MySQL database which stores the user passwords & salts for a membership system. Both of these values have been hashed using the Ruby framework - roughly like this: hashedsalt = Digest::SHA1.hexdigest("--#{Time.now.to_s}--#{login}--") hashedpassword = Digest::SHA1.hexdigest("#{hashedsalt}:#{password}") So both values are stored as 40-character strings (varchar(40)) in MySQL. Now I need to import all of these users into the ASP.NET membership framework for a new web site, which uses a SQL Server database. It is my understanding that the way I have ASP.NET membership configured, the user passwords and salts are also stored in the membership database (in table aspnet_Membership) as SHA1 hashes, which are then Base64 encoded (see here for details) and stored as nvarchar(128) data. But from the length of the Base64 encoded strings that are stored (28 characters) it seems that the SHA1 hashes that ASP.NET membership generates are only 20 characters long, rather than 40. From some other reading I have been doing I am thinking this has to do with the number of bits per character/character set/encoding or something related. So is there some way to convert the 40-character SHA1 hashes to 20-character hashes which I can then transfer to the new ASP.NET membership data table? I'm pretty familiar with ASP.NET membership by now but I feel like I'm just missing this one piece. However, it may also be known that SHA1 in Ruby and SHA1 in .NET are incompatible, so I'm fighting a losing battle... Thanks in advance for any insight.
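    For what it's worth, the two formats describe the same 20-byte SHA-1 digest: Ruby's hexdigest writes it as 40 hex characters, while ASP.NET membership Base64-encodes the raw bytes (20 bytes becomes 28 Base64 characters). A sketch of the conversion; note it only helps if the salt-and-password concatenation scheme also matches what the membership provider expects, which is a separate issue:

      public static string HexSha1ToBase64(string hex)
      {
          byte[] bytes = new byte[hex.Length / 2];                      // 40 hex chars -> 20 bytes
          for (int i = 0; i < bytes.Length; i++)
              bytes[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);
          return Convert.ToBase64String(bytes);                         // 20 bytes -> 28 Base64 chars
      }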

    Read the article

  • What are the reasons to store documents into DBMS when using Alfresco CMS

    - by Julia
    Hello guys! I have an interview for an internship with a company that wants to implement a document management system. They are considering open source solutions first, their top choice being Alfresco, but the decision is still not final; part of my work there would be to investigate whether Alfresco is the best solution. What I have seen from the project description is that they would implement Alfresco with a MySQL database, and not use the DBMS just for document metadata and indexing; they actually want to store the documents inside it. By the company profile, the documents would be mostly PDF and .doc, not images. I have researched a bit, and I have read all the topics here related to storing files in the database, so as not to duplicate a question. From what I understand, storing BLOBs is generally not recommended, and given the company's profile and their legal archiving obligations, I expect they will have to store a large amount of docs. I would like to be as ready as I can for the interview, and that is why I would like your opinion on these questions: What would be your reasons for deciding to store documents in the DBMS (especially having in mind that you are installing Alfresco, which stores files in the FS)? Do you have any experience with storing documents in a MySQL database specifically? All help is very much appreciated; I am really excited about the interview and really want this internship, so this is one of the things I really want to understand beforehand! Thank you!

    Read the article

  • iterate through fb:random

    - by fusion
    Does anyone know how I would fill fb:random with data from a MySQL database and iterate through it to pick a random quote? fb:random $facebook->api_client->fbml_setRefHandle('quotes', '<fb:random> <fb:random-option>Quote 1</fb:random-option> <fb:random-option>Quote 2</fb:random-option> </fb:random>'); MySQL data: $rowcount = mysql_result($result, 0, 0); $rand = rand(0,$rowcount-1); $result = mysql_query("SELECT cQuotes, vAuthor, cArabic, vReference FROM thquotes LIMIT $rand, 1", $conn) or die ('Error: '.mysql_error()); $row = mysql_fetch_array($result, MYSQL_ASSOC); if ( !$row ) { echo "Empty"; } else{ $fb_box = "<p>" . h($row['cArabic']) . "</p>"; $fb_box .= "<p>" . h($row['cQuotes']) . "</p>"; $fb_box .= "<p>" . h($row['vAuthor']) . "</p>"; $fb_box .= "<p>" . h($row['vReference']) . "</p>"; }
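    A rough sketch of building the fb:random options from the query results instead of hard-coding them (reusing the connection, table and fbml_setRefHandle call from the question; values are escaped before being placed in the FBML):

      $result = mysql_query("SELECT cQuotes, vAuthor FROM thquotes", $conn)
          or die('Error: ' . mysql_error());

      $fbml = '<fb:random>';
      while ($row = mysql_fetch_array($result, MYSQL_ASSOC)) {
          $fbml .= '<fb:random-option>'
                 . htmlspecialchars($row['cQuotes']) . ' - ' . htmlspecialchars($row['vAuthor'])
                 . '</fb:random-option>';
      }
      $fbml .= '</fb:random>';

      // fb:random picks one option at render time, so every row can be included
      $facebook->api_client->fbml_setRefHandle('quotes', $fbml);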

    Read the article

  • How to set SQL_BIG_SELECTS = 1 from VBScript (classic ASP) in an ADODB environment?

    - by conecon
    I encountered the "The SELECT would examine more than MAX_JOIN_SIZE rows; check your WHERE and use SET SQL_BIG_SELECTS=1 or SET SQL_MAX_JOIN_SIZE=# if the SELECT is okay" error with my ASP code. The ASP code has a server-side ADODB connection to MySQL, and the connection does not seem to be able to execute multiple statements in one query. How do I implement SQL_BIG_SELECTS = 1 in my code? Set obj_db = Server.CreateObject("ADODB.Connection") Session("ConnectionString") = "dsn=dsn1016189_mysql;uid=apns;pwd=mypassword;DATABASE=mydb;APP=ASP Script;STMT=SET CHARACTER SET SJIS" obj_db.Open Session("ConnectionString") Set obj_ret = Server.CreateObject("ADODB.Recordset") obj_ret.CursorLocation = 3 and executing the SQL... SQL_BIG_SELECTS = 1; SELECT pu.login_id, pu.p_login_id, pu.first_name, pu.last_name, pu.sex, pu.is_admin, pu.attendance, pu.invited, pu.reason, qaa1.answer AS qaa1_answer, COUNT(pu2.p_login_id) AS companion FROM party_user pu LEFT OUTER JOIN party_user pu2 ON pu2.p_login_id = pu.login_id LEFT OUTER JOIN qa_answer qaa1 ON qaa1.login_id = pu.login_id AND qaa1.party_id = pu.party_id AND qaa1.sort_num = '1' WHERE pu.party_id = '92' AND pu.p_login_id = '' GROUP BY pu.login_id, pu.p_login_id, pu.first_name, pu.last_name, pu.sex, pu.is_admin, pu.attendance, pu.reason, qaa1.answer, pu.invited ORDER BY pu.login_id ASC; I can't execute multiple statements, and the above query produces this error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'SELECT pu.login_id, pu.p_login_id, pu.first_name, pu.last_name, pu.sex, pu.is_ad' at line 1
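    Since the statement string cannot contain two statements, one approach is to send the SET as its own Execute call on the same connection before opening the SELECT; a sketch in the same classic-ASP style (strSelectSql is a hypothetical variable holding only the SELECT, with no SET prefix):

      obj_db.Open Session("ConnectionString")
      obj_db.Execute "SET SQL_BIG_SELECTS=1"      ' separate round-trip, same connection/session

      Set obj_ret = Server.CreateObject("ADODB.Recordset")
      obj_ret.CursorLocation = 3
      obj_ret.Open strSelectSql, obj_db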

    Read the article

  • Integer Surrogate Key?

    - by CitadelCSAlum
    I need something real simple that for some reason I have been unable to accomplish to this point. Surprisingly, I have also been unable to Google the answer. I need to number the entries in different tables uniquely. I am aware of AUTO_INCREMENT in MySQL and that it will be involved. My situation is as follows. If I had a table like Table EXAMPLE ID - INTEGER FIRST_NAME - VARCHAR(45) LAST_NAME - VARCHAR(45) PHONE - VARCHAR(45) CITY - VARCHAR(45) STATE - VARCHAR(45) ZIP - VARCHAR(45) This would be the setup for the table where ID is an integer that is auto-incremented every time an entry is inserted into the table. The thing is, I do not want to have to account for this field when inserting data into the database. From my understanding this would be a surrogate key that I can tell the database to automatically increment, so I do not have to include it in the INSERT statement. So instead of INSERT INTO EXAMPLE VALUES (2,'JOHN','SMITH',333-333-3333,'NORTH POLE'.... I can leave out the first ID column and just write something like INSERT INTO EXAMPLE VALUES ('JOHN','SMITH'.....etc) Notice I wouldn't have to define the ID column... I know this is a very common task to do, but for some reason I can't get to the bottom of it. I am using MySQL, just to clarify. Thanks a lot
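    A minimal sketch of the usual pattern: declare the surrogate key as AUTO_INCREMENT, then name the remaining columns explicitly in the INSERT so the ID can be left out (sample values are just illustrative):

      CREATE TABLE example (
          id         INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
          first_name VARCHAR(45),
          last_name  VARCHAR(45),
          phone      VARCHAR(45),
          city       VARCHAR(45),
          state      VARCHAR(45),
          zip        VARCHAR(45)
      );

      -- the ID is omitted by naming the other columns; MySQL fills it in
      INSERT INTO example (first_name, last_name, phone, city, state, zip)
      VALUES ('JOHN', 'SMITH', '333-333-3333', 'NORTH POLE', 'AK', '99705');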

    Read the article

  • Dealing with Time Zones

    - by RavIncredible
    Hi All, I need some guidance with one of my project requirements. I am developing an application which has to deal with various time zones. Scenario: User 1 is from India, so his time zone would be GMT+05:30. User 2 is from the UK, so his time zone would be GMT+01:00. If User 1 sends a message to User 2, I want to show the message sent/received time as per each user's time zone. For example, if User 1 sends a message at 6:30 Indian time, when User 2 views the message it would show as 2:00 UK time. Here is my question: whenever I save the message, should I convert it to GMT+00:00 so all my base timestamps are the same, and then later, when I display the message, convert it back to the user-specific time zone? Would this be complex? Is this the best way of doing it? I would like to get views on both saving and displaying, and also on when I should do the time conversion from an optimization point of view. I would need to deal with any/all time zones. I am developing this application with PHP and MySQL and I am aware of the time zone conversion methods that come with both PHP and MySQL. I am just trying to figure out the best way of doing this. Looking forward to your suggestions. Note: as of now I am not too worried about daylight saving. Thanks Ravi
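    Storing everything as UTC and converting only for display is the common answer; a minimal sketch with PHP's DateTime (hypothetical format and zone strings such as 'Asia/Kolkata' and 'Europe/London', kept per user):

      // saving: normalize the send time to UTC before it goes into MySQL
      $sent = new DateTime('now', new DateTimeZone('Asia/Kolkata'));   // sender's zone
      $sent->setTimezone(new DateTimeZone('UTC'));
      $utcForDb = $sent->format('Y-m-d H:i:s');

      // displaying: convert the stored UTC value into the viewer's zone
      $shown = new DateTime($utcForDb, new DateTimeZone('UTC'));
      $shown->setTimezone(new DateTimeZone('Europe/London'));
      echo $shown->format('Y-m-d H:i:s');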

    Read the article

  • Error creating a trigger using a PHP script. Please help!

    - by Parth
    I've been successful in creating a simple trigger with a PHP script. The problem comes when I have to use "DELIMITER" in my trigger creation because I have nested if/then statements. For example, the following DOES work: $sql = 'CREATE TRIGGER `database`.`detail` BEFORE INSERT on `database`.`vibez_detail` FOR EACH ROW set @a=new.realtime_value'; $junk = mysqli_query($link, $sql); However, if I need to use "DELIMITER" I get errors galore. Here's what works if I go to the MySQL client at the server: DELIMITER $$; DROP TRIGGER `database`.`detail`$$ CREATE TRIGGER `database`.`detail` BEFORE INSERT on `database`.`vibez_detail` FOR EACH ROW BEGIN set new.data_rate = 69; insert into alarm_trips_parent values (null, new.data_rate, 6969, now()); set @tripped_time = now(); set @tripped:=true; end if; end if; END$$ DELIMITER ;$$ How can I construct the PHP code to get this to work? It seems mysqli_query($link, $sql) chokes as soon as I put "DELIMITER" in the $sql variable. I get the following error when I attempt to do so: Error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'DELIMITER //; CREATE TRIGGER `database`.`detail` BEFORE INS' at line 1 Even using mysqli_multi_query() didn't work for me... :(
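    DELIMITER is a directive of the mysql command-line client, not of SQL, so the server (and mysqli) rejects it; the usual approach is to send the whole CREATE TRIGGER, BEGIN ... END included, as one statement per mysqli_query call with no DELIMITER lines. A rough sketch (trigger body abbreviated to the valid part, same $link as above):

      $drop = 'DROP TRIGGER IF EXISTS `database`.`detail`';

      $create = 'CREATE TRIGGER `database`.`detail`
                 BEFORE INSERT ON `database`.`vibez_detail`
                 FOR EACH ROW
                 BEGIN
                     SET NEW.data_rate = 69;
                     INSERT INTO alarm_trips_parent VALUES (NULL, NEW.data_rate, 6969, NOW());
                 END';

      // one statement per call, so no DELIMITER is needed
      mysqli_query($link, $drop)   or die(mysqli_error($link));
      mysqli_query($link, $create) or die(mysqli_error($link));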

    Read the article
