Search Results

Search found 32538 results on 1302 pages for 'restore database'.


  • Error when running "rake db:create RAILS_ENV='development'"

    - by Dean
    Hi, I am getting this error in my terminal when I execute the command above:

        Deans-MacBook:depot dean$ rake db:create RAILS_ENV='development'
        (in /Users/dean/src/RailsBook/depot)
        Couldn't create database for {"username"=>"root", "adapter"=>"mysql",
        "database"=>"depot_development", "host"=>"localhost", "password"=>nil},
        charset: utf8, collation: utf8_unicode_ci
        (if you set the charset manually, make sure you have a matching collation)

    In my database config file I have the following:

        development:
          adapter: mysql
          database: depot_development
          username: root
          password:
          host: localhost

    I have the mysql gem installed and now I am unsure what to do next. I am running Snow Leopard on a MacBook. Does anyone know why this error is happening? Thanks in advance, Dean
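    One quick way to narrow this down (a hedged suggestion, not from the original post) is to try the same credentials outside Rails; if the mysql client can't connect or create the database either, the problem is MySQL setup rather than Rails:

        # try connecting with the exact credentials from database.yml
        mysql -h localhost -u root

        # if that works, try creating the database by hand with the same charset
        mysql -h localhost -u root -e \
          "CREATE DATABASE depot_development CHARACTER SET utf8 COLLATE utf8_unicode_ci;"

    On Snow Leopard a common culprit is the mysql gem having been built for the wrong architecture or the server socket not being where the gem expects, so a manual connection test separates the two failure modes.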


  • Auto-switching databases gracefully from the ApplicationController in a Rails app?

    - by Zaqintosh
    I've seen this post a few times, but haven't really found the answer to this specific question. Based on the detected request.host, I'd like my Rails application to pick its database (imagine I have two subdomains pointing to the same Rails app and server IP address: myapp1.domain.com and myapp2.domain.com). I'm trying to have myapp1 use the default "production" database, and myapp2 requests always use the alternative remote database. Here is an example of what I tried to do in ApplicationController that did not work:

        class ApplicationController < ActionController::Base
          helper :all
          before_filter :use_alternate_db

          private

          def use_alternate_db
            if request.host == 'myapp1.domain.com'
              regular_db
            elsif request.host == 'myapp2.domain.com'
              alternate_db
            end
          end

          def regular_db
            ActiveRecord::Base.establish_connection :production
          end

          def alternate_db
            ActiveRecord::Base.establish_connection(
              :adapter  => 'mysql',
              :host     => '...',
              :username => '...',
              :password => '...',
              :database => 'alternatedb'
            )
          end
        end

    The problem is that when it switches databases using this method, all connections (including valid sessions across the different subdomains) get interrupted. All examples online have people controlling database connectivity at the model level, but this would involve adding code all over my application. Is there some way to globally switch database connections on a per-request basis in the manner I'm suggesting above WITHOUT having to inject code all over my application? The added complexity here is that I'm using Heroku as a hosting provider, so I have no control at the Apache / Rails application server level. I have looked at solutions like dbcharmer and magicmodels, but none seem to show examples of doing it in the manner that I'm trying to. Thanks for any help!
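    For what it's worth, a minimal sketch of one commonly suggested variant (assuming both databases are defined in database.yml; the spec name myapp2_production is made up) switches by configuration name in an around_filter and restores the default afterwards, so a request can never leave the process pointed at the wrong database:

        # config/database.yml defines "production" and "myapp2_production"
        class ApplicationController < ActionController::Base
          around_filter :with_request_db

          private

          def with_request_db
            if request.host == 'myapp2.domain.com'
              ActiveRecord::Base.establish_connection :myapp2_production
            end
            yield
          ensure
            # put the default connection back before the next request
            ActiveRecord::Base.establish_connection :production
          end
        end

    Note this still swaps the connection pool process-wide, so under a threaded server two concurrent requests for different subdomains can race; the model-level establish_connection (one abstract subclass per database) is the usual recommendation precisely because it avoids that global swap.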


  • Sitecore - Rich Text Editor field is not saving information but instead just copying old information

    - by Younes
    We are using Sitecore.NET 5.3.1 (rev. 071114) and we have found a problem. When we try to change information in a Rich Text Editor field on the Master database and save it, the new information is not stored; instead the old information reappears in the RTE field. I have tried this on the Web database, where it does not happen. However, changing this information on the Web database feels useless, because a publish will overwrite any information that does not correspond to the data in the Master database, in which I just can't edit this field. So I'm having big trouble at this point, since this is for one of our bigger customers and they really want this fixed asap. We (Estate Internet) have already had an open ticket for this problem, but never got a solution. Hope that someone here knows what the problem may be.


  • How to provide a custom connection string for the Logging Application Block instead of using the one in .config

    - by Rory
    I'm modifying an existing WinForms application to use the Logging Application Block. For historical reasons this app gets its main database connection string from the registry, and I'd like the Logging Application Block to use the same details for logging to the database. How can I do this? The approaches I can think of are:

    1) Create a new TraceListener and implement the same sort of functionality as in FormattedDatabaseTraceListener. If I take this approach, should I inherit from CustomTraceListener, and if so how do I pass in an attribute for the formatter to use?

    2) Create a new ConfigurationSource that provides different details when asked for the database connection. All other requests would be passed through to a FileConfigurationSource, but when asked for the database connection details the object would read the appropriate bits from the registry instead.

    But it's not obvious which is more appropriate, or how to go about doing it. Any suggestions? I'm using EntLib 3.1. Thanks, -Rory
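    A minimal sketch of approach 1 (hedged: the registry key and class name here are invented, and the database-insert body is elided). With CustomTraceListener, EntLib populates the Formatter property from the formatter attribute on the listener's configuration element, so no extra plumbing is needed for that part:

        using System.Diagnostics;
        using Microsoft.Win32;
        using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
        using Microsoft.Practices.EnterpriseLibrary.Logging;
        using Microsoft.Practices.EnterpriseLibrary.Logging.Configuration;
        using Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners;

        [ConfigurationElementType(typeof(CustomTraceListenerData))]
        public class RegistryDatabaseTraceListener : CustomTraceListener
        {
            public override void TraceData(TraceEventCache eventCache, string source,
                                           TraceEventType eventType, int id, object data)
            {
                // Use the formatter configured via the listener's formatter attribute, if any.
                LogEntry entry = data as LogEntry;
                string message = (Formatter != null && entry != null)
                    ? Formatter.Format(entry)
                    : data.ToString();
                WriteLine(message);
            }

            public override void Write(string message) { WriteLine(message); }

            public override void WriteLine(string message)
            {
                // Hypothetical registry location; substitute the app's real key.
                string connStr = (string)Registry.GetValue(
                    @"HKEY_LOCAL_MACHINE\SOFTWARE\MyApp", "ConnectionString", null);
                // Open a SqlConnection(connStr) here and run the logging insert,
                // mirroring what FormattedDatabaseTraceListener does internally.
            }
        }

    Approach 2 also works, but it means reimplementing configuration plumbing; a listener that reads the registry keeps the change local to logging.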


  • SQL Server 2005 database password recovery

    - by air
    I have a database in MS SQL Server 2005. I created it a long time back and now I want to modify it, but I have lost the password. I remember the user name for that database. Is there any way to recover the password for that database, or to change it? Thanks
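    For reference (a hedged aside, not from the original question): SQL Server passwords can't be recovered, but anyone with sysadmin rights on the instance can simply set a new one for the login:

        -- SQL Server 2005: reset the login's password (run from a sysadmin connection)
        ALTER LOGIN [your_login] WITH PASSWORD = 'NewStrongPassword!';

        -- older-style equivalent that also works on 2005
        EXEC sp_password NULL, 'NewStrongPassword!', 'your_login';

    If no sysadmin login is available, starting the instance in single-user mode (the -m startup flag) lets a member of the local Administrators group connect as sysadmin and run the same statement.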


  • How to make a GRANT persist for a table that's being dropped and re-created?

    - by Eli Courtwright
    I'm on a fairly new project where we're still modifying the design of our Oracle 11g database tables. As such, we drop and re-create our tables fairly often to make sure that our table creation scripts work as expected whenever we make a change.

    Our database consists of 2 schemas. One schema has some tables with INSERT triggers which cause the data to sometimes be copied into tables in our second schema. This requires us to log into the database with an admin account such as sysdba and GRANT the first schema access to the necessary tables in the second schema, e.g.

        GRANT ALL ON schema_two.SomeTable TO schema_one;

    Our problem is that every time we make a change to our database design and want to drop and re-create our tables, the access we granted to schema_one goes away when the table is dropped. This creates another annoying step wherein we must log in with an admin account to re-grant the access every time one of these tables is dropped and re-created. This isn't a huge deal, but I'd love to eliminate as many steps as possible from our development and testing procedures. Is there any way to GRANT access to a table such that the granted permissions survive the table being dropped and re-created? And if this isn't possible, is there a better way to go about this?
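    One low-ceremony option (a sketch, under the assumption that the re-creation scripts can be run as the schema owners themselves): an owner can always grant on its own objects without any admin account, so putting the GRANT at the end of schema_two's creation script removes the extra login entirely:

        -- run as schema_two, immediately after re-creating the table
        CREATE TABLE SomeTable ( id NUMBER PRIMARY KEY /* ... other columns ... */ );
        GRANT ALL ON SomeTable TO schema_one;

    Grants in Oracle attach to the object rather than the name, so nothing short of re-granting after each CREATE will survive a DROP; the alternative people usually reach for is an AFTER CREATE ON SCHEMA DDL trigger that re-issues the grants automatically, but baking the GRANT into the creation script is simpler to reason about.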


  • Trouble with Berkeley DB JE Base API Secondary Databases and Sequences

    - by milosz
    I have a class Document which consists of Id (int) and Url (String). I would like to have a primary index on Id and a secondary index on Url. I would also like to have a sequence for Id auto-incrementation. So I create a SecondaryDatabase and then I create a Sequence. During initialisation of the Sequence I get an exception:

        Exception in thread "main" java.lang.IllegalArgumentException
            at com.sleepycat.util.UtfOps.getCharLength(UtfOps.java:137)
            at com.sleepycat.util.UtfOps.bytesToString(UtfOps.java:259)
            at com.sleepycat.bind.tuple.TupleInput.readString(TupleInput.java:152)
            at pl.edu.mimuw.zbd.berkeley.zadanie.rozwiazanie.MyDocumentBiding.entryToObject(MyDocumentBiding.java:12)
            at pl.edu.mimuw.zbd.berkeley.zadanie.rozwiazanie.MyDocumentBiding.entryToObject(MyDocumentBiding.java:1)
            at com.sleepycat.bind.tuple.TupleBinding.entryToObject(TupleBinding.java:76)
            at pl.edu.mimuw.zbd.berkeley.zadanie.rozwiazanie.UrlKeyCreator.createSecondaryKey(UrlKeyCreator.java:20)
            at com.sleepycat.je.SecondaryDatabase.updateSecondary(SecondaryDatabase.java:835)
            at com.sleepycat.je.SecondaryTrigger.databaseUpdated(SecondaryTrigger.java:42)
            at com.sleepycat.je.Database.notifyTriggers(Database.java:2004)
            at com.sleepycat.je.Cursor.putNotify(Cursor.java:1692)
            at com.sleepycat.je.Cursor.putInternal(Cursor.java:1616)
            at com.sleepycat.je.Cursor.putNoOverwrite(Cursor.java:663)
            at com.sleepycat.je.Sequence.<init>(Sequence.java:188)
            at com.sleepycat.je.Database.openSequence(Database.java:546)
            at pl.edu.mimuw.zbd.berkeley.zadanie.rozwiazanie.MyFullTextSearchEngine.init(MyFullTextSearchEngine.java:131)
            at pl.edu.mimuw.zbd.berkeley.zadanie.testy.MyFullTextSearchEngineTest.main(MyFullTextSearchEngineTest.java:18)

    It seems that during the initialisation of the sequence the secondary database is forced to update. When I debug the entryToObject method of MyDocumentBiding, the bytes that it tries to convert to an object seem random. What am I doing wrong?
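    Reading the trace, openSequence writes its own bookkeeping record into the database, the secondary trigger then feeds that non-Document record to UrlKeyCreator, and the binding chokes on its bytes. A hedged sketch of the usual workaround (names are illustrative; an open Environment env is assumed) is to keep sequences in a dedicated database so they never pass through the document binding:

        import com.sleepycat.je.*;

        // A separate, plain database that exists only to hold sequence records.
        DatabaseConfig seqDbConfig = new DatabaseConfig();
        seqDbConfig.setAllowCreate(true);
        Database sequenceDb = env.openDatabase(null, "sequences", seqDbConfig);

        SequenceConfig seqConfig = new SequenceConfig();
        seqConfig.setAllowCreate(true);
        Sequence docIdSeq = sequenceDb.openSequence(
            null, new DatabaseEntry("documentId".getBytes()), seqConfig);

        long nextId = docIdSeq.get(null, 1); // reserve the next Id for a new Document

    The other fix sometimes suggested is to have createSecondaryKey return false for records it can't parse (the SecondaryKeyCreator contract allows skipping a record), but segregating the sequence into its own database is cleaner.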


  • Updating multiple related tables in SQLite with C#

    - by PerryJ
    Just some background, sorry so long-winded. I'm using the System.Data.SQLite ADO.NET adapter to create a local SQLite database, and this will be the only process hitting the database, so I don't need to worry about concurrency. I'm building the database from various sources and don't want to build this all in memory using DataSets or DataAdapters or anything like that. I want to do this using SQL (DbCommands). I'm not very good with SQL and a complete noob in SQLite. I'm basically using SQLite as a local database / save file structure.

    The database has a lot of related tables. The data has nothing to do with people or regions or districts, but to use a simple analogy, imagine:

        Region table with auto-increment RegionID, RegionName column, and various optional columns
        District table with auto-increment DistrictID, DistrictName, RegionID, and various optional columns
        Person table with auto-increment PersonID, PersonName, DistrictID, and various optional columns

    So I get some data representing RegionName, DistrictName, PersonName, and other Person-related data. The Region, District and/or Person may or may not have been created at this point. Once again, not being the greatest with this, my thoughts would be something like:

        1. Check whether the Region exists; if so get the RegionID, else create it and get the RegionID.
        2. Check whether the District exists; if so get the DistrictID, else create it (adding in the RegionID from above) and get the DistrictID.
        3. Check whether the Person exists; if so get the PersonID, else create it (adding in the DistrictID from above) and get the PersonID.
        4. Update the Person with the rest of the data.

    In MS SQL Server I would create a stored procedure to handle all this. The only way I can see to do this with SQLite is a lot of commands. So I'm sure I'm not getting this. I've spent hours looking around on various sites but just don't feel like I'm going down the right road. Any suggestions would be greatly appreciated.
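    A sketch of the get-or-create step in C# (table and column names follow the analogy above; parameter syntax is System.Data.SQLite's): SQLite's INSERT OR IGNORE plus a follow-up SELECT collapses each "check then create" into two short commands:

        // Returns the RegionID, inserting the row first if it doesn't exist yet.
        static long GetOrCreateRegion(SQLiteConnection conn, string regionName)
        {
            using (var insert = new SQLiteCommand(
                "INSERT OR IGNORE INTO Region (RegionName) VALUES (@name);", conn))
            {
                insert.Parameters.AddWithValue("@name", regionName);
                insert.ExecuteNonQuery(); // no-op if the region already exists
            }
            using (var select = new SQLiteCommand(
                "SELECT RegionID FROM Region WHERE RegionName = @name;", conn))
            {
                select.Parameters.AddWithValue("@name", regionName);
                return (long)select.ExecuteScalar();
            }
        }

    INSERT OR IGNORE needs a UNIQUE constraint on RegionName so "already exists" has a meaning; wrapping the Region/District/Person calls in a single SQLiteTransaction keeps the whole chain fast and atomic, which is the closest SQLite equivalent of the stored procedure.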


  • Are SharePoint site templates really less performant than site definitions?

    - by Jim
    So, it seems in the SharePoint blogosphere that everybody just copies and pastes the same bullet points from other blogs. One bullet point I've seen is that SharePoint site templates are less performant than site definitions because site definitions are stored on the file system. Is that true? It seems odd that site templates would be less performant. It's my understanding that all site content lives in a database, whether you use a site template or a site definition. A site template is applied once to the database, and from then on the site should not care if the content was created using a site template or not. So, does anybody have an architectural reason why a site template would be less performant than a site definition?

    Edit: Links to the blogs that say there is a performance difference:

        From MSDN: "Because it is slow to store templates in and retrieve them from the database, site templates can result in slower performance."

        From DevX: "However, user templates in SharePoint can lead to performance problems and may not be the best approach if you're trying to create a set of reusable templates for an entire organization."

        From IT Footprint: "Because it is slow to store templates in and retrieve them from the database, site templates can result in slower performance. Templates in the database are compiled and executed every time a page is rendered."

        From Branding SharePoint: "Custom site definitions hold the following advantages over custom templates: Data is stored directly on the Web servers, so performance is typically better."

    At a minimum, I think the above articles are incomplete, and I think several are misleading based on what I know of SharePoint's architecture. I read another blog post that argued against the performance differences, but I can't find the link.


  • SqlConnection.Open() throwing exception

    - by flavour404
    Hi, I'm updating an old piece of software, but in order to maintain backward compatibility I need to connect to a .mdb (Access) database. I am using the following connection code but keep getting an exception. Why? I have validated the path, the database's existence, etc., and that is all correct.

        string Server = "localhost";
        string Database = drive + "\\btc2\\state\\states.mdb";
        string Username = "";
        string Password = "Lhotse";
        string ConnectionString = "Data Source = " + Server + ";" +
                                  "Initial Catalog = " + Database + ";" +
                                  "User Id = '';" +
                                  "Password = " + Password + ";";

        SqlConnection SQLConnection = new SqlConnection();
        try
        {
            SQLConnection.ConnectionString = ConnectionString;
            SQLConnection.Open();
        }
        catch (Exception Ex)
        {
            // Try to close the connection
            if (SQLConnection != null)
                SQLConnection.Dispose();
            // can't connect
            // Stop here
            return false;
        }
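    For context (an observation plus a hedged sketch, not part of the original post): SqlConnection only talks to SQL Server, so pointing it at an .mdb file will always throw regardless of the path. An Access file is normally opened through OleDbConnection with the Jet provider:

        using System.Data.OleDb;

        string database = drive + "\\btc2\\state\\states.mdb";
        string connectionString =
            "Provider=Microsoft.Jet.OLEDB.4.0;" +
            "Data Source=" + database + ";" +
            "Jet OLEDB:Database Password=Lhotse;"; // only if the .mdb itself is password-protected

        using (var connection = new OleDbConnection(connectionString))
        {
            connection.Open(); // throws OleDbException with a Jet-specific message on failure
        }

    Note the Jet 4.0 provider is 32-bit only, so the process must run as x86 for this to work.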


  • Info on type family instances

    - by yairchu
    Intro: While checking out snoyman's "persistent" library I found myself wanting ghci's (or another tool's) assistance in figuring out stuff. ghci's :info doesn't seem to work as nicely with type families and data families as it does with "plain" types:

        > :info Maybe
        data Maybe a = Nothing | Just a     -- Defined in Data.Maybe
        ...
        > :info Persist.Key Potato          -- "Key Potato" defined in example below
        data family Persist.Key val         -- Defined in Database.Persist
        ...

    (no info on the structure/identity of the actual instance) One can always look for the instance in the source code, but sometimes it could be hard to find, and it may be hidden in Template-Haskell-generated code etc.

    Code example:

        {-# LANGUAGE FlexibleInstances, GeneralizedNewtypeDeriving,
                     MultiParamTypeClasses, TypeFamilies, QuasiQuotes #-}

        import qualified Database.Persist as Persist
        import Database.Persist.Sqlite as PSqlite

        PSqlite.persistSqlite [$persist|
        Potato
            name String
            isTasty Bool
            luckyNumber Int
            UniqueId name
        |]

    What's going on in the code example above is that Template Haskell is generating code for us. All the extensions above except for QuasiQuotes are required because the generated code uses them. I found out what Persist.Key Potato is by doing:

        -- test.hs:
        test = PSqlite.persistSqlite [$persist| ...

        -- ghci:
        > :l test.hs
        > import Language.Haskell.TH
        > import Data.List
        > runQ test >>= putStrLn . unlines . filter (isInfixOf "Key Potato") . lines . pprint

    which prints:

        newtype Database.Persist.Key Potato = PotatoId Int64
        type PotatoId = Database.Persist.Key Potato

    Question: Is there an easier way to get information on instances of type families and data families, using ghci or any other tool?


  • Application Context in Rails

    - by Sean McMains
    Rails comes with a handy session hash into which we can cram stuff to our heart's content. I would, however, like something like ASP's application context, which instead of sharing data only within a single session, will share it with all sessions in the same application. I'm writing a simple dashboard app, and would like to pull data every 5 minutes, rather than every 5 minutes for each session. I could, of course, store the cache update times in a database, but so far haven't needed to set up a database for this app, and would love to avoid that dependency if possible. So, is there any way to get (or simulate) this sort of thing? If there's no way to do it without a database, is there any kind of "fake" database engine that comes with Rails, runs in memory, but doesn't bother persisting data between restarts?
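    A minimal sketch of the in-process approach the question is circling (module and method names here are made up; it assumes a single long-lived Rails process, which is not true of every deployment):

        # lib/dashboard_cache.rb
        module DashboardCache
          REFRESH_INTERVAL = 5 * 60  # seconds

          def self.data
            if @data.nil? || Time.now - @fetched_at > REFRESH_INTERVAL
              @data = fetch_fresh_data
              @fetched_at = Time.now
            end
            @data
          end

          def self.fetch_fresh_data
            # hypothetical expensive pull (HTTP call, report query, ...)
          end
        end

    Rails.cache with the default memory store accomplishes the same thing (shared per process, gone on restart). And for the "fake database" wish, an in-memory SQLite database (database: ":memory:" with the sqlite3 adapter in database.yml) behaves exactly as described: real SQL, nothing persisted between restarts.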


  • XML Schema (XSD) to Rails ActiveRecord Mapping?

    - by Incomethax
    I'm looking for a way to convert an XML Schema definition (XSD) file into an ActiveRecord-modeled database. Does anyone know of a tool that does this? So far the best way I've found is to first load the XSD into an RDBMS like PostgreSQL or MySQL, have Rails connect to it, and then do a rake db:schema:dump. This, however, only leaves me with a schema but no Rails models. What would be the best way to import/load this XSD-based database into Rails?
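    One hedged sketch of closing the "no models" gap (assuming conventional table names, since ActiveRecord infers columns from the live schema at runtime): generate one empty model class per table that db:schema:dump found:

        # script/generate_models.rb -- hypothetical one-off helper
        require File.dirname(__FILE__) + '/../config/environment'

        ActiveRecord::Base.connection.tables.each do |table|
          next if table == 'schema_migrations'
          path = "app/models/#{table.singularize}.rb"
          next if File.exist?(path)
          File.open(path, 'w') do |f|
            f.puts "class #{table.singularize.camelize} < ActiveRecord::Base"
            f.puts "end"
          end
        end

    The empty classes are immediately usable because ActiveRecord reads the columns from the database; associations still have to be declared by hand, since an XSD's nesting doesn't map one-to-one onto belongs_to/has_many.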


  • Retrieving many large EPS files and converting them to JPEG in an ASP.NET application

    - by Ashish Gupta
    I have many (600) EPS files (300 KB - 1 MB each) in a database. In my ASP.NET application (using ASP.NET 4.0) I need to retrieve them one by one and call a web service which converts the content to a JPEG file and updates the database (the JPEGContent column with the JPEG content). However, retrieving the content for all 600 takes too long even from SQL Management Studio itself (5 minutes for 10 EPS contents). So I have two issues:

    1) How to get the EPS content (unfortunately, selecting only a subset of the rows is not an option :-( ). Either:

        foreach (var DataRow in DataTable.Rows)
        {
            // get the Id and byte[] of the EPS
            // Call the web method to convert EPS content to JPEG, which also updates the database.
        }

    or:

        foreach (var DataRow in DataTable.Rows)
        {
            // get only the Id of the EPS
            // Hit the database to get the content of that EPS
            // Call the web method to convert EPS content to JPEG, which also updates the database.
        }

    or any other approach?

    2) Converting EPS to JPEG using a web method for 600 contents. Of course, each call would be a long-running operation. Would the Task Parallel Library (TPL) be a better way to achieve this? Also, is doing the entire thing in a SQL CLR function a good idea?
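    A hedged sketch of the retrieval side (table, column, and helper names are invented): rather than materializing all 600 blobs in a DataTable, a SqlDataReader opened with CommandBehavior.SequentialAccess streams one blob at a time, which keeps memory flat and lets conversions start immediately:

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT Id, EpsContent FROM EpsFiles", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
            {
                while (reader.Read())
                {
                    int id = reader.GetInt32(0);
                    byte[] eps = (byte[])reader[1];        // one blob in memory at a time
                    ConvertToJpegViaService(id, eps);      // hypothetical web-service call
                }
            }
        }

    With SequentialAccess the columns must be read in order (Id before EpsContent), which the code above respects. TPL then fits naturally around the conversion calls, queuing each (id, eps) pair to tasks with a bounded degree of parallelism so 600 simultaneous service calls don't swamp the converter.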


  • CakePHP Test Fixtures Drop My Tables Permanently After Running A Test Case

    - by Frank
    I'm not sure what I've done wrong in my CakePHP unit test configuration. Every time I run a test case, the model tables associated with my fixtures are missing from my test database. After running an individual test case I have to re-import my database tables using phpMyAdmin. Here are the relevant files:

    This is the class I'm trying to test, comment.php. This table is dropped after the test.

        App::import('Sanitize');
        class Comment extends AppModel {
            public $name = 'Comment';
            public $actsAs = array('Tree');
            public $belongsTo = array('User' => array('fields' => array('id', 'username')));
            public $validate = array(
                'text' => array(
                    'rule' => array('between', 1, 4000),
                    'required' => 'true',
                    'allowEmpty' => 'false',
                    'message' => "You can't leave your comment text empty!")
            );
        }

    database.php:

        class DATABASE_CONFIG {
            var $default = array(
                'driver' => 'mysql',
                'persistent' => false,
                'host' => 'project.db',
                'login' => 'projectman',
                'password' => 'projectpassword',
                'database' => 'projectdb',
                'prefix' => ''
            );
            var $test = array(
                'driver' => 'mysql',
                'persistent' => false,
                'host' => 'project.db',
                'login' => 'projectman',
                'password' => 'projectpassword',
                'database' => 'testprojectdb',
                'prefix' => ''
            );
        }

    My comment.test.php file. This is the table that keeps getting dropped.

        <?php
        App::import('Model', 'Comment');
        class CommentTestCase extends CakeTestCase {
            public $fixtures = array('app.comment', 'app.user');

            function start() {
                $this->Comment =& ClassRegistry::init('Comment');
                $this->Comment->useDbConfig = 'test_suite';
            }

    This is my comment_fixture.php class:

        <?php
        class CommentFixture extends CakeTestFixture {
            var $name = "Comment";
            var $import = 'Comment';
        }

    And just in case, here is a typical test method in the CommentTestCase class:

        function testMsgNotificationUserComment() {
            $user_id = '1';
            $submission_id = '1';
            $parent_id = $this->Comment->commentOnModel('Submission', $submission_id, '0', $user_id, "Says: A");
            $other_user_id = '2';
            $msg_id = $this->Comment->commentOnModel('Submission', $submission_id, $parent_id, $other_user_id, "Says: B");
            $expected = array(array('Comment' => array('id' => $msg_id, 'text' => "Says: B", 'submission_id' => $submission_id, 'topic_id' => '0', 'ack' => '0')));
            $result = $this->Comment->getMessages($user_id);
            $this->assertEqual($result, $expected);
        }

    I've been dealing with this for a day now and I'm starting to be put off by CakePHP's unit testing. In addition to this issue -- several times now I've had data inserted into my 'default' database configuration after running tests! What's going on with my configuration?!


  • Problem: Sorting for GridView/ObjectDataSource changes depending on page

    - by user148298
    I have a GridView tied to an ObjectDataSource using paging. The paging works fine, except that the sort order changes depending on which page of the results is being viewed. This causes items to reappear on subsequent pages, among other issues. I traced the problem to my DAL, which reads a page at a time and then sorts it. Obviously the sorting is going to change as the result set size changes. Is there an improvement to this algorithm? I would like to use a datareader if possible:

        [System.ComponentModel.DataObjectMethod(System.ComponentModel.DataObjectMethodType.Select)]
        public static WordsCollection LoadForCriteria(string sqlCriteria, int maximumRows, int startRowIndex, string sortExpression)
        {
            //DEFAULT SORT EXPRESSION
            if (string.IsNullOrEmpty(sortExpression))
                sortExpression = "OrderBy";

            //CREATE THE DYNAMIC SQL TO LOAD OBJECT
            StringBuilder selectQuery = new StringBuilder();
            selectQuery.Append("SELECT");
            if (maximumRows > 0)
                selectQuery.Append(" TOP " + (startRowIndex + maximumRows).ToString());
            selectQuery.Append(" " + Words.GetColumnNames(string.Empty));
            selectQuery.Append(" FROM sw_Words");
            string whereClause = string.IsNullOrEmpty(sqlCriteria) ? string.Empty : " WHERE " + sqlCriteria;
            selectQuery.Append(whereClause);
            selectQuery.Append(" ORDER BY " + sortExpression);
            Database database = Token.Instance.Database;
            DbCommand selectCommand = database.GetSqlStringCommand(selectQuery.ToString());

            //EXECUTE THE COMMAND
            WordsCollection results = new WordsCollection();
            int thisIndex = 0;
            int rowCount = 0;
            using (IDataReader dr = database.ExecuteReader(selectCommand))
            {
                while (dr.Read() && ((maximumRows < 1) || (rowCount < maximumRows)))
                {
                    if (thisIndex >= startRowIndex)
                    {
                        Words varWords = new Words();
                        Words.LoadDataReader(varWords, dr);
                        results.Add(varWords);
                        rowCount++;
                    }
                    thisIndex++;
                }
                dr.Close();
            }
            return results;
        }
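    For what it's worth, a sketch of the usual server-side fix (assuming SQL Server 2005+; WordID is a hypothetical unique key column): page with ROW_NUMBER() over a deterministic sort, so every page request sees the same total ordering and no rows repeat across pages:

        SELECT *
        FROM (
            SELECT *, ROW_NUMBER() OVER (ORDER BY OrderBy, WordID) AS rn
            FROM sw_Words
            -- WHERE <criteria>
        ) AS numbered
        WHERE rn > @startRowIndex AND rn <= @startRowIndex + @maximumRows
        ORDER BY rn;

    The tie-breaker column is what makes the ordering deterministic; TOP-plus-skip-in-code breaks precisely when the ORDER BY leaves ties, because the database is free to return tied rows in a different order on each call.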


  • How to update window controls (NSTextField, NSCheckbox, etc.) manually when using bindings

    - by Amit
    Hi, I am working on an application in which I need to store all of an NSObject subclass's properties in a plist file and then allow users to save and restore them. We call it a profile, and it can restore the saved state of all the controls/views on the window in my application. I have completed the storing/restoring part, but the issue is that when I update the class properties manually, the control state is not updated -- checkboxes and other controls that are bound to the class properties keep their old state. Please let me know how I can update the controls' state when the bound value is changed programmatically. Thanks in advance
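    A short sketch of the usual cause (property names here are invented): bindings only refresh when the change is KVO-observable, so a profile restore must go through KVC or the declared setters rather than assigning ivars directly:

        // KVO fires, the bound checkbox updates:
        [settings setValue:[NSNumber numberWithBool:YES] forKey:@"soundEnabled"];
        // or equivalently, via the property's setter:
        [settings setSoundEnabled:YES];

        // Inside the class itself, if a direct ivar write is unavoidable,
        // bracket it manually so observers are still notified:
        [self willChangeValueForKey:@"soundEnabled"];
        soundEnabled = YES;
        [self didChangeValueForKey:@"soundEnabled"];

    A convenient restore loop is setValuesForKeysWithDictionary:, which applies a whole plist dictionary through KVC in one call and therefore triggers KVO for every key it touches.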


  • MDX performance vs. T-SQL

    - by SubPortal
    I have a database containing tables with more than 600 million records, and a set of stored procedures that perform complex search operations on the database. The performance of the stored procedures is slow even with suitable indexes on the tables. The database uses a normal relational design. I want to change the design to be multidimensional and use MDX queries instead of the traditional T-SQL queries, but the question is: are MDX queries better than traditional T-SQL queries with regard to performance? And if so, to what extent would that improve the performance of the queries? Thanks for any help.


  • Best Installation Software?

    - by Chris
    I am interested in knowing what the best software would be to build an installation package that performs the following:

    (1) Installs the client application.
    (2) Detects all SQL Server instances on the network, allowing the user to select the specific database to upgrade (which would then be upgraded using an embedded SQL script).
    (3) Installs a website on a server/location specified by the user, and configures IIS 6.0 and/or 7.0 based on settings that I specify.
    (4) Creates a simple setup.exe and allows the user to choose installation components (listed above, i.e. install client app, SQL Server database, and/or website), and then downloads the selected components from a remote server.

    I have tried NSIS and was able to create an installation package that downloads a compressed (gzip) component from a remote server, decompresses the file, installs the components, and then removes the gzip file. So, this worked beautifully. The part where I am stuck is being able to perform the database upgrade and website install. Any suggestions would be great. Thanks. Chris
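    For the database-upgrade step specifically, a hedged sketch of how it's often done from NSIS (variable and file names are placeholders, set by a custom page elsewhere in the script): shell out to sqlcmd against the instance the user picked:

        ; NSIS section: run the embedded upgrade script against the chosen instance
        Section "Upgrade Database"
          SetOutPath $TEMP
          File "upgrade.sql"
          ; $ChosenInstance / $ChosenDatabase are user variables filled in by a custom page
          ExecWait 'sqlcmd -S "$ChosenInstance" -d "$ChosenDatabase" -i "$TEMP\upgrade.sql"' $0
          ; $0 now holds sqlcmd's exit code; non-zero means the upgrade failed
        SectionEnd

    Instance discovery can come from "sqlcmd -L" or the SQL Browser service. IIS configuration is where fuller installer products (WiX, InstallShield) have built-in support that NSIS lacks, so the choice may come down to how much of step (3) you want for free.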


  • openquery issue in SQL Server

    - by George2
    Hello everyone, I am using SQL Server 2008 (let us call this the source database server in this discussion), and in SSMS I have created a linked server to another SQL Server 2008 database (the destination database server). When I issue the statement

        select * from [linked server name].[database name].[dbo].[table name]

    an error is returned:

        Linked server "ZS": The OLE DB access interface "SQLNCLI10" returned "NON-CLUSTERED
        and NOT INTEGRATED" index "ix_foo_basic_info_nf", which has incorrect bookmark ordinal 0.

    When I issue the statement

        select * from openquery([linked server name], 'select * from [table name]')

    there are no errors. Any ideas what is wrong? Thanks in advance, George


  • How to use SQLAlchemy to dump an SQL file from query expressions to bulk-insert into a DBMS?

    - by Mahmoud Abdelkader
    Please bear with me as I explain the problem and how I tried to solve it; my question on how to improve it is at the end.

    I have a 100,000-line CSV file from an offline batch job and I needed to insert it into the database as its proper models. Ordinarily, if this is a fairly straightforward load, it can be trivially loaded by just munging the CSV file to fit a schema, but I had to do some external processing that requires querying, and it's just much more convenient to use SQLAlchemy to generate the data I want. The data I want here is 3 models that represent 3 pre-existing tables in the database, and each subsequent model depends on the previous model. For example:

        Model C --> Foreign Key --> Model B --> Foreign Key --> Model A

    So, the models must be inserted in the order A, B, and C. I came up with a producer/consumer approach:

        - instantiate a multiprocessing.Process which contains a threadpool of 50 persister threads that have a threadlocal connection to a database
        - read a line from the file using the csv DictReader
        - enqueue the dictionary to the process, where each thread creates the appropriate models by querying the right values and each thread persists the models in the appropriate order

    This was faster than a non-threaded read/persist, but it is way slower than bulk-loading a file into the database. The job finished persisting after about 45 minutes. For fun, I decided to write it in SQL statements; that took 5 minutes. Writing the SQL statements took me a couple of hours, though.

    So my question is, could I have used a faster method to insert rows using SQLAlchemy? As I understand it, SQLAlchemy is not designed for bulk insert operations, so this is less than ideal. This leads to my question: is there a way to generate the SQL statements using SQLAlchemy, throw them in a file, and then just use a bulk load into the database? I know about str(model_object), but it does not show the interpolated values. I would appreciate any guidance on how to do this faster. Thanks!
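    A hedged sketch of the middle ground (table and column names are invented; this is SQLAlchemy Core, not the ORM): passing a list of dictionaries to a single insert() executes it executemany-style, one statement with many parameter sets per table, which preserves the A-then-B-then-C ordering while skipping per-object ORM overhead:

        import csv
        from sqlalchemy import create_engine, MetaData, Table

        engine = create_engine("postgresql://user:pass@localhost/mydb")  # placeholder URL
        meta = MetaData(bind=engine)
        table_a = Table("model_a", meta, autoload=True)  # reflect the pre-existing table

        rows_a = []
        for line in csv.DictReader(open("batch.csv")):
            rows_a.append({"name": line["a_name"]})  # plus whatever external processing computed

        # one INSERT executed with thousands of parameter sets (DB-API executemany)
        engine.execute(table_a.insert(), rows_a)

    The same pattern then runs for model_b and model_c once the parent keys are known. For the write-SQL-to-a-file idea, compiling statements with literal values is dialect-dependent and easy to get wrong, so most people either use this executemany route or fall back to the database's native bulk loader (COPY / LOAD DATA) for the final file.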


  • How do I connect to MySQL from PHP?

    - by roberto
    Hi guys. I'm working through examples from a book on PHP/MySQL development, in a Linux/Apache environment. I've set up a database and a user. I attempt to connect with this line of code:

        $db_server = mysql_connect($db_hostname, $db_username, $db_password);

    I get this error:

        Warning: mysql_connect() [function.mysql-connect]: Access denied for user 'www-data'@'localhost' (using password: YES) in /var/www/hosts/dj/connect.php on line 3
        unable to connect to database: Access denied for user 'www-data'@'localhost' (using password: YES)

    I can only guess what is happening here: I think www-data is a username for Apache. Upon the database connection, the credentials being passed to MySQL are not those of my database user, but rather Apache's own. Is that what is happening here? How do I pass in the credentials I've defined for my user?
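    A hedged sketch of the likely fix: mysql_connect() falls back to php.ini defaults (on Apache, effectively the www-data account) when its arguments are empty or undefined, so this error usually means $db_hostname/$db_username/$db_password were never set in scope -- for example because an include file was missed. Defining them explicitly before the call, with names matching the MySQL grant, makes the fallback impossible:

        <?php
        // connect.php -- values must match the user created in MySQL
        // ('bookuser'/'bookdb' here are hypothetical stand-ins)
        $db_hostname = 'localhost';
        $db_username = 'bookuser';
        $db_password = 'secret';
        $db_database = 'bookdb';

        $db_server = mysql_connect($db_hostname, $db_username, $db_password)
            or die('unable to connect to database: ' . mysql_error());
        mysql_select_db($db_database, $db_server);

    On the MySQL side the user needs a grant covering the 'localhost' host shown in the error message:

        GRANT ALL ON bookdb.* TO 'bookuser'@'localhost' IDENTIFIED BY 'secret';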


  • Can't connect SQL 2008 Express to Access project

    - by Gerhard
    I just installed SQL Server 2008 Express and want to create an Access project (ADP). When I get to the Microsoft SQL Server Database Wizard in Access, after clicking Next to create the database, I get this message:

        The new database wizard does not work with the version of Microsoft SQL Server
        to which your Access project is connected. See the Microsoft Update Web site
        for the latest information and downloads.

    I can't find any solution to the problem so far. Any ideas why, and how to solve this?


  • Autocomplete and Dynamic Parameter Passing

    - by abcParsing
    The code below works fine using jQuery UI 1.8 and jQuery 1.4.2:

        $("#sid_entry_box").autocomplete({
            source: "autocomplete_sid.php?database=" + database_name,
            minLength: 4,
            delay: 1000,
            enable: true,
            cacheLength: 1
        });

    The database name is passed as a GET parameter of the PHP call. In this application, I have two databases selected by a radio button. Since jQuery loads and assigns this function when the document is loaded, the database name is whatever was checked at that moment. What I really need to pass to the PHP call is the following:

        database = $("input[name=rf_database_option]:checked").val();

    Is there an easy-to-understand way to pass a dynamic DOM value?
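    One hedged sketch (the PHP endpoint and radio-button names are taken from the question above): jQuery UI's autocomplete also accepts a function as its source, and that function runs on every lookup, so the currently checked radio button is read at request time rather than at page load:

        $("#sid_entry_box").autocomplete({
            minLength: 4,
            delay: 1000,
            source: function (request, response) {
                $.getJSON("autocomplete_sid.php", {
                    term: request.term,   // what the user has typed so far
                    database: $("input[name=rf_database_option]:checked").val()
                }, response);             // hand the returned JSON array to the widget
            }
        });

    The widget passes the typed text in request.term and expects the response callback to receive the suggestion array, which matches what the string-URL form of source does under the hood.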

