Search Results

Search found 17634 results on 706 pages for 'django multi db'.

  • Problems using jQuery $.ajax to pass data

    - by iboeno
    I'm using ASP.NET and attempting to call a method with the signature

        [WebMethod] public static string GetInfo(string id) { ... }

    using the following JavaScript:

        var elementValue = $("#element").attr('id');
        var d = "{id : " + elementValue + "}";
        $.ajax({
            type: "POST",
            url: "../WebPage.aspx/GetInfo",
            data: d,
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            success: function(msg) {
                // do this
            }
        });

    This is not working. If instead I set elementValue = 2; it works fine. If I try to hardcode a string value for testing purposes, e.g. elementValue = "nameToLookUp";, it fails. Why is this happening, and how do I resolve it? On a side note, why is type: required to be POST instead of GET? In the end I just want to pass a string value to look up in a DB and retrieve some JSON data.
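
    The usual culprit in cases like this is that the hand-built payload is not valid JSON when the value is a string. This is only an illustration, in Python rather than JavaScript, with json.dumps standing in for a proper serializer such as JSON.stringify; the values are taken from the question:

        import json

        element_value = "nameToLookUp"

        # Hand-concatenated payload, as in the question: neither the key nor the
        # string value is quoted, so the result is not valid JSON.
        hand_built = "{id : " + element_value + "}"
        print(hand_built)                            # {id : nameToLookUp}

        # A serializer quotes both, which is why a numeric value happens to work
        # but a string one does not.
        print(json.dumps({"id": element_value}))     # {"id": "nameToLookUp"}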

  • Approach to data wrapping

    - by Mikhail
    I'm developing in PHP and MySQL. The information about the currently logged in user is stored in many different tables. The information that I need on each page, I preload. However, if something is needed from a rarely accessed table, then I do:

        $newdata = $db->Query('SELECT * FROM rare_table WHERE user_id='.$user->id);

    I would like to simplify the above to a point where I don't have to specify that the query should be limited to this particular user. An ideal function call would be:

        $newdata = $user->Query('SELECT * FROM rare_table');

    Obviously I'd have to parse the SQL and add a WHERE clause, or add to the already existing clause. Questions: are there tools to do this? How can I develop this? Is this even a good idea?
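
    A minimal sketch of the wrapper idea, in Python with sqlite3 rather than PHP/MySQL (class and column names are illustrative). It naively appends or extends a WHERE clause, which only works for simple single-table SELECTs; anything with subqueries, GROUP BY or ORDER BY would need a real SQL parser:

        import re
        import sqlite3

        class UserScopedDB:
            """Run every query limited to one user by rewriting the SQL."""

            def __init__(self, conn, user_id):
                self.conn = conn
                self.user_id = user_id

            def query(self, sql):
                # Extend an existing WHERE clause, or add one.
                if re.search(r'\bWHERE\b', sql, re.IGNORECASE):
                    sql += " AND user_id = ?"
                else:
                    sql += " WHERE user_id = ?"
                return self.conn.execute(sql, (self.user_id,)).fetchall()

        # usage with throwaway data
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE rare_table (user_id INTEGER, note TEXT)")
        conn.execute("INSERT INTO rare_table VALUES (1, 'mine'), (2, 'not mine')")

        user = UserScopedDB(conn, user_id=1)
        print(user.query("SELECT * FROM rare_table"))    # only user 1's rows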

  • jQuery: how to handle empty return from getJSON

    - by Gee
    Alright, so I have a PHP script which gets results from a DB, and to get those results I'm using a jQuery script to pull them via getJSON. It works perfectly, but now I want to do something if the PHP script returns no results (empty). I tried:

        $.getJSON('path/to/script', {parameter: parameter}, function(data) {
            if (data) {
                alert('Result');
            } else {
                alert('Empty');
            }
        });

    But it's no good. I've tried different things like if (data.length) but still nothing. I've noticed that if there is no returned data, the callback never fires at all. So if that's the case, how do I handle an empty return?

  • Call to a member function query() on a non-object

    - by Randy Gonzalez
    Ok, this is so weird!!! I am running PHP Version 5.1.6, and when I try to run the code below it gives a fatal error about an object that has not been instantiated. As soon as I un-comment the line //$cb_db = new cb_db(USER, PASSWORD, NAME, HOST); everything works, even though I have declared the $cb_db object as global within the method. Any help would be greatly appreciated.

        require_once ( ROOT_CB_CLASSES . 'db.php');
        $cb_db = new cb_db(USER, PASSWORD, NAME, HOST);

        class cb_user {
            protected function find_by_sql( $sql ) {
                global $cb_db;
                //$cb_db = new cb_db(USER, PASSWORD, NAME, HOST);
                $result_set = $cb_db->query( $sql );
                $object_array = array();
                while( $row = $cb_db->fetch_array( $result_set ) ) {
                    $object_array[] = self::instantiate( $row );
                }
                return $object_array;
            }
        }

  • Rails: how to compare datetimes?

    - by fenec
    Hello, I have games in my SQLite DB with the attribute starting_date (t.date :starting_date). I would like to know all the games that have already started, so I am using these lines of code:

        Game.find :all, :conditions => "starting_date <= #{Date.today}"
        Game.find_by_sql('SELECT * FROM "games" WHERE (created_at < 2010-05-13)')

    The result is nil, even though I know that I have games that have already started, like this one:

        #<Game id: 1, team_1_id: 2, team_2_id: 1, status: 2, team_1_points: nil, team_2_points: nil, starting_date: "2010-05-05", winner: 1, sport: "football", country: nil, league: "calcio", created_at: "2010-04-07 00:09:21", updated_at: "2010-05-13 00:57:19">

    What am I doing wrong here?
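
    The code above is Rails, but the underlying problem (an unquoted date literal in the SQL string) is easy to see directly with Python's sqlite3 module; this is just an illustration of how SQLite evaluates the comparison:

        import sqlite3

        conn = sqlite3.connect(":memory:")

        # Unquoted, 2010-05-13 is integer arithmetic (2010 - 5 - 13), not a date.
        print(conn.execute("SELECT 2010-05-13").fetchone())                            # (1992,)

        # A stored text date is never "less than" that integer, because SQLite sorts
        # all text after all numbers, so the WHERE clause matches nothing.
        print(conn.execute("SELECT '2010-04-07 00:09:21' < 2010-05-13").fetchone())    # (0,)

        # Quoting the date, or better, binding it as a parameter, compares as intended.
        print(conn.execute("SELECT '2010-04-07 00:09:21' < '2010-05-13'").fetchone())  # (1,)
        print(conn.execute("SELECT ? < ?", ("2010-04-07", "2010-05-13")).fetchone())   # (1,)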

  • Post High Score and Retrieve Position

    - by majman
    I'm not so savvy with MySQL, so my apologies in advance if this is a dumb question. I've created a super basic PHP high-scores table. Upon inserting a new score into the DB table, I'd like to retrieve the position of that score so that I can get 10 results with the person's score falling within that range. My INSERT query looks something like:

        $stmt = $mysqli->prepare("INSERT INTO highscores (name, time, score) VALUES (?, ?, ?)");
        $stmt->bind_param('sdi', $name, $time, $score);

    UPDATE: I'm looking for a way to do this with as few queries as possible. I recall reading something about getting an INSERT ID when making an insert, but I would then still have to make a second query to get those results.
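
    One way to do it, sketched here in Python with sqlite3 rather than PHP/mysqli and with throwaway data, is to grab the new row's insert id, count how many rows score higher (that count plus one is the position), and then take a LIMIT/OFFSET window around that position. It is still two statements after the INSERT, just no per-row work in application code:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE highscores (id INTEGER PRIMARY KEY, name TEXT, time REAL, score INTEGER)")
        conn.executemany("INSERT INTO highscores (name, time, score) VALUES (?, ?, ?)",
                         [("a", 1.0, 50), ("b", 2.0, 80), ("c", 3.0, 30), ("d", 4.0, 90)])

        # Insert the new score and remember its id (mysqli exposes this as $mysqli->insert_id).
        cur = conn.execute("INSERT INTO highscores (name, time, score) VALUES (?, ?, ?)", ("me", 5.0, 70))
        new_id = cur.lastrowid

        # Position = 1 + number of rows with a strictly higher score.
        (position,) = conn.execute(
            "SELECT 1 + COUNT(*) FROM highscores "
            "WHERE score > (SELECT score FROM highscores WHERE id = ?)", (new_id,)).fetchone()

        # A window of up to 10 rows around that position.
        offset = max(position - 5, 0)
        window = conn.execute("SELECT name, score FROM highscores ORDER BY score DESC LIMIT 10 OFFSET ?",
                              (offset,)).fetchall()
        print(position, window)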

  • Fluent NHibernate + multiple databases

    - by Pablote
    My project needs to handle three databases, which means three session factories. The thing is, if I do something like this with Fluent NHibernate:

        .Mappings(m => m.FluentMappings.AddFromAssembly(Assembly.GetExecutingAssembly()))

    the factories would pick up all the mappings, even the ones that correspond to another database. I've seen that when using automapping you can do something like this and filter by namespace:

        .Mappings(m => m.AutoMappings.Add(AutoMap.AssemblyOf().Where(t => t.Namespace == "Storefront.Entities")))

    I haven't found anything like this for fluent mappings. Is it possible? The only solutions I can think of are to either create separate assemblies for each db's mapping classes, or explicitly add each of the entities to the factory configuration. I would prefer to avoid both, if possible. Thanks.

  • Tips for maximizing Nginx requests/sec?

    - by linkedlinked
    I'm building an analytics package, and project requirements state that I need to support 1 billion hits per day. Yep, "billion". In other words, no less than 12,000 hits per second sustained, and preferably some room to burst. I know I'll need multiple servers for this, but I'm trying to get maximum performance out of each node before "throwing more hardware at it".

    Right now, I have the hits-tracking portion completed and well optimized. I pretty much just save the requests straight into Redis (for later processing with Hadoop). The application is Python/Django with gunicorn for the gateway. My 2GB Ubuntu 10.04 Rackspace server (not a production machine) can serve about 1,200 static files per second (benchmarked using Apache AB against a single static asset). To compare, if I swap out the static file link with my tracking link, I still get about 600 requests per second -- I think this means my tracker is well optimized, because it's only a factor of 2 slower than serving static assets.

    However, when I benchmark with millions of hits, I notice a few things:

    - No disk usage -- this is expected, because I've turned off all Nginx logs, and my custom code doesn't do anything but save the request details into Redis.
    - Non-constant memory usage -- presumably due to Redis' memory management, my memory usage will gradually climb up and then drop back down, but it's never once been my bottleneck.
    - System load hovers around 2-4; the system is still responsive during even my heaviest benchmarks, and I can still manually view http://mysite.com/tracking/pixel with little visible delay while my (other) server performs 600 requests per second.
    - If I run a short test, say 50,000 hits (takes about 2m), I get a steady, reliable 600 requests per second. If I run a longer test (tried up to 3.5m so far), my r/s degrades to about 250.

    My questions:

    a. Does it look like I'm maxing out this server yet? Is 1,200/s static-file nginx performance comparable to what others have experienced?
    b. Are there common nginx tunings for such high-volume applications? I have worker threads set to 64, and gunicorn worker threads set to 8, but tweaking these values doesn't seem to help or harm me much.
    c. Are there any Linux-level settings that could be limiting my incoming connections?
    d. What could cause my performance to degrade to 250 r/s on long-running tests? Again, memory is not maxing out during these tests, and HDD use is nil.

    Thanks in advance, all :)

  • Read and write .NET Objects in SQL Database without serialization.

    - by Mohit
    Hello, I have a small query. I need to create a caching service of my own that will write and read .NET objects to and from the database. I have achieved that with the help of binary serialization, but the problem is that I need to deliberately mark my objects as [Serializable], which makes me wonder what happens if someone tries to add an object that is not marked [Serializable]. Thus, I need to find a way to read and write objects to the database without serialization. I have one thought too: as we all know, Session can store any object in it, and we can make sessions be stored in the DB, out-of-proc. What mechanism does it use to store these objects without serializing or deserializing them? Any help will be highly appreciated. Thanks. M.B

  • Embed a database in the .apk of a distributed application [Android]

    - by Sephy
    Hi everybody, my question is, I think, quite simple, but I don't think the answer will be... I have quite a lot of content that my application needs in order to run properly, and I'm thinking of putting all of it in a database and distributing it as an embedded database with the application in the market. The only trouble is that I have no idea how to do that. I know that I can extract a .db file from the Eclipse DDMS with the content of my database, and I suppose I need to put it in the assets folder of my application, but then how do I make the application use it to regenerate the application database? If you have any link to some code or help, that would be great. Thanks

  • How easy would it be to refactor a small JSP/Servlet/JDBC project to SpringMVC/Hibernate

    - by John
    With reference to this post, I am considering starting a new web-based Java project. Since I don't know Spring/Hibernate, I was concerned it might be a bad plan to start learning them while creating a new project, especially since it will slow down the early development. One idea I had was to write a prototype using tech I do know, namely JSP/Servlets/JDBC, since I can get this running much quicker with my current knowledge. I could then throw the whole thing away and start over with Spring, etc., but I'd like to consider how easy it would be to refactor a smallish project from JSP/Servlets/JDBC to SpringMVC/Hibernate. My DB could of course be re-used, but what about the other code: would I expect to save most of it, plugged into an MVC framework, or is the paradigm shift big enough that this would cause more trouble than it avoids? Please use the other question for more general advice on choosing technologies.

  • MySQL - mysqldump --routines to only export 1 stored procedure (by name) and not every routine

    - by Joe Stein
    We have a lot of routines that come out when exporting, and we often need to get these out on the CLI, make changes, and bring them back in. Yes, some of these are managed by different folks and better change control is required, but for now this is the situation. If I do

        mysqldump --routines --no-create-info --no-data --no-create-db

    then great, I have 200 functions and need to go through a file to find just the one (or the set) I want. Is there any way to mysqldump only the routines I want, like there is for tables?
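
    I am not aware of a mysqldump flag for a single named routine, but one common workaround is to pull the definition yourself with SHOW CREATE PROCEDURE (or SHOW CREATE FUNCTION). A rough sketch in Python using pymysql; the connection details and routine name are placeholders:

        import pymysql

        conn = pymysql.connect(host="localhost", user="user", password="secret", database="mydb")
        try:
            with conn.cursor() as cur:
                # The third column of the result set holds the full CREATE statement.
                cur.execute("SHOW CREATE PROCEDURE my_procedure")
                create_statement = cur.fetchone()[2]
            with open("my_procedure.sql", "w") as f:
                f.write("DELIMITER $$\n" + create_statement + "$$\nDELIMITER ;\n")
        finally:
            conn.close()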

  • Unread email notifier, most practical approach

    - by Michael Pasqualone
    I'm in the process of writing a small php-cli script that will loop over my personal inbox and then send me an SMS via a gateway. The question I have is: as the script will launch via cron every 10 minutes, if there is an email sitting in my inbox that is not read before the next script launch, then I will receive 2 SMS messages. Does anyone have any idea (pseudocode will do) what the best practice would be in PHP 5 to ensure only 1 SMS is sent? What I am currently leaning towards is storing the message ID in an SQLite DB and flagging a field indicating whether an SMS has been sent or not, but I'm wondering if there is an easier way.
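
    A minimal sketch of the SQLite bookkeeping described above, written in Python rather than PHP (the table name, message IDs and the SMS call are stand-ins): record each message ID under a primary key, and only send when the ID has not been recorded before.

        import sqlite3

        def notify_once(db_path, message_ids, send_sms):
            """Send at most one SMS per message ID across repeated cron runs."""
            conn = sqlite3.connect(db_path)
            conn.execute("CREATE TABLE IF NOT EXISTS notified (message_id TEXT PRIMARY KEY)")
            for mid in message_ids:
                # Only IDs we have never seen before trigger an SMS.
                seen = conn.execute("SELECT 1 FROM notified WHERE message_id = ?", (mid,)).fetchone()
                if seen is None:
                    conn.execute("INSERT INTO notified (message_id) VALUES (?)", (mid,))
                    send_sms(mid)
            conn.commit()
            conn.close()

        # usage with hypothetical message IDs and a dummy gateway call
        notify_once("seen.db", ["<a1@example.com>", "<b2@example.com>"],
                    lambda mid: print("sending SMS for", mid))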

  • Problem with NHibernate and saving.

    - by Vilx-
    When I do this:

        Cat x = Session.Load<Cat>(123);
        x.Name = "fritz";
        Session.Flush();

    NHibernate detects the change and UPDATEs the DB. But when I do this:

        Cat x = new Cat();
        Session.Save(x);
        x.Name = "fritz";
        Session.Flush();

    I get NULL for name, because that's what was there when I called Session.Save(). Why doesn't NHibernate detect the changes, or better yet, take the values for the INSERT statement at the time of Flush()?

  • Problems inserting file data into sqlite database using python

    - by tylerc230
    I'm trying to open an image file in Python and add that data to an SQLite table. I created the table using:

        CREATE TABLE "images" ("id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, "description" VARCHAR, "image" BLOB);

    I am trying to add the image to the db using:

        imageFile = open(imageName, 'rb')
        b = sqlite3.Binary(imageFile.read())
        targetCursor.execute("INSERT INTO images (image) values(?)", (b,))
        targetCursor.execute("SELECT id from images")
        for id in targetCursor:
            imageid = id[0]
        targetCursor.execute("INSERT INTO %s (questionID,imageID) values(?,?)" % table, (questionId, imageid))

    When I print the value of 'b' it looks like binary data, but when I call 'select image from images where id = 1' I get '????' printed to the console. Anyone know what I'm doing wrong?
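
    The data is most likely stored correctly; printing a BLOB to the console just renders unprintable bytes, which is where the '????' comes from. A minimal round-trip sketch (with fake image bytes standing in for the real file) that reads the column back and writes it out to a file instead of printing it:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute('CREATE TABLE images ("id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, '
                     '"description" VARCHAR, "image" BLOB)')

        original = b"\x89PNG\r\n\x1a\n...fake image bytes..."   # stand-in for open(imageName, 'rb').read()
        conn.execute("INSERT INTO images (image) VALUES (?)", (sqlite3.Binary(original),))

        (stored,) = conn.execute("SELECT image FROM images WHERE id = 1").fetchone()
        print(type(stored), len(stored), stored == original)    # <class 'bytes'> ... True

        # To see the image again, write the bytes to a file rather than printing them.
        with open("copy_of_image.png", "wb") as f:
            f.write(stored)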

  • In MYSQL is it better to have one big table or many smaller tables

    - by user307922
    Hi all, I am making a database of my clients' customers to send email promotions to. The database will include all of my clients (about 12), and each of them has an average of 2,100 customers. I was wondering if it would be better to have a table in the db for each one of my clients that contains a list of their customers, or if I should just make one big table... The customers will be queried daily. I know it is a broad question, but any advice would be appreciated. Cheers, Chuck
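
    For reference, the "one big table" option is usually laid out with a client_id column plus an index on it, so the daily job stays a single parameterised query per client. A tiny sketch in Python/SQLite (table and column names are illustrative; the real thing would be MySQL):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE customers (
                id        INTEGER PRIMARY KEY,
                client_id INTEGER NOT NULL,
                email     TEXT NOT NULL
            );
            CREATE INDEX idx_customers_client ON customers (client_id);
        """)
        conn.executemany("INSERT INTO customers (client_id, email) VALUES (?, ?)",
                         [(1, "a@example.com"), (1, "b@example.com"), (2, "c@example.com")])

        # One query serves every client; no per-client tables needed.
        print(conn.execute("SELECT email FROM customers WHERE client_id = ?", (1,)).fetchall())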

  • VS 2010 Entity Repository Error

    - by Steve
    In my project I have it set up so that all the tables in the DB have the property "id", and then I have the entity objects inherit from the EntityBase class using a repository pattern. I then set the inheritance modifier for the "id" property in the dbml file's O/R designer to "Overrides".

        Public MustInherit Class EntityBase
            MustOverride Property id() As Integer
        End Class

        Public MustInherit Class RepositoryBase(Of T As EntityBase)
            Protected _Db As New DataClasses1DataContext

            Public Function GetById(ByVal Id As Integer) As T
                Return (From a In _Db.GetTable(Of T)() Where a.id = Id).SingleOrDefault
            End Function
        End Class

        Partial Public Class Entity1
            Inherits EntityBase
        End Class

        Public Class TestRepository
            Inherits RepositoryBase(Of Entity1)
        End Class

    The line

        Return (From a In _Db.GetTable(Of T)() Where a.id = Id).SingleOrDefault

    however produces the error "Class member EntityBase.id is unmapped" when I use VS 2010 with the 4.0 framework, but I never received that error with the old one. Any help would be greatly appreciated. Thanks in advance.

  • Disable Primary Key and Re-Enable After SQL Bulk Insert

    - by Jon
    I am about to run a massive data insert into my DB. I have managed to work out how to enable and rebuild non-clustered indexes on my tables, but I also want to disable/enable primary keys. You can't disable the clustered index for the primary key, as the table is inaccessible when that is done, and my attempt to do an ALTER TABLE for constraints does not work, as I think that is only for foreign keys. Do you know of a way to disable the primary key and re-enable it after a SQL bulk insert? NOTE: this is over numerous tables, so I don't know the exact primary key specifications (e.g. name, etc.).

  • Python library to detect if a file has changed between different runs?

    - by Stefano Borini
    Suppose I have a program A. I run it, and it performs some operation starting from a file foo.txt. Now A terminates. On a new run of A, it checks if the file foo.txt has changed. If the file has changed, A runs its operation again; otherwise, it quits. Does a library function/external library for this exist? Of course it can be implemented with an md5 plus a file/db containing the md5. I want to avoid reinventing the wheel.
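
    A minimal sketch of the md5-plus-state-file approach mentioned above (the state file name is arbitrary): hash the file, compare against the hash saved by the previous run, and record the new hash.

        import hashlib
        import os

        def file_changed(path, state_path=None):
            """Return True if `path` differs from what the last run recorded (or on the first run)."""
            state_path = state_path or path + ".md5"
            with open(path, "rb") as f:
                current = hashlib.md5(f.read()).hexdigest()

            previous = None
            if os.path.exists(state_path):
                with open(state_path) as f:
                    previous = f.read().strip()

            with open(state_path, "w") as f:
                f.write(current)
            return current != previous

        # usage, assuming foo.txt exists in the working directory
        if file_changed("foo.txt"):
            print("foo.txt changed: running the operation again")
        else:
            print("foo.txt unchanged: quitting")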

  • How to batch retrieve documents with mongoDB?

    - by edude05
    Hello everyone, I have an application that queries data from MongoDB using the MongoDB C# driver, something like this:

        public void main() {
            foreach (int i in listOfKey) {
                list.add(getObjectfromDB(i));
            }
        }

        public myObject getObjFromDb(int primaryKey) {
            document query = new document();
            query["primKey"] = primaryKey;
            document result = mongo["myDatabase"]["myCollection"].findOne(query);
            return parseObject(result);
        }

    On my local (development) machine, getting 100 objects this way takes less than a second. However, I recently moved the database to a server on the internet, and this query takes about 30 seconds to execute for the same number of objects. Furthermore, looking at the MongoDB log, it seems to open about 8-10 connections to the DB to perform this query. So what I'd like to do is query the database for an array of primary keys and get them all back at once, then do the parsing in a loop afterwards, using one connection if possible. How could I optimize my query to do so? Thanks, --Michael
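
    The question uses the C# driver, but the query shape is the same in any driver: one find with an $in filter covering all the keys, instead of one findOne per key, so everything comes back in a single query. A sketch of the idea in Python with pymongo (the host and key list are placeholders; the database and collection names are taken from the snippet above):

        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017/")
        collection = client["myDatabase"]["myCollection"]

        list_of_keys = [1, 2, 3, 42]

        # Single query: every document whose primKey is in the list.
        docs_by_key = {doc["primKey"]: doc
                       for doc in collection.find({"primKey": {"$in": list_of_keys}})}

        # Parse afterwards, preserving the original key order.
        results = [docs_by_key[k] for k in list_of_keys if k in docs_by_key]
        print(len(results))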

  • How can I make an Access database readable from the web while it is open in MS Access?

    - by djdilicious
    I have a website using ASP with an MS Access DB back-end for storing mainly blog posts. My company has a very long software approval process so I am stuck with what I have (i.e. I must use Access). I use server-side javascript to retrieve posts stored in the database using OLEDB calls. Everything works fine except that I cannot read any tables from the database when it is open in the MS Access program. The page displays an error message about the file being in use. This could lead to significant downtime while I am doing any work within Access. How can I make the file readable by my ASP application while it is open in Access?

  • postgresql duplicate table names best practice

    - by veilig
    My company has a handful of apps that we deploy on the websites we build. Recently a very old app needed to be included alongside a newer app, and there was a conflict with a duplicate table name that both apps needed to use. We are now in the process of updating the old app, and there will be some DB updates. I'm curious what people consider best practice (or how you do it) to help ensure these name collisions don't happen. I've looked at schemas, but I'm not sure if that's the right path we want to take. As the documentation prescribes, I don't want to "wire" a particular schema name into an application, and if I add schemas to the user's search path, how would it know which table I was referring to if two schemas have the same table name? Although maybe I'm reading too much into this. Any insights or words of wisdom would be greatly appreciated!

  • Service Broker not working after database restore

    - by roryok
    I have a working Service Broker set up on a server. We're in the process of moving to a new server, but I can't seem to get Service Broker set up on the new box. I have done the obvious (to me) things, like enabling the broker on the DB, dropping the route, services, contract, queues and even message types and re-adding them, and setting ALTER QUEUE with STATUS ON.

        SELECT * FROM sys.service_queues

    gives me a list of the queues, including my own two, which show as activation_enabled, receive_enabled, etc. Needless to say, the queues aren't working. When I drop messages into them, nothing goes in and nothing comes out. Any ideas? I'm sure there's something really obvious I've missed...

  • Easy plugin or procedure for SQL Server Management Studio to script row inserts

    - by Patrick Karcher
    I've never been able to find a good script or plugin for SQL Server Management Studio (2005 and/or 2008) for a very common scripting need: specifying a few/all rows in a table and scripting their inserts. You can guess my story: I've got some configuration data in my dev db and I need to script it for deployment to UAT and then production. I've found a few kludgy systems in the past that were more trouble than they were worth. I need something free and unobtrusive. Once I find it I'll share it with the other 20 developers in my shop who are annoyed by this. Aren't we all annoyed by this, by the way? What is the best, easiest, free way to specify a few/all rows in a table and get a script for their inserts?

  • Codeigniter: simple form function

    - by Kevin Brown
    I'm stuck writing a simple form... I feel dumb. Here's my controller:

        function welcome_message(){
            //Update welcome message
            $id = $this->session->userdata('id');
            $profile['welcome_message'] = $this->input->post('welcome_message');
            $this->db->update('be_user_profiles', $profile, array('user_id' => $id));
        }

    And the HTML:

        <?php print form_open('home/welcome_message')?>
        <input type="checkbox" value="0" checked="false">Don't show me this again</input>
        <p>
            <input class="button submit" type="submit" class="close-box" value="Close" />
        </p>
        <?php print form_close()?>

    Edit: I simply need it to submit to a private function and return to the home page (the page submitted from).
