Search Results

Search found 17634 results on 706 pages for 'django multi db'.

Page 564/706 | < Previous Page | 560 561 562 563 564 565 566 567 568 569 570 571  | Next Page >

  • MySQL - mysqldump --routines to only export 1 stored procedure (by name) and not every routine

    - by Joe Stein
    So we have a lot of routines that come out when exporting. We often need to get these out on the CLI, make changes, and bring them back in. Yes, some of these are managed by different folks and better change control is required, but for now this is the situation. If I do mysqldump --routines --no-create-info --no-data --no-create-db then I end up with a file of 200 functions and have to dig through it to find just the one (or the set) I want. Is there any way to mysqldump only the routines I want, like there is for tables?
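
    One workaround, sketched below in Python (not from the question; the file name "all_routines.sql" and routine name "my_proc" are placeholders, and the exact dump layout can vary between MySQL versions), is to dump everything once and then cut the single routine you need out of the dump file:

      # Sketch: pull one named routine out of a full
      # "mysqldump --routines --no-create-info --no-data --no-create-db" dump.
      import re

      def extract_routine(dump_path, routine_name):
          text = open(dump_path, encoding="utf-8").read()
          # mysqldump wraps each routine in a "DELIMITER ;; ... DELIMITER ;" block.
          blocks = re.findall(r"DELIMITER ;;\n(.*?)\nDELIMITER ;", text, re.DOTALL)
          for block in blocks:
              if re.search(r"(PROCEDURE|FUNCTION)\s+`?%s`?\s*\(" % re.escape(routine_name), block):
                  return block
          return None

      print(extract_routine("all_routines.sql", "my_proc"))

    Another option worth checking is querying the routine body directly (SHOW CREATE PROCEDURE, or SELECT body FROM mysql.proc WHERE name = '...'), which avoids parsing the dump at all.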

    Read the article

  • Best way to correct garbled data caused by false encoding

    - by ercan
    Hi all, I have a set of data that contains garbled text fields because of encoding errors during many imports/exports from one database to another. Most of the errors were caused by converting UTF-8 to ISO-8859-1. Strangely enough, the errors are not consistent: the word 'München' appears as 'MÃ¼nchen' in some places and as 'MÜnchen' in others. Is there a trick in SQL Server to correct this kind of crap? The first thing that I can think of is to exploit the COLLATE clause, so that 'Ã¼' is interpreted as 'ü', but I don't know exactly how. If it isn't possible to do it at the DB level, do you know of any tool that helps with bulk correction? (not a manual find/replace tool, but a tool that somehow guesses the garbled text and corrects it)
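
    For the common "UTF-8 bytes read as ISO-8859-1" case, the usual repair is to reverse the misinterpretation by re-encoding. A minimal sketch (in Python, outside the database, rather than via COLLATE):

      # Sketch: undo the classic "UTF-8 read as ISO-8859-1" mojibake.
      def fix_mojibake(s):
          try:
              return s.encode("latin-1").decode("utf-8")
          except (UnicodeEncodeError, UnicodeDecodeError):
              return s  # already clean, or garbled some other way - leave it alone

      print(fix_mojibake("MÃ¼nchen"))  # -> München

    The try/except matters because, as the question says, the corruption is not consistent: only rows that round-trip cleanly should be rewritten.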

    Read the article

  • Resources for dashboard app backend design

    - by Nix
    I am looking for examples of code/data/infrastructure design for a dashboard-style web app. I am designing an interface for staff and faculty at a university to access departmental resources and be alerted to cyclical processes, events, deadlines, etc. Technologies I am working with: Apache Tomcat 6 and a MySQL database, JSP (including JSTL), Bootstrap 3, and JavaScript/jQuery. I have basic experience with most of these technologies from building smaller web apps, but I was hoping someone could direct me towards a book or other resource that discusses how to design the DB architecture (and maybe how to template) for a dashboard, especially for something like a notification system. Any suggestions?

    Read the article

  • Call to a member function query() on a non-object

    - by Randy Gonzalez
    Ok, this is so weird! I am running PHP Version 5.1.6, and when I try to run the code below it gives a fatal error about an object that has not been instantiated. As soon as I un-comment this line of code //$cb_db = new cb_db(USER, PASSWORD, NAME, HOST); everything works, even though I have declared the $cb_db object as global within the method. Any help would be greatly appreciated. require_once ( ROOT_CB_CLASSES . 'db.php'); $cb_db = new cb_db(USER, PASSWORD, NAME, HOST); class cb_user { protected function find_by_sql( $sql ) { global $cb_db; //$cb_db = new cb_db(USER, PASSWORD, NAME, HOST); $result_set = $cb_db->query( $sql ); $object_array = array(); while( $row = $cb_db->fetch_array( $result_set ) ) { $object_array[] = self::instantiate( $row ); } return $object_array; } }

    Read the article

  • Approach to data wrapping

    - by Mikhail
    I'm developing in PHP and MySQL. The information about the currently logged-in user is stored in many different tables. The information that I need on each page, I preload. However, if something is needed from a rarely accessed table, then I do $newdata = $db->Query('SELECT * FROM rare_table WHERE user_id='.$user->id); I would like to simplify the above to the point where I don't have to specify that the query should be limited to this particular user. An ideal function call would be: $newdata = $user->Query('SELECT * FROM rare_table'); Obviously I'd have to parse the SQL and add a WHERE clause, or add to an already existing clause. Questions: are there tools to do this? How can I develop this? Is this even a good idea?
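
    A very small sketch of the idea (written in Python rather than PHP; the class name, column name, and the assumption that the wrapped helper's query() takes bind parameters are all invented) is a wrapper that appends or extends the WHERE clause before delegating to the real query method:

      # Sketch (names invented): scope every query to one user by rewriting the SQL.
      import re

      class UserScopedDB:
          def __init__(self, db, user_id):
              self.db = db              # underlying DB helper with a query(sql, params) method
              self.user_id = user_id

          def query(self, sql):
              # Append the restriction, or extend an existing WHERE clause.
              clause = "AND" if re.search(r"\bWHERE\b", sql, re.IGNORECASE) else "WHERE"
              return self.db.query(sql + " " + clause + " user_id = %s", (self.user_id,))

      # usage: user_db = UserScopedDB(db, user.id)
      #        rows = user_db.query("SELECT * FROM rare_table")

    Note that naive string rewriting breaks as soon as the query has an ORDER BY, GROUP BY, subquery, or its own user_id condition, which is a large part of why this may not be a good idea; per-user views or a small query builder tend to age better.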

    Read the article

  • Why doesn't NHibernate support this syntax?

    - by ooo
    I have the following query and it's failing in NHibernate 3 LINQ with a "not supported" exception. My DB tables are: VacationRequest (id, personId) VacationRequestDate (id, vacationRequestId) Person (id, FirstName, LastName) My entities are: VacationRequest (Person, IList<VacationRequestDate>) VacationRequestDate (VacationRequest, Date) Here is the query that is getting the "not supported" exception: Session.Query<VacationRequestDate>().Where(r => people.Contains(r.VacationRequest.Person, new PersonComparer())).Fetch(r=>r.VacationRequest).ToList(); Is there a better way to write this that would be supported in NHibernate? FYI, the PersonComparer just compares person.Id.

    Read the article

  • How easy would it be to refactor a small JSP/Servlet/JDBC project to SpringMVC/Hibernate

    - by John
    With reference to this post, I am considering starting a new web-based Java project. Since I don't know Spring/Hibernate I was concerned whether it's a bad plan to start learning them while creating a new project, especially since it will slow down early development. One idea I had was to write a prototype using tech I do know, namely JSP/Servlets/JDBC, since I can get this running much quicker with my current knowledge. I could then throw the whole thing away and start over with Spring, etc., but I'd like to consider how easy it would be to refactor a smallish project from JSP/Servlets/JDBC to Spring MVC/Hibernate. My DB could of course be reused, but what about other code... would I expect to save most of it plugged into an MVC framework, or is the paradigm shift big enough that this would cause more trouble than it avoids? Please use the other question for more general advice on choosing technologies.

    Read the article

  • AES Key encoded byte[] to String and back to byte[]

    - by Tom Brito
    In the similar question "Conversion of byte[] into a String and then back to a byte[]" it is said not to do the byte[]-to-String-and-back conversion, which seems to apply to most cases, mainly when you don't know the encoding used. But in my case I'm trying to save the javax.crypto.SecretKey data to a DB and recover it afterwards. The interface provides a method getEncoded() which returns the key data encoded as a byte[], and with another class I can use this byte[] to recover the key. So the question is: how do I write the key bytes as a String, and later get the byte[] back to regenerate the key?
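
    The usual answer is to Base64-encode the raw key bytes instead of treating them as character data. A short sketch of the round trip, in Python purely for illustration (the question's context is Java, where a Base64 codec plus something like new SecretKeySpec(recoveredBytes, "AES") plays the same role):

      # Sketch: store key bytes as a Base64 string, recover the exact bytes later.
      import base64, os

      key_bytes = os.urandom(32)                               # stand-in for SecretKey.getEncoded()
      as_text   = base64.b64encode(key_bytes).decode("ascii")  # safe to keep in a VARCHAR column
      recovered = base64.b64decode(as_text)                    # bytes to rebuild the key from

      assert recovered == key_bytes

    Base64 is encoding-agnostic, so the round trip is lossless regardless of what the DB does with character sets.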

    Read the article

  • Unread email notifier, most practical approach

    - by Michael Pasqualone
    I'm in the process of writing a small php-cli script that will loop over my personal inbox and then send me an SMS via a gateway. The question I have is: as the script will launch via cron every 10 minutes, if there is an email sitting in my inbox that is not read before the next script launch then I will receive two SMSes. Does anyone have any idea (pseudocode will do) what the best practice would be in PHP 5 to ensure only one SMS is sent? What I am currently leaning towards is storing the message ID in an SQLite DB and flagging a field for whether an SMS has been sent or not, but I'm wondering if there is an easier way.
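
    Storing the message ID and only alerting on first sight is the standard trick. A sketch of the idea (Python + sqlite3 here purely as the pseudocode the asker invited; the real script is PHP, and the database and table names are made up):

      # Sketch: send at most one SMS per message, even across repeated cron runs.
      import sqlite3

      db = sqlite3.connect("sent_sms.db")       # made-up filename
      db.execute("CREATE TABLE IF NOT EXISTS notified (message_id TEXT PRIMARY KEY)")

      def notify_once(message_id, send_sms):
          # INSERT OR IGNORE only inserts the first time this id is seen,
          # so rowcount tells us whether the SMS still needs to go out.
          cur = db.execute("INSERT OR IGNORE INTO notified (message_id) VALUES (?)",
                           (message_id,))
          db.commit()
          if cur.rowcount == 1:
              send_sms(message_id)

    Letting the primary key do the de-duplication also keeps things safe if two cron runs ever overlap.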

    Read the article

  • Embed a database in the .apk of a distributed application [Android]

    - by Sephy
    Hi everybody, my question is, I think, quite simple, but I don't think the answer will be... I have quite a lot of content that my application needs in order to run properly, and I'm thinking of putting all of it in a database and distributing it as an embedded database with the application in the Market. The only trouble is that I have no idea how to do that. I know that I can extract a .db file from the Eclipse DDMS with the content of my database, and I suppose I need to put it in the assets folder of my application, but then how do I make the application use it to recreate the application database? If you have any link to some code or help, that would be great. Thanks

    Read the article

  • jQuery: how to handle empty return from getJSON

    - by Gee
    Alright, so I have a PHP script which gets results from a DB, and to get those results I'm using a jQuery script to pull them via getJSON. It works perfectly, but now I want to do something if the PHP script returns no results (empty). I tried: $.getJSON('path/to/script', {parameter:parameter}, function(data){ if (data) { alert('Result'); } else { alert('Empty'); } }); But it's no good. I've tried different things like if (data.length) but still nothing. I've noticed that if there is no returned data the callback never fires at all. So if that's the case, how do I handle an empty return?

    Read the article

  • Problems inserting file data into sqlite database using python

    - by tylerc230
    I'm trying to open an image file in Python and add that data to an SQLite table. I created the table using: CREATE TABLE "images" ("id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, "description" VARCHAR, "image" BLOB); I am trying to add the image to the DB using: imageFile = open(imageName, 'rb') b = sqlite3.Binary(imageFile.read()) targetCursor.execute("INSERT INTO images (image) values(?)", (b,)) targetCursor.execute("SELECT id from images") for id in targetCursor: imageid = id[0] targetCursor.execute("INSERT INTO %s (questionID,imageID) values(?,?)" % table, (questionId, imageid)) When I print the value of 'b' it looks like binary data, but when I call 'select image from images where id = 1' I get '????' printed to the console. Does anyone know what I'm doing wrong?
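
    The '????' is almost certainly just the console's way of printing binary data, not a broken insert. One quick sanity check (a sketch, with made-up file names) is to read the blob back and write it to disk, then compare it with the original image:

      # Sketch: round-trip the stored blob to a file to confirm the insert worked.
      import sqlite3

      conn = sqlite3.connect("questions.db")    # made-up database filename
      row = conn.execute("SELECT image FROM images WHERE id = ?", (1,)).fetchone()
      with open("check_copy.jpg", "wb") as out: # compare this file with the original image
          out.write(row[0])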

    Read the article

  • Query execution problem

    - by srini-r85
    Hi, I tried to execute the following query in a PHP script. $db_selected = mysql_select_db("lumiinc1_sndemo1", $con); if ($db_selected) { echo "database connected"; } else { die ("Can\'t use db : " . mysql_error()); } $sql = "INSERT INTO `markers` ( `name`, `address`, `lat`, `lng`, `id` ) SELECT `name`, `street`, `latitude`, `longitude`, `lid` FROM `location` WHERE NOT EXISTS ( SELECT * FROM `markers` WHERE `location`.`lid` = `markers`.`id` )"; $result = mysql_query($sql); if ($result) { echo "Query executed OK"; } else { die("Invalid query: " . mysql_error()); } The script does not show any error and the query executes, but I don't get my expected result. When I try the same query in phpMyAdmin I get the expected result. I don't know the cause of this problem. Please can anyone find the problem? Thanks

    Read the article

  • [jquery] Appending to Second Last element

    - by Shishant
    Hello, this is the final output and my HTML: <li id='$id'>TEXT <ul class='indent'> <li id='$id'>TEXT</li> <li id='$id'>TEXT</li> <li class='formContainer'> FORM </li> </ul> </li> I want to append a new li after all the other li elements but before the form li. So in this example the new li would be appended between 'Test141' and the input box. The $id values are the DB ids of the li elements, which are unique.

    Read the article

  • XQuery: get value from attribute

    - by Steven
    Hi, I have some XML and need to extract values using SQL: <?xml version="1.0" ?> <fields> <field name="fld_AccomAttic"> <value>0</value> </field> <field name="fld_AccomBathroom"> <value>1</value> </field> </fields> I need to get each field's name (e.g. fld_AccomAttic) and its value. The XML is held in a SQL Server 2005 DB. I have used XQuery before and it has worked. Can anyone show me how to extract these values? I'm baffled as to why I am unable to do this. Thanks, Sp

    Read the article

  • Rails: how to compare datetimes?

    - by fenec
    Hello, I have games in my SQLite DB with the attribute starting_date (t.date :starting_date). I would like to know all the games that have already started, so I am using these lines of code: Game.find :all, :conditions => "starting_date <= #{Date.today}" Game.find_by_sql("SELECT * FROM "games" WHERE (created_at < 2010-05-13)") The result is nil, even though I know that I have games that have already started, like this one: #<Game id: 1, team_1_id: 2, team_2_id: 1, status: 2, team_1_points: nil, team_2_points: nil, starting_date: "2010-05-05", winner: 1, sport: "football", country: nil, league: "calcio", created_at: "2010-04-07 00:09:21", updated_at: "2010-05-13 00:57:19"> What am I doing wrong here?

    Read the article

  • In MySQL is it better to have one big table or many smaller tables

    - by user307922
    Hi all, I am making a database of my clients' customers to send email promotions to. The database will include all of my roughly 12 clients, and each of them has an average of 2,100 customers. I was wondering if it would be better to have a table in the DB for each one of my clients containing a list of their customers, or if I should just make one big table... The customers will be queried daily. I know it is a broad question, but any advice would be appreciated. Cheers, Chuck

    Read the article

  • VS 2010 Entity Repository Error

    - by Steve
    In my project I have it set up so that all the tables in the DB have the property "id", and I then have the entity objects inherit from the EntityBase class using a repository pattern. I then set the inheritance modifier for the "id" property in the dbml file's O/R designer to "Overrides". Public MustInherit Class EntityBase MustOverride Property id() As Integer End Class Public MustInherit Class RepositoryBase(Of T As EntityBase) Protected _Db As New DataClasses1DataContext Public Function GetById(ByVal Id As Integer) As T Return (From a In _Db.GetTable(Of T)() Where a.id = Id).SingleOrDefault End Function End Class Partial Public Class Entity1 Inherits EntityBase End Class Public Class TestRepository Inherits RepositoryBase(Of Entity1) End Class However, the line Return (From a In _Db.GetTable(Of T)() Where a.id = Id).SingleOrDefault produces the error "Class member EntityBase.id is unmapped" when I use VS 2010 with the 4.0 framework, but I never received that error with the old one. Any help would be greatly appreciated. Thanks in advance.

    Read the article

  • Read and write .NET Objects in SQL Database without serialization.

    - by Mohit
    Hello, I have a small query. I need to create a caching service of my own that will write and read .NET objects to and from the database. I have achieved that with the help of binary serialization, but the problem is that I need to deliberately mark my objects as [Serializable], which makes me wonder what happens if someone tries to add an object which is not marked as [Serializable]. Thus, I need to find a way to read and write objects to the database without serialization. I have one thought too: as we all know, Session can store any object in it, and we can make sessions be stored in the DB, out-of-proc. What mechanism does it use to store these objects without serializing or deserializing? Any help will be highly appreciated. Thanks. M.B

    Read the article

  • How to batch retrieve documents with mongoDB?

    - by edude05
    Hello everyone, I have an application that queries data from MongoDB using the MongoDB C# driver, something like this: public void main() { foreach (int i in listOfKey) { list.add(getObjectfromDB(i)); } } public myObject getObjFromDb(int primaryKey) { document query = new document(); query["primKey"] = primaryKey; document result = mongo["myDatabase"]["myCollection"].findOne(query); return parseObject(result); } On my local (development) machine getting 100 objects this way takes less than a second. However, I recently moved the database to a server on the internet, and this query takes about 30 seconds to execute for the same number of objects. Furthermore, looking at the MongoDB log, it seems to open about 8-10 connections to the DB to perform this query. So what I'd like to do is query the database for an array of primary keys and get them all back at once, then do the parsing in a loop afterwards, using one connection if possible. How could I optimize my query to do so? Thanks, --Michael
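
    The usual fix is to replace the one-query-per-key loop with a single $in query over the whole key list (backed by an index on primKey). A sketch of the idea using pymongo for illustration, since the question's C# driver builds the equivalent query document; the connection string is a placeholder:

      # Sketch (pymongo for illustration; the question uses the MongoDB C# driver):
      # fetch every document for a list of keys in one round trip.
      from pymongo import MongoClient

      client = MongoClient("mongodb://example-host")       # placeholder connection string
      collection = client["myDatabase"]["myCollection"]

      list_of_keys = [1, 2, 3]                              # stand-in for listOfKey
      documents = collection.find({"primKey": {"$in": list_of_keys}})
      results = list(documents)                             # then apply the parseObject step to each

    One cursor also means one connection does the whole fetch, instead of the 8-10 connections seen in the log.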

    Read the article

  • SQL Compare-Like tool for Oracle?

    - by Hitchhiker
    We're a .NET team which uses the Oracle DB for a lot of reasons that I won't get into. But deployment has been a bitch. We are manually keeping track of all the changes to the schema in each version by keeping a record of all the scripts that we run during development. Now, if a developer forgets to check in his script to source control after he ran it - which is not that rare - at the end of the iteration we get a great big headache. I hear that SQL Compare by Red Gate might solve these kinds of issues, but it only supports SQL Server. Does anybody know of a similar tool for Oracle? I've been unable to find one.

    Read the article

  • Rails: getting logic to run at end of request, regardless of filter chain aborts?

    - by JSW
    Is there a reliable mechanism discussed in the Rails documentation for calling a function at the end of the request, regardless of filter-chain aborts? It's not after filters, because after filters don't get called if any prior filter redirected or rendered. For context, I'm trying to put some structured profiling/reporting information into the app log at the end of every request. This information is collected throughout the request lifetime via instance variables wrapped in custom controller accessors, and dumped at the end in a JSON blob for use by a post-processing script. My end goal is to generate reports about my application's logical query distribution (things that depend on controller logic, not just request URIs and parameters), performance profile (time spent in specific DB queries or blocked on web services), failure rates (including invalid incoming requests that get rejected by before_filter validation rules), and a slew of other things that cannot really be parsed from the basic information in the application and Apache logs. At a higher level, is there a different "Rails way" that solves my app-profiling goal?

    Read the article

  • postgresql duplicate table names best practice

    - by veilig
    My company has a handful of apps that we deploy in the websites we build. Recently a very old app needed to be included alongside a newer app, and there was a conflict with a duplicate table name that both apps needed to use. We are now in the process of updating an old app and there will be some DB updates. I'm curious what people consider best practice (or how you do it) to help ensure these name collisions don't happen. I've looked at schemas but I'm not sure that's the right path to take. As the documentation prescribes, I don't want to "wire" a particular schema name into an application, and if I add schemas to the user's search path, how would it know which table I was referring to if two schemas have the same table name? Although maybe I'm reading too much into this. Any insights or words of wisdom would be greatly appreciated!

    Read the article

  • Transfer Core Data from One Project to Another

    - by Michael
    The answer is probably a resounding 'NO', but before I start a new project from scratch, I thought I'd ask. I create many throwaway projects to test ideas and code before combining all the successful bits from the scratch projects into a final version. So I have one project with the Core Data stuff worked out, but I want to move it to a new project. My guess is that there are too many internal hooks and that dropping in the .xcdatamodel and the SQLite DB is just not going to work. I'd be glad to be wrong...

    Read the article

  • Tips for maximizing Nginx requests/sec?

    - by linkedlinked
    I'm building an analytics package, and project requirements state that I need to support 1 billion hits per day. Yep, "billion". In other words, no less than 12,000 hits per second sustained, and preferably some room to burst. I know I'll need multiple servers for this, but I'm trying to get maximum performance out of each node before "throwing more hardware at it". Right now, I have the hits-tracking portion completed and well optimized. I pretty much just save the requests straight into Redis (for later processing with Hadoop). The application is Python/Django with gunicorn as the gateway. My 2GB Ubuntu 10.04 Rackspace server (not a production machine) can serve about 1,200 static files per second (benchmarked using Apache ab against a single static asset). To compare, if I swap out the static file link with my tracking link, I still get about 600 requests per second -- I think this means my tracker is well optimized, because it's only a factor of 2 slower than serving static assets. However, when I benchmark with millions of hits, I notice a few things -- No disk usage -- this is expected, because I've turned off all Nginx logs, and my custom code doesn't do anything but save the request details into Redis. Non-constant memory usage -- presumably due to Redis' memory management, my memory usage will gradually climb up and then drop back down, but it has never once been my bottleneck. System load hovers around 2-4; the system is still responsive during even my heaviest benchmarks, and I can still manually view http://mysite.com/tracking/pixel with little visible delay while my (other) server performs 600 requests per second. If I run a short test, say 50,000 hits (takes about 2m), I get a steady, reliable 600 requests per second. If I run a longer test (tried up to 3.5m so far), my r/s degrades to about 250. My questions -- a. Does it look like I'm maxing out this server yet? Is 1,200/s static-file nginx performance comparable to what others have experienced? b. Are there common nginx tunings for such high-volume applications? I have worker threads set to 64, and gunicorn worker threads set to 8, but tweaking these values doesn't seem to help or harm me much. c. Are there any Linux-level settings that could be limiting my incoming connections? d. What could cause my performance to degrade to 250 r/s on long-running tests? Again, the memory is not maxing out during these tests, and HDD use is nil. Thanks in advance, all :)

    Read the article
