Search Results

Search found 7957 results on 319 pages for 'production databases'.

Page 271/319 | < Previous Page | 267 268 269 270 271 272 273 274 275 276 277 278  | Next Page >

  • MySQL FULLTEXT not working

    - by Ross
    I'm attempting to add search support to my PHP web app using MySQL's FULLTEXT indexes. I created a test table (using the MyISAM engine, with a single text field a) and entered some sample data. If I'm right, the following query should return both of those rows:

        SELECT * FROM test WHERE MATCH(a) AGAINST('databases')

    However, it returns none. I've done a bit of research, and as far as I can tell I'm doing everything right: the table is a MyISAM table, and the FULLTEXT indexes are set. I've tried running the query from the prompt and from phpMyAdmin, with no luck. Am I missing something crucial?

    UPDATE: OK, while Cody's solution worked in my test case, it doesn't seem to work on my actual table:

        CREATE TABLE IF NOT EXISTS `uploads` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `name` text NOT NULL,
          `size` int(11) NOT NULL,
          `type` text NOT NULL,
          `alias` text NOT NULL,
          `md5sum` text NOT NULL,
          `uploaded` datetime NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=6 ;

    And the data I'm using:

        INSERT INTO `uploads` (`id`, `name`, `size`, `type`, `alias`, `md5sum`, `uploaded`) VALUES
        (1, '04 Sickman.mp3', 5261182, 'audio/mp3', '1', 'df2eb6a360fbfa8e0c9893aadc2289de', '2009-07-14 16:08:02'),
        (2, '07 Dirt.mp3', 5056435, 'audio/mp3', '2', 'edcb873a75c94b5d0368681e4bd9ca41', '2009-07-14 16:08:08'),
        (3, 'header_bg2.png', 16765, 'image/png', '3', '5bc5cb5c45c7fa329dc881a8476a2af6', '2009-07-14 16:08:30'),
        (4, 'page_top_right2.png', 5299, 'image/png', '4', '53ea39f826b7c7aeba11060c0d8f4e81', '2009-07-14 16:08:37'),
        (5, 'todo.txt', 392, 'text/plain', '5', '7ee46db77d1b98b145c9a95444d8dc67', '2009-07-14 16:08:46');

    The query I'm now running is:

        SELECT * FROM `uploads` WHERE MATCH(name) AGAINST ('header' IN BOOLEAN MODE)

    which should return row 3, header_bg2.png. Instead I get another empty result set. My options for boolean searching are below:

        mysql> show variables like 'ft_%';
        +--------------------------+----------------+
        | Variable_name            | Value          |
        +--------------------------+----------------+
        | ft_boolean_syntax        | + -><()~*:""&| |
        | ft_max_word_len          | 84             |
        | ft_min_word_len          | 4              |
        | ft_query_expansion_limit | 20             |
        | ft_stopword_file         | (built-in)     |
        +--------------------------+----------------+
        5 rows in set (0.02 sec)

    "header" is within the word-length restrictions, and I doubt it's a stopword (I'm not sure how to get the list). Any ideas?

    Read the article
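
    A hedged guess at the two failures above, with a sketch (the index name ft_name is an illustrative assumption). The first test likely hit the natural-language 50% rule: a word present in at least half the rows of a small test table is ignored. The second case is different: the posted CREATE TABLE defines no FULLTEXT index at all, and MySQL's full-text parser treats underscores as word characters, so header_bg2.png is indexed as the single token header_bg2, which the whole-word search 'header' cannot match. A trailing wildcard can:

        -- Add the missing index (boolean-mode MATCH runs without one, but slowly):
        ALTER TABLE `uploads` ADD FULLTEXT INDEX `ft_name` (`name`);

        -- 'header*' matches the token header_bg2 where 'header' alone cannot:
        SELECT * FROM `uploads`
        WHERE MATCH(name) AGAINST ('header*' IN BOOLEAN MODE);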

  • Exec problem in SQL Server 2005

    - by IordanTanev
    Hi, I have a situation where I have two databases with the same structure. The first has some data in its data tables. I need to create a script that will transfer the data from the first database to the second. I have created this script:

        DECLARE @table_name nvarchar(MAX), @query nvarchar(MAX)
        DECLARE @table_cursor CURSOR

        SET @table_cursor = CURSOR FAST_FORWARD FOR
            SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES

        OPEN @table_cursor
        FETCH NEXT FROM @table_cursor INTO @table_name

        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @query = 'INSERT INTO ' + @table_name + ' SELECT * FROM MyDataBase.dbo.' + @table_name
            print @query
            exec @query
            FETCH NEXT FROM @table_cursor INTO @table_name
        END

        CLOSE @table_cursor
        DEALLOCATE @table_cursor

    The problem is that when I run the script, the "print @query" statement prints a statement like this:

        INSERT INTO table SELECT * FROM MyDataBase.dbo.table

    When I copy this and run it from Management Studio it works fine. But when the script tries to run it with exec, I get this error:

        Msg 911, Level 16, State 1, Line 21
        Could not locate entry in sysdatabases for database 'INSERT INTO table SELECT * FROM MPDEV090314'. No entry found with that name. Make sure that the name is entered correctly.

    Hope someone can tell me what is wrong with this. Best Regards, Iordan Tanev

    Read the article
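
    A minimal sketch of the likely fix for the question above: EXEC @query without parentheses treats the variable's value as the name of a stored procedure to call, which is why the whole INSERT string surfaces in the error message. Executing the string itself instead:

        -- Wrap the variable in parentheses so the string is run as a batch:
        EXEC (@query)

        -- or, generally preferred for dynamic SQL:
        EXEC sp_executesql @query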

  • Appropriate SQL Server Permissions for Developers

    - by BJ Safdie
    After a couple of Google searches and a quick look at questions here, I cannot seem to find what I thought would be a cookbook answer for SQL Server permissions. As I often see in small shops, most developers here were using an admin account for SQL Server while developing. I want to set up roles and permissions that I can assign to developers so that we can get our jobs done, but also do so with the minimum permissions required. Can anyone offer advice on what SQL Server permissions to assign?

    Components:

      - SQL Server 2008
      - SQL Server Reporting Services (SSRS) 2008
      - SQL Server Integration Services (SSIS) 2008

    Platforms:

      - Production
      - Staging/QA
      - Development/Integration

    We are running "Mixed Mode" security because of some legacy apps and networks, but are moving to Windows Auth. I am not sure if that really affects the role setup. I plan to set up access for developers to Production and Staging/QA DBs as read-only. However, I still want developers to retain the ability to run profiling. We need deployment accounts with higher privilege levels, and we are currently trying to figure out exactly what privileges we need for SSIS package deployments. Within the development server, developers need broad privileges. However, I am not sure that just making them all admins is really the best choice.

    It's hard to believe that no one has published a decent example script that sets up these kinds of roles with a good set of appropriate permissions for developers and deployers. We can probably figure this all out by locking things down and then adding permissions as we discover the need, but that would be way too big a PITA for everyone. Can anyone point me to, or provide, a good exemplar for permissions for these kinds of roles on these kinds of platforms?

    Read the article
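
    A hedged starting point for the read-only-plus-profiling setup asked about above; the database and group names are illustrative assumptions, not a vetted policy:

        -- Production / Staging: read-only access via the fixed database role
        USE ProdDb;
        EXEC sp_addrolemember 'db_datareader', 'DOMAIN\Developers';

        -- Profiler requires a server-level permission, not a database role:
        GRANT ALTER TRACE TO [DOMAIN\Developers];

        -- Development: broad rights short of full db_owner
        USE DevDb;
        EXEC sp_addrolemember 'db_ddladmin',   'DOMAIN\Developers';
        EXEC sp_addrolemember 'db_datareader', 'DOMAIN\Developers';
        EXEC sp_addrolemember 'db_datawriter', 'DOMAIN\Developers';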

  • Long running transactions with Spring and Hibernate?

    - by jimbokun
    The underlying problem I want to solve is running a task that generates several temporary tables in MySQL, which need to stay around long enough to fetch results from Java after they are created. Because of the size of the data involved, the task must be completed in batches. Each batch is a call to a stored procedure called through JDBC. The entire process can take half an hour or more for a large data set.

    To ensure access to the temporary tables, I run the entire task, start to finish, in a single Spring transaction with a TransactionCallbackWithoutResult. Otherwise, I could get a different connection that does not have access to the temporary tables (this would happen occasionally before I wrapped everything in a transaction).

    This worked fine in my development environment. However, in production I got the following exception:

        java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction

    This happened when a different task tried to access some of the same tables during the execution of my long-running transaction. What confuses me is that the long-running transaction only inserts into or updates temporary tables. All access to non-temporary tables is selects only. From what documentation I can find, the default Spring transaction isolation level should not cause MySQL to block in this case.

    So my first question: is this the right approach? Can I ensure that I repeatedly get the same connection through a Hibernate template without a long-running transaction? If the long-running-transaction approach is the correct one, what should I check in terms of isolation levels? Is my understanding correct that the default isolation level in Spring/MySQL transactions should not lock tables that are only accessed through selects? What can I do to debug which tables are causing the conflict, and prevent those tables from being locked by the transaction?

    Read the article
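
    Two hedged notes on the question above. Under InnoDB's default REPEATABLE READ, plain SELECTs are indeed non-locking; but INSERT INTO temp_table SELECT ... FROM source_table takes shared locks on the rows it reads when statement-based binary logging is in effect, which would explain "selects-only" tables blocking another task. The conflict itself can be inspected directly:

        -- The TRANSACTIONS section shows which statement waits on which lock:
        SHOW ENGINE INNODB STATUS;

        -- What every connection is running right now:
        SHOW FULL PROCESSLIST;

        -- The timeout behind the exception (50 seconds by default):
        SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';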

  • General SQL Server query performance

    - by Kiril
    Hey guys, this might be stupid, but databases are not my thing :) Imagine the following scenario. A user can create a post, and other users can reply to his post, thus forming a thread. Everything goes in a single table called Posts. All the posts that form a thread are connected with each other through a generated key called ThreadID. This means that when user #1 creates a new post, a ThreadID is generated, and every reply that follows has a ThreadID pointing to the initial post (created by user #1). What I am trying to do is limit the number of replies to, let's say, 20 per thread. I'm wondering which of the approaches below is faster:

      1. I add a new integer column (e.g. Counter) to Posts. After a user replies to the initial post, I update the initial post's Counter field. If it reaches 20, I lock the thread.

      2. After a user replies to the initial post, I select all the posts that have the same ThreadID. If this collection has more than 20 items, I lock the thread.

    For further information: I am using a SQL Server database and the LINQ to SQL entity model. I'd be glad if you'd tell me your opinions on the two approaches or share another, faster approach. Best Regards, Kiril

    Read the article
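
    A hedged sketch of the two approaches above in T-SQL; the table and column names follow the question, while @ThreadID is an assumed parameter and PostID an assumed key column. With an index on ThreadID, approach 2 is a cheap range count at 20 rows per thread; approach 1 saves that count but adds a write to the root post on every reply:

        -- Approach 1: denormalized counter on the thread's initial post
        UPDATE Posts SET Counter = Counter + 1
        WHERE PostID = @ThreadID AND Counter < 20;

        -- Approach 2: count on demand (wants an index on ThreadID)
        SELECT COUNT(*) FROM Posts WHERE ThreadID = @ThreadID;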

  • Can I copy an entire Magento application to a new server?

    - by Gapton
    I am quite new to Magento, and I have a case where a client has a running ecommerce website built with Magento, running on an Amazon AWS EC2 server. If I can locate the web root (/var/www/) and download everything from it (say via FTP), I should be able to boot up my virtual machine with LAMP installed, put every single file in there, and it should run, right?

    So I did just that, and Magento is giving me an error. I am guessing there are configs somewhere that need to be changed in order to make it work, or even a lot of paths that need to be changed, and databases to be replicated. What are the typical things I should get right? The database is one, and the EC2 instance uses nginx while I use plain Apache; I guess that won't work straight out of the box, and I'll probably have to install nginx and a lot of other stuff. But basically, if I set up the environment right, it should run. Is my assumption correct? Thanks for answering!

    Read the article
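
    One hedged item for the migration checklist above: besides the database credentials in app/etc/local.xml, Magento 1.x keeps its base URLs in the core_config_data table, so a copied site commonly errors or redirects to the old host until those rows point at the new machine. The hostname below is a placeholder:

        UPDATE core_config_data SET value = 'http://my-new-host/'
        WHERE path = 'web/unsecure/base_url';

        UPDATE core_config_data SET value = 'https://my-new-host/'
        WHERE path = 'web/secure/base_url';

        -- ...then empty var/cache/ so the change is picked up.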

  • Amazon access key showing in URL for Carrierwave and Fog

    - by kcurtin
    I just switched from storing my images uploaded via Carrierwave locally to using Amazon S3 via the fog gem in my Rails 3.1 app. While images are being added, when I click on an image in my application, the URL is providing my access key and a signature. Here is a sample URL (XXX replaces the string with the info):

        https://s3.amazonaws.com/bucketname/uploads/photo/image/2/IMG_4842.jpg?AWSAccessKeyId=XXX&Signature=XXX%3D&Expires=1332093418

    This is happening in development (localhost:3000) and when I am using Heroku for production. Here is my uploader:

        class ImageUploader < CarrierWave::Uploader::Base
          include CarrierWave::RMagick

          storage :fog

          def store_dir
            "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
          end

          process :convert => :jpg
          process :resize_to_limit => [640, 640]

          version :thumb do
            process :convert => :jpg
            process :resize_to_fill => [280, 205]
          end

          version :avatar do
            process :convert => :jpg
            process :resize_to_fill => [120, 120]
          end
        end

    And my config/initializers/fog.rb:

        CarrierWave.configure do |config|
          config.fog_credentials = {
            :provider              => 'AWS',
            :aws_access_key_id     => 'XXX',
            :aws_secret_access_key => 'XXX',
          }
          config.fog_directory = 'bucketname'
          config.fog_public    = false
        end

    Anyone know how to make sure this information isn't available?

    Read the article

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9 MB on disk, 10 MB of indexes according to pgAdmin. The problem is that inserting them, by whatever method, literally takes ages: up to 3 minutes of 100% disk-busy time. That's not something you want on a production site. It doesn't matter if the inserts are in a transaction, or issued via plain INSERT, multi-row INSERT, COPY FROM, or even INSERT INTO t1 SELECT * FROM t2.

    After noticing this wasn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO ... SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20 MB on disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys.

    Oh, disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referred tables, as 3 MB/sec * 180s is way more data than the 20 MB this new table takes on disk. There was no WAL for the 180s case; I was testing in psql directly (in Django, add ~50% overhead for WAL logging). I tried @commit_on_success, same slowness; I had even implemented multi-row insert and COPY FROM with psycopg2. That's another weird thing: how can 10 MB worth of inserts generate 10x 16 MB log segments?

    Table layout: id serial primary key, a bunch of int32, and 3 foreign keys to:

      - a small table, 198 rows, 16 kB on disk
      - a large table, 1.2M rows, 59 MB data + 89 MB index on disk
      - a large table, 2.2M rows, 198 MB data + 210 MB index

    So, am I doomed to either drop the foreign keys manually, or use the table in a very un-Django way by defining saving bla_id x3 and skip using models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.

    Read the article
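
    A hedged workaround sketch for the bulk-load slowdown above: since the table is regenerated wholesale every hour, the per-row foreign-key checks can be traded for one validation pass by dropping the constraints before the load and re-adding them after. Constraint, column, and table names are illustrative assumptions:

        BEGIN;
        ALTER TABLE hourly_table DROP CONSTRAINT hourly_table_big1_fk;
        -- ...repeat for the other two foreign keys...

        COPY hourly_table FROM STDIN;  -- or INSERT INTO hourly_table SELECT ...

        -- Re-adding validates all rows in one scan instead of row by row:
        ALTER TABLE hourly_table ADD CONSTRAINT hourly_table_big1_fk
            FOREIGN KEY (big1_id) REFERENCES big_table1 (id);
        COMMIT;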

  • Database nesting layout confusion

    - by arzon
    I'm no expert in databases and a beginner in Rails, so here goes something which kinda confuses me... Assume I have three classes as a sample (note that no effort has been made to address any possible Rails reserved-words issue in the sample):

        class File < ActiveRecord::Base
          has_many :records, :dependent => :destroy
          accepts_nested_attributes_for :records, :allow_destroy => true
        end

        class Record < ActiveRecord::Base
          belongs_to :file
          has_many :users, :dependent => :destroy
          accepts_nested_attributes_for :users, :allow_destroy => true
        end

        class User < ActiveRecord::Base
          belongs_to :record
        end

    Upon entering records, the database contents will appear as below. My issue is that if there are a lot of Files for the same Record, there will be duplicate record names. This will also be true if there are multiple Records for the same user in the Users table. I was wondering if there is a better way than this, so as to have one or more Files point to a single Record entry and one or more Records point to a single User. BTW, the File names are unique.

    Files table:

        id  name
        1   name1
        2   name2
        3   name3
        4   name4

    Records table:

        id  file_id  record_name  record_type
        1   1        ForDaisy1    ...
        2   2        ForDonald1   ...
        3   3        ForDonald2   ...
        4   4        ForDaisy1    ...

    Users table:

        id  record_id  username
        1   1          Daisy
        2   2          Donald
        3   3          Donald
        4   4          Daisy

    Is there any way to optimize the database to prevent duplication of entries, or is this really the correct and proper behavior? I spread out the database into different tables to be able to easily add new columns in the future.

    Read the article
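
    A hedged sketch of the usual normalization for the question above: store each user and each record once, and put the foreign key on the "many" side, so many files can point to one record and many records to one user. In Rails this corresponds to reversing the belongs_to/has_many pairs; the generic SQL below reuses the question's names:

        CREATE TABLE users (
            id       INTEGER PRIMARY KEY,
            username VARCHAR(255) NOT NULL UNIQUE
        );

        CREATE TABLE records (
            id          INTEGER PRIMARY KEY,
            user_id     INTEGER NOT NULL REFERENCES users (id),
            record_name VARCHAR(255) NOT NULL,
            record_type VARCHAR(255)
        );

        CREATE TABLE files (
            id        INTEGER PRIMARY KEY,
            record_id INTEGER NOT NULL REFERENCES records (id),
            name      VARCHAR(255) NOT NULL UNIQUE
        );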

  • One config file for multiple environments

    - by ho
    I'm currently working with systems that have quite a lot of configuration settings that are environment-specific (Dev, UAT, Production). Does anyone have any suggestions for minimizing the changes needed to the config file when moving between environments, as well as minimizing the duplication of data in the config file? It's mostly Application settings rather than User settings. The way I'm doing it at the moment is something similar to this:

        <DevConnectionString>xyz</DevConnectionString>
        <DevInboundPath>xyz</DevInboundPath>
        <DevProcessedPath>xyz</DevProcessedPath>

        <UatConnectionString>xyz</UatConnectionString>
        <UatInboundPath>xyz</UatInboundPath>
        <UatProcessedPath>xyz</UatProcessedPath>
        ...
        <Environment>Dev</Environment>

    And then I have a class that reads in the Environment setting via the My.Settings class (it's a VB project) and then uses that to decide which other settings to retrieve. This leads to too much duplication, though, so I'm not sure if it's worth it.

    Read the article

  • Play framework does not return page and static content

    - by Anton
    I'm using the Play framework in production for one of my web projects. From time to time, Play does not render the main page or does not return some of the static content files. I have attached a few screenshots below. The first screenshot displays the Firebug console: loading of the site is stuck at the beginning, when serving the home page. The second screenshot displays the Fiddler console, where 2 static resources are not loading.

    This issue is hard to reproduce; it happens about 1 time in 15, and I have to delete the cache data and reload the page (pressing CTRL-F5 in FF). The issue can be reproduced in most browsers.

    Initially, I was thinking that there was something wrong with the hosting provider. But I have changed hosting providers and the issue has not gone away. The version of Play is 1.2.2, running as a standalone server. Not sure, but perhaps deploying Play to Jetty/Tomcat/Resin would help. I'm also thinking about rewriting the application on another stack (well known to me - J2EE, Spring, whatever). I have no idea how to debug and resolve this issue. Any clue? Has anyone faced the same issue with Play before?

    Read the article

  • SQL Server Database In Single User Mode after Failover

    - by jlichauc
    Here is a weird situation we experienced with a SQL Server 2008 database mirroring failover. We have a pair of mirrored databases running in high-availability mode, and both the principal and mirror showed as synchronized. As part of some maintenance, I triggered a manual failover of the principal to the mirror. However, after the failover the principal was now in single-user mode instead of the expected "Principal/Synchronized" state we usually get. The database had been in multi-user mode on the previous principal before this happened. We ended up stopping all applications, restarting the SQL Server instances, and executing "ALTER DATABASE ... SET MULTI_USER" to bring the database back to the expected "Principal/Synchronized" state in multi-user mode.

    Question: does anyone know where SQL Server stores information about whether a database should be in single-user mode or not? I'm wondering if there is some system database or table that has this setting recorded somewhere. In particular, we once had an incident with the database on the original principal (the one I was failing over to) where, when trying to detach the database, it was put into single-user mode. I'm wondering if that setting is cached somewhere and is the reason that SQL Server put it back into single-user mode after a failover.

    Read the article
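
    A hedged pointer for the question above: the access mode is part of the database's own persisted metadata, exposed through sys.databases, so a single-user flag left over from the earlier detach incident would indeed travel with the mirrored database to the new principal. The database name below is a placeholder:

        -- Inspect the recorded access mode:
        SELECT name, user_access_desc
        FROM sys.databases
        WHERE name = 'MyMirroredDb';

        -- Restore multi-user access, evicting the lone connection if necessary:
        ALTER DATABASE MyMirroredDb SET MULTI_USER WITH ROLLBACK IMMEDIATE;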

  • Why use Entity Framework over Linq2SQL ...

    - by Refracted Paladin
    To be clear, I am not asking for a side-by-side comparison, which has already been asked ad nauseam here on SO. I am also not asking if Linq2Sql is dead, as I don't care. What I am asking is this: I am building internal apps only, for a non-profit organization. I am the only developer on staff. We ALWAYS use SQL Server as our database backend. I design and build the databases as well. I have used L2S successfully a couple of times already.

    Taking all this into consideration, can someone offer me a compelling reason to use EF instead of L2S? I was at Code Camp this weekend, and after an hour-long demonstration on EF, all of which I could have done in L2S, I asked this same question. The speaker's answer was, "L2S is dead..." Very well then! NOT! (see here) I understand EF is what MS WANTS us to use in the future (see here) and that it offers many more customization options. What I can't figure out is whether any of that should, or does, matter for me in this environment.

    One particular issue we have here is that I inherited the core app, which was built on 4 different SQL databases. L2S has great difficulty with this, but when I asked the aforementioned speaker if EF would help me in this regard, he said "No!"

    Read the article

  • Zend Framework: Zend_DB Error

    - by Sergio E.
    I'm trying to learn ZF, but got a strange error after 20 minutes :)

        Fatal error: Uncaught exception 'Zend_Db_Adapter_Exception' with message 'Configuration array must have a key for 'dbname' that names the database instance'

    What does this error mean? I have DB information in my config file:

        resources.db.adapter=pdo_mysql
        resources.db.host=localhost
        resources.db.username=name
        resources.db.password=pass
        resources.db.dbname=name

    Any suggestions?

    EDIT: This is my model file app/models/DbTable/Bands.php:

        class Model_DbTable_Bands extends Zend_Db_Table_Abstract
        {
            protected $_name = 'zend_bands';
        }

    Index controller action:

        public function indexAction()
        {
            $albums = new Model_DbTable_Bands();
            $this->view->albums = $albums->fetchAll();
        }

    EDIT: All code. bootstrap.php:

        protected function _initAutoload()
        {
            $autoloader = new Zend_Application_Module_Autoloader(array(
                'namespace' => '',
                'basePath'  => dirname(__FILE__),
            ));
            return $autoloader;
        }

        protected function _initDoctype()
        {
            $this->bootstrap('view');
            $view = $this->getResource('view');
            $view->doctype('XHTML1_STRICT');
        }

        public static function setupDatabase()
        {
            $config = self::$registry->configuration;
            $db = Zend_Db::factory($config->db);
            $db->query("SET NAMES 'utf8'");
            self::$registry->database = $db;
            Zend_Db_Table::setDefaultAdapter($db);
            Zend_Db_Table_Abstract::setDefaultAdapter($db);
        }

    IndexController.php:

        class IndexController extends Zend_Controller_Action
        {
            public function init()
            {
                /* Initialize action controller here */
            }

            public function indexAction()
            {
                $albums = new Model_DbTable_Bands();
                $this->view->albums = $albums->fetchAll();
            }
        }

    configs/application.ini (database changed and password provided):

        [development : production]
        phpSettings.display_startup_errors = 1
        phpSettings.display_errors = 1
        db.adapter = PDO_MYSQL
        db.params.host = localhost
        db.params.username = root
        db.params.password = pedro
        db.params.dbname = test

    models/DbTable/Bands.php:

        class Model_DbTable_Bands extends Zend_Db_Table_Abstract
        {
            protected $_name = 'cakephp_bands';

            public function getAlbum($id)
            {
                $id = (int)$id;
                $row = $this->fetchRow('id = ' . $id);
                if (!$row) {
                    throw new Exception("Count not find row $id");
                }
                return $row->toArray();
            }
        }

    Read the article

  • Redirect uploaded files to another server, using nginx

    - by Serg ikS
    I am creating a web service of scheduled posts to a social network, and need help dealing with file uploads under high traffic.

    Process overview: a user uploads files to SomeServer (not mine). SomeServer then responds with a JSON string, and my web app should store that JSON response.

    Opt. 1 - Save, cURL POST, delete tmp. The stupid way I made it work: the user uploads files to MyWebApp; MyWebApp cURLs the file on to SomeServer, getting the response.

    Opt. 2 - JS magic. The smart way it could be perfect: the user uploads the file directly to SomeServer, from within an iFrame; MyWebApp gets the response through JavaScript. But this is(?) impossible due to the Same Origin Policy, isn't it?

    Opt. 3 - nginx proxying? The better way for a production server: the user uploads files to MyWebApp; nginx intercepts the file uploads and sends them directly to SomeServer; the JSON response is also intercepted by nginx and processed by MyWebApp.

    Does this make any sense, and what would be the nginx config for, say, a /fileupload location to proxy it to SomeServer?

    Read the article

  • Database Error Handling: What if You have to Call Outside service and the Transaction Fails?

    - by Ngu Soon Hui
    We all know that we can always wrap our database call in a transaction (with or without a proper ORM), in a form like this:

        $con = Propel::getConnection(EventPeer::DATABASE_NAME);
        try {
            $con->begin();
            // do your update, save, delete or whatever here.
            $con->commit();
        } catch (PropelException $e) {
            $con->rollback();
            throw $e;
        }

    This guarantees that if the transaction fails, the database is restored to the correct state. But the problem is, let's say that when I do a transaction, in addition to that transaction I need to update another database (an example would be: when I update an entry in a column in databaseA, another entry in a column in databaseB must be updated). How do I handle this case?

    Let's say this is my code, with three databases that need to be updated (dbA, dbB, dbC):

        $con = Propel::getConnection("dbA");
        try {
            $con->begin();
            // update to dbA
            // update to dbB
            // update to dbC
            $con->commit();
        } catch (PropelException $e) {
            $con->rollback();
            throw $e;
        }

    If dbC fails, I can roll back dbA, but I can't roll back dbB. I think this problem should be database-independent. And since I am using an ORM, this should be ORM-independent as well.

    Update: some of the database transactions are wrapped in an ORM, some are using naked PDO, OLE DB (or whatever bare-minimum language-provided database calls). So my solution has to take care of this. Any idea?

    Read the article
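
    A hedged sketch of the textbook answer to the question above: two-phase commit. Where the engines support it (MySQL exposes it as XA), each database first PREPAREs its branch of the work; only if every participant prepares successfully does the coordinator commit them all, otherwise it rolls them all back. The transaction id is a placeholder, and each sequence runs on that database's own connection:

        XA START 'tx-dbA';
        -- ... updates to dbA ...
        XA END 'tx-dbA';
        XA PREPARE 'tx-dbA';      -- likewise on dbB and dbC

        -- Coordinator: if every participant prepared successfully...
        XA COMMIT 'tx-dbA';       -- repeat per participant
        -- ...otherwise undo everything:
        XA ROLLBACK 'tx-dbA';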

  • Database and logic layer for ASP.NET MVC application

    - by Ismail
    I'm going to start a new project which is going to be small initially but may grow big over the years. I'm strongly convinced that I'm going to use ASP.NET MVC with jQuery for the UI. I want to go for MySQL as the database for some reasons, but am worried about a few things. I have a good few years of experience working on SQL Server databases, and on one project I had a bad experience creating and managing stored procedures on a MySQL database. I'm totally new to LINQ, but I see that it is easier to use once you are familiar with it.

    The first thing is that accessing data should be easy. So I thought I should use LINQ to SQL with MySQL, but somewhere I read that it is not directly supported; the MySQL .NET connector instead adds support for the Entity Framework. I don't know what the pros and cons of that are. I would love it if I could implement the repository pattern, as it allows applying filters in the logic layer rather than in the data access layer. Will that be possible if I use the Entity Framework? I'm not clear on how I should go about all this, or whether I should just forget everything and directly use LINQ to SQL on SQL Server. I'm also concerned about performance. Someone told me that if we use the Entity Framework, it fetches a lot of data and then filters it. Is that right?

    So the questions basically are:

      - Is LINQ with MySQL possible? If yes, where can I get more details on it?
      - What are the pros and cons of using the Entity Framework with MySQL?
      - Will it be easy to access data using the Entity Framework with MySQL?
      - Will I be able to implement the repository pattern, which allows applying filters in the logic layer rather than the data access layer (when I use the Entity Framework with MySQL)?
      - Does it fetch a whole lot of data from the database and then apply filters on it?

    If that sounds like too many questions, then, if you can, just let me know what you would do (with a considered reason) in this situation as a person experienced in this area; that should answer my question.

    Read the article

  • Changing character encoding in MySQL, PHP scripts, HTML

    - by Sandman
    So, I have built on this system for quite some time, and it is currently outputting Latin1 (ISO-8859-1) to the web browser. These are the components:

      - MySQL: all data is stored with the Latin1 character set
      - PHP: all PHP text files are stored on disk with Latin1 encoding
      - HTML: the output has the http-equiv="content-type" content="text/html; charset=iso-8859-1" meta tag

    So, I'm trying to understand how the encoding of the different parts comes into play in my workflow. If I open a PHP script, change its encoding within the text editor to UTF-8, save it back to disk, and reload the web browser, the text is all messed up - unless the text comes from the DB. If I change the encoding of the DB to UTF-8 and keep the PHP files in Latin1, I have to use utf8_decode() for the data to display correctly. And if I change the HTML code, the browser will read it incorrectly.

    So yeah, I realise that if I want to "upgrade" to UTF-8, I have to update all three parts of this setup for it to work correctly, but since it's a huge system with some 180k lines of PHP code and millions of posts in a lot of databases/tables, I don't want to start something like this without understanding everything correctly. What haven't I thought about? What could mess this up beyond fixing? What are the procedures for changing the encoding of an entire MySQL installation, and what's the easiest way to change the encoding of hundreds or thousands of PHP files on disk? The META tag is luckily added dynamically, so I'll change that in one place only :) Let me hear about your experiences with this.

    Read the article
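
    For the MySQL part of the question above, a hedged sketch of the usual conversion (database and table names are placeholders; rehearse on a copy first, since CONVERT TO assumes the stored bytes really are Latin1, and Latin1 columns that secretly hold UTF-8 bytes need a different route via BLOB):

        -- Default character set for objects created from now on:
        ALTER DATABASE mydb CHARACTER SET utf8 COLLATE utf8_general_ci;

        -- Convert existing data, table by table:
        ALTER TABLE posts CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;

        -- And have every PHP connection talk UTF-8:
        SET NAMES utf8;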

  • Selecting a whole database over an individual table to output to file

    - by Daniel Wrigley
    :::::::: EDIT ::::::::

    New code for people to have a look at; one question I have with this is: where do I set where the *.gz file is saved?

        $backupFile = $dbname . date("Y-m-d-H-i-s") . '.gz';
        $command = "mysqldump --opt -h $dbhost -u $dbuser -p $dbpass $dbname | gzip > $backupFile";
        system($command);

    Also, why the hell can you not reply to your own post to answer it? :(

    :::::::: EDIT ::::::::

    OK, I'm having trouble finding out how to select a full database for backup as an *.sql file, rather than only an individual table. On the localhost I have several databases, with one named "foo", and it is that which I want to back up, not any of the individual tables inside the database "foo".

    The code to connect to the database:

        //Database Information
        $dbhost = "localhost";
        $dbname = "foo";
        $dbuser = "bar";
        $dbpass = "rulz";

        //Connect to database
        mysql_connect($dbhost, $dbuser, $dbpass) or die("Could not connect: " . mysql_error());
        mysql_select_db($dbname) or die(mysql_error());

    The code to back up the database:

        // Grab the time to know when this post was submitted
        $time = date('Y-m-d-H-i-s');

        $tableName  = 'foo';
        $backupFile = '/sql/backup/' . $time . '.sql';
        $query      = "SELECT * INTO OUTFILE '" . $backupFile . "' FROM " . $tableName . "";
        $result     = mysql_query($query) or die("Database query died: " . mysql_error());

    My brain is hurting near the end of the day, so no doubt I've missed something very obvious. Thanks in advance to anyone helping me out.

    Read the article

  • How to hand-over a TCP listening socket with minimal downtime?

    - by Shtééf
    While this question is tagged EventMachine, generic BSD-socket solutions in any language are much appreciated too.

    Some background: I have an application listening on a TCP socket. It is started and shut down with a regular System V style init script. My problem is that it needs some time to start up before it is ready to service the TCP socket. It's not too long, perhaps only 5 seconds, but that's 5 seconds too long when a restart needs to be performed during a workday. It's also crucial that existing connections remain open and are finished normally. Reasons for a restart of the application are patches, upgrades, and the like. I unfortunately find myself in the position that, every once in a while, I need to do this kind of thing in production.

    The question: I'm looking for a way to do a neat hand-over of the TCP listening socket from one process to another, and as a result get only a split second of downtime. I'd like existing connections / sockets to remain open and finish processing in the old process, while the new process starts servicing new connections. Is there some proven method of doing this using BSD sockets? (Bonus points for an EventMachine solution.) Are there perhaps open-source libraries out there implementing this that I can use as-is, or use as a reference? (Again, non-Ruby and non-EventMachine solutions are appreciated too!)

    Read the article

  • Advanced All In One .NET Framework

    - by alfredo dobrekk
    Hi, I'm starting a new project that would basically take input from the user and save it to a database, across about 30 screens, and I would like to find a framework that provides the maximum number of these features out of the box:

      - .NET C#, Windows Forms
      - unit testing, continuous integration
      - screens with lists, combo boxes, text boxes, and add/delete/save/cancel actions that are easy to update when you add a property to your classes or a field to your database
      - auto-completion on controls to help the user find their way
      - use of an ORM like NHibernate
      - easy multithreading and display of wait screens for the user
      - easy undo/redo
      - tabbed child windows, search forms
      - ability to grant access to some functionality according to user profiles
      - MVP/MVVM or whatever design patterns
      - either some code generation from the database to C# classes, or generation of the database schema from C# classes
      - some kind of database versioning / upgrade, to easily update the database when I release patches to the application once in production
      - automatic control resizing
      - code-metrics analysis
      - some code generator I can use against my entities that would generate a rough form I can rearrange afterwards
      - code documentation generator
      - ...

    Any ideas? I know it's a lot, but I really would like to use existing code to build upon so I can focus on business rules. Could splitting the requirements across 3 or 4 existing open-source frameworks be possible? Do you have any suggestions to add to the list before starting? What open-source tools would you use to achieve these?

    Read the article

  • Which tools to use and how to find file descriptors leaking from Glassfish?

    - by cclark
    We release new code to production every week and Glassfish hasn't had any problems. This weekend we had to move racks at our hosting provider. There were not any code changes (they just powered off, moved, re-racked, and powered on), but we're on a new network infrastructure and suddenly we're leaking file descriptors like a sieve. So I'm guessing there is some sort of connection attempting to be made which now fails due to a network change. I'm running Glassfish v2ur2-b04/AS9.1_02 on RHEL4 with an embedded IMQ instance. After the move I started seeing:

        [#|2010-04-25T05:34:02.783+0000|SEVERE|sun-appserver9.1|javax.enterprise.system.container.web|_ThreadID=33;_ThreadName=SelectorThread-?4848;_RequestID=c4de6f6d-c1d6-416d-ac6e-49750b1a36ff;|WEB0756: Caught exception during HTTP processing.
        java.io.IOException: Too many open files
            at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        ...
        [#|2010-04-25T05:34:03.327+0000|WARNING|sun-appserver9.1|javax.enterprise.system.stream.err|_ThreadID=34;_ThreadName=Timer-1;_RequestID=d27e1b94-d359-4d90-a6e3-c7ec49a0f383;|java.lang.NullPointerException
            at com.sun.jbi.management.system.AutoAdminTask.pollAutoDirectory(AutoAdminTask.java:1031)

    Using lsof, I check the number of file descriptors and I see quite a few entries which look like:

        java  18510  root  8556u  sock  0,4  1555182  can't identify protocol
        java  18510  root  8557u  sock  0,4  1555320  can't identify protocol
        java  18510  root  8558u  sock  0,4  1555736  can't identify protocol
        java  18510  root  8559u  sock  0,4  1555883  can't identify protocol

    If I do a count of open file descriptors every minute, I see it growing by 12 every minute. I have no idea what these sockets are. I've undeployed my application, so there is only a plain Glassfish instance running, and I still see it leaking 12 file descriptors a minute. So I think this leak is in Glassfish or potentially IMQ.

    What approach should I take to tracking down these sockets of unknown protocol? What tools can I use (or flags can I pass to lsof) to get more information about where to look? Thanks, chuck

    Read the article

  • Rails Binary Stream support

    - by Craig Walker
    I'm going to be starting a project soon that requires support for large-ish binary files. I'd like to use Ruby on Rails for the webapp, but I'm concerned about the BLOB support. In my experience with other languages, frameworks, and databases, BLOBs are often overlooked and thus have poor, difficult, and/or buggy functionality. Does RoR support BLOBs adequately? Are there any gotchas that creep up once you're already committed to Rails?

    BTW: I want to be using PostgreSQL and/or MySQL as the backend database. Obviously, BLOB support in the underlying database is important. For the moment, I want to avoid focusing on the DB's BLOB capabilities; I'm more interested in how Rails itself reacts. Ideally, Rails should be hiding the details of the database from me, and so I should be able to switch from one to the other. If this is not the case (i.e. there's some problem with using Rails with a particular DB), then please do mention it.

    UPDATE: Also, I'm not just talking about ActiveRecord here. I'll need to handle binary files on the HTTP side (file upload, effectively). That means getting access to the appropriate HTTP headers and streams via Rails. I've updated the question title and description to reflect this.

    Read the article

  • Database frontend for multiple db engines

    - by xeroxed_yeti
    Hey Stack Overflow, yeah, it's spring and a lot of things are happening to me... I'm also changing some software things on my computer, because suddenly everything seems boring after starting my laptop. I even changed my wallpaper!!! Besides that, I'm looking for a new database frontend, and after several Google queries I didn't find the right software. You have to know, my laptop and me are very very special :) I'm looking for a database frontend which should have the following features:

      - can access PostgreSQL and MySQL databases
      - can handle schemata
      - offers a nice SQL query tool
      - supports import and export functionality (something like tab-separated text files)
      - is free
      - looks awesome - every time a colleague comes to my office he must get the feeling: oh boy, this man really knows his job and should get more money!

    At the moment I use phpMyAdmin, phpPgAdmin, pgAdmin III, mysqladmin and DbVisualizer. Furthermore, I was a big fan of Aqua Data Studio until it became commercial. That tool offers a great variety of functionality which can simplify a programmer's life. However, now you have to buy a license... I'm a scientist and money for software is limited =) So it's my first time (question) here at Stack Overflow; please be cheerful :)

    Read the article

  • Publishing a WCF Server and client and their endpoints

    - by Ahmadreza
    Imagine developing a WCF solution with two projects (a WCF service, and a web application as the WCF client). As long as I'm developing these two projects in Visual Studio and referencing the service in the client (web application) as a service reference, there is no problem. Visual Studio automatically assigns a port for the WCF server and generates all the needed configuration, including the server and client bindings, to something like this on the server:

        <service behaviorConfiguration="DefaultServiceBehavior" name="MYWCFProject.MyService">
          <endpoint address="" binding="wsHttpBinding" contract="MYWCFProject.IMyService">
            <identity>
              <dns value="localhost" />
            </identity>
          </endpoint>
          <host>
            <baseAddresses>
              <add baseAddress="http://localhost:8731/MyService.svc" />
            </baseAddresses>
          </host>
        </service>

    and on the client:

        <client>
          <endpoint address="http://localhost:8731/MyService.svc" binding="wsHttpBinding"
              bindingConfiguration="WSHttpBinding_IMyService" contract="MyWCFProject.IMyService"
              name="WSHttpBinding_IMyService">
            <identity>
              <dns value="localhost" />
            </identity>
          </endpoint>
        </client>

    The problem is that I want to frequently publish these two projects to two different servers as my production servers, where the service URL will be "http://mywcfdomain/MyService.svc". I don't want to change the config file every time I publish my server project. The question is: is there any feature in Visual Studio 2008 to automatically change the URLs, or do I have to define two different endpoints and set them within my code (based on a parameter in my configuration, for example Development/Published)?

    Read the article
