Search Results

Search found 10028 results on 402 pages for 'berkeley db'.


  • MySQL, PHP, and Apache errors while connecting to DB

    - by cypherdelton
    I'm having two errors when I test the "mysql_connect()" function in PHP. I just installed PHP and MySQL from scratch. These errors may be related, so I will post them both here: Warning: mysql_connect() [function.mysql-connect]: Headers and client library minor version mismatch. Headers:50158 Library:50518 in /usr/local/apache2/htdocs/index.php on line 6 Warning: mysql_connect() [function.mysql-connect]: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) in /usr/local/apache2/htdocs/index.php on line 6 Many websites say to start the MySQL server if you are getting error #2. Whenever I execute the command mysqld start --user=mysql I get the error "mysqld: Too many arguments (first extra is 'start')". Adding the "&" to the end of the command makes no difference (I don't know if it is supposed to). Thanks, Cypher Delton

    Read the article

  • Automatic Backup and Restore MySQL db from Web Server to Local Machine

    - by Rhepungus
    I have a web app that is supposed to run on a single local machine (kiosk display), but I want the option to let a user make changes on the web (from any PC) and update the instance on their local PC (kiosk display). From what I imagine, the MySQL instance on the web will just replace the MySQL instance on the local (kiosk display) machine. Does anyone know of a way to do this? I am open to a product or to coding it myself... I would appreciate any info or brainstorming. Thanks.

    Read the article

  • Codeigniter : database configuration

    - by nppCods
    I am a beginner with CI so I don't have much knowledge of it. My problem is: I am not using any database. All the records are fetched from JSON, therefore I don't think I need to configure a database... As CI says, it doesn't necessarily require one. So my database.php configuration is: $active_group = 'default'; $active_record = FALSE; $db['default']['hostname'] = ''; $db['default']['username'] = ''; $db['default']['password'] = ''; $db['default']['database'] = ''; $db['default']['dbdriver'] = 'mysql'; $db['default']['dbprefix'] = ''; $db['default']['pconnect'] = TRUE; $db['default']['db_debug'] = TRUE; $db['default']['cache_on'] = FALSE; $db['default']['cachedir'] = ''; $db['default']['char_set'] = 'utf8'; $db['default']['dbcollat'] = 'utf8_general_ci'; $db['default']['swap_pre'] = ''; $db['default']['autoinit'] = TRUE; $db['default']['stricton'] = FALSE; It works on my local system but doesn't work on the live server, which surprised me. Then I provided the hostname, username, password and dbdriver, and it worked. My question is: is it necessary to provide all these details if I am not using a database? Thank you for your suggestions.

    Read the article

  • Problem with user logins after db Restore

    - by JJgates
    I have two SQL 2005 instances that reside on different networks. I need to back up a database from instance A and restore it to a database on instance B on a weekly basis so that both databases hold the same data. After the restore, the login SIDs on database B no longer match, and therefore users can't log into database B and the connection strings for the web application it supports are broken. Is there a workaround for this? Thanks.
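
    A common way to re-link the restored database users to the logins on instance B, sketched with a hypothetical account name (app_user stands in for the real one; run it in database B after each restore):

      -- Re-point the database user at the server login of the same name,
      -- fixing the SID mismatch created by restoring from another instance.
      ALTER USER app_user WITH LOGIN = app_user;

      -- Older alternative available on SQL Server 2005 that auto-matches
      -- orphaned users to logins by name:
      EXEC sp_change_users_login 'Auto_Fix', 'app_user';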

    Read the article

  • Dependency Injection: How to pass DB around?

    - by Stephane
    Edit: This is a conceptual question first and foremost. I can make applications work without knowing this, but I'm trying to learn the concept. I've seen lots of videos with related classes and that makes sense, but when it comes to classes wrapping around other classes, I can't seem to grasp where things should be instantiated/passed around. =-=-=-=-=-=-= Question: Let's say I have a simple page that loads data from a table, manipulates the result and displays it. Simple. I'm going to use '=' for instantiating a class and '-' for passing a class in using constructor injection. It seems to me that the database has to be passed from one end of the application to the other which doesn't seem right. Here's how I would do it if I wanted to separate concerns: index =>Controller =>Model Layer =>Database =>DAO->Database I have this rule in my head that says I'm not supposed to create objects inside other objects. So what do I do with the Database? Or even the Model for that matter? I'm obviously missing something so basic about this. I would love a simplified example so that I can move forward in my code. I feel really hamstrung by this.

    Read the article

  • issue with selecting 1 or 0 in mysql db

    - by jason
    I am having trouble with the following SQL statement. I have had this issue before but I can't remember how I fixed it; I am guessing the problem is that MySQL sees 0 as NULL? Note that I didn't show the first part of the SELECT statement as it's irrelevant; my SQL works, but it shows rows with toffline equal to 1 as well as 0... FROM table1 AS tb1 LEFT JOIN table2 AS tb2 ON tb2.id = tb1.id2 LEFT JOIN table3 AS tb3 ON tb3.id = tb1.id3 AND tb1.toffline = 0 I have also tried AND NOT tb1.toffline = 1 and also WHERE tb1.toffline = 0, all with the same result...
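
    For reference, a condition placed in a LEFT JOIN's ON clause only restricts which rows of the joined table are matched; it does not remove rows from the driving table. A minimal sketch of filtering tb1 itself (the select list is invented since the post omits it, and whether this resolves the poster's specific case is not confirmed):

      SELECT tb1.*
      FROM table1 AS tb1
      LEFT JOIN table2 AS tb2 ON tb2.id = tb1.id2
      LEFT JOIN table3 AS tb3 ON tb3.id = tb1.id3
      WHERE tb1.toffline = 0;  -- filter the driving table in WHERE, not in a join's ON clause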

    Read the article

  • Synchronizing non-DB SQL Server objects

    - by DigDoug
    There are a number of tools available for synchronizing Tables, Indexes, Views, Stored Procedures and objects within a database. (We love RedGate here, and throw a lot of money their way). However, I'm having a very difficult time finding tools that will help with Jobs, Logins and Linked Servers. Do these things exist? Am I missing something obvious?

    Read the article

  • DB Strategy for inserting into a high read table (Sql Server)

    - by Tom
    Looking for strategies for a very large table whose data is maintained for reporting and historical purposes, while only a very small subset of that data is used in daily operations. Background: We have Visitor and Visits tables which are continuously updated by our consumer-facing site. These tables contain information on every visit and visitor, including bots and crawlers, direct traffic that does not result in a conversion, etc. Our back-end site allows management of the visitors (leads) from the front-end site. Most of the management occurs on a small subset of our visitors (visitors that become leads). The vast majority of the data in our visitor and visit tables is maintained only for a much smaller subset of user activity (basically reporting-type functionality). This is NOT an indexing problem; we have done all we can with indexing and keeping our indexes clean, small, and not fragmented. PS: We do not currently have the budget or expertise for a data warehouse. The problem: We would like the system to be more responsive to our end users when they are querying, for instance, the list of their assigned leads. Currently the query runs against a huge data set of mostly irrelevant data. I am pondering a few ideas. One involves new tables and a fairly major re-architecture; I'm not asking for help on that. The other involves creating redundant data (for instance, a Visitor_Archive and a Visitor_Small table), where the larger visitor and visit tables exist for inserts and history/reporting, and the smaller Visitor_Small table would exist for managing leads (sending a lead an email, getting a lead's phone number, pulling my list of leads, etc.). The reason I am reaching out is that I would love opinions on the best way to keep the Visitor_Archive and Visitor_Small tables in sync... Replication? Can I use replication to replicate only data with a certain column value (FooID = x)? Any other strategies?
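
    SQL Server replication does let you put a row filter on an article, but as another option here is a minimal trigger-based sketch, using the table names from the post; the column names and the FooID = 1 condition are placeholders for whatever actually marks a visitor as a lead:

      -- Copies qualifying new rows from the big table into the small one at insert time.
      CREATE TRIGGER trg_Visitor_CopyLeads
      ON dbo.Visitor
      AFTER INSERT
      AS
      BEGIN
          SET NOCOUNT ON;
          INSERT INTO dbo.Visitor_Small (VisitorID, FooID)
          SELECT i.VisitorID, i.FooID
          FROM inserted AS i
          WHERE i.FooID = 1;  -- placeholder for the "became a lead" rule
      END;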

    Read the article

  • Is this a ridiculous way to structure a DB schema, or am I completely missing something?

    - by Jim
    I have done a fair bit of work with relational databases, and think I understand the basic concepts of good schema design pretty well. I was recently tasked with taking over a project where the DB was designed by a highly-paid consultant. Please let me know if my gut instinct - "WTF??!?" - is warranted, or is this guy such a genius that he's operating out of my realm? The DB in question is for an in-house app used to enter requests from employees. Just looking at a small section of it, you have information on the users, and information on the request being made. I would design this like so: User table: UserID (primary key, indexed, no dupes) FirstName LastName Department Request table: RequestID (primary key, indexed, no dupes) <...> various data fields containing request details UserID -- foreign key associated with User table Simple, right? The consultant designed it like this (with sample data): UsersTable UserID FirstName LastName 234 John Doe 516 Jane Doe 123 Foo Bar DepartmentsTable DepartmentID Name 1 Sales 2 HR 3 IT UserDepartmentTable UserDepartmentID UserID Department 1 234 2 2 516 2 3 123 1 RequestTable RequestID UserID <...> 1 516 blah 2 516 blah 3 234 blah The entire database is constructed like this, with every piece of data encapsulated in its own table, with numeric IDs linking everything together. Apparently the consultant had read about OLAP and wanted the 'speed of integer lookups'. He also has a large number of stored procedures to cross-reference all of these tables. Is this a valid design for a small to mid-sized SQL DB? Thanks for comments/answers...
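
    For comparison, a sketch in generic SQL of the two-table design the poster describes (the request-detail columns are left as a placeholder and the column types are invented):

      CREATE TABLE Users (
          UserID     INT PRIMARY KEY,   -- indexed, no dupes
          FirstName  VARCHAR(50),
          LastName   VARCHAR(50),
          Department VARCHAR(50)
      );

      CREATE TABLE Requests (
          RequestID  INT PRIMARY KEY,
          -- <...> various data fields containing request details
          UserID     INT NOT NULL REFERENCES Users (UserID)   -- foreign key to Users
      );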

    Read the article

  • Using Zend_Db and multiple tables

    - by Yacoby
    I have a normalized database that stores locations of files on the internet. A file may have multiple locations spread across different sites. I am storing the URLs in two parts (Site.UrlStart, FileLocation.UrlEnd). The UrlEnd is the part unique to that file (for the site). Simplified DB structure: I am using Zend_Db as my ORM (if it is that), with classes for the tables inheriting from Zend_Db_Table_Abstract. The issue is that retrieving the location data (e.g. the URL) requires the use of multiple tables and, as far as I can make out, I would either have to use both table classes (thereby exposing my table structure) or scatter SQL all over my application, neither of which sounds appealing. The only solution I can see is to create a façade that acts like Zend_Db_Table_Abstract (maybe inherits from it?) and hides the fact that the data is actually in two tables. My questions are as follows: Am I going in the right direction in creating a façade class (are there other alternatives)? Should the façade class inherit from Zend_Db_Table_Abstract?

    Read the article

  • Is Zend Framework a total waste of my time?

    - by Citizen
    Ok, I'm about 50% done with the "30 minute" quickstart guide from Zend. I must be missing something, because this seems like a total waste of time. The point of this quickstart guide is to create a guestbook, something I could do in 5 minutes with regular naked non-framework PHP. Here's my path to the Zend Framework: c:/program files/wamp/www/_zend/ Here's my path to my quickstart project: c:/program files/wamp/www/_zend/bin/quickstart/ I have a number of questions at this point: http://framework.zend.com/docs/quickstart/create-a-model-and-database-table 1: I'm running the command line to run my database loading script. I get an error stating that it can't find Zend/AutoLoader.php because my path to the Zend library is wrong. I followed all of the steps. I defined the path to my Zend library in the main config file, but for some reason it's defined again in my db loader. In all of these scripts that they have me load, the relative path to the Zend library is given as /../library. Problem is, there's nothing in that folder. To get to my actual Zend folder, you'd need to be (relatively) at /../../../../library. Which brings me to my 2nd question: 2: Where the #$#$ are the main Zend files supposed to be? The install directions were basically "put it wherever you want", when the real answer (after a bunch of errors and wasted time) was "put it somewhere so that it's really easy to type the full path a thousand times in the command line" and "it also better be in a runnable place on your webserver since it's going to create your quickstart application in a subdirectory within zend". Which brings us to the third question: 3: Am I supposed to have this library in both the parent core Zend (wamp/_zend/library) AND my application (quickstart/library)? 4: If that is the case, it seems like a ton of wasted files to be uploading. I'd like to use Zend to create products that my customers will download. 5 megs of overhead seems like a bit much. Zend claims you can use these library components separately, but it looks to me like I'm going to have to upload them every time. Which leads to the next question: 5: It appears that perhaps Zend is more for a single application that is not supposed to be distributed. Is this not the case? 6: According to their default file structure, everything but my /public folder would be above public_html on my server if I wanted this to rest on my TLD. I would need to rename every reference of /public/ to /public_html/, or am I missing something else?

    Read the article

  • Zend database query result converts column values to null

    - by David Zapata
    Hi again. I am using the following instructions to get some records from my database. Create the needed models (from the params module): $obj_paramtype_model = new Params_Model_DbTable_Paramtype(); $obj_param_model = new Params_Model_DbTable_Param(); Getting the available locales from the database: // This returns a Zend_Db_Table_Row_Abstract class object $obj_paramtype = $obj_paramtype_model->getParamtypeByValue('available_locales'); // This is a query used to add conditions to the next statement. This is executed from the Params_Model_DbTable_Param instance class, which depends on the Params_Model_DbTable_Paramtype class (the reference map and dependentTables arrays are fine in both classes) $obj_select = $this->select()->where('deleted_at IS NULL')->order('name'); // Execute the next query, applying the select restrictions. This returns a Zend_Db_Table_Rowset_Abstract class object. This means "Find Params by Paramtype" $obj_params_rowset = $obj_paramtype->findDependentRowset('Params_Model_DbTable_Param', 'Paramtype', $obj_paramtype); // Here the firebug log displays the queries.... Zend_Registry::get('log')->debug($obj_params_rowset); I have a profiler for all my DB executions from Zend. At this point the log and profiler objects (which include Firebug writers) show the executed SQL queries, and the last line displays the resulting Zend_Db_Table_Rowset_Abstract class object. If I execute the SQL queries in a MySQL client, the results are as expected. But the Zend Firebug log writer displays as NULL the column values with Latin characters (ñ). In other words, the external SQL client shows es_CO | Español de Colombia and en_US | English of United States, but the query results from Zend display (and return) es_CO | null and en_US | English of United States. I've deleted the ñ character from Español de Colombia and the query results are just fine in my Zend log Firebug screen, and in the final Zend Form element. The MySQL database, tables and columns use UTF-8 with the utf8_unicode_ci collation. All my Zend Framework pages are in the UTF-8 charset. I'm using XAMPP 1.7.1 (PHP 5.2.9, Apache on port 90 and MySQL 5.1.33-community) running on Windows 7 Ultimate, with Zend Framework 1.10.1. I'm sorry if there is so much information, but I don't really know why that could happen, so I tried to provide as much related information as I could to help find an answer.
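
    One routine thing to check for symptoms like this (an assumption about the cause, not something confirmed in the post) is whether the connection itself is talking to MySQL in UTF-8, since the tables already are; issued once per connection, for example:

      -- Make the client/connection character set match the utf8 tables,
      -- so 'ñ' survives the round trip instead of coming back as NULL.
      SET NAMES 'utf8';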

    Read the article

  • Sharepoint OLE DB - cannot insert records? "Field not updateable" error

    - by Pandincus
    I need to write a simple C# .NET application to retrieve, update, and insert some data in a Sharepoint list. I am NOT a Sharepoint developer, and I don't have control over our Sharepoint server. I would prefer not to have to develop this in a proper sharepoint development environment simply because I don't want to have to deploy my application on the Sharepoint server -- I'd rather just access data externally. Anyway, I found out that you can access Sharepoint data using OLE DB, and I tried it successfully using some ADO.NET: var db = DatabaseFactory.CreateDatabase(); DataSet ds = new DataSet(); using (var command = db.GetSqlStringCommand("SELECT * FROM List")) { db.LoadDataSet(command, ds, "List"); } The above works. However, when I try to insert: using (var command = db.GetSqlStringCommand("INSERT INTO List ([HeaderName], [Description], [Number]) VALUES ('Blah', 'Blah', 100)")) { db.ExecuteNonQuery(command); } I get this error: Cannot update 'HeaderName'; field not updateable. I did some Googling and apparently you cannot insert data through OLE DB! Does anyone know if there are some possible workarounds? I could try using Sharepoint Web Services, but I tried that initially and was having a heck of a time authenticating. Is that my only option?

    Read the article

  • postfix: Temporary lookup failure

    - by mk_89
    I have followed the tutorials step by step for installing and configuring postfix https://help.ubuntu.com/community/Postfix https://help.ubuntu.com/community/PostfixBasicSetupHowto I am trying to test the services to whether Temporary lookup failure error telnet localhost 25 250 2.1.0 Ok rcpt to: fmaster@localhost 451 4.3.0 <fmaster@localhost>: Temporary lookup failure rcpt to: info@localhost 451 4.3.0 <info@localhost>: Temporary lookup failure I have tried searching the web but I have found no solutions, why am I getting this problem? mail.log Sep 24 01:03:05 bookcdb postfix/smtpd[21055]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <info@localhost>: Temporary lookup failure; from=<root@localhost> to=<info@localhost> proto=ESMTP helo=<localhost> Sep 24 01:03:19 bookcdb postfix/smtpd[21055]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <root@localhost>: Temporary lookup failure; from=<root@localhost> to=<root@localhost> proto=ESMTP helo=<localhost> Sep 24 01:08:19 bookcdb postfix/smtpd[21055]: timeout after RCPT from unknown[::1] Sep 24 01:08:19 bookcdb postfix/smtpd[21055]: disconnect from unknown[::1] Sep 24 01:10:49 bookcdb postfix/anvil[21059]: statistics: max connection rate 1/60s for (smtp:::1) at Sep 24 01:00:49 Sep 24 01:10:49 bookcdb postfix/anvil[21059]: statistics: max connection count 1 for (smtp:::1) at Sep 24 01:00:49 Sep 24 01:10:49 bookcdb postfix/anvil[21059]: statistics: max cache size 1 at Sep 24 01:00:49 Sep 24 01:15:36 bookcdb postfix/smtpd[22175]: error: open database /var/lib/mailman/data/aliases.db: No such file or directory Sep 24 01:15:36 bookcdb postfix/smtpd[22175]: warning: hostname localhost does not resolve to address ::1: No address associated with hostname Sep 24 01:15:36 bookcdb postfix/smtpd[22175]: connect from unknown[::1] Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: error: open database /etc/postfix/transport.db: No such file or directory Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport lookup error for "*" Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport lookup error for "*" Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport lookup error for "root@localhost" Sep 24 01:15:55 bookcdb postfix/trivial-rewrite[22195]: warning: transport_maps lookup failure Sep 24 01:15:59 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport is unavailable. 
open database /etc/postfix/transport.db: No such file or directory Sep 24 01:15:59 bookcdb postfix/trivial-rewrite[22195]: warning: hash:/etc/postfix/transport lookup error for "fmaster@localhost" Sep 24 01:15:59 bookcdb postfix/trivial-rewrite[22195]: warning: transport_maps lookup failure Sep 24 01:15:59 bookcdb postfix/smtpd[22175]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <fmaster@localhost>: Temporary lookup failure; from=<root@localhost> to=<fmaster@localhost> proto=ESMTP helo=<localhost> Sep 24 01:16:30 postfix/smtpd[22175]: last message repeated 5 times Sep 24 01:16:30 bookcdb postfix/smtpd[22175]: disconnect from unknown[::1] Sep 24 01:19:50 bookcdb postfix/anvil[22177]: statistics: max connection rate 1/60s for (smtp:::1) at Sep 24 01:15:36 Sep 24 01:19:50 bookcdb postfix/anvil[22177]: statistics: max connection count 1 for (smtp:::1) at Sep 24 01:15:36 Sep 24 01:19:50 bookcdb postfix/anvil[22177]: statistics: max cache size 1 at Sep 24 01:15:36 Sep 24 01:20:32 bookcdb postfix/qmgr[21039]: D0C596E0B34: from=<[email protected]>, size=442, nrcpt=1 (queue active) Sep 24 01:20:32 bookcdb postfix/qmgr[21039]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 01:20:32 bookcdb postfix/error[22402]: D0C596E0B34: to=<[email protected]>, relay=none, delay=5369, delays=5369/0.01/0/0.09, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 01:24:16 bookcdb postfix/smtpd[22573]: error: open database /var/lib/mailman/data/aliases.db: No such file or directory Sep 24 01:24:16 bookcdb postfix/smtpd[22573]: warning: hostname localhost does not resolve to address ::1: No address associated with hostname Sep 24 01:24:16 bookcdb postfix/smtpd[22573]: connect from unknown[::1] Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: error: open database /etc/postfix/transport.db: No such file or directory Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "*" Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "*" Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "root@localhost" Sep 24 01:24:43 bookcdb postfix/trivial-rewrite[22594]: warning: transport_maps lookup failure Sep 24 01:25:14 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. 
open database /etc/postfix/transport.db: No such file or directory Sep 24 01:25:14 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "[email protected]" Sep 24 01:25:14 bookcdb postfix/trivial-rewrite[22594]: warning: transport_maps lookup failure Sep 24 01:25:14 bookcdb postfix/smtpd[22573]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <[email protected]>: Temporary lookup failure; from=<root@localhost> to=<[email protected]> proto=ESMTP helo=<localhost> Sep 24 01:25:32 bookcdb postfix/qmgr[21039]: 2E5C36E0A07: from=<[email protected]>, size=438, nrcpt=1 (queue active) Sep 24 01:25:32 bookcdb postfix/qmgr[21039]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 01:25:32 bookcdb postfix/qmgr[21039]: 0EA3A6E0ACC: from=<[email protected]>, size=438, nrcpt=1 (queue active) Sep 24 01:25:32 bookcdb postfix/error[22631]: 2E5C36E0A07: to=<[email protected]>, orig_to=<root>, relay=none, delay=30203, delays=30203/0.01/0/0.1, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 01:25:32 bookcdb postfix/error[22632]: 0EA3A6E0ACC: to=<[email protected]>, orig_to=<root>, relay=none, delay=30115, delays=30115/0.01/0/0.11, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 01:25:58 bookcdb postfix/smtpd[22573]: warning: non-SMTP command from unknown[::1]: subject: fdf Sep 24 01:25:58 bookcdb postfix/smtpd[22573]: disconnect from unknown[::1] Sep 24 01:26:01 bookcdb postfix/smtpd[22573]: warning: hostname localhost does not resolve to address ::1: No address associated with hostname Sep 24 01:26:01 bookcdb postfix/smtpd[22573]: connect from unknown[::1] Sep 24 01:26:10 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:26:10 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "root@locahost" Sep 24 01:26:10 bookcdb postfix/trivial-rewrite[22594]: warning: transport_maps lookup failure Sep 24 01:26:37 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:26:37 bookcdb postfix/trivial-rewrite[22594]: warning: hash:/etc/postfix/transport lookup error for "fmaster@localhost" Sep 24 01:26:37 bookcdb postfix/trivial-rewrite[22594]: warning: transport_maps lookup failure Sep 24 01:26:37 bookcdb postfix/smtpd[22573]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <fmaster@localhost>: Temporary lookup failure; from=<root@locahost> to=<fmaster@localhost> proto=SMTP Sep 24 01:26:45 bookcdb postfix/smtpd[22573]: disconnect from unknown[::1] Sep 24 01:30:05 bookcdb postfix/anvil[22575]: statistics: max connection rate 1/60s for (smtp:::1) at Sep 24 01:24:16 Sep 24 01:30:05 bookcdb postfix/anvil[22575]: statistics: max connection count 1 for (smtp:::1) at Sep 24 01:24:16 Sep 24 01:30:05 bookcdb postfix/anvil[22575]: statistics: max cache size 1 at Sep 24 01:24:16 Sep 24 01:34:57 bookcdb dovecot: master: Dovecot v2.0.19 starting up (core dumps disabled) Sep 24 01:35:02 bookcdb amavis[1009]: starting. /usr/sbin/amavisd-new at mail.bookcdb.com amavisd-new-2.6.5 (20110407), Unicode aware Sep 24 01:35:02 bookcdb amavis[1009]: Perl version 5.014002 Sep 24 01:35:05 bookcdb amavis[1155]: Net::Server: Group Not Defined. Defaulting to EGID '114 114' Sep 24 01:35:05 bookcdb amavis[1155]: Net::Server: User Not Defined. 
Defaulting to EUID '108' Sep 24 01:35:05 bookcdb amavis[1155]: Module Amavis::Conf 2.208 Sep 24 01:35:05 bookcdb amavis[1155]: Module Archive::Zip 1.30 Sep 24 01:35:05 bookcdb amavis[1155]: Module BerkeleyDB 0.49 Sep 24 01:35:05 bookcdb amavis[1155]: Module Compress::Zlib 2.033 Sep 24 01:35:05 bookcdb amavis[1155]: Module Convert::TNEF 0.17 Sep 24 01:35:05 bookcdb amavis[1155]: Module Convert::UUlib 1.4 Sep 24 01:35:05 bookcdb amavis[1155]: Module Crypt::OpenSSL::RSA 0.27 Sep 24 01:35:05 bookcdb amavis[1155]: Module DB_File 1.821 Sep 24 01:35:05 bookcdb amavis[1155]: Module Digest::MD5 2.51 Sep 24 01:35:05 bookcdb amavis[1155]: Module Digest::SHA 5.61 Sep 24 01:35:05 bookcdb amavis[1155]: Module IO::Socket::INET6 2.69 Sep 24 01:35:05 bookcdb amavis[1155]: Module MIME::Entity 5.502 Sep 24 01:35:05 bookcdb amavis[1155]: Module MIME::Parser 5.502 Sep 24 01:35:05 bookcdb amavis[1155]: Module MIME::Tools 5.502 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::DKIM::Signer 0.39 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::DKIM::Verifier 0.39 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::Header 2.08 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::Internet 2.08 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::SPF v2.008 Sep 24 01:35:05 bookcdb amavis[1155]: Module Mail::SpamAssassin 3.003002 Sep 24 01:35:05 bookcdb amavis[1155]: Module Net::DNS 0.66 Sep 24 01:35:05 bookcdb amavis[1155]: Module Net::Server 0.99 Sep 24 01:35:05 bookcdb amavis[1155]: Module NetAddr::IP 4.058 Sep 24 01:35:05 bookcdb amavis[1155]: Module Socket6 0.23 Sep 24 01:35:05 bookcdb amavis[1155]: Module Time::HiRes 1.972101 Sep 24 01:35:05 bookcdb amavis[1155]: Module URI 1.59 Sep 24 01:35:05 bookcdb amavis[1155]: Module Unix::Syslog 1.1 Sep 24 01:35:05 bookcdb amavis[1155]: Amavis::DB code loaded Sep 24 01:35:05 bookcdb amavis[1155]: Amavis::Cache code loaded Sep 24 01:35:05 bookcdb amavis[1155]: SQL base code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: SQL::Log code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: SQL::Quarantine NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: Lookup::SQL code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: Lookup::LDAP code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: AM.PDP-in proto code loaded Sep 24 01:35:05 bookcdb amavis[1155]: SMTP-in proto code loaded Sep 24 01:35:05 bookcdb amavis[1155]: Courier proto code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: SMTP-out proto code loaded Sep 24 01:35:05 bookcdb amavis[1155]: Pipe-out proto code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: BSMTP-out proto code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: Local-out proto code loaded Sep 24 01:35:05 bookcdb amavis[1155]: OS_Fingerprint code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: ANTI-VIRUS code loaded Sep 24 01:35:05 bookcdb amavis[1155]: ANTI-SPAM code loaded Sep 24 01:35:05 bookcdb amavis[1155]: ANTI-SPAM-EXT code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: ANTI-SPAM-C code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: ANTI-SPAM-SA code loaded Sep 24 01:35:05 bookcdb amavis[1155]: Unpackers code loaded Sep 24 01:35:05 bookcdb amavis[1155]: DKIM code loaded Sep 24 01:35:05 bookcdb amavis[1155]: Tools code NOT loaded Sep 24 01:35:05 bookcdb amavis[1155]: Found $file at /usr/bin/file Sep 24 01:35:05 bookcdb amavis[1155]: No $altermime, not using it Sep 24 01:35:05 bookcdb amavis[1155]: Internal decoder for .mail Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .F Sep 24 01:35:05 bookcdb amavis[1155]: Found decoder for .Z at 
/bin/uncompress Sep 24 01:35:05 bookcdb amavis[1155]: Internal decoder for .gz Sep 24 01:35:05 bookcdb amavis[1155]: Found decoder for .bz2 at /bin/bzip2 -d Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .lzo tried: lzop -d Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .rpm tried: rpm2cpio.pl, rpm2cpio Sep 24 01:35:05 bookcdb amavis[1155]: Found decoder for .cpio at /bin/pax Sep 24 01:35:05 bookcdb amavis[1155]: Found decoder for .tar at /bin/pax Sep 24 01:35:05 bookcdb amavis[1155]: Found decoder for .deb at /usr/bin/ar Sep 24 01:35:05 bookcdb amavis[1155]: Internal decoder for .zip Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .7z tried: 7zr, 7za, 7z Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .rar tried: unrar-free Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .arj tried: arj, unarj Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .arc tried: nomarch, arc Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .zoo tried: zoo Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .lha Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .doc tried: ripole Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .cab tried: cabextract Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .tnef Sep 24 01:35:05 bookcdb amavis[1155]: Internal decoder for .tnef Sep 24 01:35:05 bookcdb amavis[1155]: No decoder for .exe tried: unrar-free; arj, unarj Sep 24 01:35:05 bookcdb amavis[1155]: Using primary internal av scanner code for ClamAV-clamd Sep 24 01:35:05 bookcdb amavis[1155]: Found secondary av scanner ClamAV-clamscan at /usr/bin/clamscan Sep 24 01:35:05 bookcdb amavis[1155]: Creating db in /var/lib/amavis/db/; BerkeleyDB 0.49, libdb 5.1 Sep 24 01:35:05 bookcdb postgrey[1219]: Process Backgrounded Sep 24 01:35:05 bookcdb postgrey[1219]: 2012/09/24-01:35:05 postgrey (type Net::Server::Multiplex) starting! pid(1219) Sep 24 01:35:05 bookcdb postgrey[1219]: Using default listen value of 128 Sep 24 01:35:05 bookcdb postgrey[1219]: Binding to TCP port 10023 on host localhost#012 Sep 24 01:35:05 bookcdb postgrey[1219]: Setting gid to "116 116" Sep 24 01:35:05 bookcdb postgrey[1219]: Setting uid to "110" Sep 24 01:35:06 bookcdb spamd[1231]: logger: removing stderr method Sep 24 01:35:08 bookcdb spamd[1233]: spamd: server started on port 783/tcp (running version 3.3.2) Sep 24 01:35:08 bookcdb spamd[1233]: spamd: server pid: 1233 Sep 24 01:35:08 bookcdb spamd[1233]: spamd: server successfully spawned child process, pid 1238 Sep 24 01:35:08 bookcdb spamd[1233]: spamd: server successfully spawned child process, pid 1240 Sep 24 01:35:08 bookcdb spamd[1233]: prefork: child states: SI Sep 24 01:35:08 bookcdb spamd[1233]: prefork: child states: II Sep 24 01:35:15 bookcdb postfix/master[1729]: daemon started -- version 2.9.3, configuration /etc/postfix Sep 24 01:36:08 bookcdb postfix/smtpd[1995]: error: open database /var/lib/mailman/data/aliases.db: No such file or directory Sep 24 01:36:08 bookcdb postfix/smtpd[1995]: warning: hostname localhost does not resolve to address ::1: No address associated with hostname Sep 24 01:36:08 bookcdb postfix/smtpd[1995]: connect from unknown[::1] Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: error: open database /etc/postfix/transport.db: No such file or directory Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport is unavailable. 
open database /etc/postfix/transport.db: No such file or directory Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport lookup error for "*" Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport lookup error for "*" Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport lookup error for "root@localhost" Sep 24 01:36:51 bookcdb postfix/trivial-rewrite[1999]: warning: transport_maps lookup failure Sep 24 01:37:00 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:37:00 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport lookup error for "fmaster@localhost" Sep 24 01:37:00 bookcdb postfix/trivial-rewrite[1999]: warning: transport_maps lookup failure Sep 24 01:37:00 bookcdb postfix/smtpd[1995]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <fmaster@localhost>: Temporary lookup failure; from=<root@localhost> to=<fmaster@localhost> proto=SMTP Sep 24 01:37:28 bookcdb dovecot: imap-login: Login: user=<mkadiri89>, method=PLAIN, rip=::1, lip=::1, mpid=2730, secured Sep 24 01:37:28 bookcdb dovecot: imap(mkadiri89): Disconnected: Logged out bytes=44/697 Sep 24 01:37:29 bookcdb dovecot: imap-login: Login: user=<mkadiri89>, method=PLAIN, rip=::1, lip=::1, mpid=2732, secured Sep 24 01:37:29 bookcdb dovecot: imap(mkadiri89): Disconnected: Logged out bytes=464/1303 Sep 24 01:37:29 bookcdb dovecot: imap-login: Login: user=<mkadiri89>, method=PLAIN, rip=::1, lip=::1, mpid=2734, secured Sep 24 01:37:29 bookcdb dovecot: imap(mkadiri89): Disconnected: Logged out bytes=117/1395 Sep 24 01:37:31 bookcdb dovecot: imap-login: Login: user=<mkadiri89>, method=PLAIN, rip=::1, lip=::1, mpid=2737, secured Sep 24 01:37:31 bookcdb dovecot: imap(mkadiri89): Disconnected: Logged out bytes=117/1395 Sep 24 01:37:49 bookcdb dovecot: imap-login: Login: user=<root>, method=PLAIN, rip=::1, lip=::1, mpid=2739, secured Sep 24 01:37:49 bookcdb dovecot: imap: Error: user root: Invalid settings in userdb: userdb returned 0 as uid Sep 24 01:37:49 bookcdb dovecot: imap: Error: Invalid user settings. Refer to server log for more information. Sep 24 01:37:54 bookcdb dovecot: imap-login: Login: user=<root>, method=PLAIN, rip=::1, lip=::1, mpid=2741, secured Sep 24 01:37:54 bookcdb dovecot: imap: Error: user root: Invalid settings in userdb: userdb returned 0 as uid Sep 24 01:37:54 bookcdb dovecot: imap: Error: Invalid user settings. Refer to server log for more information. 
Sep 24 01:38:04 bookcdb dovecot: imap-login: Login: user=<info>, method=PLAIN, rip=::1, lip=::1, mpid=2743, secured Sep 24 01:38:04 bookcdb dovecot: imap(info): Disconnected: Logged out bytes=44/697 Sep 24 01:38:04 bookcdb dovecot: imap-login: Login: user=<info>, method=PLAIN, rip=::1, lip=::1, mpid=2745, secured Sep 24 01:38:04 bookcdb dovecot: imap(info): Disconnected: Logged out bytes=464/1303 Sep 24 01:38:04 bookcdb dovecot: imap-login: Login: user=<info>, method=PLAIN, rip=::1, lip=::1, mpid=2747, secured Sep 24 01:38:04 bookcdb dovecot: imap(info): Disconnected: Logged out bytes=117/1395 Sep 24 01:38:55 bookcdb postfix/smtpd[1995]: disconnect from unknown[::1] Sep 24 01:38:58 bookcdb postfix/smtpd[1995]: warning: hostname localhost does not resolve to address ::1: No address associated with hostname Sep 24 01:38:58 bookcdb postfix/smtpd[1995]: connect from unknown[::1] Sep 24 01:39:11 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Sep 24 01:39:11 bookcdb postfix/trivial-rewrite[1999]: warning: hash:/etc/postfix/transport lookup error for "info@localhost" Sep 24 01:39:11 bookcdb postfix/trivial-rewrite[1999]: warning: transport_maps lookup failure Sep 24 01:39:37 bookcdb postfix/smtpd[1995]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <fmaster@localhost>: Temporary lookup failure; from=<info@localhost> to=<fmaster@localhost> proto=SMTP Sep 24 01:39:47 bookcdb postfix/smtpd[1995]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <fmaster@localhost>: Temporary lookup failure; from=<info@localhost> to=<fmaster@localhost> proto=SMTP Sep 24 01:40:13 bookcdb postfix/smtpd[1995]: NOQUEUE: reject: RCPT from unknown[::1]: 451 4.3.0 <info@localhost>: Temporary lookup failure; from=<info@localhost> to=<info@localhost> proto=SMTP Sep 24 01:43:08 bookcdb postfix/smtpd[1995]: disconnect from unknown[::1] Sep 24 01:46:08 bookcdb postfix/anvil[1998]: statistics: max connection rate 1/60s for (smtp:::1) at Sep 24 01:36:08 Sep 24 01:46:08 bookcdb postfix/anvil[1998]: statistics: max connection count 1 for (smtp:::1) at Sep 24 01:36:08 Sep 24 01:46:08 bookcdb postfix/anvil[1998]: statistics: max cache size 1 at Sep 24 01:36:08 Sep 24 01:48:05 bookcdb dovecot: imap-login: Login: user=<info>, method=PLAIN, rip=::1, lip=::1, mpid=2805, secured Sep 24 01:48:05 bookcdb dovecot: imap(info): Disconnected: Logged out bytes=85/681 Sep 24 01:51:30 bookcdb dovecot: imap-login: Login: user=<info>, method=PLAIN, rip=::1, lip=::1, mpid=2815, secured Sep 24 01:51:30 bookcdb dovecot: imap(info): Disconnected: Logged out bytes=117/1395 Sep 24 02:05:15 bookcdb postfix/qmgr[1745]: 2EA006E0B32: from=<[email protected]>, size=439, nrcpt=1 (queue active) Sep 24 02:05:15 bookcdb postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 02:05:15 bookcdb postfix/qmgr[1745]: E76996E09B2: from=<[email protected]>, size=439, nrcpt=1 (queue active) Sep 24 02:05:15 bookcdb postfix/error[2842]: 2EA006E0B32: to=<[email protected]>, relay=none, delay=8391, delays=8391/0.05/0/0.09, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 02:05:16 bookcdb postfix/error[2843]: E76996E09B2: to=<[email protected]>, relay=none, delay=8416, delays=8416/0.03/0/0.11, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 02:30:15 bookcdb postfix/qmgr[1745]: D0C596E0B34: from=<[email protected]>, size=442, nrcpt=1 (queue active) Sep 24 02:30:15 bookcdb 
postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 02:30:15 bookcdb postfix/error[2914]: D0C596E0B34: to=<[email protected]>, relay=none, delay=9551, delays=9551/0.01/0/0.08, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 02:35:15 bookcdb postfix/qmgr[1745]: 2E5C36E0A07: from=<[email protected]>, size=438, nrcpt=1 (queue active) Sep 24 02:35:15 bookcdb postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 02:35:15 bookcdb postfix/qmgr[1745]: 0EA3A6E0ACC: from=<[email protected]>, size=438, nrcpt=1 (queue active) Sep 24 02:35:15 bookcdb postfix/error[2926]: 2E5C36E0A07: to=<[email protected]>, orig_to=<root>, relay=none, delay=34386, delays=34386/0.03/0/0.1, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 02:35:15 bookcdb postfix/error[2927]: 0EA3A6E0ACC: to=<[email protected]>, orig_to=<root>, relay=none, delay=34299, delays=34298/0.02/0/0.12, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 03:15:15 bookcdb postfix/qmgr[1745]: 2EA006E0B32: from=<[email protected]>, size=439, nrcpt=1 (queue active) Sep 24 03:15:15 bookcdb postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 03:15:15 bookcdb postfix/qmgr[1745]: E76996E09B2: from=<[email protected]>, size=439, nrcpt=1 (queue active) Sep 24 03:15:15 bookcdb postfix/error[3025]: 2EA006E0B32: to=<[email protected]>, relay=none, delay=12590, delays=12590/0.01/0/0.07, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 03:15:15 bookcdb postfix/error[3026]: E76996E09B2: to=<[email protected]>, relay=none, delay=12616, delays=12616/0.01/0/0.09, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 03:40:15 bookcdb postfix/qmgr[1745]: D0C596E0B34: from=<[email protected]>, size=442, nrcpt=1 (queue active) Sep 24 03:40:15 bookcdb postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 03:40:15 bookcdb postfix/error[3097]: D0C596E0B34: to=<[email protected]>, relay=none, delay=13752, delays=13752/0.01/0/0.07, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 03:45:15 bookcdb postfix/qmgr[1745]: 2E5C36E0A07: from=<[email protected]>, size=438, nrcpt=1 (queue active) Sep 24 03:45:15 bookcdb postfix/qmgr[1745]: warning: connect to transport private/smtp-amavis: No such file or directory Sep 24 03:45:15 bookcdb postfix/qmgr[1745]: 0EA3A6E0ACC: from=<[email protected]>, size=438, nrcpt=1 (queue active) Sep 24 03:45:15 bookcdb postfix/error[3129]: 2E5C36E0A07: to=<[email protected]>, orig_to=<root>, relay=none, delay=38586, delays=38586/0.01/0/0.09, dsn=4.3.0, status=deferred (mail transport unavailable) Sep 24 03:45:15 bookcdb postfix/error[3130]: 0EA3A6E0ACC: to=<[email protected]>, orig_to=<root>, relay=none, delay=38498, delays=38498/0.01/0/0.08, dsn=4.3.0, status=deferred (mail transport unavailable) postconf -n alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases append_dot_mydomain = no biff = no broken_sasl_auth_clients = yes config_directory = /etc/postfix content_filter = smtp-amavis:[127.0.0.1]:10024 home_mailbox = Maildir/ inet_interfaces = all inet_protocols = all mailbox_command = mailbox_size_limit = 0 mailman_destination_recipient_limit = 1 mydestination = server1.bookcdb.com, bookcdb.com, localhost.bookcdb.com, localho st, bookcdb.com myhostname = server1.bookcdb.com mynetworks = 127.0.0.0/8 myorigin = /etc/mailname 
readme_directory = no recipient_delimiter = + relay_domains = lists.bookcdb.com relayhost = smtp_tls_note_starttls_offer = yes smtp_tls_security_level = may smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,rejec t_unauth_destination smtpd_sasl_auth_enable = yes smtpd_sasl_local_domain = smtpd_sasl_security_options = noanonymous smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem smtpd_tls_auth_only = no smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt smtpd_tls_key_file = /etc/ssl/private/smtpd.key smtpd_tls_loglevel = 1 smtpd_tls_received_header = yes smtpd_tls_security_level = may smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_tls_session_cache_timeout = 3600s smtpd_use_tls = yes tls_random_source = dev:/dev/urandom transport_maps = hash:/etc/postfix/transport

    Read the article

  • What's the best practice for taking MySQL dump, encrypting it and then pushing to s3?

    - by HalogenCreative
    This current project requires that the DB be dumped, encrypted and pushed to S3. I'm wondering what might be some "best practices" for such a task. As of now I'm using a pretty straight-ahead method but would like some better ideas where security is concerned. Here is the start of my script: mysqldump -u root --password="lepass" --all-databases --single-transaction > db.backup.sql tar -c db.backup.sql | openssl des3 -salt --passphrase foopass > db.backup.tarfile s3put backup/db.backup.tarfile db.backup.tarfile # Let's pull it down again and untar it for kicks s3get surgeryflow-backup/db/db.backup.tarfile db.backup.tarfile cat db.backup.tarfile | openssl des3 -d -salt --passphrase foopass |tar -xvj Obviously the problem is that this script contains everything an attacker would need to raise hell. Any thoughts, critiques and suggestions for this task will be appreciated.

    Read the article

  • Insert array to mysql database php

    - by ganjan
    Hi. I want to add an array to my db. I have set up a function that checks if a value in the db (e.g. health and money) has changed. If the value is different from the original, I add the new value to the $db array. Like this: $db['money'] = $money_input + $money_db;. function modify_user_info($conn, $money_input, $health_input){ (...) if ($result = $conn->query($query)) { while ($user = $result->fetch_assoc()) { $money_db = $user["money"]; $health_db = $user["health"]; } $result->close(); // build the db array with the columns to be filled in as its keys if ($user["money"] != $money_input){ $db['money'] = $money_input + $money_db; //0 - 20 if (!preg_match("/^[[0-9]{0,20}$/i", $db['money'])){ echo "error"; return false; } } if ($user["health"] != $health_input){ $db['health'] = $health_input + $health_db; //0 - 4 if (!preg_match("/^[[0-9]{0,4}$/i", $db['health'])){ echo "error"; return false; } if (($db['health'] < 1) or ($db['health'] > 1000)) { echo "error"; return false; } } The keys in $db represent columns in my database. Now I want to make a function that takes the keys in the array $db and inserts them into the db. Something like this? $query = "INSERT INTO `main_log` ( `id` , "; foreach(range(0, x) as $num) { $query .= array_key.", "; } $query = substr($query, 0, -3); $query .= " VALUES ('', "; foreach(range(0, x) as $num) { $query .= array_value.", "; } $query = substr($query, 0, -3); $query .= ")";
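
    For reference, a sketch of the statement the loop above is trying to build (the values are invented for illustration, and this assumes id is an AUTO_INCREMENT column that is best left out of the column list rather than given ''):

      INSERT INTO main_log (money, health) VALUES (1500, 80);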

    Read the article

  • Python: why does this code take forever (infinite loop?)

    - by Rosarch
    I'm developing an app in Google App Engine. One of my methods is never completing, which makes me think it's caught in an infinite loop. I've stared at it, but can't figure it out. Disclaimer: I'm using gaeunit (http://code.google.com/p/gaeunit) to run my tests. Perhaps it's acting oddly? This is the problematic function: def _traverseForwards(course, c_levels): ''' Looks forwards in the dependency graph ''' result = {'nodes': [], 'arcs': []} if c_levels == 0: return result model_arc_tails_with_course = set(_getListArcTailsWithCourse(course)) q_arc_heads = DependencyArcHead.all() for model_arc_head in q_arc_heads: for model_arc_tail in model_arc_tails_with_course: if model_arc_tail.key() in model_arc_head.tails: result['nodes'].append(model_arc_head.sink) result['arcs'].append(_makeArc(course, model_arc_head.sink)) # rec_result = _traverseForwards(model_arc_head.sink, c_levels - 1) # _extendResult(result, rec_result) return result Originally, I thought it might be a recursion error, but I commented out the recursion and the problem persists. If this function is called with c_levels = 0, it runs fine. The models it references: class Course(db.Model): dept_code = db.StringProperty() number = db.IntegerProperty() title = db.StringProperty() raw_pre_reqs = db.StringProperty(multiline=True) original_description = db.StringProperty() def getPreReqs(self): return pickle.loads(str(self.raw_pre_reqs)) def __repr__(self): return "%s %s: %s" % (self.dept_code, self.number, self.title) class DependencyArcTail(db.Model): ''' A list of courses that is a pre-req for something else ''' courses = db.ListProperty(db.Key) def equals(self, arcTail): for this_course in self.courses: if not (this_course in arcTail.courses): return False for other_course in arcTail.courses: if not (other_course in self.courses): return False return True class DependencyArcHead(db.Model): ''' Maintains a course, and a list of tails with that course as their sink ''' sink = db.ReferenceProperty() tails = db.ListProperty(db.Key) Utility functions it references: def _makeArc(source, sink): return {'source': source, 'sink': sink} def _getListArcTailsWithCourse(course): ''' returns a LIST, not SET there may be duplicate entries ''' q_arc_heads = DependencyArcHead.all() result = [] for arc_head in q_arc_heads: for key_arc_tail in arc_head.tails: model_arc_tail = db.get(key_arc_tail) if course.key() in model_arc_tail.courses: result.append(model_arc_tail) return result Am I missing something pretty obvious here, or is GAEUnit acting up?

    Read the article

  • Accessing Oracle DB via JDBC from Nashorn, part 2

    - by Homma
    This post continues the series on accessing Oracle DB over JDBC from JavaScript run on Nashorn, this time looking at how the JDBC driver JAR is put on the class path and how its Main-Class can be invoked. The original (Japanese) post is at https://blogs.oracle.com/nashorn_ja/entry/nashorn_jdbc_2. Putting the JDBC driver on the class path: the Oracle JDBC driver ships as ${ORACLE_HOME}/jdbc/lib/ojdbc6.jar, so that JAR has to be visible to the script. A script that connects through the driver and prints its version looks like this: var OracleDataSource = Java.type("oracle.jdbc.pool.OracleDataSource"); var ods = new OracleDataSource(); ods.setURL("jdbc:oracle:thin:test/test@dbsrv:1521:orcl"); var conn = ods.getConnection(); var meta = conn.getMetaData(); print("JDBC driver version is " + meta.getDriverVersion()); The driver JAR is passed to jjs with the -cp option: $ jjs -cp ojdbc6.jar version.js JDBC driver version is 11.2.0.3.0 The driver version can also be checked without any script at all, because the JAR declares a Main Class. Running java -jar ojdbc6.jar prints the JDBC driver version directly: $ java -jar ojdbc6.jar Oracle 11.2.0.3.0 JDBC 4.0 compiled with JDK6 on Thu_Jul_11_15:43:23_PDT_2013 #Default Connection Properties Resource #Fri May 30 10:37:32 JST 2014 A JAR's Main-Class is the class whose main(String[]) method runs when the JAR is executed, and it can be called from Nashorn as well. The JAR's Main-Class: to find it, extract the JAR with the jar command and look at META-INF/MANIFEST.MF: $ jar xf ojdbc6.jar $ grep Main-Class META-INF/MANIFEST.MF Main-Class: oracle.jdbc.OracleDriver So the Main-Class is oracle.jdbc.OracleDriver. Calling that class's main method from Nashorn: $ jjs -cp ojdbc6.jar jjs> var OracleDriver = Java.type("oracle.jdbc.OracleDriver"); jjs> var StringArray = Java.type("java.lang.String[]"); jjs> OracleDriver.main(new StringArray(0)) Oracle 11.2.0.3.0 JDBC 4.0 compiled with JDK6 on Thu_Jul_11_15:43:23_PDT_2013 #Default Connection Properties Resource #Fri May 30 10:55:22 JST 2014 null jjs> Any Java class inside a JAR that has been put on the class path can be obtained as a JavaClass object with Java.type() in this way. In this post we accessed Oracle DB over JDBC from Nashorn and also invoked the JAR's Main-Class. The next post will issue SQL against Oracle DB.

    Read the article

  • index help for a MySQL query using greater-than operator and ORDER BY

    - by Jaymon
    I have a table with at least a couple million rows and a schema of all integers that looks roughly like this: start stop first_user_id second_user_id The rows get pulled using the following queries: SELECT * FROM tbl_name WHERE stop >= M AND first_user_id=N AND second_user_id=N ORDER BY start ASC SELECT * FROM tbl_name WHERE stop >= M AND first_user_id=N ORDER BY start ASC I cannot figure out the best indexes to speed up these queries. The problem seems to be the ORDER BY because when I take that out the queries are fast. I've tried all different types of indexes using the standard index format: ALTER TABLE tbl_name ADD INDEX index_name (index_col_1,index_col_2,...) And none of them seem to speed up the queries. Does anyone have any idea what index would work? Also, should I be trying a different type of index? I can't guarantee the uniqueness of each row so I've avoided UNIQUE indexes. Any guidance/help would be appreciated. Thanks!
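
    Two composite indexes worth testing, sketched with the column names from the post (the index names are arbitrary; which one helps more depends on the data, but with the equality filters plus ORDER BY start, MySQL can read such an index in start order and treat stop >= M as a post-filter, avoiding the filesort):

      -- For: WHERE stop >= M AND first_user_id = N AND second_user_id = N ORDER BY start
      ALTER TABLE tbl_name ADD INDEX idx_users_start (first_user_id, second_user_id, start);

      -- For: WHERE stop >= M AND first_user_id = N ORDER BY start
      ALTER TABLE tbl_name ADD INDEX idx_user_start (first_user_id, start);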

    Read the article

  • Removing non-alphanumeric characters in an Access Field.

    - by Jacques Tardie
    I need to remove hyphens from a string in a large number of Access fields. What's the best way to go about doing this? Currently, the entries follow this general format: 2010-54-1 2010-56-1 etc. I'm trying to run append queries off of this field, but I'm always getting validation errors causing the query to fail. I think the cause of this failure is the hyphens in the entries, which is why I need to remove them. I've googled, and I see that there are a number of formatting guides using VBScript, but I'm not sure how I can integrate VB into Access. It's new to me :) Thanks in advance, Jacques
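
    A minimal sketch of the cleanup as an update query, using the built-in Replace() function to strip every hyphen (so '2010-54-1' becomes '2010541'); the table and field names below are placeholders for the real ones:

      UPDATE tblRecords
      SET RefNumber = Replace(RefNumber, "-", "");

    Note that Replace() is available to queries run from within Access itself rather than through an external connection.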

    Read the article

  • Are document-oriented databases meant to replace relational databases?

    - by evolve
    Recently I've been working a little with MongoDB and I have to say I really like it. However, it is a completely different type of database than I am used to. I've noticed that it is most definitely better for certain types of data; however, for heavily normalized databases it might not be the best choice. It appears to me, though, that it can completely take the place of just about any relational database you may have and in most cases perform better, which is mind-boggling. This leads me to ask a few questions: Have document-oriented databases been developed to be the next generation of databases and to basically replace relational databases completely? Is it possible that projects would be better off using both a document-oriented database and a relational database side by side for the various data each is better suited for? If document-oriented databases are not meant to replace relational databases, then does anyone have an example of a database structure which would absolutely be better off in a relational database (or vice versa)?

    Read the article
