Search Results

Search found 30301 results on 1213 pages for 'content db'.

Page 63/1213 | < Previous Page | 59 60 61 62 63 64 65 66 67 68 69 70  | Next Page >

  • Switching web hosting company & database errors.

    - by gipap
    Well, here comes the situation. I used to have CompanyA for web hosting (the hosting plan was a shared one). I decided to change hosting provider and transfer my website to CompanyB (exclusive IP). The issue I face is that my web page is now served from two different IP addresses, so I decided to turn off the website served by CompanyA. Now the problem is that my database-driven website, served by CompanyB, is not driven anymore, although I have added the A record mssql.mywebsite.com with the IP address of the database. (The database is served by a dedicated DB server.) So, what am I doing wrong here?
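
    A couple of quick checks from the new environment can narrow this down; this is only a sketch, where mssql.mywebsite.com comes from the question and everything else is generic:

      # does the A record resolve to the dedicated DB server's address?
      nslookup mssql.mywebsite.com
      dig +short mssql.mywebsite.com A
      # can the new web server actually reach the SQL Server port (1433 is the default)?
      telnet mssql.mywebsite.com 1433

    If the record resolves correctly but the port test fails, a likely culprit is a firewall rule on the DB server that still only allows connections from CompanyA's address range.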

    Read the article

  • Use pt-table-sync to set up a new MySQL DB

    - by Generation D Systems
    I have 2 hosts (A and B). B contains a MySQL server with a database called mydb, and A contains a MySQL server with nothing (fresh install). I want to replicate the entire mydb from B to A, by running a script on A (I do not have shell access to B). Can I run this on A: pt-table-sync --execute h=b.mydomain.com,D=mydb h=a.mydomain.com I've read the docs but don't get a 100% comfort feeling (perhaps because of all the warnings about damaging your data if you don't know what you're doing). Will this work? Also, is h=a.mydomain.com necessary? (Will it route all traffic back in/out through the local NIC?) Can I use localhost, or nothing at all?
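
    A more cautious sequence, sketched with the same host names and a placeholder user: pt-table-sync only syncs rows, not table definitions, so the schema has to exist on A first, and --print shows the statements without running them.

      # copy the table definitions from B to A (assumes A can reach B's MySQL port)
      mysqldump -h b.mydomain.com -u someuser -p --no-data mydb > schema.sql
      mysql -h a.mydomain.com -u someuser -p mydb < schema.sql
      # preview what would be changed before committing to --execute
      pt-table-sync --print h=b.mydomain.com,D=mydb h=a.mydomain.com,D=mydb

    Since the command runs on A, h=localhost should work for the destination DSN and keeps that connection on the local socket rather than going out through the NIC.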

    Read the article

  • Liferay and Oracle DB

    - by iamedu
    Hi! I'm installing Liferay Community Edition with an Oracle database. I managed to get it running with the SYSTEM user, but I don't like this... I want to create another user in another tablespace. The problem is that Liferay seems to need to create tables and alter them during its lifetime. Do you know what permissions and roles need to be assigned to the user? Thanks a lot in advance.
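
    A rough sketch of a dedicated schema, run as a DBA user; the user, password, tablespace and size here are placeholders, and the privilege list may need tuning for your Liferay version:

      CREATE TABLESPACE liferay_data DATAFILE 'liferay_data01.dbf' SIZE 500M AUTOEXTEND ON;
      CREATE USER liferay IDENTIFIED BY changeme
        DEFAULT TABLESPACE liferay_data
        QUOTA UNLIMITED ON liferay_data;
      -- Liferay creates and alters its own tables, indexes and sequences at startup and
      -- during upgrades, so it needs the basic CREATE privileges in its own schema
      GRANT CREATE SESSION, CREATE TABLE, CREATE SEQUENCE, CREATE VIEW TO liferay;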

    Read the article

  • Having trouble with a bog-standard OpenLDAP server DB

    - by dingfelder
    I am having trouble getting an "out of the box" OpenLDAP server working. The examples on the OpenLDAP site still refer to the slapd.conf file, but the install does not create one. If I start the server (service slapd start) I do not get any errors, but I cannot connect: ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1) Does anyone have a simple howto for v2.4? I am on Fedora 15 and installed openldap-servers and clients via yum. I have phpLDAPadmin installed that I can try to connect with once I get the command line working.
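
    Two quick checks that usually narrow this down, assuming the stock Fedora packages and the default port. Note that the 2.4 packages on Fedora 15 are configured through the slapd.d (cn=config) directory under /etc/openldap rather than slapd.conf, which is why the older examples don't line up.

      # is slapd actually listening on 389?
      netstat -tlnp | grep slapd
      # anonymous bind against the root DSE
      ldapsearch -x -H ldap://127.0.0.1 -b "" -s base "(objectclass=*)"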

    Read the article

  • How to create custom content for the nginx 502 error page while keeping the original URL in the browser

    - by user123862
    I'm trying to get a custom-language error page out of nginx while keeping the original URL in the browser, with no success. For example, I go to the URL xaluan.com/aaa/bbb.html while the backend server is down; nginx shows error 502, and I want the same URL but my custom message in my own language. Test 1: I created a custom page at /usr/local/nginx/html/502.html with the following config, but the site shows the default nginx error page at domain.com/502.html (the content of the page is not the one I created): error_page 502 /502.html; location = /502.html { root /usr/local/nginx/html; } Test 2: Then I created the same page in my www domain folder, /home/xaluano/public_html/502.html, but this keeps redirecting me to domain.com/502.html. The content is now the one I created, but the URL is still not what I need: error_page 502 /502.html; location = /502.html { root /home/xaluano/public_html; internal; } EDIT/UPDATE for more detail (10/06/2012): please see my nginx config at http://pastebin.com/7iLD6WQq and the vhost config at http://pastebin.com/ZZ91KiY6. The test case: if the Apache httpd service is stopped (#service httpd stop) and I open a browser and go to xaluan.com/modules.php?name=News&file=article&sid=123456, I should see the 502 error with the same URL in the browser address bar. The custom error page I need: a config so that when Apache fails, nginx shows a custom message telling the user to wait one minute for the service to come back and then refreshes the current page with the same URL (the refresh I can do easily with JavaScript); nginx must not change the URL so the JavaScript can work it out. Any help would be great... thanks in advance.
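
    For comparison, a minimal sketch of the shape that usually keeps the original URL; the paths and server name come from the question, and the backend address is an assumption. As long as error_page points at a relative URI (not an absolute http://... URL), nginx serves it as an internal rewrite and the browser's address bar is left alone.

      server {
          server_name xaluan.com;
          location / {
              proxy_pass http://127.0.0.1:8080;   # assumed Apache backend
              proxy_intercept_errors on;          # also intercept errors Apache returns itself
          }
          error_page 502 503 504 /502.html;
          location = /502.html {
              root /home/xaluano/public_html;
              internal;
          }
      }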

    Read the article

  • Forwarding port 3306 on Mac OS X in order to connect to a remote MySQL DB

    - by Jonathan Mayhak
    I'm on Mac OS X 10.6.2 trying to connect to an Ubuntu Server 8.04.1 box at Linode. ssh -L 127.0.0.1:3306:[[remote ip]]:3306 user@server -N I want to set up SSH tunneling so that I can access a remote MySQL server. First of all, I'm told bind: Address already in use. This only happens after I've tried the command before. How do I manually close a port forwarding session? Second, when I change the command to ssh -L 127.0.0.1:3310:[[remote ip]]:3306 user@server -N (I changed the local port to listen on), I'm told channel 1: open failed: connect failed: Connection refused when I try to connect to the MySQL server via MySQL Workbench or Sequel Pro. To connect through MySQL Workbench I use the following settings: host: 127.0.0.1, port: 3310 (if 3306 is in use), username: MySQL username, password: MySQL password, database: I don't put anything in.
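
    To close a stale forward, find the old ssh process holding the local port and kill it; a sketch, with the port numbers from the question:

      # which process is listening on the local port?
      lsof -nP -iTCP:3306
      kill <pid>
      # run the new tunnel in the background: -f forks after auth, -N runs no remote command
      ssh -f -N -L 3310:127.0.0.1:3306 user@server

    The "Connection refused" on the second attempt usually means the far end of the tunnel is refusing: the target in -L is resolved from the SSH server's point of view, and Ubuntu's default my.cnf binds MySQL to 127.0.0.1 only, so forwarding to the box's public IP gets refused while 127.0.0.1:3306 works.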

    Read the article

  • MySQL: stopping just one DB to allow it to be moved

    - by DrStalker
    I want to do some work on the files that make up a few MySQL DBs (moving the files to a different partition and symlinking the original location to this), and if possible I'd like to shut down just the database being moved, rather than shutting MySQL down altogether. Is there any way in MySQL to do this, or will I need to do a full MySQL shutdown to be able to move the files?
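
    There is no per-database stop in MySQL, so the usual approach is a brief full stop while the files move; a sketch with placeholder paths:

      /etc/init.d/mysql stop
      mv /var/lib/mysql/mydb /data/bigdisk/mydb
      ln -s /data/bigdisk/mydb /var/lib/mysql/mydb
      chown -h mysql:mysql /var/lib/mysql/mydb
      /etc/init.d/mysql start

    One caveat: this only relocates what lives in the schema directory (.frm, .MYD/.MYI, and .ibd files if innodb_file_per_table is on); InnoDB tables stored in the shared ibdata files stay in the datadir regardless of the symlink.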

    Read the article

  • Daily, weekly and monthly DB backup with logrotate?

    - by benjisail
    Hi, I am currently keeping daily backups of my database by doing a daily mysqldump and using logrotate to keep the last 7 days of dumps. I would like to improve this backup process to keep 7 daily backups, 3 weekly backups and 12 monthly backups. I found this article which explains how to do this with logrotate: http://www.hotcoding.com/os/sysadmin/35751.html However, I am using the dateext logrotate option to name my backup files, so I cannot use this solution. How can I do daily, weekly and monthly backups with logrotate and the dateext option?
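
    One way around the numbered-rotation trick in that article, sketched with placeholder paths: keep three copies of the dump and give each its own schedule, with cron refreshing the weekly and monthly copies before their rotations run; rotate should still prune old date-stamped files down to the count even with dateext on.

      # /etc/logrotate.d/mysqldump-backups
      /var/backups/db/daily/mydb.sql {
          daily
          rotate 7
          dateext
          compress
          missingok
      }
      /var/backups/db/weekly/mydb.sql {
          weekly
          rotate 3
          dateext
          compress
          missingok
      }
      /var/backups/db/monthly/mydb.sql {
          monthly
          rotate 12
          dateext
          compress
          missingok
      }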

    Read the article

  • Uninstall MongoDB completely

    - by Srikanth
    I followed the steps below to install MongoDB on my CentOS machine: http://andres.jaimes.net/876/setup-mongo-php-module-centos-6/ As mentioned at the end of the document, phpinfo() showed that MongoDB support was enabled. Now I need to undo all the actions I did. So far I have uninstalled remi-release-6.rpm, which I had installed by following the link above. How do I uninstall completely and undo everything I did?
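
    A sketch of the teardown; the exact package names depend on which repo that tutorial ended up using, so check with rpm first:

      # see what actually got installed
      rpm -qa | grep -i mongo
      # remove the server/client packages and the PHP extension that rpm reports, for example:
      yum remove mongo-10gen mongo-10gen-server php-pecl-mongo
      # clean up data, logs and leftover config
      rm -rf /var/lib/mongo /var/log/mongo
      rm -f /etc/mongod.conf /etc/php.d/mongo.ini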

    Read the article

  • Table modifications while running db replication (MS SQL 2008)

    - by typemismatch
    I'm running SQL Server 2008 Std with a database that is being published in a "Transactional Publication" to a single subscriber. We are unable to make any changes to the tables on the publisher without getting the "cannot modify table because it is published for replication" error. This seems odd because schema changes (or scripts run to do this) should be pushed to the subscriber. We currently have to drop the entire publication setup to make table changes. What am I missing? There must be a way to update the publisher tables? Thanks!
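
    A hedged starting point, assuming a publication called MyPub: on SQL Server 2008, transactional publications normally have replicate_ddl enabled by default, which lets a plain ALTER TABLE on the publisher flow to the subscriber, so it is worth checking the current value before changing anything.

      -- inspect the publication's current options (run in the published database)
      EXEC sp_helppublication @publication = N'MyPub';
      -- make sure DDL replication is on
      EXEC sp_changepublication
           @publication = N'MyPub',
           @property    = N'replicate_ddl',
           @value       = 1;

    Also note that the SSMS table designer often fails against published tables because it drops and recreates the table behind the scenes; running the ALTER TABLE statement directly usually succeeds and gets replicated.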

    Read the article

  • Exchange 2010: remove arbitration mailbox and mailbox store DB

    - by JNM
    I have a problem with Exchange 2010 which is a nightmare for me. The problem is that in the Exchange Management Console I have several store databases in the database management tab. Only one is mounted, because I am using it. The second one is mounted, but it was used on another server before (now that server is dead), and that database's mounted status is UNKNOWN. The file for that database does not exist, but it still shows there. I can't remove it from the management console, because it has mailboxes. I removed all mailboxes and disabled two arbitration mailboxes. I can't delete it because I still have one arbitration mailbox left. I can't move it, because that requires a connection to the dead server. I can't disable it, because I get an error that it is the last one in the organization. Can somebody help me? Solved it by using this command: Get-Mailbox -Arbitration -Database db1 | Remove-Mailbox -Arbitration -RemoveLastArbitrationMailboxAllowed Now I have another problem. The Exchange Management Console shows a public folder from a different server which is dead now. That folder was copied here, but it is not needed anymore. The public folder file has been deleted, and the records have been removed from ADSI Edit too, but I can't remove that folder from the management console. I get an error: Exchange isn't able to check for public folder replicas for "My Public Folder Database". Can anybody help me with that?

    Read the article

  • Strange focus bug in Firefox (chrome vs content)

    - by Marius
    Here is a strange bug I'm experiencing in Firefox: I can only use either the chrome or the content, not both at the same time! For example, I can click on tabs and the toolbar icons, and focus the search bar and write in it as well as the address bar, but if I try to click on anything in the content (e.g. a link or a text field to write something), then nothing happens. The mouse pointer doesn't change either; it just stays a pointer when I hover over things, and the links I hover over don't react either. But if I alt-tab to another program (or click on it in the taskbar), then back to Firefox, then I can use the area that I click on. So if I click somewhere on the web page to get focus back to Firefox, then I can click on links and write things (like this text), but I cannot click on tabs or refresh or anything else in the chrome. I can't even click on the minimize, restore and close icons! To get focus back on the chrome I have to alt-tab to another program and then click on the chrome to get back to Firefox to be able to use the chrome again. I've tried closing and starting it again, but the bug is still there. I have experienced this before, but I don't remember what I did to fix it. This bug seems to occur sometimes when I wake the computer from standby, but I leave my computer in standby all the time, so that is not the only factor.

    Read the article

  • rsyslog - template - regex data for insertion into db

    - by Mike Purcell
    I've been googling around the last few days looking for a solid example of how to regex a log entry for the desired data, which is then to be inserted into a database, but apparently my google-fu is lacking. What I am trying to do is track when an email is sent, and then track the remote MTA response, specifically the DSN code. At this point I have two templates set up, one for each situation: # /etc/rsyslog.conf ... $Template tpl_custom_header, "MPurcell: CUSTOM HEADER Template: %msg%\n" $Template tpl_response_dsn, "MPurcell: RESPONSE DSN Template: %msg%\n" # /etc/rsyslog.d/mail if $programname == 'mail-myapp' then /var/log/mail/myapp.log if ($programname == 'mail-myapp') and ($msg contains 'X-custom_header') then /var/log/mail/test.log;tpl_custom_header if ($programname == 'mail-myapp') and ($msg contains 'dsn=') then /var/log/mail/test.log;tpl_response_dsn & ~ Example log entries: MPurcell: CUSTOM HEADER Template: D921940A1A: prepend: header X-custom_header: 101 from localhost[127.0.0.1]; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<localhost>: headername: message-id MPurcell: RESPONSE DSN Template: D921940A1A: to=<[email protected]>, relay=gmail-smtp-in.l.google.com[2607:f8b0:400e:c02::1a]:25, delay=2, delays=0.12/0.01/0.82/1.1, dsn=2.0.0, status=sent (250 2.0.0 OK 1372378600 o4si2828280pac.279 - gsmtp) From the CUSTOM HEADER Template I would like to extract: D921940A1A, and the X-custom_header value; 101 From the RESPONSE DSN Template I would like to extract: D921940A1A, and "dsn=2.0.0"

    Read the article

  • Very long (>300 s) request processing times on an Apache server serving static content, for particular IPs

    - by Ron Bieber
    We are running an Apache 2.2 server for a very large web site. Over the past few months we have had some users reporting slow response times, while others (including our resources, both on the internal network and our home networks) do not see any degradation in performance. After a ton of investigation, we finally found a "Deny from none" statement in our configuration that was causing reverse DNS lookups (which were timing out); fixing that solved the bulk of our issues, but we still have some customers that we are seeing in the Apache logs (using %D in the log format) with request processing times of 300s for images, CSS, JavaScript and other static content. We've checked all Deny / Allow statements for a recurrence of "none", as well as all other things we know of that would cause reverse DNS lookups (such as using "REMOTE_HOST" in rewrite rules, using %a instead of %h in our log format configuration), and verified that HostnameLookups is set to "Off". As an aside, we've also validated that reverse DNS lookups for folks having this problem do not time out, so I'm fairly certain DNS is not an issue in this case. I've run out of ideas. Are there any Apache configuration scenarios that someone can point me to that I might be missing that would cause request times for static content to be so long only for certain users? Thank you in advance.
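
    One more data point that can help separate server time from client/network time: a sketch of a 2.2 log format that pairs the request duration with the connection status (%D is microseconds, %X records whether the connection was aborted, kept alive or closed).

      LogFormat "%h %l %u %t \"%r\" %>s %b %D %X \"%{User-Agent}i\"" timing
      CustomLog logs/timing_log timing

    If the 300-second entries consistently show an aborted connection, the time is most likely being spent waiting on a slow or broken client-side path rather than inside Apache.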

    Read the article

  • Direct DB to Web Server connection

    - by Joel Coel
    I have a database server sitting right underneath a virtual machine host server in the rack, and this VM host is primarily responsible for servers hosting a couple of different web sites and app servers that all talk to databases on the other server. Right now both servers are connected to the same switch, and I'm pretty happy with the pathing. However, both servers also have an unused network port. I'm wondering about the potential benefits of using a short crossover or normal+auto-MDIX network cable to connect these two servers together directly. Is this a good idea, or would I be doing something that won't show much benefit and is just likely to trip up a future admin who's not looking for this? The biggest weakness I can see right now is that this would likely require a code change for each VM app to point to the new IP of the database server on this private little network, and if I have a problem with the virtual machine host and have to spin up its guests elsewhere while I fix it, I'll have to change this back before things will work.
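
    One way to keep the application configs stable is to point them at a hostname instead of an IP, and let /etc/hosts on each guest decide whether that name maps to the crossover link or the switched network; the addresses below are made up:

      # /etc/hosts on each web/app VM
      192.168.100.2   db-direct    # private crossover subnet; swap back to the switched
                                   # address if the guests move to another host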

    Read the article

  • Cannot connect to a Cassandra DB from localhost

    - by DJYod
    Hello, I don't know if I'm on the right site. I installed a single Cassandra node on OpenSolaris; I don't have any other nodes. On the same server, I installed Ruby 1.8 with the cassandra gem. If I try to connect from my computer to the Cassandra node through the ruby cassandra gem, I can connect perfectly; if I try to do the same from the ruby cassandra gem on the server itself, it says that there is nothing listening on 127.0.0.1. I can connect locally to the instance using telnet 127.0.0.1 9160 and it works... any idea? Thank you!
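
    The usual place to look is which interface the Thrift (client) port is bound to; a sketch of the relevant cassandra.yaml lines with a made-up node address (0.7+ style; older releases set ThriftAddress in storage-conf.xml instead):

      # cassandra.yaml
      listen_address: 192.168.1.10   # inter-node traffic
      rpc_address: 0.0.0.0           # client/Thrift port on all interfaces,
                                     # or 127.0.0.1 for local-only clients
      rpc_port: 9160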

    Read the article

  • SQL Server Management Studio - Error connecting to remote DB

    - by Julien Poulin
    All right, here is the deal: I'm connecting to a Windows 2003 Server using VPN. On this server, there is a remote SQL Server 2005 Express engine. I can connect to the database using Visual Studio 2008. What I can't do, though, is connect to this same database with SQL Server 2005 Management Studio (Standard). I have checked the connection info a hundred times and still nothing. One thought: do VS and SSMS use the same SQL provider? Note: I'm running Windows 7 RC. I had absolutely no problem using the same config under Vista. This is the error I get when trying to connect with SSMS:

    Read the article

  • How to use AND/OR Building Block content in a Word 2007 template

    - by JimmyJames
    I am creating a Schedule of Work template and am successfully using the Developer tab and Quick Parts to allow the user to choose content on an "either/or" basis: either A, OR B, OR C, etc., essentially choosing one option from many. One Building Block control, one paragraph, nice and clean. Now what I need to do, but cannot seem to figure out, is how to allow the user to choose content on an "and/or" basis: A AND B; A OR B AND C; B AND D AND E OR F; etc., essentially choosing several options from many on a variable basis. One Building Block control, maybe one paragraph, maybe three or more paragraphs. Not so clean. I thought of building choice options for all possible paragraph combinations, but I can have as many as 7 or 8 different paragraphs, and that solution quickly becomes unworkable. Multiple controls, some of which will be left unused, don't work either, since I cannot find an easy way to have a "Choose or Delete" control that actually deletes if "Delete" is chosen. Recommendations are most welcome.

    Read the article

  • Cloning a NAS drive which hosts a SQL Server DB

    - by Adrian Hand
    We have a system in the field running a server application which is suffering major performance issues. The system in question has two onboard 300 GB SAS drives in RAID 5, from which it boots Windows Server 2003, and a 6 TB Buffalo TeraStation NAS unit (also RAID 5) to which the server app does all of its reading and writing. I believe the TeraStation is the source of all our woes. While under load, reads and writes tick by at something on the order of 1 meg/sec, though the network in question is hardly utilised. The TeraStation contains various data, but crucially hosts a full instance's worth of SQL Server .mdf and .ldf files (master etc., the whole shooting match). I wish to stop all the services on the server, then take everything on the TeraStation and essentially clone it to some alternative onboard storage, so as to eliminate the TeraStation from the equation as far as poor performance is concerned. I.e. the TeraStation is currently drive D:, and I want to copy everything off and then have the duplicate assume the drive letter so that, as far as the software is aware, nothing is different. This is tricky because of the .mdf and .ldf files; everything else will work with a straight-up file copy. Can anyone suggest a means to achieve what I am describing? Many thanks!
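
    For the SQL Server files specifically, a cleaner route than raw-copying live .mdf/.ldf files is detach, copy, re-attach; a sketch with placeholder database names and paths (the system databases, master included, are moved by changing the instance's startup parameters instead, so treat those separately):

      -- with the server application stopped
      USE master;
      EXEC sp_detach_db @dbname = N'AppDB';
      -- copy AppDB.mdf and AppDB_log.ldf from the TeraStation to the local storage, then:
      CREATE DATABASE AppDB
          ON (FILENAME = 'D:\Data\AppDB.mdf'),
             (FILENAME = 'D:\Data\AppDB_log.ldf')
          FOR ATTACH;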

    Read the article

  • Content not being compressed even though I'm using zlib in php.ini

    - by Tola Odejayi
    I've edited my php.ini file so that it has these two entries: zlib.output_compression = On zlib.output_compression_level = 4 However, after restarting apache, when I request php pages, the headers returned in the response indicate that my server is still NOT serving compressed pages (here are selected headers as viewed using Chrome's Network feature): Cache-Control:no-cache, must-revalidate, max-age=0 Connection:Keep-Alive Content-Type:text/html; charset=UTF-8 Date:Mon, 17 Sep 2012 23:46:13 GMT Expires:Wed, 11 Jan 1984 05:00:00 GMT Last-Modified:Mon, 17 Sep 2012 23:46:13 GMT Pragma:no-cache Proxy-Connection:Keep-Alive Server:Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 PHP/5.2.17 Transfer-Encoding:chunked Via:1.1 XXX-PRXY-07 X-Powered-By:PHP/5.2.17 What might I be doing wrong? Is there any other setting that I need to change? EDIT Here is another set of headers returned to another computer: Cache-Control:no-cache, must-revalidate, max-age=0 Connection:close Content-Type:text/html; charset=UTF-8 Date:Thu, 20 Sep 2012 09:45:26 GMT Expires:Wed, 11 Jan 1984 05:00:00 GMT Last-Modified:Thu, 20 Sep 2012 09:45:26 GMT Pragma:no-cache Server:Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 PHP/5.2.17 Transfer-Encoding:chunked Vary:Cookie X-Powered-By:PHP/5.2.17
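
    A quick way to confirm whether compression is ever negotiated, since the client has to ask for it; the URL is a placeholder:

      # request a page while advertising gzip support and dump the response headers
      curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" http://www.example.com/page.php
      # a compressed response includes a "Content-Encoding: gzip" header

      # check the value the running PHP actually has (note: the CLI can read a different
      # php.ini than mod_php, so phpinfo() served through Apache is the authoritative check)
      php -i | grep zlib.output_compression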

    Read the article

  • Connect using sqlplus to db server through multiple tunnels

    - by Samuel Lindblom
    I would like to create an SQL Developer connection to a database through a couple of tunnels. The steps right now are: connect to server A, connect to server B from there, then run sqlplus against a TNS name on a server that I do not have SSH access to. Is there an easy way of using SQL Developer instead of sqlplus? I have read through 20 articles on the subject and still have no idea how to actually make the connection. I understand that you can chain ssh -L commands to get the server connection, but I don't know how to use that connection in SQL Developer.
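
    A sketch of the chained forward with placeholder ports and a placeholder DB host: each hop listens locally and forwards to the next, and SQL Developer then connects to localhost as if the listener were local.

      # hop 1: local port 2222 goes through server A to server B's sshd
      ssh -N -L 2222:serverB:22 user@serverA
      # hop 2 (second terminal): local 1521 goes through that tunnel to the DB host's listener
      ssh -N -p 2222 -L 1521:dbhost:1521 user@localhost

    In SQL Developer the connection is then plain host localhost, port 1521, with the service name taken from the TNS entry you normally pass to sqlplus.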

    Read the article

  • Backing up a MySQL DB with a mixture of InnoDB and MyISAM tables

    - by madphp
    I have a large database (almost 1 GB) and it has a mixture of InnoDB and MyISAM tables. Does anyone have any general tips when backing it up, or more specifically about the options I should pass to mysqldump? I see that I should lock MyISAM tables and use a single transaction for InnoDB, but what if I have both? Also, what is actually happening when I lock an entire (very big) table on a production database?
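
    A hedged starting point (credentials and database name are placeholders): --single-transaction gives a consistent snapshot of the InnoDB tables without locking them, but it cannot make the MyISAM tables transactionally consistent; --lock-all-tables does, at the price of read-locking everything for the duration.

      # consistent for InnoDB; MyISAM tables are dumped as-is without a global lock
      mysqldump --single-transaction --routines --triggers -u backupuser -p mydb > mydb.sql
      # alternative: fully consistent, but holds read locks on every table until the dump finishes
      mysqldump --lock-all-tables --routines --triggers -u backupuser -p mydb > mydb.sql

    As for what locking means in practice: mysqldump takes READ locks, so other clients can still read the table, but any writes block until the dump of that table completes.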

    Read the article

  • SQL Server DB backup issue: deleting old backup files

    - by David.Chu.ca
    I tried to use the sqlmaint.exe tool to back up a database on a remote PC. Here is an example of the backup: sqlmaint.exe -S remoteSQLServer\SQLInstance -U username -P pwdxxx -D myDB -BkUpMedia DISK -BkUpDB C:\MSSQL_Backups -DelBkUps 3days ... Here I specified to delete backups older than 3 days. However, the job does not seem to delete old .bak files on the remote PC (where the SQL Server sits). The remote PC runs Windows 2008 Server. I also set C:\MSSQL_Backups as a shared network drive with Everyone as owner. My understanding is that the job should delete any .bak files older than 3 days. Not sure what I missed? By the way, the job runs on a box with SQL Server 2005 installed.
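
    If -DelBkUps keeps being ignored, one workaround is to let the Windows 2008 box prune by age itself with a small scheduled task; a sketch using forfiles, with the path from the question and the same 3-day threshold:

      forfiles /P "C:\MSSQL_Backups" /M *.bak /D -3 /C "cmd /c del @path"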

    Read the article

  • MySQL: migrating a huge DB from InnoDB to NDB cluster fails with "the table is full"

    - by Nguyen Trong Nhan
    I'm trying to migrate an old database to MySQL Cluster (4 data nodes) by using the command: ALTER TABLE sample ENGINE=NDBCLUSTER but I'm getting the following error: The table '#sql-7ff3_3' is full There are approximately 300 million rows in this table. Here are my config files: /mysql-cluster/config.ini [NDBD DEFAULT] NoOfReplicas=2 DataDir=/data/mysql-cluster/ndb/ BackupDataDir=/data/mysql-cluster/backup/ DataMemory=10G IndexMemory=5G TimeBetweenLocalCheckpoints=6 FragmentLogFileSize=256MB NoOfFragmentLogFiles=50 MaxNoOfOrderedIndexes=8000 MaxNoOfConcurrentOperations=100000 MaxNoOfTables = 10000 RedoBuffer=128M MaxNoOfAttributes=5000 MaxNoOfUniqueHashIndexes=1024 /etc/my.cnf [mysqld] basedir=/usr/local/mysql datadir=/data/mysql-cluster/mysqld/ event_scheduler=on default-storage-engine=ndbcluster ndbcluster ndb-connectstring=192.168.x.x,192.168.x.x innodb_file_per_table innodb_buffer_pool_size = 512MB key_buffer = 512M key_buffer_size = 512M sort_buffer_size = 512M table_cache = 1024 read_buffer_size = 512M
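
    "The table is full" during an ALTER ... ENGINE=NDBCLUSTER normally means the data nodes ran out of DataMemory/IndexMemory, not disk. With NoOfReplicas=2 and four data nodes at DataMemory=10G each, roughly 20 GB is available for row data, and every row also pays per-row and index overhead, so 300 million rows can easily exceed it. A quick way to see how close the nodes are (run from the management host):

      ndb_mgm -e "all report memoryusage"

    If the table genuinely will not fit in memory, the usual options are raising DataMemory/IndexMemory or moving the non-indexed columns into NDB disk data tablespaces instead of keeping everything memory-resident.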

    Read the article
