Search Results

Search found 10966 results on 439 pages for 'kevin db'.


  • How to add admin users in 389 LDAP, fedora directory server

    - by chandank
    I want to create a couple of admin users who have access to create/delete users in a particular group/Organizational Unit. For example, the user uid=testadmin,ou=people,dc=my,dc=net should be able to create and delete users under ou=People,dc=my,dc=net. I tried the ACI below, but it did not work:
        (target = "ldap:///ou=People,dc=my,dc=net")(targetattr = "*")
        (version 3.0;acl "testadmin Permissions";allow (proxy)(userdn = "ldap:///uid=testadmin,ou=people,dc=my,dc=net");)
    I am able to add administrative users from the Directory Server console, but that user data is not stored in the ldif files, only in the binary database at /var/lib/dirsrv/slap-ldap/db/. The only problem is that those users have full power and I am not sure how to restrict their access.
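    For comparison, a minimal sketch of an ACI that grants those rights, assuming standard 389/Fedora DS ACI syntax; note that allow (proxy) only grants proxy authorization, not the ability to add or delete entries, so the permission list is the part that changes (the subtree and bind DN below are the poster's, the file name admin.ldif is hypothetical):
        # admin.ldif
        dn: ou=People,dc=my,dc=net
        changetype: modify
        add: aci
        aci: (target = "ldap:///ou=People,dc=my,dc=net")(targetattr = "*")(version 3.0;acl "testadmin Permissions";allow (add, delete, write)(userdn = "ldap:///uid=testadmin,ou=people,dc=my,dc=net");)

        # apply it as the directory manager
        ldapmodify -x -D "cn=Directory Manager" -W -f admin.ldif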

    Read the article

  • verisign certificate into jboss server SSL

    - by rfders
    I'm trying to enable JBoss to use the SSL protocol with a previously generated certificate from Verisign. I imported both certificates, the server certificate and the CA certificate, into the keystore file, and I configured server.xml to use that keystore and activate SSL. Then, when I run JBoss, I get this error: "certificate or key corresponds to the SSL cipher suites which are enabled". Question: reading some posts on the internet, I found that every example starts by generating a Certificate Request. Is it strictly necessary to do that if I already have the server certificate, and does that CSR have to be imported into the keystore as well? At this point I'm very confused about this issue; I have tried almost every solution posted in several forums but so far I haven't had any luck! Can you give me some tips to solve this problem? Thanks in advance. This is my keystore file:
        Keystore type: jks
        Keystore provider: SUN
        Your keystore contains 2 entries
        j2ee, Dec 29, 2009, trustedCertEntry,
        Certificate fingerprint (MD5): 69:CC:2D:2A:2D:EF:C4:DB:A2:26:35:57:06:29:7D:4C
        ugent, Dec 29, 2009, trustedCertEntry,
        Certificate fingerprint (MD5): AC:D8:0E:A2:7B:B7:2C:E7:00:DC:22:72:4A:5F:1E:92
    and my server.xml configuration:
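    A minimal sketch of the usual keytool flow, with hypothetical alias and file names: the keystore listing above shows both entries as trustedCertEntry, while JBoss/Tomcat needs a private key entry, which only exists when the key pair and CSR were generated in that same keystore and the signed certificate was imported back under the same alias.
        keytool -genkey -alias server -keyalg RSA -keystore server.keystore       # creates the private key entry
        keytool -certreq -alias server -keystore server.keystore -file server.csr # CSR to send to Verisign
        keytool -import -alias ca -trustcacerts -file verisign_ca.crt -keystore server.keystore
        keytool -import -alias server -file signed_server.crt -keystore server.keystore
        keytool -list -keystore server.keystore   # the "server" entry should now show a private key entry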

    Read the article

  • Does an MKV meta tag / metadata editor exist?

    - by Vittorio
    Hi, I'm looking for an MKV meta tag editor. I'm using PLEX Media Server and PLEX Media Center on my iMac to watch movies. PLEX is great because it automatically finds and names my whole movie library with year, director, genre, original title, description, movie poster art, etc. Unfortunately, it saves all that data only in an app DB file without editing any tags in the MKV files. About 20% of the movies need to be fixed, or PLEX needs help to find the exact movie name, so if I ever need to move my library elsewhere, I have to do all the tagging work again. So, does an MKV meta tag editor exist? Oh, and I'm a Mac user.

    Read the article

  • SQL Server tempdb size seems large, is this normal?

    - by Abe Miessler
    From what I understand, the tempdb system database is used to hold temporary tables, intermediate results and other temporary information. On one of my database instances I have a tempdb that seems very large (30GB). This database has not been modified (as in the "last modified date" on the mdf file) in over a week. Is it normal for tempdb to remain that large for that long a period? It seems to me that it should be updated fairly often and should return the space it is using fairly quickly... Am I way off here, or is SQL Server doing something weird? FYI: this is a SharePoint 2010 database, not sure if that makes a difference.
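    A minimal sketch of how to see what is actually occupying tempdb (the query is standard; the local default instance "-S ." is an assumption). Worth keeping in mind: tempdb data files keep whatever size they have grown to until the instance restarts or the files are shrunk, so a large mdf on disk does not by itself mean the space is still in use.
        sqlcmd -S . -E -Q "SELECT SUM(unallocated_extent_page_count)*8/1024 AS free_mb, SUM(user_object_reserved_page_count)*8/1024 AS user_objects_mb, SUM(internal_object_reserved_page_count)*8/1024 AS internal_objects_mb, SUM(version_store_reserved_page_count)*8/1024 AS version_store_mb FROM tempdb.sys.dm_db_file_space_usage;"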

    Read the article

  • Redmine Subversion: LDAP _and_ local auth

    - by Frank Brenner
    I need to set up a Subversion repository with Apache authentication against both an external LDAP server and the local Redmine database. That is, we have users whose accounts exist only in the LDAP directory and some users whose accounts exist only in the local Redmine db - all of them should be able to access the repo. I can't quite seem to get the Apache config right for this. I know I saw a how-to for this at some point, I think using Redmine.pm, but I can't seem to find it anymore. Thanks.
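    A minimal sketch of the Redmine.pm approach the poster remembers, assuming a MySQL-backed Redmine and hypothetical paths and credentials; the exact directive names should be checked against the Redmine.pm shipped with the installed Redmine version. Redmine.pm authenticates against Redmine itself, which checks its own user table and any configured LDAP auth sources, so both kinds of account can get through:
        PerlLoadModule Apache::Authn::Redmine
        <Location /svn>
          DAV svn
          SVNParentPath "/var/svn"
          AuthType Basic
          AuthName "Redmine Subversion"
          Require valid-user
          PerlAccessHandler Apache::Authn::Redmine::access_handler
          PerlAuthenHandler Apache::Authn::Redmine::authen_handler
          RedmineDSN "DBI:mysql:database=redmine;host=localhost"
          RedmineDbUser "redmine"
          RedmineDbPass "secret"
        </Location>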

    Read the article

  • Shell script for replacing string in all PHP-files, for each user

    - by Mads Skjern
    Each user has some php-files using a shared database commondb. I want to iterate over all users (in users.csv), and in their home folder (e.g. /home/joe) find all php files recursively, and replace each occurrence of "commondb" with their own databasename, e.g. "joedb" for "joe". I have tried the following:
        #!/bin/bash
        # Execute like this:
        # bash localize.bash users.csv
        OLDIFS=$IFS
        IFS=","
        while read name dummy
        do
          echo $name
          find /home/${name} -name '*.php' -exec sed -i '' 's/commondb/${name}db/g' "{}" \;
        done < $1
        IFS=$OLDIFS
    for users.csv:
        joe, Joe J
        george, George G
    It does not fail, but the files are unchanged. I am quite weak in bash, and I can't figure out how to debug it :/ Can my script be fixed to work?
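    A minimal sketch of a fix, assuming GNU sed: the main bug is that the single-quoted sed expression prevents ${name} from expanding (sed literally searches for ${name}db), and the empty '' after -i is BSD sed syntax, which GNU sed misreads, so the quoting and the -i argument are the two things to change:
        #!/bin/bash
        # bash localize.bash users.csv
        while IFS=',' read -r name dummy
        do
          echo "$name"
          # double quotes let ${name} expand; GNU sed takes -i with no suffix argument
          find "/home/${name}" -name '*.php' -exec sed -i "s/commondb/${name}db/g" {} \;
        done < "$1"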

    Read the article

  • SQL Server 2008 Log-shipping: Without a UNC drive: how?

    - by samsmith
    My real question here is... is there a tool I can use? (I have a lot to do, and would prefer not to script it all up myself!) Is anyone using the Redgate tool? (Hmm, they had a tool for this, but I do not see it on their web site now...) I have a primary web app at Rackspace and am setting up a backup copy of the app in another data center. I want to use SQL log shipping to sync the db, using SQL Server Web Edition. TIA for suggestions and insight!
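    For reference, a minimal sketch of scripting it by hand without a UNC share, assuming a hypothetical database name, staging folder and some out-of-band copy step (FTP, robocopy over VPN, etc.) between the two data centers; this is the backup / copy / restore-with-NORECOVERY loop that log shipping automates:
        sqlcmd -S PRIMARY -E -Q "BACKUP LOG AppDb TO DISK = 'D:\ship\AppDb_20140610.trn' WITH INIT"
        rem ... copy D:\ship\AppDb_20140610.trn to the secondary data center by FTP or another transport ...
        sqlcmd -S SECONDARY -E -Q "RESTORE LOG AppDb FROM DISK = 'D:\ship\AppDb_20140610.trn' WITH NORECOVERY"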

    Read the article

  • Download databasename.bak file

    - by Jordon
    I have downloaded a databasename.bak file from my hosting company. When I try to restore that DB file in SQL Server 2008 it keeps giving me the following error:
        The media family on device 'C:\go4sharepoint_1384_8481.bak' is incorrectly formed.
        SQL Server cannot process this media family.
        RESTORE HEADERONLY is terminating abnormally. (Microsoft SQL Server, Error: 3241)
    According to this error, and the following link http://www.sqlcoffee.com/Troubleshooting047.htm, it is clear that either the file I am downloading is corrupt or it is getting corrupted on the way. Any idea why I keep receiving this error? I have tried almost everything but have been unable to fix this problem, please help me.
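    A minimal sketch of how to narrow it down, using the poster's path on a local default instance (an assumption). Two common causes of error 3241 are a transfer that mangled the file (for example an FTP download in ASCII rather than binary mode, or a truncated download, so comparing the file size with the copy on the host is worth doing first) and a backup taken on a newer SQL Server version than the one doing the restore:
        sqlcmd -S . -E -Q "RESTORE VERIFYONLY FROM DISK = 'C:\go4sharepoint_1384_8481.bak'"
        sqlcmd -S . -E -Q "SELECT @@VERSION  -- compare with the version the hosting company runs"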

    Read the article

  • mysql single database relocation

    - by asdmin
    I would like to know if it's possible to operate different databases in different filesystem locations. Background: we are a hosting service which provides mysql, web, and smtp to its customers, but all our services (sql, smtp, http) are located in different places. We are going to assign a single logical volume to each customer, which will accommodate the customer's mail, web pages and (hopefully) sql database. Web pages and mail are already covered, but I am not able to find a configuration setting which would let me specify the location of a database (the directory where mysql stores the DB). Let me highlight that the target here is to relocate different databases to different locations in the filesystem, not to move them from a single place to another (single) place. Also, please do not bother answering with soft and hard symbolic links. ;) Thanks

    Read the article

  • Wordpress network admin pointing to root as opposed to subdirectory

    - by Ian
    I've installed Wordpress on my nginx server in /blogs and new networks will be in /blogs/blogname. All my main site links point to example.com/blogs, but when I go to network admin the links point to http://www.example.com/wp-admin/network/ instead of http://www.example.com/blogs/wp-admin/network/. Here's the multisite section in my config:
        define('MULTISITE', true);
        define('SUBDOMAIN_INSTALL', false);
        $base = '/blogs';
        define('DOMAIN_CURRENT_SITE', 'www.example.com');
        define('PATH_CURRENT_SITE', '/');
        define('SITE_ID_CURRENT_SITE', 1);
        define('BLOG_ID_CURRENT_SITE', 1);
    If I try changing PATH_CURRENT_SITE to /blogs, I get a db connection error. Thanks.
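    A minimal sketch of one common fix, assuming the default wp_ table prefix and a database named wordpress (both hypothetical): PATH_CURRENT_SITE normally needs the trailing slash ('/blogs/'), and the path stored in the multisite tables has to match it, otherwise WordPress cannot find the site row and reports a database connection error like the one the poster sees.
        # in wp-config.php (note the trailing slash):
        #   define('PATH_CURRENT_SITE', '/blogs/');
        mysql -u root -p wordpress -e "UPDATE wp_site  SET path = '/blogs/' WHERE id = 1;
                                       UPDATE wp_blogs SET path = '/blogs/' WHERE blog_id = 1;"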

    Read the article

  • backup sql databases, folders; 7ZIP and copy to ftp

    - by laurens
    Hi all, we are quite stuck on which solution to choose for this backup issue. What should happen: first, there should be an interface where we can choose several SQL databases (checkboxes or whatever); a few folders should also be backed up - this could be part of the program or separate. I'm thinking of an interface for selecting folders, but a txt file (or xml) with paths to folders is just as good. Next, everything should be 7-Zipped, SQL DBs and files separately. Finally, everything should be copied to a local network drive and then copied via FTP. Also important: it could be programmed or (partly) bought, but it can't be one of those expensive $1000+ backup tools. I already found a fairly priced tool that does most of the tasks (7-Zip and copy to FTP): sqlbackupandftp.com. For your information: we had a kind of self-made tool created by a colleague some time ago, but it became very unreliable and as the databases grew it couldn't handle them anymore... moving on. Please come up with suggestions. Thanks in advance!
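    A minimal sketch of the scripted route, assuming SQL Server, a 7-Zip install and a command-line FTP client such as curl; the database names, folders, share and credentials are all hypothetical, and the list of databases/folders could just as well be read from the txt/xml file the poster mentions:
        rem backup.cmd - dump each listed database, 7-Zip DBs and folders separately, copy to the LAN share, then FTP
        for %%D in (AppDb ReportsDb) do sqlcmd -S . -E -Q "BACKUP DATABASE [%%D] TO DISK = 'D:\backup\%%D.bak' WITH INIT"
        "C:\Program Files\7-Zip\7z.exe" a D:\backup\databases.7z D:\backup\*.bak
        "C:\Program Files\7-Zip\7z.exe" a D:\backup\folders.7z D:\data\folder1 D:\data\folder2
        copy D:\backup\*.7z \\fileserver\backups\
        curl -T D:\backup\databases.7z ftp://ftp.example.com/backups/ --user backupuser:secret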

    Read the article

  • How to properly backup mediawiki database (mysql) without messing up the data?

    - by Toto
    I want to back up a mediawiki database stored in a MySQL 5.1.36 server using mysqldump. Most of the wiki articles are written in Spanish and I don't want to mess them up by creating the dump with the wrong character set.
        mysql> status
        --------------
        ...
        Current database:    wikidb
        Current user:        root@localhost
        ...
        Server version:      5.1.36-community-log MySQL Community Server (GPL)
        ...
        Server characterset: latin1
        Db     characterset: utf8
        Client characterset: latin1
        Conn.  characterset: latin1
        ...
    Using the following command:
        mysql> show create table text;
    I see that the table create statement sets the charset to binary:
        CREATE TABLE `text` (
          `old_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
          `old_text` mediumblob NOT NULL,
          `old_flags` tinyblob NOT NULL,
          PRIMARY KEY (`old_id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=317 DEFAULT CHARSET=binary MAX_ROWS=10000000 AVG_ROW_LENGTH=10240
    How should I use mysqldump to properly generate a backup for that database?
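    A minimal sketch of a dump that avoids re-encoding, using standard mysqldump options (the file name is hypothetical); MediaWiki keeps the article text in binary blobs, as the CREATE TABLE above shows, so forcing the connection character set and dumping blobs as hex keeps the Spanish text byte-for-byte intact:
        mysqldump -u root -p --default-character-set=utf8 --hex-blob \
            --single-transaction wikidb > wikidb_backup.sql
        # restore with the same character set forced on the client connection:
        mysql -u root -p --default-character-set=utf8 wikidb < wikidb_backup.sql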

    Read the article

  • PostgreSQL disaster recovery options

    - by Alex
    My customer has quite a large PostgreSQL database (the total "data" folder size is 200G) and we are working on a disaster recovery plan. We have identified three different types of disaster so far: hardware outage, too much load, and unintentional data loss due to an erroneously executed bad migration (like DELETE or ALTER TABLE DROP COLUMN). The first two types seem easy to mitigate, but we can't come up with a good mitigation plan for the third type. I proposed using ZFS and frequent (hourly) snapshots, but "ZFS" means "OpenIndiana" these days and our Ops engineers do not have much expertise in it, so using OpenIndiana introduces another risk. Colleagues are trying to convince me that restoring from a PostgreSQL PITR backup can be as fast as restoring from a ZFS snapshot, but I highly doubt that replaying, say, 50G of archived WALs can be considered "fast". What other options are we missing? Is ZFS the only viable alternative? Can we get a fast Pg DB restore time in the Linux environment?
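    For reference, a minimal sketch of the PITR restore the colleagues are proposing (recovery.conf syntax from the 8.x/9.x era, with hypothetical archive path and timestamp): the restore starts from a base backup and replays archived WAL only up to a point just before the bad migration, and that replay time, rather than the copy time, is exactly the part the poster is worried about.
        # recovery.conf placed in the restored data directory
        restore_command      = 'cp /wal_archive/%f "%p"'
        recovery_target_time = '2014-06-10 03:55:00'   # stop just before the bad migration ran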

    Read the article

  • Backup broken PostgreSQL 8.4 without pg_dump

    - by Daniil
    So, I have a problem. PostgreSQL 8.4 won't start or restart, and gives no output. It worked for 3 months until the hosting provider rebooted the server. Now it is completely broken: it won't start and doesn't give any output or log.
        pg_dump: [archiver (db)] connection to database "postgres" failed: No such file or directory
                Is the server running locally and accepting
                connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
    Now I want to back up my database (or just get the pgsql socket up) so I can reinstall postgresql. How?
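    A minimal sketch of a file-level backup that needs no running server, assuming the Debian/Ubuntu-style layout suggested by the /var/run/postgresql socket path (adjust the paths for the actual install): with the postmaster stopped, a copy of the whole cluster directory is a complete backup that a reinstalled 8.4 can start from.
        /etc/init.d/postgresql stop 2>/dev/null || true                 # make sure nothing half-started is running
        tar czf /root/pg84-data.tar.gz /var/lib/postgresql/8.4/main    # the cluster's data directory
        tar czf /root/pg84-conf.tar.gz /etc/postgresql/8.4/main        # Debian-style layouts keep the config outside the data dir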

    Read the article

  • Amazon EC2 - Unable to connect to MySQL

    - by alexus
    I'm having an issue connecting from one VM to another:
        # nmap -p3306 ip-XX-XX-XX-XX.ec2.internal
        Starting Nmap 6.40 ( http://nmap.org ) at 2014-06-10 17:50 EDT
        Nmap scan report for ip-XX-XX-XX-XX.ec2.internal (XX.XX.XX.XX)
        Host is up (0.000033s latency).
        PORT     STATE  SERVICE
        3306/tcp closed mysql
        Nmap done: 1 IP address (1 host up) scanned in 1.05 seconds
    In my Security Group I allowed inbound connectivity via TCP, port range 3306 and source 0.0.0.0/0, so theoretically it should work, but in reality it doesn't. I'm running Red Hat Enterprise Linux 7 on both VMs. mariadb.service is running fine on the other VM and I am able to connect to it locally. On the DB VM:
        # netstat -anp | grep 3306
        tcp   0   0 0.0.0.0:3306   0.0.0.0:*   LISTEN   2324/mysqld
        # iptables -L
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination
        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination
        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination
    Any ideas what else I missed?
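    A minimal sketch of checks worth running, assuming RHEL 7 defaults: firewalld may still be filtering even when the legacy iptables listing looks empty, MariaDB may be restricted to localhost, and a "closed" (rather than "filtered") state can also mean the scanned name resolves to a different instance than the one MariaDB is on.
        # on the VM running MariaDB
        firewall-cmd --state && firewall-cmd --list-all                       # is firewalld filtering the port?
        firewall-cmd --permanent --add-port=3306/tcp && firewall-cmd --reload
        grep -rE 'bind-address|skip-networking' /etc/my.cnf /etc/my.cnf.d/    # is MariaDB limited to localhost?
        hostname; ip addr show eth0                                           # which address is this host really on?
        # on the client VM
        getent hosts ip-XX-XX-XX-XX.ec2.internal                              # does the name resolve to that address?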

    Read the article

  • SSRS Errors "Use Local", even though I am

    - by Corey Coogan
    I am at a loss. I posted this on SO, but think this is probably a better place. I have searched high and low and don't know what to do. I am running SQL Server Web Edition on Server 2008, which only supports local databases. I am trying to connect to localhost, but when I test my connection, I get this error. The feature: "The edition of Reporting Services that you are using requires that you use local SQL Server relational databases for report data sources and the report server database." is not supported in this edition of Reporting Services. The DB was upgraded from SQL Express and when I select @@version, it says it's Web Edition. I've tried rebooting and that seemed to fix it, but only for a little while.

    Read the article

  • centos 100% disk full - How to remove log files, history, etc?

    - by kopeklan
    mysqld won't start because disk space is full:
        101221 14:06:50 [ERROR] /usr/libexec/mysqld: Error writing file '/var/run/mysqld/mysqld.pid' (Errcode: 28)
        101221 14:06:50 [ERROR] Can't start server: can't create PID file: No space left on device
    Running df -h:
        Filesystem    Size  Used  Avail  Use%  Mounted on
        /dev/sda2      16G  3.2G    12G   23%  /
        /dev/sda5     4.8G  4.6G      0  100%  /var
        /dev/sda3     430G  855M   407G    1%  /home
        /dev/sda1      76M   24M    49M   33%  /boot
        tmpfs         956M     0   956M    0%  /dev/shm
    du -sh * in /var:
        12K account   56M cache    24K db     32K empty      8.0K games
        1.5G lib      8.0K local   32K lock   16K lost+found    0 mail
        24K named     8.0K nis     8.0K opt   8.0K preserve  8.0K racoon
        292K run      70M spool    8.0K tmp   76K webmin     2.6G www
        20K yp
    On /dev/sda5 there are website files in /var/www. Because this is the first time, I have no idea which files to remove other than moving /var/www to another partition. And one more thing: what is the right way to remove log files, history, etc. on /dev/sda5?
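    A minimal sketch of reclaiming space on /var, based on what the du output above shows (the big consumers are /var/www at 2.6G, /var/lib at 1.5G and /var/log at 221M); the paths are the poster's, the destination under the nearly empty /home partition is an assumption:
        du -sm /var/log/* | sort -n | tail       # find the largest logs
        > /var/log/messages                      # truncate a live log file instead of deleting it
        logrotate -f /etc/logrotate.conf         # force a rotation/compression pass
        # move the web root to /home and leave a symlink (or a bind mount) behind
        mv /var/www /home/www && ln -s /home/www /var/www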

    Read the article

  • How can I uninstall a clustered SQL instance if the cluster has been destroyed?

    - by Bob
    This is my first time going through this scenario, and apparently I did it very wrong. On the DB servers I deleted the cluster group that held SQL and Reporting Services, then I destroyed the cluster, and then I tried to uninstall SQL. No dice. SQL still thinks it's part of the non-existent cluster and will not let me uninstall it. I went into the Maintenance menu of SQL setup and tried Remove Node... nope. Unless I find a way out of this, I will have to rebuild the OS to get SQL off the box.

    Read the article

  • How to monitor the size of files in Windows folder?

    - by zladuric
    What are some good ways to automatically monitor the size of files in a directory on a Windows server and send a warning email if they get close to a certain limit? I have a Progress DB installation to keep in check, and last week we hit some problems: the size of an extent hit 2GB - Progress won't work past that - so we needed to open a new extent. I'm coming from a Linux environment, so I don't know what the usual ways (or monitoring tools) are for this in a Windows environment. I'd prefer a generic solution, as I have a mixed environment (Windows 2000, Windows Server 2003, Windows Server 2008 R2). Thanks in advance for any usable answers.

    Read the article

  • Techniques to Monitor cron tasks?

    - by Tristan Juricek
    Are there good techniques for monitoring cron tasks over a cluster? We're starting to use cron to launch tasks at daily intervals. A few ideas for gathering the information:
        1. Add special application handling that logs information to some "network aware" place, like a DB.
        2. Build up a logfile system that transfers the cron log periodically to a central point for processing/querying (along with other possible log files).
    I'm wondering if people have had success doing things separately for cron versus other things, or if the tasks were integrated into a different approach completely. I'm leaning towards #2, but I'd like to know what more experienced folk might try out.
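    For comparison, a minimal sketch of option 1, assuming a wrapper script and a central syslog host (the script name and paths are hypothetical); each cron entry calls its real command through the wrapper, so success or failure ends up in one queryable place:
        #!/bin/bash
        # cronwrap.sh <job-name> <command...> - run a cron job and report its outcome to syslog
        JOB="$1"; shift
        if "$@" >> "/var/log/cron-jobs/${JOB}.log" 2>&1; then
            logger -t cronjob "${JOB} OK"
        else
            logger -t cronjob -p user.err "${JOB} FAILED (exit $?)"
        fi
    A crontab entry then looks like: 0 4 * * * /usr/local/bin/cronwrap.sh nightly-report /usr/local/bin/nightly-report.sh, and the central syslog can be watched for cronjob lines containing FAILED.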

    Read the article

  • Easy way to update apache on a server cluster with shared NFS conf?

    - by Simon
    We have a setup where a cluster of web servers, connected to a db/files/conf server shared over NFS, serves our sites behind an Elastic Load Balancer at Amazon EC2. The setup works correctly, but keeping it up to date is becoming hell, because the apache/php configuration the web servers use is shared through NFS. So if we try to run an apt-get upgrade on a server in the cluster, it aborts because the web server is not able to write the configuration back to the NFS server. Every time we want to update the machines, or install a package like php-curl, we need to create a new AMI so the changes are reflected in the newly launched AMIs. Is there a simpler way of doing things? Thanks in advance!

    Read the article

  • Create a tunnel to my dedicated windows server

    - by Mobiz
    I have a Win 2008 dedicated server. Remote access to the MSSQL db is disabled, but I want to connect to it from my system during development, so I need to create something like a tunnel from my laptop to access it. I don't have a static IP. Another reason for wanting a tunnel is that my server's IP has been whitelisted with another server: the data must originate from my dedicated server, and only then can I do the testing.
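    A minimal sketch of the tunnel idea, assuming an SSH server can be run on (or in front of) the dedicated box - that part is an assumption, since Windows 2008 does not ship one - and with an arbitrary local port and hypothetical hostname:
        # forwards local port 11433 on the laptop to SQL Server's port 1433 on the dedicated server,
        # so SSMS / the dev app connects to 127.0.0.1,11433 and reaches the otherwise non-remote SQL instance
        ssh -N -L 11433:localhost:1433 devuser@dedicated-server.example.com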

    Read the article

  • setting up a private network using linksys router

    - by user287745
    Scenario:
        - a database server running SQL Server 2005 and SQL Server Management Studio 2005, Express editions
        - a web server running IIS 5.0 on Windows XP Pro
        - two other computers running Windows XP and Windows 98
    I have a Linksys router which I use as an access point for wireless (laptop). There are 5 sockets behind it: four for clients and one for the internet. I would like to set up a LAN - something like a private hosting area with two clients. What should I do? What connects where, and what would the changes in settings be? Right now it uses DHCP or something to assign IPs. Where will the web server be attached - to the internet socket? Where will the db server be attached? Any guide, links or help - thank you.

    Read the article

  • Wordpress Installation on Two Servers - Loadbalancing

    - by rihatum
    Hi all, I have to install WordPress (one blog, one domain, e.g. mycompany.com/blog) on two servers sharing one database; the two web servers are behind a load balancer and the db would be on another server. We are planning it this way due to high traffic. I have done standalone WordPress installations on a single server, on Windows 2003 and 2008 with IIS 6 and 7, and I am just researching how I would implement this. What would be the steps to achieve it? While searching I saw some posts about the wp-content/uploads directory having to be synced at regular intervals. Your help is much appreciated - thanks for reading.
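    On that last point, a minimal sketch of keeping uploads in sync between the two web servers, assuming Linux hosts (on Windows/IIS, robocopy or DFS replication would play the same role); hostnames, paths and the 5-minute interval are hypothetical, and the sync is one-way from the server that receives the uploads. A shared network mount for wp-content/uploads is the other common approach.
        # crontab entry on web server 1, pushing new uploads to web server 2 over SSH
        */5 * * * * rsync -az /var/www/blog/wp-content/uploads/ web2.example.com:/var/www/blog/wp-content/uploads/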

    Read the article
