Search Results

Search found 10028 results on 402 pages for 'berkeley db'.


  • Help with search query in Codeigniter

    - by Indigo
    Hi all, I'm still fairly new to CodeIgniter and am wondering if someone can help me with this, please. I'm trying to do a very basic search query, but for some reason the results ignore my "status = published" condition. The code is:

        $this->db->like('title', $term);
        $this->db->or_like('tags', $term);
        $data['results'] = $this->db->get_where('resources', array('status' => 'published'));

    And this doesn't work either:

        $this->db->like('title', $term);
        $this->db->or_like('tags', $term);
        $this->db->where('status', 'published');
        $data['results'] = $this->db->get('resources');

    I'm sure it's something basic. Help, please?
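
    A likely cause is that or_like() is ORed against the rest of the WHERE clause, so rows matching only the tag slip past the status filter. A minimal sketch of one workaround, assuming this CodeIgniter version exposes escape_like_str() on the DB driver: group the LIKE conditions by hand so the OR cannot bypass the status check.

        $term_esc = $this->db->escape_like_str($term);
        // group the two LIKEs so the OR stays inside the parentheses
        $this->db->where("(title LIKE '%{$term_esc}%' OR tags LIKE '%{$term_esc}%')", NULL, FALSE);
        $this->db->where('status', 'published');
        $data['results'] = $this->db->get('resources');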


  • Resources access problem from JAR

    - by aeter
    I have the following directory structure for a project in Eclipse:

        project/
            src/
            resources/
                trayicon2.png
            db/
                test.db
            bin/

    I'm accessing the image with:

        Image image = Toolkit.getDefaultToolkit().getImage("resources/trayicon2.png");

    and from Eclipse that is not a problem. Then I generate an "executable jar file" and add the dirs, giving a directory structure of:

        project/
            db/
                test.db
            resources/
                trayicon2.png
            project.jar

    Now the image is no longer accessible. The database is also no longer accessible; in Eclipse I used to access it with:

        Connection conn = DriverManager.getConnection("jdbc:sqlite:db/test.db");

    How can I access the resources (images and db) after generating the jar file "project.jar"?
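
    The relative paths only resolve while the working directory happens to be the project folder, which Eclipse guarantees and a double-clicked jar does not. A minimal sketch of two common fixes, assuming either that resources/ gets packaged inside the jar (option 1) or that db/ stays next to the jar (option 2); the ProtectionDomain lookup is an illustrative technique, not something from the question:

        import java.awt.Image;
        import java.awt.Toolkit;
        import java.io.File;
        import java.net.URL;
        import java.sql.Connection;
        import java.sql.DriverManager;

        public class ResourceLoading {
            public static void main(String[] args) throws Exception {
                // Option 1: package resources/ inside the jar and load it from the classpath
                URL iconUrl = ResourceLoading.class.getResource("/resources/trayicon2.png");
                Image image = Toolkit.getDefaultToolkit().getImage(iconUrl);

                // Option 2: keep db/ next to the jar and resolve it against the jar's folder
                File jarDir = new File(ResourceLoading.class.getProtectionDomain()
                        .getCodeSource().getLocation().toURI()).getParentFile();
                Connection conn = DriverManager.getConnection(
                        "jdbc:sqlite:" + new File(jarDir, "db/test.db").getPath());
            }
        }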


  • When does the DataContext open a connection to the DB?

    - by Eran Betzalel
    I am using L2S to access my MSSQL 2008 Express server. I would like to know when the DataContext opens a connection to the DB, and whether it closes the connection right after opening it. For example:

        var dc = new TestDB();                           // connection opened and closed?
        dc.SomeTable.InsertOnSubmit(obj);                // connection opened and closed?
        foreach(var obj in dc.SomeTable.AsEnumerable())  // connection opened and closed?
        {
            ...                                          // connection opened and closed?
        }
        dc.SubmitChanges();                              // connection opened and closed?
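
    One way to observe this directly is the DataContext.Log property, which echoes each SQL batch at the moment it is sent (and therefore when a connection is actually in use). A small sketch; TestDB and SomeTable come from the question, and the Id column is an assumption for illustration:

        using System;
        using System.Linq;

        class Program
        {
            static void Main()
            {
                using (var dc = new TestDB())
                {
                    dc.Log = Console.Out;                           // print SQL as it is sent

                    var query = dc.SomeTable.Where(t => t.Id > 0);  // nothing logged yet: no SQL sent
                    foreach (var row in query)                      // SQL appears here, on enumeration
                    {
                        Console.WriteLine(row.Id);
                    }

                    dc.SomeTable.InsertOnSubmit(new SomeTable());   // still nothing logged
                    dc.SubmitChanges();                             // the INSERT is logged (and sent) here
                }
            }
        }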


  • Doctrine: Mixing YAML markup and db manager (navicat) editing?

    - by ropstah
    I think the answer to this question should be: no. However, I hope to be corrected. I'd like to edit our database using a mixture of YAML markup + Doctrine createTables() and Navicat editing. Can I maintain the inheritance that is marked up? Example (4 steps; at step 4, Doctrine is in no way able to re-create the inheritance schema... or is it?):

    Step 1: Create YAML with inheritance:

        ---
        Entity:
          columns:
            username: string(20)
            password: string(16)
            created_at: timestamp
            updated_at: timestamp

        User:
          inheritance:
            extends: Entity
            type: column_aggregation
            keyField: type
            keyValue: 1

        Group:
          inheritance:
            extends: Entity
            type: column_aggregation
            keyField: type
            keyValue: 2

    Step 2: Create tables using Doctrine (and drop/create the db if necessary). Generated SQL:

        CREATE TABLE entity (id BIGINT AUTO_INCREMENT, username VARCHAR(20), password VARCHAR(16), created_at DATETIME, updated_at DATETIME, type VARCHAR(255), PRIMARY KEY(id)) ENGINE = INNODB

    Step 3: Edit the table using Navicat.

    Step 4: Refresh the YAML file because of 'external' edits...


  • PHP error notice, and how do I verify uniqueness against the DB?

    - by iamfab
    Error reporting: Notice: Undefined variable: random_chars in wamp\www\php_sandbox\idgen.php on line 21. Call stack: # Time Memory Function Location 1 0.0045 678928 {main}( ) ..\idgen.php:0. Output: GPB7446. How do I fix this error? I'm using this code as an automatic unique ID generator. Also, how do I connect to the DB to verify the code is truly unique before allowing it to be assigned to a user creating a new account? Thanks.

        <?php
        $characters = array("A","B","E","F","G","H","J","K","M","N","P","R","S","T","W","X","Y","Z");
        $keys = array();
        while(count($keys) < 3) {
            $x = mt_rand(0, count($characters)-1);
            if(!in_array($x, $keys)) {
                $keys[] = $x;
            }
        }
        foreach($keys as $key){
            $random_chars .= $characters[$key];
        }
        $randNum = rand(2327,9987);
        $randLet = rand(2327,9987);
        echo $random_chars . $randNum;
        ?>
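
    The notice just means $random_chars is concatenated to before it is ever set. For the uniqueness check, a minimal sketch using mysqli prepared statements; the connection details and the accounts/unique_code table and column names are assumptions for illustration:

        <?php
        $random_chars = '';   // initialise before the .= loop to silence the undefined-variable notice
        // ... build $random_chars and $randNum exactly as in the question, then:
        $code = $random_chars . $randNum;

        $mysqli = new mysqli('localhost', 'user', 'pass', 'mydb');   // hypothetical credentials
        $stmt = $mysqli->prepare('SELECT COUNT(*) FROM accounts WHERE unique_code = ?');
        $stmt->bind_param('s', $code);
        $stmt->execute();
        $stmt->bind_result($count);
        $stmt->fetch();
        $stmt->close();

        if ($count > 0) {
            // collision: regenerate $code and check again before assigning it to the new account
        }
        ?>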


  • How do I delete rows based on a comparison in an SSIS Data Flow Task?

    - by vikasde
    I have a Data Flow task with two OLE DB Source objects. This is the SQL I want to achieve using SSIS:

        INSERT INTO server2.db.dbo.[table2] (...)
        SELECT col1, col2, col3 ...
        FROM Server1.db.dbo.[table1]
        WHERE [table1.col1] NOT IN (SELECT col5 FROM server2.db.dbo.[table2] WHERE ...)

    I am pretty new to SSIS and not sure how to achieve this. I thought I could do it with the Data Flow task by populating the first source with the data from server1.db.dbo.table1 and the second source with server2.db.dbo.[table2], and then doing the conditional check before inserting into server2.db.dbo.[table2]. I am not sure how to do the conditional check, though. Any help is appreciated.


  • linq to sql update data

    - by pranay
    Can I update my employee record as in the function below, or do I have to query the employee first and then update the data?

        public int updateEmployee(App3_EMPLOYEE employee)
        {
            DBContextDataContext db = new DBContextDataContext();
            db.App3_EMPLOYEEs.Attach(employee);
            db.SubmitChanges();
            return employee.PKEY;
        }

    Or do I have to do this?

        public int updateEmployee(App3_EMPLOYEE employee)
        {
            DBContextDataContext db = new DBContextDataContext();
            App3_EMPLOYEE emp = db.App3_EMPLOYEEs.Single(e => e.PKEY == employee.PKEY);
            db.App3_EMPLOYEEs.Attach(employee, emp);
            db.SubmitChanges();
            return employee.PKEY;
        }

    But I don't want to use the second option. Is there a more efficient way to update the data?
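
    A sketch of one common pattern for updating a detached entity without writing the Single() query by hand: attach it, then let the DataContext pull the stored originals so the change tracker can diff them against the values you want to keep. Attach(employee, true) would avoid even that round trip, but only if the table has a timestamp/version column, so the Refresh variant is shown here as a sketch rather than a definitive answer:

        using System.Data.Linq;

        public class EmployeeRepository
        {
            public int UpdateEmployee(App3_EMPLOYEE employee)
            {
                using (var db = new DBContextDataContext())
                {
                    // attach the detached entity, then load the database originals so
                    // SubmitChanges can work out which columns actually changed
                    db.App3_EMPLOYEEs.Attach(employee);
                    db.Refresh(RefreshMode.KeepCurrentValues, employee);
                    db.SubmitChanges();
                    return employee.PKEY;
                }
            }
        }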


  • KindError: Property r must be an instance of SecondModel, why?

    - by zjm1126
        class FirstModel(db.Model):
            p = db.StringProperty()
            r = db.ReferenceProperty(SecondModel)

        class SecondModel(db.Model):
            r = db.ReferenceProperty(FirstModel)

        class sss(webapp.RequestHandler):
            def get(self):
                a = FirstModel()
                a.p = 'sss'
                a.put()
                b = SecondModel()
                b.r = a
                b.put()
                a.r = b
                a.put()
                self.response.out.write(str(b.r.p))

    The error is:

        Traceback (most recent call last):
          File "D:\Program Files\Google\google_appengine\google\appengine\ext\webapp\__init__.py", line 511, in __call__
            handler.get(*groups)
          File "D:\zjm_code\helloworld\a.py", line 158, in get
            a.r=b
          File "D:\Program Files\Google\google_appengine\google\appengine\ext\db\__init__.py", line 3009, in __set__
            value = self.validate(value)
          File "D:\Program Files\Google\google_appengine\google\appengine\ext\db\__init__.py", line 3048, in validate
            (self.name, self.reference_class.kind()))
        KindError: Property r must be an instance of SecondModel

    Thanks.
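
    A note on the model definitions: the old db API already provides the reverse direction for free, because a ReferenceProperty on SecondModel creates an automatic back-reference query on FirstModel. A minimal sketch that drops the circular second pointer and uses that back-reference instead; collection_name is optional and only names the back-reference explicitly:

        from google.appengine.ext import db

        class FirstModel(db.Model):
            p = db.StringProperty()

        class SecondModel(db.Model):
            # creates FirstModel.seconds as an automatic back-reference query
            r = db.ReferenceProperty(FirstModel, collection_name='seconds')

        a = FirstModel(p='sss')
        a.put()
        b = SecondModel(r=a)
        b.put()

        print b.r.p                           # SecondModel -> FirstModel
        print [s.key() for s in a.seconds]    # FirstModel -> its SecondModels, no second pointer needed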


  • How do you insert new entries into an Access db table through an ASP.net website?

    - by CGF
    I need to insert new records into an Access database. I'm using Visual Studio 2008, and I first created an ASP.NET website. I can connect to the information in the Access database using a DataView or GridView and can query a particular entry (i.e. Proposal No. brings up all details linked to that proposal). I can then edit the details of that proposal, and this updates the Access db. What I need is a form that simply enters the details for a new customer, i.e. Enter name [__] Enter address [__], and then for this to update the database. Using the GridView or DataView I am able to view all fields that exist in the table and edit them. Is there a way I can get a blank GridView/DataView template (which includes all the fields in the table) and fill it out to then update the database? Thanks.
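
    One way to handle the "new customer" form without a grid at all is a couple of TextBoxes plus a parameterised OleDb INSERT in the save button's click handler. A minimal sketch; the .mdb path, the Customers table, its column names and the txtName/txtAddress controls are assumptions for illustration:

        using System;
        using System.Data.OleDb;

        public partial class NewCustomer : System.Web.UI.Page
        {
            protected void btnSave_Click(object sender, EventArgs e)
            {
                string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source="
                                 + Server.MapPath("~/App_Data/customers.mdb");

                using (var conn = new OleDbConnection(connStr))
                using (var cmd = new OleDbCommand(
                    "INSERT INTO Customers ([Name], [Address]) VALUES (?, ?)", conn))
                {
                    // OleDb parameters are positional: add them in the same order as the ?s
                    cmd.Parameters.AddWithValue("@p1", txtName.Text);
                    cmd.Parameters.AddWithValue("@p2", txtAddress.Text);

                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }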


  • Most efficient way to Update with Linq2Sql

    - by pranay
    Can I update my employee record as in the function below, or do I have to query the employee first and then update the data?

        public int updateEmployee(App3_EMPLOYEE employee)
        {
            DBContextDataContext db = new DBContextDataContext();
            db.App3_EMPLOYEEs.Attach(employee);
            db.SubmitChanges();
            return employee.PKEY;
        }

    Or do I have to do this?

        public int updateEmployee(App3_EMPLOYEE employee)
        {
            DBContextDataContext db = new DBContextDataContext();
            App3_EMPLOYEE emp = db.App3_EMPLOYEEs.Single(e => e.PKEY == employee.PKEY);
            db.App3_EMPLOYEEs.Attach(employee, emp);
            db.SubmitChanges();
            return employee.PKEY;
        }

    But I don't want to use the second option. Is there a more efficient way to update the data?


  • Why is PHP inserting 0s instead of blank values into my DB?

    - by zeckdude
    I have an insert:

        $sql = 'INSERT into orders SET fax_int_prefix = "'.$_SESSION['fax_int_prefix'].'", fax_prefix = "'.$_SESSION['fax_prefix'].'", fax_first = "'.$_SESSION['fax_first'].'", fax_last = "'.$_SESSION['fax_last'];

    All of these session values are blank right before the insert. Here is an example of one of them, echoed just before the insert:

        $_SESSION[fax_prefix] =

    For some reason it inserts the integer 0 instead of a blank value or null, as it should. Why is it inserting 0s instead of blank values into my DB?
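
    The usual culprit is that the fax columns are numeric (INT or similar), so MySQL coerces the empty string "" to 0 on insert. A small sketch of one way to send NULL for empty values instead, assuming the columns are declared NULLable and that a mysqli connection is available; the helper is hypothetical:

        <?php
        // hypothetical helper: quote a value, or emit SQL NULL when the session value is empty
        function sql_value($mysqli, $value) {
            return ($value === '' || $value === null)
                ? 'NULL'
                : '"' . $mysqli->real_escape_string($value) . '"';
        }

        $sql = 'INSERT INTO orders SET fax_prefix = ' . sql_value($mysqli, $_SESSION['fax_prefix']);
        ?>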


  • In Rails models, symbols get automatically converted to YAML when saving to the DB. What is the correct approach?

    - by Ram
    My model (Game, for example) has a status column, but I usually set the status using symbols, e.g. self.status = :active.

        MATCH_STATUS = {
          :betting_on        => "Betting is on",
          :home_team_won     => "Home team has won",
          :visiting_team_won => "Visiting team has won",
          :game_tie          => "Game is tied"
        }.freeze

        def viewable_status
          MATCH_STATUS[self.status]
        end

    I use the above map to switch between the viewable status and vice versa. However, when the data gets saved to the db, ActiveRecord stores each status with a "--- " prefix, so when I retrieve it back the status is screwed. What should be the correct approach?
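
    The "--- " prefix is the column being YAML-serialised, because a Symbol rather than a String is written into a string column. A minimal sketch of one approach, converting at the attribute boundary so the rest of the code can keep using symbols; write_attribute/read_attribute are standard ActiveRecord, the surrounding model is an assumption:

        class Game < ActiveRecord::Base
          MATCH_STATUS = {
            :betting_on        => "Betting is on",
            :home_team_won     => "Home team has won",
            :visiting_team_won => "Visiting team has won",
            :game_tie          => "Game is tied"
          }.freeze

          # store the symbol as a plain string ("active"), not YAML ("--- :active")
          def status=(value)
            write_attribute(:status, value.to_s)
          end

          # hand a symbol back to callers so MATCH_STATUS lookups keep working
          def status
            value = read_attribute(:status)
            value && value.to_sym
          end

          def viewable_status
            MATCH_STATUS[status]
          end
        end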


  • PHP mysqli help: first row in DB not being returned?

    - by williamsongibson
    Here is my code:

        <?php
        require_once 'connect.php';
        $sql = "SELECT * FROM `db-pages`";
        $result = $mysqli->query($sql) or die($mysqli->error.__LINE__);
        while ($row = $result->fetch_assoc()) {
            echo($row['pagetitle'].' - To edit this page <a href="editpage.php?id='.$row['id'].'">click here</a><br>');
        }
        }
        ?>

    I've added a couple more rows to the database and it's returning them all, apart from id=1 in the DB. Any idea why?


  • Errors trying to run MongoDB

    - by SomeKittens
    I'm running Ubuntu Server 12.04 (32 bit) on an old (1998) computer. Everything's working fine until I try and start MongoDB. somekittens@DLserver01:~$ mongo MongoDB shell version: 2.2.2 connecting to: test Sun Dec 16 22:47:50 Error: couldn't connect to server 127.0.0.1:27017 src/mongo/shell/mongo.js:91 exception: connect failed Googling the error lead me to all sorts of "repair" options, none of which fixed anything. I've also removed MongoDB and installed it again (using apt-get, have not built from source). Mongo's log shows the following error: Thu Dec 13 18:36:32 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability. Thu Dec 13 18:36:32 Thu Dec 13 18:36:32 [initandlisten] MongoDB starting : pid=758 port=27017 dbpath=/var/lib/mongodb 32-bit host=DLserver01 Thu Dec 13 18:36:32 [initandlisten] Thu Dec 13 18:36:32 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data Thu Dec 13 18:36:32 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations Thu Dec 13 18:36:32 [initandlisten] ** with --journal, the limit is lower Thu Dec 13 18:36:32 [initandlisten] Thu Dec 13 18:36:32 [initandlisten] db version v2.2.2, pdfile version 4.5 Thu Dec 13 18:36:32 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267 Thu Dec 13 18:36:32 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49 Thu Dec 13 18:36:32 [initandlisten] options: { config: "/etc/mongodb.conf", dbpath: "/var/lib/mongodb", logappend: "true", logpath: "/var/log/mongodb/mongodb.log" } Thu Dec 13 18:36:32 [initandlisten] Unable to check for journal files due to: boost::filesystem::basic_directory_iterator constructor: No such file or directory: "/var/lib/mongodb/journal" ************** Unclean shutdown detected. Please visit http://dochub.mongodb.org/core/repair for recovery instructions. ************* Thu Dec 13 18:36:32 [initandlisten] exception in initAndListen: 12596 old lock file, terminating Thu Dec 13 18:36:32 dbexit: Thu Dec 13 18:36:32 [initandlisten] shutdown: going to close listening sockets... Thu Dec 13 18:36:32 [initandlisten] shutdown: going to flush diaglog... Thu Dec 13 18:36:32 [initandlisten] shutdown: going to close sockets... Thu Dec 13 18:36:32 [initandlisten] shutdown: waiting for fs preallocator... Thu Dec 13 18:36:32 [initandlisten] shutdown: closing all files... Thu Dec 13 18:36:32 [initandlisten] closeAllFiles() finished Thu Dec 13 18:36:32 dbexit: really exiting now Running through the recovery instructions lead to the following adventure: somekittens@DLserver01:/var/log/mongodb$ mongod --repair Sun Dec 16 22:42:54 Sun Dec 16 22:42:54 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability. 
Sun Dec 16 22:42:54 Sun Dec 16 22:42:54 [initandlisten] MongoDB starting : pid=1887 port=27017 dbpath=/data/db/ 32-bit host=DLserver01 Sun Dec 16 22:42:54 [initandlisten] Sun Dec 16 22:42:54 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data Sun Dec 16 22:42:54 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations Sun Dec 16 22:42:54 [initandlisten] ** with --journal, the limit is lower Sun Dec 16 22:42:54 [initandlisten] Sun Dec 16 22:42:54 [initandlisten] db version v2.2.2, pdfile version 4.5 Sun Dec 16 22:42:54 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267 Sun Dec 16 22:42:54 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49 Sun Dec 16 22:42:54 [initandlisten] options: { repair: true } Sun Dec 16 22:42:54 [initandlisten] exception in initAndListen: 10296 ********************************************************************* ERROR: dbpath (/data/db/) does not exist. Create this directory or give existing directory in --dbpath. See http://dochub.mongodb.org/core/startingandstoppingmongo ********************************************************************* , terminating Sun Dec 16 22:42:54 dbexit: Sun Dec 16 22:42:54 [initandlisten] shutdown: going to close listening sockets... Sun Dec 16 22:42:54 [initandlisten] shutdown: going to flush diaglog... Sun Dec 16 22:42:54 [initandlisten] shutdown: going to close sockets... Sun Dec 16 22:42:54 [initandlisten] shutdown: waiting for fs preallocator... Sun Dec 16 22:42:54 [initandlisten] shutdown: closing all files... Sun Dec 16 22:42:54 [initandlisten] closeAllFiles() finished Sun Dec 16 22:42:54 dbexit: really exiting now somekittens@DLserver01:/var/log/mongodb$ sudo mkdir /data somekittens@DLserver01:/var/log/mongodb$ sudo mkdir /data/db somekittens@DLserver01:/var/log/mongodb$ mongod --repair Sun Dec 16 22:43:51 Sun Dec 16 22:43:51 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability. Sun Dec 16 22:43:51 Sun Dec 16 22:43:51 [initandlisten] MongoDB starting : pid=1909 port=27017 dbpath=/data/db/ 32-bit host=DLserver01 Sun Dec 16 22:43:51 [initandlisten] Sun Dec 16 22:43:51 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data Sun Dec 16 22:43:51 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations Sun Dec 16 22:43:51 [initandlisten] ** with --journal, the limit is lower Sun Dec 16 22:43:51 [initandlisten] Sun Dec 16 22:43:51 [initandlisten] db version v2.2.2, pdfile version 4.5 Sun Dec 16 22:43:51 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267 Sun Dec 16 22:43:51 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49 Sun Dec 16 22:43:51 [initandlisten] options: { repair: true } Sun Dec 16 22:43:51 [initandlisten] exception in initAndListen: 10309 Unable to create/open lock file: /data/db/mongod.lock errno:13 Permission denied Is a mongod instance already running?, terminating Sun Dec 16 22:43:51 dbexit: Sun Dec 16 22:43:51 [initandlisten] shutdown: going to close listening sockets... Sun Dec 16 22:43:51 [initandlisten] shutdown: going to flush diaglog... Sun Dec 16 22:43:51 [initandlisten] shutdown: going to close sockets... Sun Dec 16 22:43:51 [initandlisten] shutdown: waiting for fs preallocator... 
Sun Dec 16 22:43:51 [initandlisten] shutdown: closing all files... Sun Dec 16 22:43:51 [initandlisten] closeAllFiles() finished Sun Dec 16 22:43:51 [initandlisten] shutdown: removing fs lock... Sun Dec 16 22:43:51 [initandlisten] couldn't remove fs lock errno:9 Bad file descriptor Sun Dec 16 22:43:51 dbexit: really exiting now somekittens@DLserver01:/var/log/mongodb$ service mongodb stop stop: Unknown instance: somekittens@DLserver01:/var/log/mongodb$ sudo mongod --repair Sun Dec 16 22:45:04 Sun Dec 16 22:45:04 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability. Sun Dec 16 22:45:04 Sun Dec 16 22:45:04 [initandlisten] MongoDB starting : pid=1921 port=27017 dbpath=/data/db/ 32-bit host=DLserver01 Sun Dec 16 22:45:04 [initandlisten] Sun Dec 16 22:45:04 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data Sun Dec 16 22:45:04 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations Sun Dec 16 22:45:04 [initandlisten] ** with --journal, the limit is lower Sun Dec 16 22:45:04 [initandlisten] Sun Dec 16 22:45:04 [initandlisten] db version v2.2.2, pdfile version 4.5 Sun Dec 16 22:45:04 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267 Sun Dec 16 22:45:04 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49 Sun Dec 16 22:45:04 [initandlisten] options: { repair: true } Sun Dec 16 22:45:04 [initandlisten] Unable to check for journal files due to: boost::filesystem::basic_directory_iterator constructor: No such file or directory: "/data/db/journal" Sun Dec 16 22:45:04 [initandlisten] finished checking dbs Sun Dec 16 22:45:04 dbexit: Sun Dec 16 22:45:04 [initandlisten] shutdown: going to close listening sockets... Sun Dec 16 22:45:04 [initandlisten] shutdown: going to flush diaglog... Sun Dec 16 22:45:04 [initandlisten] shutdown: going to close sockets... Sun Dec 16 22:45:04 [initandlisten] shutdown: waiting for fs preallocator... Sun Dec 16 22:45:04 [initandlisten] shutdown: closing all files... Sun Dec 16 22:45:04 [initandlisten] closeAllFiles() finished Sun Dec 16 22:45:04 [initandlisten] shutdown: removing fs lock... Sun Dec 16 22:45:04 dbexit: really exiting now Which didn't change anything. What can I do to resolve this? It's an old computer (640MB RAM, single-core P2). Could that be causing it?
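
    From the logs, two separate things are going on: mongod refuses to start on the packaged dbpath because of a stale lock file left by an unclean shutdown, and the manual --repair runs used the default /data/db instead of /var/lib/mongodb (and one ran as root, which can leave root-owned files behind). A sketch of the usual recovery sequence for the Ubuntu package, assuming dbpath /var/lib/mongodb and the mongodb service user; adjust paths to whatever /etc/mongodb.conf says:

        sudo service mongodb stop                        # make sure no mongod is running
        sudo rm /var/lib/mongodb/mongod.lock             # remove the stale lock the log complains about
        sudo -u mongodb mongod --dbpath /var/lib/mongodb --repair
        sudo chown -R mongodb:mongodb /var/lib/mongodb   # fix ownership if a repair ran as root
        sudo service mongodb start
        mongo                                            # should now connect to 127.0.0.1:27017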


  • SQL Server 2008: unusual problem bringing a DB online...

    - by Nai
    This is the error I am facing:

        TITLE: Microsoft.SqlServer.Smo
        Set offline failed for Database 'Go3D_Retailer'

        ADDITIONAL INFORMATION:
        An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
        Unable to open the physical file "E:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\ftrow_Go3D_catalog.ndf".
        Operating system error 2: "2(failed to retrieve text for this error. Reason: 15105)".
        Database 'Go3D_Retailer' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details.
        ALTER DATABASE statement failed. (Microsoft SQL Server, Error: 5120)

    Background to this error: I've been trying to move my log-shipping destination database to another physical server for analysis purposes. Because I do not have Active Directory set up, I had to hack my process by using the same username/password for both the source and destination servers to get the process to work. Following that, I used this guy's solution to move the destination database to another server. However, this error occurs when I try to bring the database back online. I don't have an E drive on my server and have no idea why it's trying to open a file from an E drive. I have over 100 GB left on my hard disk, so it's definitely not a space issue. This sounds like a bug... Any ideas? I'm running SQL Server 2008 Enterprise edition on Windows Server 2008 R2 64-bit.
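
    The .ndf in the message is the full-text catalog file, and its recorded path still points at the E: drive of the original server. A sketch of one way forward, assuming the logical file name matches the physical one (check with sys.master_files first) and that a D:\Data folder exists on the new server; the names and paths are illustrative:

        -- find the logical name and current (broken) path of the full-text file
        SELECT name, physical_name FROM sys.master_files WHERE database_id = DB_ID('Go3D_Retailer');

        -- re-point it somewhere that exists on this server, then bring the DB online
        ALTER DATABASE Go3D_Retailer
            MODIFY FILE (NAME = ftrow_Go3D_catalog, FILENAME = 'D:\Data\ftrow_Go3D_catalog.ndf');
        ALTER DATABASE Go3D_Retailer SET ONLINE;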


  • Why would the 'show processlist' command speed up normally slow requests to my remote DB? (connected via VPN)

    - by Hakan B.
    I am running a local Django development server that connects to a remote MySQL server via a VPN (IPsec). Request times are awfully slow and I consistently see timeouts. Attempting to diagnose the problem, I logged in to the remote database and ran:

        show full processlist

    Immediately, the local server went from idle to working. The page had not yet completely loaded, but progress had been made (debug logs confirm this). When I ran 'show full processlist' several more times in succession, the request completed quickly. I can currently reproduce this: unless I run 'show full processlist' over and over on the remote server, my local request usually times out. Does anyone have any idea why this would happen? I'm running Django 1.3 and OS X 10.7. Note: I realize this may not be a question with a clear-cut answer and is probably my fault, but it is odd and reproducible, so I hope someone can at least point me in the right direction. Thanks in advance.


  • "postgres blocked for more than 120 seconds" - is my db still consistent?

    - by nn4l
    I am using an iSCSI volume on an Open-E storage system for several virtual machines running on a XenServer host. Occasionally, when there is very high disk I/O load on the virtual machines (and therefore also on the storage system), I get this error message on the VM consoles:

        [2594520.161701] INFO: task kjournald:117 blocked for more than 120 seconds.
        [2594520.161787] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [2594520.162194] INFO: task flush-202:0:229 blocked for more than 120 seconds.
        [2594520.162274] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [2594520.162801] INFO: task postgres:1567 blocked for more than 120 seconds.
        [2594520.162882] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

    I understand this message means the kernel noticed that these processes have not run for 120 seconds, most likely because a disk access to the storage system has not yet been processed. But what is the effect on the processes? For example, will the postgres process eventually write its data when the storage system is idle again after a few minutes, so that all data is still consistent? Or will it abort the write, leaving some tables in an inconsistent state? I certainly expect the former to be the case: if the disk access is slow, postgres (or any other affected process) should just wait as long as it takes. I can live with the application hanging for a few minutes, but if there is a chance of data corruption then any of these errors is really bad news. Please advise what to do here.


  • Access Java based keystore directly on Sun ONE Webserver 6.1

    - by George Bailey
    The keystore seems to reside in one of:

        /opt/SUNWwbsvr/alias/https-sub.domain.ext-hostname-cert8.db
        /opt/SUNWwbsvr/alias/https-sub.domain.ext-hostname-key3.db

    What tool would I use to access this file? I have tried these commands, which did not work:

        /opt/SUNWwbsvr/bin/https/jdk/bin/keytool -certreq -keyalg RSA -file /tmp/test.csr -keystore /opt/SUNWwbsvr/alias/https-sub.domain.ext-hostname-cert8.db
        /opt/SUNWwbsvr/bin/https/jdk/bin/keytool -certreq -keyalg RSA -file /tmp/test.csr -keystore /opt/SUNWwbsvr/alias/https-sub.domain.ext-hostname-key3.db
        /opt/SUNWwbsvr/bin/https/jdk/bin/keytool -list -storepass password -keystore /opt/SUNWwbsvr/alias/https-sub.domain.ext-hostname-cert8.db
        /opt/SUNWwbsvr/bin/https/jdk/bin/keytool -list -storepass password -keystore /opt/SUNWwbsvr/alias/https-sub.domain.ext-hostname-key3.db

    They all gave me the error message:

        keytool error: java.io.IOException: Invalid keystore format
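
    cert8.db/key3.db are NSS databases rather than Java keystores, which is why keytool rejects them. A sketch of how such databases are usually inspected with NSS's certutil, assuming it is installed and that the long text before "cert8.db" is the database prefix; adjust the -P value to match the actual filenames:

        # list the certificates in the NSS database used by the web server instance
        certutil -L -d /opt/SUNWwbsvr/alias -P https-sub.domain.ext-hostname-

        # list the private keys (prompts for the key database password)
        certutil -K -d /opt/SUNWwbsvr/alias -P https-sub.domain.ext-hostname-

        # generate a CSR against that key database instead of using keytool
        certutil -R -s "CN=sub.domain.ext" -a -o /tmp/test.csr -d /opt/SUNWwbsvr/alias -P https-sub.domain.ext-hostname-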


  • I want to install MySQL and start building a personal DB on 64-bit Ubuntu [closed]

    - by Ari Hall
    How do I go from installing MySQL from the Software Center to entering data into fields and bringing in a comma-delimited file? I've only had brief experience with MS Access and OOo Base a long time ago, so details are appreciated; I just want to get up and running. I have Ubuntu 10.10, 64-bit, if that affects much. If you can link me to a howto that does exactly what I'm looking for, that would work. Again, thanks!
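
    A minimal sketch of the whole path from package to imported CSV using the mysql command-line client; the database name, table layout and file path are assumptions to adapt to your own data:

        sudo apt-get install mysql-server        # or install "MySQL Server" from the Software Center
        mysql -u root -p                         # open the SQL prompt

        -- inside the mysql prompt:
        CREATE DATABASE personal;
        USE personal;
        CREATE TABLE records (
            id INT AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(100),
            amount DECIMAL(10,2),
            noted_on DATE
        );
        LOAD DATA LOCAL INFILE '/home/ari/data.csv'
        INTO TABLE records
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\n'
        IGNORE 1 LINES                           -- skip a header row, if the file has one
        (name, amount, noted_on);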


  • Why does MySQL have so many more open and fragmented tables than there are tables in the DB?

    - by kswift
    I've been working on making our database run a little smoother and have had good results over the past week, but there are still some things I don't understand. For one thing, the database has 25 tables, but mysql status shows 512 are open:

        mysqladmin status
        Uptime: 212854  Threads: 1  Questions: 43041  Slow queries: 7  Opens: 2605  Flush tables: 1  Open tables: 512  Queries per second avg: 0.202

    I've read that MyISAM opens extra file descriptors, and a few other reasons why the number of open tables might be higher than 25, but I am guessing that 512 is not a good thing. Any suggestions on why this might be or what I should be looking into? I've also been using mysqltuner and it's been helpful, but it has consistently listed the number of fragmented tables at 207. In phpMyAdmin I've selected all the tables and optimized them several times, and that hasn't reduced the number of fragmented tables that mysqltuner reports. I think I am missing some important concept about how this all works. Does anyone have any suggestions to point me in the right direction, narrow down Google searches, or just generally help me be less clueless? Thanks!
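
    Open_tables counts table-cache entries rather than distinct tables: each concurrent use of a table gets its own entry, and entries stay cached up to table_open_cache (table_cache on older servers), so 512 may simply be the cache sitting at its configured size. A small sketch of the built-in SHOW commands worth comparing; the interpretation in the comments is the thing to check against your own numbers:

        -- how big the table cache is allowed to grow
        SHOW VARIABLES LIKE 'table%cache%';

        -- Opened_tables growing quickly relative to Uptime is the real warning
        -- sign, not a large Open_tables value on its own
        SHOW GLOBAL STATUS LIKE 'Open%tables';

        -- per-table free space; InnoDB tables often keep reporting Data_free,
        -- which is one reason a "fragmented tables" count may never reach zero
        SHOW TABLE STATUS WHERE Data_free > 0;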


  • How do you use VIM to edit tabular data (tables)? Specifically, BIND (named) DNS db files.

    - by Richard Bronosky
    I'm usually a purist when it comes to vimming. I don't like remapping keys, or learning to rely on a bunch of plugins. I like to feel just as powerful on foreign boxen as I do on my own dev box. I do, however, believe in syntax files. Even though the solution may not be a syntax file (bindzone.vim is what I use), I want it bad enough to do whatever. I regularly view or edit tab (or comma, but that would be a bonus) delimited data. I hate having to set my tabstop to some ridiculous number in order to have everything line up. Example: The BIND zone files are ~40+,6,2,5,15+. So, even though I could view them on a single screen, if I set ts=40, I cannot. I have been searching for a "dynamic tab size" solution for years, but no luck. I hate that my only good way of editing or even visualizing tabular data is to scp it to my work station and open it in Open Office. There has to be a better way.
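
    One plugin-free approach that works on any box with a POSIX userland is to let column(1) do the alignment: filter the buffer (or a range) through it for viewing, then undo before writing so the real delimiters are preserved. A sketch, assuming column is installed and that whitespace-splitting is acceptable for these files:

        " align the whole buffer into padded columns for viewing; press u before :w
        " to get the original tabs back
        :%!column -t

        " or align only a visual selection / range of records
        :'<,'>!column -t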


  • WordPress redirects to itself endlessly

    - by iTayb
    I've just upgraded from a fairly old version (2.2.1) to the latest version (2.9.1). After the upgrade I realized that WordPress cannot be reached from my .com domain, although it works from another subdomain: db-he.110mb.com works fine while http://www.db-he.com doesn't. Both point to the same server, and the configuration is fine. However, you cannot reach index.php (WordPress's): www.db-he.com/index.php does a permanent redirect to www.db-he.com/index.php, i.e. to itself, for some reason. The problem is with WordPress only; all other files work fine. For example, changes.txt can be accessed from both links: www.db-he.com/changes.txt and db-he.110mb.com/changes.txt. For some reason, it seems more like a server problem than a WordPress problem. What can I do?
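
    A redirect-to-itself loop after an upgrade is often WordPress canonicalising requests against the siteurl/home options stored in its database, which may not match the www.db-he.com hostname. A sketch of how to check and fix them directly in MySQL, assuming the default wp_ table prefix:

        -- see what WordPress thinks its own URL is
        SELECT option_name, option_value FROM wp_options WHERE option_name IN ('siteurl', 'home');

        -- point both at the domain you actually want to serve
        UPDATE wp_options SET option_value = 'http://www.db-he.com' WHERE option_name IN ('siteurl', 'home');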


  • How do I convert a Mac OS Filemaker 2 database to a recent FM or Bento db, preserving the relations

    - by willc2
    I'm hoping for more than just exporting the data; I would like to preserve the relations between the databases. This is for a friend's legacy database that tracks monthly fees from a list of clients. I have the original FM database file on hand, but not the machine it ran on with the old version of FileMaker 2. Recent versions won't import it, saying it's too old. A Mac-only solution would be fine if it makes things simpler for me.


  • How do I connect to my InterBase db on a Windows Server from Delphi XE?

    - by chris
    I have installed InterBase on a Windows Server 2008, added a database through IBConsole and I'm trying to set up a connection to it from Delphi XE. I've clicked on the DataExplorer tab and right clicked "INTERBASE/Add New Connection". This is where I am having trouble setting it up correctly. Anyone know this stuff? I'm waaaay out of my comfort zone here but I'm helping some people out and I'm trying to get this to work myself so that I hopefully can fix the problems they are having.

