Search Results

Search found 68715 results on 2749 pages for 'mysql data'.

Page 66/2749

  • /data/tmp on database server?

    - by Mellon
    I am on an Ubuntu Linux machine with MySQL installed. My teacher gave out an assignment that says "copy cars.dat to /data/tmp on the MySQL database server" without any explanation, and I do not know exactly what "/data/tmp on the database server" means. After copying, I need to execute an SQL statement like LOAD DATA INFILE '/data/tmp/cars.dat' INTO TABLE cars. So what does copying cars.dat to /data/tmp on the database server mean, given that there is no /data/tmp directory at all? I checked the /etc/mysql/my.cnf file, which contains these definitions: ... basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp ... Does it mean I should copy cars.dat to the tmpdir, which is just /tmp under the root directory?

    Read the article
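
    For the question above, a minimal sketch of how one might proceed, assuming the mysql-connector-python package and placeholder credentials: first ask the running server where it actually looks for files, then run the LOAD DATA statement once cars.dat has been copied to a server-side path.

        # Hedged sketch: the credentials, database name, and /tmp path below are
        # placeholders/assumptions, not values from the assignment.
        import mysql.connector

        cnx = mysql.connector.connect(user="root", password="secret", database="coursedb")
        cur = cnx.cursor()

        # Ask the running server for its configured directories.
        for stmt in ("SHOW VARIABLES LIKE 'datadir'",
                     "SHOW VARIABLES LIKE 'tmpdir'",
                     "SHOW VARIABLES LIKE 'secure_file_priv'"):
            cur.execute(stmt)
            print(cur.fetchone())

        # LOAD DATA INFILE reads a path on the *server* machine, so cars.dat must
        # be copied there (e.g. with scp) before this statement can succeed.
        cur.execute("LOAD DATA INFILE '/tmp/cars.dat' INTO TABLE cars")
        cnx.commit()
        cur.close()
        cnx.close()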

  • MySQL database int overflow and can't log in

    - by Ryan Smith
    I have a MySQL database on my server and I'm pretty sure an int overflow on one table with an auto_increment field is crashing it. I can delete the table, it's not very important, but I can't get into the server. Is there any way to delete that database from the file system, or without logging into MySQL? HELP! THE WORLD IS ENDING!

    Read the article

  • Does PHP 5.3 + PDO play nicely with MySQL 5.1?

    - by Itay Moav
    Over the last few weeks I have seen PHP 5.3 become part of the official repositories of several Linux distributions, so I guess it is stable enough. MySQL has announced that they will stop supporting MySQL 5.0. So will those two play well together, and are all the extensions up to date?

    Read the article

  • Determine Configured Location of MySQL's data directory OR all loaded *.cnf Locations

    - by alanstorm
    I'm not a sysadmin, but sometimes I play one at work. I've inherited a virtual server that had MySQL installed from source. I'm gathering as much information about the install as I can (the people who originally installed it are, of course, not available as a resource). How can I find the default/current location of MySQL's data files (often stored in a directory named data), and any default or custom-loaded cnf files? I'm looking for solutions that are a bit more sophisticated than find / -iname '*.cnf' :)

    Read the article
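
    For the question above, a small sketch (Python, assuming the mysql client binary is on the PATH) that prints the option-file search order compiled into the client; the data directory itself can be read from the running server with SELECT @@datadir.

        # Hedged sketch: mysql --help lists the my.cnf files it reads, in order.
        import subprocess

        help_text = subprocess.run(["mysql", "--help"],
                                   capture_output=True, text=True).stdout

        lines = help_text.splitlines()
        for i, line in enumerate(lines):
            # The line after this header names the candidate option files.
            if "Default options are read from the following files" in line:
                print(lines[i + 1].strip())
                break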

  • How do I allow MySQL connections through SELinux?

    - by xivix
    For once I'd like to leave SELinux running on a server, for the alleged increased security; I usually disable it to get anything to work. How do I tell SELinux to allow MySQL connections? The most documentation I've found is this line from mysql.com: "If you are running under Linux and Security-Enhanced Linux (SELinux) is enabled, make sure you have disabled SELinux protection for the mysqld process." Wow ... that's really helpful.

    Read the article
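
    For the question above, a rough sketch (Python wrapping the stock SELinux tools, run as root) of how one might inspect the MySQL-related booleans and port labels before changing anything. Which boolean actually applies (for example, httpd_can_network_connect_db when a web application is the client) depends on the setup and is an assumption to verify locally.

        # Hedged sketch: list SELinux booleans and port labels that mention mysql/db,
        # then enable the relevant boolean persistently once identified.
        import subprocess

        booleans = subprocess.run(["getsebool", "-a"],
                                  capture_output=True, text=True).stdout
        for line in booleans.splitlines():
            if "mysql" in line or "db" in line:
                print(line)

        ports = subprocess.run(["semanage", "port", "-l"],
                               capture_output=True, text=True).stdout
        for line in ports.splitlines():
            if "mysqld_port_t" in line:
                print(line)

        # Example of enabling a boolean persistently (assumption: a web app client):
        # subprocess.run(["setsebool", "-P", "httpd_can_network_connect_db", "1"])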

  • MySQL query cache is enabled but not being used

    - by Yoga
    I've checked that the query cache is enabled:

        mysql> SHOW VARIABLES LIKE 'have_query_cache';
        +------------------+-------+
        | Variable_name    | Value |
        +------------------+-------+
        | have_query_cache | YES   |
        +------------------+-------+
        1 row in set (0.00 sec)

    But it seems it is not being used:

        mysql> SHOW STATUS LIKE 'Qcache%';
        +-------------------------+----------+
        | Variable_name           | Value    |
        +-------------------------+----------+
        | Qcache_free_blocks      | 1        |
        | Qcache_free_memory      | 16759648 |
        | Qcache_hits             | 0        |
        | Qcache_inserts          | 0        |
        | Qcache_lowmem_prunes    | 0        |
        | Qcache_not_cached       | 21555882 |
        | Qcache_queries_in_cache | 0        |
        | Qcache_total_blocks     | 1        |
        +-------------------------+----------+
        8 rows in set (0.00 sec)

    Any reason?

    Read the article
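
    Related to the question above, a small sketch (placeholder credentials, mysql-connector-python assumed): have_query_cache only says the server was built with the feature, so the usual first check is whether query_cache_size is 0 or query_cache_type is OFF.

        # Hedged sketch: print the settings that actually control query caching.
        import mysql.connector

        cnx = mysql.connector.connect(user="root", password="secret")
        cur = cnx.cursor()
        cur.execute("SHOW VARIABLES LIKE 'query_cache%'")
        for name, value in cur.fetchall():
            # query_cache_size = 0 or query_cache_type = OFF means nothing gets
            # cached even though have_query_cache reports YES.
            print(name, "=", value)
        cur.close()
        cnx.close()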

  • mysqldump is not dumping my data

    - by oompahloompah
    I am running mysqldump on Ubuntu Linux (10.04 LTS). My MySQL version info is: mysql Ver 14.14 Distrib 5.1.41, for debian-linux-gnu (i486) using readline 6.1. I used the following command:

        mysql -u username -p dbname dbname_backup.sql

    However, when I opened the generated .sql file, I saw that most of the tables had only the schema dumped, and in the few cases where actual data was dumped, only one or two records were present (there are at least several tens of records in each table). Does anyone know what may be going on?

    Read the article
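
    For the question above, a minimal sketch of the invocation usually intended (mysqldump on the PATH, password entered interactively): the dump is written to stdout, so it has to be redirected into the .sql file.

        # Hedged sketch: redirect mysqldump's stdout to the backup file.
        # The username and database name are placeholders from the question.
        import subprocess

        with open("dbname_backup.sql", "w") as out:
            subprocess.run(
                ["mysqldump", "-u", "username", "-p", "dbname"],
                stdout=out,
                check=True,
            )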

  • Installing mysql-server with a Python SSH connection

    - by mrbox
    I'm writing a script in Python which connects to a server via SSH and then installs some packages. But there is a problem with the dialog box where the MySQL root password has to be typed in: I don't know how to send data to it. The one time I tried, my apt (on Debian Lenny) went crazy. Some info: Debian Lenny, using PySSH with an easier interface; the code looks like this:

        clientSSH = SSHClient( self.ip, 'root', self.rootPassword, None )
        clientSSH.login()
        clientSSH.run_command('apt-get install mysql-server mysql-client php5')
        clientSSH.run_command('Y')  # I don't know how to send the root password here
        clientSSH.logout()

    Read the article
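
    One common way around the password dialog in the question above, sketched only: preseed debconf with the MySQL root password and run apt-get non-interactively over the existing session. The debconf keys below are the ones Debian's mysql-server package used around the Lenny era; treat them as assumptions and verify with debconf-get-selections.

        # Hedged sketch: answer the package's debconf questions ahead of time so
        # apt-get never shows the dialog. run_command is whatever sends a shell
        # command over the SSH session (e.g. clientSSH.run_command from the question).
        def install_mysql_noninteractive(run_command, root_pw):
            preseed = (
                "mysql-server mysql-server/root_password password %s\n"
                "mysql-server mysql-server/root_password_again password %s"
            ) % (root_pw, root_pw)
            run_command("echo '%s' | debconf-set-selections" % preseed)
            run_command(
                "DEBIAN_FRONTEND=noninteractive apt-get -y install mysql-server mysql-client php5"
            )

        # Usage with the question's helper (placeholder password):
        # install_mysql_noninteractive(clientSSH.run_command, "ChangeMe123")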

  • Access denied for user 'diduser'@'localhost' to database 'diddata' (1044, 42000)

    - by Arlen Beiler
    I am trying to set up a MySQL server, and when I went to create a second user it wouldn't give it permissions for the database. I can connect fine as long as I don't specify a database. The error is:

        Access denied for user 'user'@'localhost' to database 'diddata'

    The connection details are:

        { 'host' : 'localhost', 'user' : 'user', 'password' : 'password', 'database': 'diddata' }

    And to create the DB and user I did:

        CREATE DATABASE IF NOT EXISTS diddata;
        CREATE USER 'user'@'localhost' IDENTIFIED BY 'password';
        GRANT ALL ON user.* TO 'user'@'localhost';

    Note that I've changed the username and password in this question. I've already checked the privileges in MySQL Workbench and they are there.

    Read the article
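
    Worth noting for the question above: the GRANT statement shown targets user.* while the connection asks for the diddata database. A minimal sketch of a grant scoped to the intended schema (placeholder root credentials, mysql-connector-python assumed):

        # Hedged sketch: grant on the database the application actually connects to.
        import mysql.connector

        admin = mysql.connector.connect(user="root", password="secret")
        cur = admin.cursor()
        cur.execute("GRANT ALL PRIVILEGES ON diddata.* TO 'user'@'localhost'")
        cur.close()
        admin.close()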

  • Best approach for synchronizing MySQL databases using C# [closed]

    - by nirmal90
    I have a requirement as follows: a Windows application in C# with a MySQL database; MySQL databases both locally and on a server; one centralized server with many clients, where the server database is synchronized every time a new entry or update happens on a local machine; and the server data also needs to be propagated back to the local databases at regular intervals in order to avoid conflicts. I need to know the best approach to follow to perform this synchronization without any conflicts.

    Read the article

  • MySQL for SQL Server DBAs

    - by SQL3D
    I've been tasked with taking over the administration of a MySQL installation (running on Red Hat Linux) that will become fairly critical to our business in the near future. I was wondering if anyone could recommend some resources on administering MySQL for DBAs already experienced with other relational databases (SQL Server and some Oracle in my case). Specifically, I'm looking for information on disaster recovery as well as high availability to start with, but I do want to get well rounded with the entire system. Thanks in advance, Dan

    Read the article

  • Creating accounts for each user in MySQL / phpMyAdmin

    - by user1666411
    I am planning to create MySQL accounts for each of my web devs and let them build their own databases. I need these devs to have accounts to access their own phpMyAdmin, where they can manipulate their own sets of databases. I am kind of new to deploying web services, so should this setup be configured in phpMyAdmin or in MySQL? Will this kind of deployment need web management software like cPanel? I hope you can enlighten me on this.

    Read the article
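
    A rough sketch for the question above (Python, root access and mysql-connector-python assumed; the developer names and the database-name-prefix convention are made up for illustration): one MySQL account per developer, each limited to databases matching its own prefix, so phpMyAdmin shows each developer only what their account can see. The pattern-grant escaping should be verified against your MySQL version.

        # Hedged sketch: per-developer accounts restricted to a name prefix.
        # The escaped underscore (\_) keeps the prefix literal in the pattern.
        import mysql.connector

        devs = {"alice": "pw1", "bob": "pw2"}  # placeholder names/passwords

        cnx = mysql.connector.connect(user="root", password="secret")
        cur = cnx.cursor()
        for name, pw in devs.items():
            cur.execute("CREATE USER %s@'localhost' IDENTIFIED BY %s", (name, pw))
            # Grant on every database whose name starts with "<dev>_".
            cur.execute(
                "GRANT ALL PRIVILEGES ON `{0}\\_%`.* TO '{0}'@'localhost'".format(name)
            )
        cur.close()
        cnx.close()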

  • Allowing wildcard (%) access on MySQL db, getting error "access denied for '<user>'@'localhost'"

    - by Wayne M
    I've created a database and a user, and allowed access via the following:

        create user 'someuser'@'%' identified by 'password';
        grant all privileges on somedb.* to 'someuser' with grant option;

    However, when I try to connect to MySQL I get the following error:

        $ mysql -u someuser -p
        Enter Password:
        ERROR 1045 (28000): Access denied for user 'someuser'@'localhost' (using password: YES)

    If "%" is the wildcard, then wouldn't it also cover localhost?

    Read the article
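
    A frequent cause of the error in the question above is that MySQL picks the most specific user/host row first, so a default anonymous ''@'localhost' row can shadow 'someuser'@'%' for local connections. A small sketch (placeholder root credentials, mysql-connector-python assumed) to inspect the account rows and, if needed, add an explicit localhost account:

        # Hedged sketch: list the user/host rows the server matches against, then
        # create an explicit localhost entry for the same user if it is missing.
        import mysql.connector

        cnx = mysql.connector.connect(user="root", password="secret")
        cur = cnx.cursor()

        cur.execute("SELECT user, host FROM mysql.user ORDER BY user, host")
        for user, host in cur.fetchall():
            print(repr(user), "@", host)  # an empty user is an anonymous account

        # An explicit localhost account sidesteps the anonymous-user shadowing:
        cur.execute("CREATE USER 'someuser'@'localhost' IDENTIFIED BY 'password'")
        cur.execute("GRANT ALL PRIVILEGES ON somedb.* TO 'someuser'@'localhost'")
        cur.close()
        cnx.close()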

  • MySQL not responding when tables have overhead

    - by Michal Gow
    I have a few Drupal 6 websites on webhosting, which show this strange problem: some tables, especially cache and watchdog, tend to accumulate overhead, and when the overhead grows beyond some amount of kB, the MySQL server refuses connections to the given Drupal database, or the connection is broken during query execution. Optimizing the table (just the overheaded rows) in phpMyAdmin puts everything back to normal. But until the database is optimized, the site shows just MySQL errors, which is ugly... Where is the problem? Thank you for any hints I could pass back to the hosting admins!

    Read the article

  • MySQL continuously crashing

    - by Phanindra
    I keep receiving the error below in the Event Viewer, after which the MySQL service stops: Faulting application mysqld-nt.exe, version 0.0.0.0, faulting module mysqld-nt.exe, version 0.0.0.0, fault address 0x0022401c. When I check the MySQL error log file, there is no ERROR or WARNING message about the crash; it just shows a normal shutdown. Can anyone help me out with this?

    Read the article

  • Suggested Web Application Framework and Database for Enterprise, “Big-Data” App?

    - by willOEM
    I have a web application that I have been developing for a small group within my company over the past few years, using Pipeline Pilot (plus jQuery and Python scripting) for web development and back-end computation, and Oracle 10g for my RDBMS. Users upload experimental genomic data, which is parsed into a database and made available for querying, transformation, and reporting. Experimental data sets are large and have many layers of metadata. A given experimental data record might have a foreign-key relationship with a table that describes that data point's assay. Assays can cover multiple genes, which can have multiple transcripts, which can have multiple mutations, which can affect multiple signaling pathways, etc. Users need to approach this data from any point in those layers of metadata. Since all data sets for a given data type can run over a billion rows, this results in some large, dynamic queries that are hard to predict.

    New data sets are added on a weekly basis (~1 GB per set). Experimental data is never updated, but the associated metadata can be updated weekly for a few records and yearly for most others. For every data set insert the system sees, there will be between 10 and 100 selects run against it and associated data. It is okay for updates and inserts to run slowly, so long as queries run quickly and are as up to date as possible.

    The application continues to grow in size and scope and is already starting to run slower than I would like. I am worried that we have all but outgrown Pipeline Pilot, and perhaps Oracle (as the sole database). Would a NoSQL database or an OLAP system be appropriate here? What web application frameworks work well with systems like this? I'd like the solution to be something scalable, portable, and supportable X years down the road.

    Here is the current state of the application: Web server/data processing: Pipeline Pilot on Windows Server + IIS. Database: Oracle 10g, ~1 TB of data, ~180 tables with several billion-plus-row tables. Network storage: Isilon, ~50 TB of low-priority raw data.

    Read the article

  • Simple ADF page using BAM Data Control

    - by [email protected]
    Purpose: In this blog I will walk you through very simple steps to create an ADF page using a BAM data control connection.

    Details:

    Create the project: Open JDeveloper (make sure you have installed the SOA extension for JDeveloper). Create a new Application using the "Generic Application" template and click "Next". Shuttle "ADF Faces" to the right pane for the project technology and click "Finish".

    Create a BAM connection: In the Resource Palette click "Folder -> New Connection -> BAM". Enter the connection name and click "Next". Enter the connection details, click "Test Connection", then "Finish".

    Create the BAM data control: Open the IDE connection created in the step above. Drag and drop "Employees" onto the "Data Controls" palette. Select "Flat Query" and click "Finish".

    Create the view: Create a new JSF page. From the Data Controls panel drag and drop "Employees -> Query -> ADF Read Only table". Right-click and run the page.

    Read the article

  • Data aggregation of CSV files in Java

    - by royB
    I have k CSV files (5 CSV files, for example); each file has m fields which produce a key and n values. I need to produce a single CSV file with the aggregated data. I'm looking for the most efficient solution to this problem, mainly in terms of speed; I don't think we will have memory issues. I would also like to know whether hashing is really a good solution, because we would have to use a 64-bit hash to reduce the chance of a collision to less than 1% (we have around 30,000,000 rows per aggregation). For example, file 1:

        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,50,60,70,80
        a3,b2,c4,60,60,80,90

    file 2:

        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,30,50,90,40
        a3,b2,c4,30,70,50,90

    result:

        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,80,110,160,120
        a3,b2,c4,90,130,130,180

    The algorithms we have considered so far: hashing (using a concurrent hash table), merge-sorting the files, or a database (MySQL, Hadoop, or Redis). The solution needs to be able to handle a huge amount of data (each file has more than two million rows). A better example: file 1:

        country,city,peopleNum
        england,london,1000000
        england,coventry,500000

    file 2:

        country,city,peopleNum
        england,london,500000
        england,coventry,500000
        england,manchester,500000

    merged file:

        country,city,peopleNum
        england,london,1500000
        england,coventry,1000000
        england,manchester,500000

    The key is country,city. This is just an example; my real key has 6 columns and there are 8 data columns, for a total of 14 columns. We would like the solution to be the fastest possible in terms of data processing.

    Read the article
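
    A minimal sketch of the hashing-style approach from the question above, written in Python for illustration (the question targets Java; the file names and the two-column key come from the second example). It groups rows from all input files by the key columns and sums the value columns in a single pass.

        # Hedged sketch: in-memory aggregation keyed on the first key_cols columns.
        # Keying a dict on the full tuple avoids any hash-collision concern.
        import csv

        def aggregate(paths, key_cols):
            header, totals = None, {}
            for path in paths:
                with open(path, newline="") as fh:
                    reader = csv.reader(fh)
                    header = next(reader)
                    for row in reader:
                        key = tuple(row[:key_cols])
                        values = [int(v) for v in row[key_cols:]]
                        if key in totals:
                            totals[key] = [a + b for a, b in zip(totals[key], values)]
                        else:
                            totals[key] = values
            return header, totals

        header, totals = aggregate(["file1.csv", "file2.csv"], key_cols=2)
        print(",".join(header))
        for key, values in totals.items():
            print(",".join(list(key) + [str(v) for v in values]))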

  • SQL – Download FREE Book – Data Access for Highly Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence

    - by Pinal Dave
    Recently I was preparing for Big Data and I ended up on a very interesting read for everybody. It was created by Microsoft, and in my opinion it is a fantastic read. It took me some time to read the entire book, but it was worth reading, as it tries to answer two very interesting questions related to NoSQL. Here is the abstract from the book: Organizations seeking to use a NoSQL database are therefore faced with a twofold challenge:
    • Which NoSQL database(s) best meet(s) the needs of the organization?
    • How does an organization integrate a NoSQL database into its solutions?
    As I kept reading the book, I found it very interesting and informative. I suggest that if you have time this weekend, you download the book and read it. This guide focuses on the most common types of NoSQL database currently available, describes the situations for which they are most suited, and shows examples of how you might incorporate them into a business application. The guide summarizes the experiences of a fictitious organization named Adventure Works, which implemented a solution that comprised an assortment of different databases. Download Data Access for Highly Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence. While we are talking about Big Data and NoSQL, do not forget to check out my blog tomorrow, as I am going to talk about the same subject and it will be very interesting. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, NoSQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Data structure for bubble shooter game

    - by SundayMonday
    I'm starting to make a bubble shooter game for a mobile OS. Assume this is just the basic game: three or more same-color bubbles that touch pop, and all bubbles that are separated from their group fall/pop. What data structures are common for storing the bubbles? I've considered using an undirected, connected graph where each node is a bubble. This seems like it could help answer the question "which bubbles (if any) should fall now?" after some arbitrary bubbles are popped and the corresponding nodes are removed from the graph. I think the answer is that all bubbles that were just disconnected from the graph should fall. However, the graph approach might be overkill, so I'm not sure. Another consideration for the data structure is collision detection. Perhaps being able to grab a list of neighboring bubbles in constant time for a particular "bubble slot" is useful. So the collision detection would be something like "the moving bubble is closest to slot ij, the neighbors of slot ij are bubbles a, b, c, the moving bubble is sufficiently close to bubble b, hence the moving bubble should come to rest in slot ij". A game like this could probably be made with a relatively crude grid structure as the primary data structure. However, it seems like answering "which bubbles (if any) should fall now?" would be trickier with this data structure.

    Read the article
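
    A small sketch of the "which bubbles should fall?" check discussed above, in Python for illustration; the grid layout and neighbor offsets assume a typical offset hex arrangement and are assumptions. A breadth-first search from the top row marks every bubble still attached, and everything unmarked falls.

        # Hedged sketch: grid[r][c] holds a color string or None for an empty slot.
        from collections import deque

        def falling_bubbles(grid):
            rows, cols = len(grid), len(grid[0])
            attached = set((0, c) for c in range(cols) if grid[0][c] is not None)
            queue = deque(attached)
            while queue:
                r, c = queue.popleft()
                # Neighbor offsets for an offset hex grid; odd rows shift right.
                if r % 2:
                    offsets = [(-1, 0), (-1, 1), (0, -1), (0, 1), (1, 0), (1, 1)]
                else:
                    offsets = [(-1, -1), (-1, 0), (0, -1), (0, 1), (1, -1), (1, 0)]
                for dr, dc in offsets:
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and grid[nr][nc] is not None
                            and (nr, nc) not in attached):
                        attached.add((nr, nc))
                        queue.append((nr, nc))
            # Everything occupied but not attached to the top row should fall.
            return [(r, c)
                    for r in range(rows) for c in range(cols)
                    if grid[r][c] is not None and (r, c) not in attached]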

  • Can JSON be made easily and safely editable by the non-technical Excel crowd?

    - by glitch
    I'm looking for a data storage format that's very intuitive and easy to edit. It should ideally be targeted towards the same crowd as Excel. At the same time I would like the data structure to be a tree. Ideally this would be JSON, since it offers both the tree aspect and allows for more interesting constructs like arrays. That, and parsing libraries for JSON are ubiquitous, so I don't have to reinvent the wheel. The problem is that, at least with a non-specialized text editor, JSON is a giant pain to edit for a non-technical user. I'm thinking along the lines of someone who might have used Excel in the past, but never a real text editor. Someone who might not be comfortable with the idea of preserving JSON syntax by hand. Are there data formats out there that would fit this profile? I'd very much prefer this to be JSON, actually, but then it would require a solid editing tool that hides the underlying implementation from the user. Think of Excel and how it abstracts CSV syntax from the user. The reason I'm looking for something like this is that the team has been working with pretty hierarchical data for a while now, and we've hit the limits of how easy it is to represent it in simple CSVs without creating complex rules for how to represent hierarchy semantics in each row. Any suggestions?

    Read the article
