Search Results

Search found 14016 results on 561 pages for 'mysql like'.


  • Best way to have a unique key over 500M varchar(255) records in MySQL/InnoDB?

    - by taw
    I have a url column with a unique key over it, but its performance on updates is absolutely atrocious. I suspect that's because the index doesn't all fit in memory. So I was thinking: how about adding a column of md5(url) with 16 bytes of binary data and unique-keying that instead? What would be the best datatype for that? I'd love to be able to just see a 32-character hex hash, while MySQL would convert it to/from 16 binary bytes and index that, as programs using the database might have some trouble with arbitrary binary data that I'd rather avoid if possible. (I'm also a bit afraid that MySQL might get some strange ideas about character sets and, for example, overallocate storage by 3:1 because it thinks it might need utf8; how do I avoid that for sure?)
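
    A sketch of what I have in mind (assuming a table named urls; untested):

        ALTER TABLE urls
          ADD COLUMN url_md5 BINARY(16) NOT NULL,
          ADD UNIQUE KEY uk_url_md5 (url_md5);

        -- BINARY(16) has no character set, so there is no 3:1 utf8 overallocation
        UPDATE urls SET url_md5 = UNHEX(MD5(url));

        -- read it back as a 32-character hex hash when needed
        SELECT HEX(url_md5) FROM urls
        WHERE url_md5 = UNHEX(MD5('http://example.com/'));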

    Read the article

  • Do I need the text "_size" in the my.cnf file for MySQL 5.1?

    - by chongman
    This is a pretty simple question about setting parameters in the my.cnf file for MySQL 5.1. This page gives me the parameters I can tune: http://dev.mysql.com/doc/refman/5.0/en/server-parameters.html and so I think I would need to write key_buffer_size = 256M. But when I open my current my.cnf, it has the line key_buffer = 16M. My question is: do I need "key_buffer_size" or "key_buffer", or does it not matter which I use? And how would I know if something in my.cnf is incorrect? Where's the daemon start log file? I am running Ubuntu, I think version 8.04 LTS.
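
    For reference, what I would put in my.cnf if both spellings work (my understanding, to be verified, is that key_buffer is just an older synonym for key_buffer_size):

        [mysqld]
        # either name is accepted; key_buffer is the legacy spelling
        key_buffer_size = 256M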

    Read the article

  • Is there a limit on MERGE tables in MySQL?

    - by sysko
    I'm working on a database with MySQL 5.0 for an open source project. It's used to store sentences in specific languages and their translations in other languages. I used to have a big "sentences" table and a "sentences_translations" table (used to join sentences to sentences), but as we now have nearly one million entries, this began to be a bit slow. Moreover, most requests are made with a "where lang =" clause, so I decided to create a table per language, sentences_LANGUAGECODE and sentences_translation_LANGSOURCE_LANGTARGET, and to create merge tables like sentences_ENG_OTHERS (which merges sentences_ENG_ARA, sentences_ENG_DEU, etc.) for when we want the translations of an English sentence in all languages, and sentences_OTHERS_ENG for when we want only the English translations of some sentences. I created a script to generate all these tables (there are around 31 languages, so more than 60 merge tables) and tested it; it works really well, and a request which used to take 160ms now takes only 30. :) But I discovered that all my merge tables after the 15th have "NULL" as their storage engine type instead of MRG_MYISAM. If I delete one, I can then create another, and using FLUSH TABLES between each creation also lets me create more merge tables. So is this a limitation in MySQL? Can we override it? Thanks for your answers.
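
    For reference, a sketch of how I build each merge table (column definitions shortened, UNION list truncated):

        CREATE TABLE sentences_ENG_OTHERS (
          -- same columns as the underlying MyISAM tables
          id INT NOT NULL,
          text_value TEXT
        ) ENGINE=MRG_MYISAM
          UNION=(sentences_ENG_ARA, sentences_ENG_DEU /* , ... one per language */)
          INSERT_METHOD=LAST;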

    Read the article

  • Can MySQL automatically specify `_utf8` for inserts to UTF-8 columns?

    - by Neil
    I have a table like this, where one column is latin1 and the other is UTF-8:

        CREATE TABLE `names` (
          `name_english` varchar(255) NOT NULL,
          `name_chinese` varchar(255) character set utf8 default NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

    When I do an insert, I have to type _utf8 before values being inserted into UTF-8 columns:

        INSERT INTO names (name_english, name_chinese) VALUES ('hooey', _utf8 '??');

    However, since MySQL should know that name_chinese is a UTF-8 column, it should be able to use _utf8 automatically. Is there any way to tell MySQL to do so, so that when I'm programmatically building prepared statements, I don't have to worry about including it with the right parameters?
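
    What I'd hope for is that declaring the connection character set once is enough, something like (sketch):

        SET NAMES utf8;
        -- with the connection charset declared, literals should be converted
        -- per column without the _utf8 introducer
        INSERT INTO names (name_english, name_chinese) VALUES ('hooey', '??');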

    Read the article

  • How to insert large files into a MySQL database using PHP? [closed]

    - by anjan
    Hi! I want to upload files of up to 10M in size to my MySQL database. Using .htaccess I changed PHP's own file upload limit to "10485760" (10M), and I am able to upload files up to 10M in size without any problem. But I cannot insert the file into the database if it is more than 1M in size. I am using file_get_contents to read all the file data and pass it to the INSERT query as a string to be inserted into a LONGBLOB field. But files of more than 1M in size are not being added to the database, though I can use print_r($_FILES) to confirm that the file uploaded correctly. Any help will be appreciated, and I will need it within the next 6 hours. So, please help! Best regards, Anjan. * This is a duplicate of http://stackoverflow.com/questions/492549/how-can-i-insert-large-files-in-mysql-db-using-php *
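
    Could the 1M boundary be the server's max_allowed_packet? I believe it defaults to 1M on the MySQL versions I've used (an assumption to verify):

        # my.cnf, [mysqld] section; requires a server restart
        max_allowed_packet = 16M

        -- or at runtime, for the global scope:
        SET GLOBAL max_allowed_packet = 16 * 1024 * 1024;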

    Read the article

  • Which MySQL Datatype to use for storing boolean values from/to PHP?

    - by Beat
    Since MySQL doesn't seem to have any 'boolean' datatype, which datatype do you 'abuse' for storing true/false information in MySQL, especially in the context of writing to and reading from a PHP script? Over time I have used and seen several approaches: tinyint; varchar fields containing the values 0/1; varchar fields containing the strings '0'/'1' or 'true'/'false'; and finally enum fields containing the two options 'true'/'false'. None of the above seems optimal. I tend to prefer the tinyint 0/1 variant, since automatic type conversion in PHP gives me boolean values rather simply. So which datatype do you use? Is there a type designed for boolean values which I have overlooked? Do you see any advantages/disadvantages in using one type or another?
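
    For reference, the tinyint variant in practice (as far as I know, BOOL and BOOLEAN are just aliases for TINYINT(1) in MySQL):

        CREATE TABLE settings (
          is_active TINYINT(1) NOT NULL DEFAULT 0  -- 0 = false, 1 = true
        );
        -- PHP side: (bool) $row['is_active'] converts cleanly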

    Read the article

  • You have an error in your SQL syntax; check the manual that corresponds to your MySQL

    - by LuisEValencia
    I am trying to run a MySQL query to find all occurrences of a text. I have a syntax error but don't know where or how to fix it. I am using SQLyog to execute this script:

        DECLARE @url VARCHAR(255)
        SET @url = '1720'
        SELECT 'select * from ' + RTRIM(tbl.name) + ' where ' + RTRIM(col.name) + ' like %' + RTRIM(@url) + '%'
        FROM sysobjects tbl
        INNER JOIN syscolumns col ON tbl.id = col.id
          AND col.xtype IN (167, 175, 231, 239) -- (n)char and (n)varchar, there may be others to include
          AND col.length > 30 -- arbitrary min length into which you might store a URL
        WHERE tbl.type = 'U' -- user defined table

    1 queries executed, 0 success, 1 errors, 0 warnings. Query: declare @url varchar(255) set @url = '1720' select 'select * from ' + rtrim(tbl.name) + ' where ' + rtrim(col.name) + ' like %' ... Error Code: 1064 You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'declare @url varchar(255)
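
    (Side note: DECLARE @url and the sysobjects/syscolumns catalogs are SQL Server T-SQL syntax, which MySQL does not accept; a MySQL version would be built on information_schema, roughly like this sketch, untested:)

        SELECT CONCAT('SELECT * FROM `', table_name,
                      '` WHERE `', column_name, '` LIKE ''%1720%'';') AS probe
        FROM information_schema.columns
        WHERE table_schema = DATABASE()
          AND data_type IN ('char', 'varchar', 'text')
          AND character_maximum_length > 30;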

    Read the article

  • How to crosscheck two tables and insert relevant data into a new table in MySQL?

    - by JackDamery
    I'm trying to crosscheck rows that exist in two tables using a MySQL query in phpMyAdmin, and then, if a userID is found in both tables, insert that userID and user name into another table. Here's my code:

        INSERT INTO userswithoutmeetings
        SELECT user.userID IF('user.userID'='meeting.userID');

    I keep getting plagued by this error: 1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'IF('user.userID'='meeting.userID')' at line 3. Other statements I've tried have worked, but did not deposit the values in the table.
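
    What I think I'm after is an INSERT ... SELECT with a join, something like this (the name column is an assumption):

        INSERT INTO userswithoutmeetings (userID, name)
        SELECT u.userID, u.name
        FROM user AS u
        INNER JOIN meeting AS m ON m.userID = u.userID;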

    Read the article

  • Amazon EC2 Instance - m1.medium Ubuntu 12.04 - Started to crash three days ago

    - by Joy
    The environment: Amazon EC2 m1.medium instance, Ubuntu 12.04, Apache 2.2.22 running a Drupal site, using a MySQL DB server.

    RAM info:

        ~$ free -gt
                     total   used   free   shared   buffers   cached
        Mem:             3      1      2        0         0        0
        -/+ buffers/cache:      0      2
        Swap:            0      0      0
        Total:           3      1      2

    Hard drive info:

        Filesystem   Size  Used  Avail  Use%  Mounted on
        /dev/xvda1   7.9G  4.7G   2.9G   62%  /
        udev         1.9G  8.0K   1.9G    1%  /dev
        tmpfs        751M  180K   750M    1%  /run
        none         5.0M     0   5.0M    0%  /run/lock
        none         1.9G     0   1.9G    0%  /run/shm
        /dev/xvdb    394G  199M   374G    1%  /mnt

    The problem: about two days ago the site started failing because the MySQL server was shut down by Apache, with the following message:

        kernel: [2963685.664359] [31716] 106 31716 226946 22748 0 0 0 mysqld
        kernel: [2963685.664730] Out of memory: Kill process 31716 (mysqld) score 23 or sacrifice child
        kernel: [2963685.664764] Killed process 31716 (mysqld) total-vm:907784kB, anon-rss:90992kB, file-rss:0kB
        kernel: [2963686.153608] init: mysql main process (31716) killed by KILL signal
        kernel: [2963686.169294] init: mysql main process ended, respawning

    That states that the VM was occupying 0.9GB, but my RAM has 2GB free, so 1GB was still left free. I understand that in Linux, applications can allocate more memory than is physically available. I don't know if this is the problem; it's the first time it has started to happen. Obviously, the MySQL server tries to restart, but apparently there's no memory left for it and it won't come back up. Here is its error log:

        Plugin 'FEDERATED' is disabled.
        The InnoDB memory heap is disabled
        Mutexes and rw_locks use GCC atomic builtins
        Compressed tables use zlib 1.2.3.4
        Initializing buffer pool, size = 128.0M
        InnoDB: mmap(137363456 bytes) failed; errno 12
        Completed initialization of buffer pool
        Fatal error: cannot allocate memory for the buffer pool
        Plugin 'InnoDB' init function returned error.
        Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        Unknown/unsupported storage engine: InnoDB
        [ERROR] Aborting
        [Note] /usr/sbin/mysqld: Shutdown complete

    I simply restarted the MySQL service. About two hours later it happened again, and I restarted it. Then it happened again 9 hours later. So I thought of the MaxClients parameter in apache.conf and went to check it. It was set at 150; I decided to drop it to 60, like so:

        <IfModule mpm_prefork_module>
            ...
            MaxClients 60
        </IfModule>
        <IfModule mpm_worker_module>
            ...
            MaxClients 60
        </IfModule>
        <IfModule mpm_event_module>
            ...
            MaxClients 60
        </IfModule>

    Once I did that, I restarted the apache2 service and it all went smoothly for about three quarters of a day. But that night the MySQL service shut down once again. This time it wasn't killed by the Apache2 service; instead it invoked the OOM-killer itself, with the following message:

        kernel: [3104680.005312] mysqld invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
        kernel: [3104680.005351] [<ffffffff81119795>] oom_kill_process+0x85/0xb0
        kernel: [3104680.548860] init: mysql main process (30821) killed by KILL signal

    Now I'm out of ideas. Some articles state that the ideal thing to do is change the kernel behaviour by adding the following to /etc/sysctl.conf:

        vm.overcommit_memory = 2
        vm.overcommit_ratio = 80

    so that no overcommits will take place. I'm wondering if this is the way to go? Keep in mind I'm no server administrator; I have basic knowledge. Thanks a bunch in advance.
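
    If that is the way to go, I assume applying it would look like this (please correct me):

        # after appending the two lines to /etc/sysctl.conf,
        # reload the settings without a reboot:
        sudo sysctl -p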

    Read the article

  • How should I evaluate the database solution for a large data application?

    - by GµårÐïåñ
    Background: I have been tasked to write an application in VB.net that will be a combination of document and inventory management. It will be used to store document images in TIFF, PDF, XPS, TXT, DOC, PPT and so on as binary data that can be retrieved for viewing and printing, possibly run through OCR to be searchable, along with metadata such as sender, recipient, type of document, date, source, etc. So the table would probably be something like: DOC_NAME, DOC_DATE, NOTES, ... DOC_BINARY (where the actual document will be put).

    Help, please: I need help understanding how to evaluate my database options. My concern is finding a database solution that will not become unstable due to size restrictions, record limitations, or performance. Some of the options are MS SQL, SQL Express, SQLite, MySQL, and Access. I can pretty much eliminate Access right off the bat, as it is just too limiting and not scalable, and I can further eliminate SQL Express because of its 2 GB limit, again for scalability reasons. So I believe that leaves me with MS SQL, SQLite and MySQL (note: I am open to alternatives), and this is where I need help in understanding how to evaluate those databases. The goal is for the data to be all in one place (a single file), which will make backup and portability easier. For small-volume usage, pretty much any solution will hold for a while, but my goal is to think ahead and make sure it can withstand heavy, large-volume usage as well. Another consideration is interoperability with .NET and the stability of such code, to avoid errors and memory leaks. How should I evaluate my database options for this scenario?
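
    The table I have in mind would look roughly like this (sketch; LONGBLOB on MySQL, a plain BLOB on SQLite):

        CREATE TABLE documents (
          doc_id     INTEGER PRIMARY KEY,
          doc_name   VARCHAR(255) NOT NULL,
          doc_date   DATE,
          notes      TEXT,
          doc_binary LONGBLOB  -- the document image itself
        );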

    Read the article

  • Database Replication check script not running

    - by Tarun
    I'm trying to create a database replication checking script, but I'm getting an error while executing it. Here is the script:

        #!/bin/bash
        PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
        export PATH

        #Server Name
        Server="Test Server"

        #My Sql Username and Password
        User=root
        Password="a"

        #Maximum Slave Time Delay
        Delay="60"

        #File Path to store error and email the same
        Log_File=/tmp/replicationcheck.txt

        #Email Settings
        Subject="$Server Replication Error"
        Sender_Name=TestServer
        Recipients="[email protected]"

        #Mail Alert Function
        mailalert(){
        sendmail -F $Sender_Name -it <<END_MESSAGE
        To: $Recipients
        Subject: $Subject

        $Message_Replication_Error

        `/bin/cat $Log_File`
        END_MESSAGE
        }

        #Show Slave Status
        Show_Slave_Status=`echo "show slave status \G;" | mysql -u $User -p$Password 2>&1`

        #Getting list of queries in mysql
        $Show_Slave_Status | grep "Last_" > $Log_File

        #Check if slave running
        $Show_Slave_Status | grep "Slave_IO_Running: No"
        if [ "$?" -eq "0" ]; then
            Message_Replication_Error="$Server Replication error please check. The Slave_IO_Running state is No."
            mailalert
            exit 1
        else
            $Show_Slave_Status | grep "Slave_IO_Running: Connecting"
            if [ "$?" -eq "0" ]; then
                Message_Replication_Error="$Server Replication error please check. The Slave_IO_Running state is Connecting."
                mailalert
                exit 1
            fi
        fi

        #Check if replication delayed
        Seconds_Behind_Master=$Show_Slave_Status | grep "Seconds_Behind_Master" | awk -F": " {' print $2 '}
        if [ "$Seconds_Behind_Master" -ge "$Delay" ]; then
            Message_Replication_Error="Replication Delayed by $Seconds_Behind_Master."
            mailalert
        else
            if [ "$Seconds_Behind_Master" = "NULL" ]; then
                Message_Replication_Error="$Server Replication error please check. The Seconds_Behind_Master state is NULL."
                mailalert
            fi
        fi
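
    Two lines look suspect to me; I'd expect them to need echo plus command substitution, something like (untested):

        echo "$Show_Slave_Status" | grep "Last_" > $Log_File
        Seconds_Behind_Master=$(echo "$Show_Slave_Status" | grep "Seconds_Behind_Master" | awk -F": " '{ print $2 }')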

    Read the article

  • Developer Training – 6 Online Courses to Learn SQL Server, MySQL and Technology

    - by Pinal Dave
    Video courses are the next big thing, and I am so happy that I have so far authored 6 different video courses with Pluralsight. Here is the list of the courses. Note: if you click on a course and it does not open, you need to log in to Pluralsight with a valid username and password or sign up for a FREE trial. Please leave a comment with your favorite course in the comment section; ten random winners will get a surprise gift via email. Bonus points if you list your favorite module from the course page.

    SQL Server Performance: Introduction to Query Tuning. SQL Server performance tuning is an in-depth topic, and an art to master. A key component of overall application performance tuning is query tuning. Writing queries in an efficient manner, and making sure they execute in the most optimal way possible, is always a challenge. The basics revolve around the details of how SQL Server carries out query execution, so the optimizations explored in this course follow along the same lines. Click to View Course

    SQL Server Performance: Indexing Basics. Indexes are the most crucial objects in the database. They are the first stop for any DBA and developer when it comes to performance tuning. There is a good side as well as an evil side to indexes. To master the art of performance tuning, one has to understand the fundamentals of indexes and the best practices associated with them. This course is for every DBA and developer who deals with performance tuning and wants to use indexes to improve the performance of the server. Click to View Course

    SQL Server Questions and Answers. This course is designed to help you better understand how to use SQL Server effectively. The course presents many of the common misconceptions about SQL Server, and then carefully debunks those misconceptions with clear explanations and short but compelling demos, showing you how SQL Server really works. This course is for anyone working with SQL Server databases who wants to improve her knowledge and understanding of this complex platform. Click to View Course

    MySQL Fundamentals. MySQL is a popular choice of database for use in web applications, and is a central component of the widely used LAMP open source web application software stack. This course covers the fundamentals of MySQL, including how to install MySQL as well as how to write basic data retrieval and data modification queries. Click to View Course

    Building a Successful Blog. Expressing yourself is the most common behavior of humans, and blogging has made it easy to express yourself. Just as a letter or book has a structure and formula, blogging also has a structure and formula. In this introductory course on blogging we go over a few of the basics and show the way to get started with blogging immediately. If you already have a blog, this course will be even more relevant, as it discusses many of the common questions and issues you face in your blogging routine. Click to View Course

    Introduction to ColdFusion. ColdFusion is a rapid web application development platform. In this course you will learn the basics of how to use the ColdFusion platform and rapidly develop web sites. The course begins with the basics of ColdFusion Markup Language and moves on to common development language practices. From there we move to frequent database operations and advanced concepts of forms, sessions and cookies. The last module sums up all the concepts covered in the course with a sample application. Click to View Course

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology

    Read the article

  • Java Desktop Application For Network users

    - by Motasem Abu Aker
    I'm developing a desktop application using Java. My application will run in a network environment where multiple users will access the same database through the application. There will be basic CRUD operations (insert, update, delete, and select), which means there is a chance of deadlock, or of two users trying to update the same record at the same time. I'm using the following: Java Swing for clients (MVC); MySQL Server for the database (InnoDB); Java Web Start. MySQL is centralized on the network, and all of the clients connect to it. The application is for ERP purposes. I searched the internet for a good solution to ensure data integrity and to make sure that when one client updates a record, other clients are aware of it. I read about socket server-client setups and RESTful web services, but I don't want to go the web application route and don't want to use any extra libraries. So how can I handle this scenario? (1) If user A updates a record, is there a way to update user B's screen with the new value? (2) If user A starts updating a record, how can I prevent other users from attempting to update the same record?
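
    For question (2), one common pattern is optimistic locking with a version column, roughly like this (sketch, schema assumed):

        -- every row carries a version counter
        UPDATE customer
        SET    name = 'New Name', version = version + 1
        WHERE  id = 42 AND version = 7;  -- 7 = the version this client read
        -- 0 rows affected means someone else updated first: reload and retry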

    Read the article

  • What is the best approach for database design with lots of columns?

    - by Pratyush
    I am writing a query-based financial application. It lets the user write complicated equations (much like the WHERE part of an SQL query) and find companies matching those criteria. For this, I currently have more than 500 columns in the database table (each column representing a financial field). Examples of columns are: company_name, sales_annual_00, sales_annual_01, sales_annual_02, sales_annual_03, sales_annual_04, profit_annual_00, profit_annual_01, ... (over 500 such columns). The number of rows is around 5000. Going forward, I would like to further increase the number of columns/financial fields. For the above I would like help with: (1) What is the best database design approach? Is it OK to have this many columns? (2) How can it be normalized? (Users can use any of these fields in search criteria.) (3) Is it OK to stick with MySQL, or would a modern document-based database like MongoDB be better for this? P.S. (update): I have been using MySQL till now, and a running example of the usage is at: http://screener.in/companies/89/Formula-- There are around 500 fields/columns to build your query on; however, I am looking to increase that number considerably in future.
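
    On question (2), the usual normalization would pivot those repeating columns into rows, e.g. (sketch):

        CREATE TABLE company_financials (
          company_id  INT NOT NULL,
          metric      VARCHAR(64) NOT NULL,  -- e.g. 'sales_annual', 'profit_annual'
          year_offset TINYINT NOT NULL,      -- the 0..4 of the _00.._04 suffixes
          value       DECIMAL(18,2),
          PRIMARY KEY (company_id, metric, year_offset)
        );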

    Read the article

  • When is it better to offload work to the RDBMS rather than to do it in code?

    - by GeminiDomino
    Okay, I'll cop to it: I'm a better coder than I am at databases, and I'm wondering where thoughts on "best practices" lie on the subject of doing "simple" calculations in the SQL query vs. in the code, as in this MySQL example (I didn't write it, I just have to maintain it!), which returns the username and the user's age as of the last event:

        SELECT u.username AS user,
               IF((DAY(MAX(e.date)) - DAY(u.DOB)) < 0,
                  TRUNCATE(((((YEAR(MAX(e.date))*12)+MONTH(MAX(e.date)))
                            -((YEAR(u.DOB)*12)+MONTH(u.DOB)))-1)/12, 0),
                  TRUNCATE((((YEAR(MAX(e.date))*12)+MONTH(MAX(e.date)))
                            -((YEAR(u.DOB)*12)+MONTH(u.DOB)))/12, 0)) AS age
        FROM users AS u
        JOIN events AS e ON u.id = e.uid
        ...

    compared to doing the "heavy" lifting in code. Query:

        SELECT u.username, u.DOB AS dob, e.event_date AS edate
        FROM users AS u
        JOIN events AS e ON u.id = e.uid

    Code:

        function ageAsOfDate($birth, $aod)
        {
            // expects dates in MySQL Y-m-d format...
            list($by, $bm, $bd) = explode('-', $birth);
            list($ay, $am, $ad) = explode('-', $aod);
            // Insert calculations here ...
            return $Dy; // difference in years
        }

        echo "Hey! " . $row['user'] . " was " . ageAsOfDate($row['dob'], $row['edate']) . " when we last saw him.";

    I'm pretty sure that in a simple case like this it wouldn't make much difference (other than the creeping feeling of horror when I have to make changes to queries like the first one), but I think it makes clearer what I'm looking for. Thanks!
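
    (For what it's worth, MySQL's TIMESTAMPDIFF would replace the hand-rolled age arithmetic in the first query entirely:)

        SELECT u.username AS user,
               TIMESTAMPDIFF(YEAR, u.DOB, MAX(e.date)) AS age
        FROM   users AS u
        JOIN   events AS e ON u.id = e.uid
        GROUP BY u.username, u.DOB;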

    Read the article

  • MySQL Server 5.6 defaults changes

    - by user12626240
    We're improving the MySQL Server defaults, as announced by Tomas Ulin at MySQL Connect. Here's what we're changing (setting: old default -> new default, with notes):

        back_log: 50 -> 50 + (max_connections / 5), capped at 900
        binlog_checksum: off -> CRC32 (new variable in 5.6)
        binlog_row_event_max_size: 1k -> 8k
        flush_time: 1800 -> 0 on Windows (was already 0 on other platforms)
        host_cache_size: 128 -> 128, + 1 for each of the first 500 max_connections, + 1 for every 20 max_connections over 500, capped at 2000 (new variable in 5.6)
        innodb_autoextend_increment: 8 -> 64 (now affects *.ibd files; 64 is 64 megabytes)
        innodb_buffer_pool_instances: 0 -> 8 (on 32-bit Windows only, if innodb_buffer_pool_size is greater than 1300M, the default is innodb_buffer_pool_size / 128M)
        innodb_concurrency_tickets: 500 -> 5000
        innodb_file_per_table: off -> on
        innodb_log_file_size: 5M -> 48M (InnoDB will always change size to match the my.cnf value; also see innodb_log_compressed_pages and binlog_row_image)
        innodb_old_blocks_time: 0 -> 1000 (1 second)
        innodb_open_files: 300 -> 300; if innodb_file_per_table is ON, the higher of table_open_cache or 300
        innodb_purge_batch_size: 20 -> 300
        innodb_purge_threads: 0 -> 1
        innodb_stats_on_metadata: on -> off
        join_buffer_size: 128k -> 256k
        max_allowed_packet: 1M -> 4M
        max_connect_errors: 10 -> 100
        open_files_limit: 0 -> 5000 (see note 1)
        query_cache_size: 0 -> 1M
        query_cache_type: on/1 -> off/0
        sort_buffer_size: 2M -> 256k
        sql_mode: none -> NO_ENGINE_SUBSTITUTION (see later post about the default my.cnf for STRICT_TRANS_TABLES)
        sync_master_info: 0 -> 10000 (recommend master_info_repository=table)
        sync_relay_log: 0 -> 10000
        sync_relay_log_info: 0 -> 10000 (recommend relay_log_info_repository=table; also see Replication Relay and Status Logs)
        table_definition_cache: 400 -> 400 + table_open_cache / 2, capped at 2000
        table_open_cache: 400 -> 2000 (also see table_open_cache_instances)
        thread_cache_size: 0 -> 8 + max_connections / 100, capped at 100

    Note 1: in 5.5 there was already a rule to make open_files_limit 10 + max_connections + table_cache_size * 2 if that was higher than the user-specified value. It now uses the higher of that and (5000 or what you specify).

    We are also adding a new default my.cnf file and guided instructions on the key settings to adjust; more on this in a later post. We're also providing a page with suggestions for settings to improve backwards compatibility. The old example files like my-huge.cnf are obsolete. Some of the improvements are present from 5.6.6 and the rest are coming. These are ideas, and until they are in an official GA release, they are subject to change.

    As part of this work I reviewed every old server setting, plus many hundreds of emails of feedback and testing results from inside and outside Oracle's MySQL Support team, and the many excellent blog entries and comments from others over the years, including from many MySQL gurus out there, like Baron, Sheeri, Ronald, Schlomi, Giuseppe and Mark Callaghan.

    With these changes we're trying to make it easier to set up the server by adjusting only a few settings that will cause others to be set. This happens only at server startup and only applies to variables where you haven't set a value. You'll see a similar approach used for the Performance Schema. The gurus don't need this, but for many newcomers the defaults will be very useful.

    Possibly the most unusual change is the way we vary the setting for innodb_buffer_pool_instances for 32-bit Windows. This is because we've found that DLLs with specified load addresses often fragment the limited four-gigabyte 32-bit address space and make it impossible to allocate more than about 1300 megabytes of contiguous address space for the InnoDB buffer pool. The smaller requests for many pools are more likely to succeed.

    If you change the value of innodb_log_file_size in my.cnf you will see a message like this in the error log file at the next restart, instead of the old error message:

        [Warning] InnoDB: Resizing redo log from 2*64 to 5*128 pages, LSN=5735153

    One of the biggest challenges for the defaults is the millions of installations on a huge range of systems, from point-of-sale terminals and routers through shared hosting or end-user systems and on to major servers with lots of CPU cores, hundreds of gigabytes of RAM and terabytes of fast disk space. Our past defaults were for the smaller systems, and these change that to larger shared hosting or shared end-user systems, still with a bias towards the smaller end. There is a bias in favour of OLTP workloads, so reporting systems may need more changes. Where there is a conflict between the best settings for benchmarks and normal use, we've favoured production, not benchmarks.

    We're very interested in your feedback, comments and suggestions.
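
    (A quick way to see what your server actually ended up with, for any of the settings above:)

        SHOW VARIABLES LIKE 'innodb_log_file_size';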

    Read the article

  • For a large website developed in PHP, is it necessary to have a framework?

    - by Martin
    I am wondering whether a framework is necessary, or a must-have, if I plan to make a large website. "Large" could mean a lot of things; here it means multiple dynamic web pages (40-50 dynamic pages with MySQL content) and a lot of visitors (roughly a million hits per month). The site will be hosted in a dedicated server environment. I know that a framework could simplify coding for a developer team, that it includes libraries, and that it has a lot of advantages. But I just feel that I don't need that. I think that learning how it works, managing it and installing it would take more time, and I could use that time to code. I write PHP the simplest way I can (with performance in mind), I try to reuse my code/functions/classes most of the time, and I make sure that if another developer joins the team, he won't be lost in the code. I am also planning to use Memcached or another cache for PHP. As I said, the site will be hosted in a dedicated server environment but will be entirely managed by the hosting company; I am pretty sure the control panel for me to manage the basic stuff will be cPanel. For a developer like me who only knows PHP, JavaScript, HTML, CSS, MySQL and really basic server management, it seems too complicated to have a framework. Am I wrong? Is it worth the time to learn all about it? Thank you for your opinions and suggestions.

    Read the article
