Search Results

Search found 23821 results on 953 pages for 'mysql group by'.

  • combining two select statements to return one result

    - by DalivDali
    I need to combine the results of two SELECT queries on two view tables, from which I am performing calculations. Perhaps there is an easier way to do this in one query using if...else - any pointers? Essentially I need to divide everything by 'ar.time_ratio' under the condition in query 1, and ignore it in query 2.

    Query 1:

        SELECT gs.traffic_date, gs.domain_group,
               gs.clicks/ar.time_ratio AS 'Scaled_clicks',
               gs.visitors/ar.time_ratio AS 'scaled_visitors',
               gs.revenue/ar.time_ratio AS 'scaled_revenue',
               (gs.revenue/gs.clicks)/ar.time_ratio AS 'scaled_average_cpc',
               (gs.clicks)/(gs.visitors)/ar.time_ratio AS 'scaled_ctr',
               gs.average_rpm/ar.time_ratio AS 'scaled_rpm',
               (((gs.revenue)/(gs.visitors))/ar.time_ratio)*1000 AS 'Ecpm'
        FROM group_stats gs, v_active_ratio ar
        WHERE ar.group_id = gs.domain_group

    Query 2:

        SELECT gs.traffic_date, gs.domain_group, gs.clicks, gs.visitors, gs.revenue,
               (gs.revenue/gs.clicks) AS 'average_cpc',
               (gs.clicks)/(gs.visitors) AS 'average_ctr',
               gs.average_rpm,
               ((gs.revenue)/(gs.visitors))*1000 AS 'Ecpm'
        FROM group_stats gs, v_active_ratio ar
        WHERE NOT ar.group_id = gs.domain_group
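
    For illustration, since both queries return the same number of columns, the two result sets can be combined with UNION ALL. This is only a sketch: it assumes the intent of query 2 is "rows whose group has no active ratio" (the original cross join with NOT would multiply rows), so that condition is rewritten as an anti-join with NOT IN.

        SELECT gs.traffic_date, gs.domain_group,
               gs.clicks/ar.time_ratio   AS clicks,
               gs.visitors/ar.time_ratio AS visitors,
               gs.revenue/ar.time_ratio  AS revenue
               -- the remaining derived columns follow the same pattern as above
        FROM group_stats gs
        JOIN v_active_ratio ar ON ar.group_id = gs.domain_group
        UNION ALL
        SELECT gs.traffic_date, gs.domain_group,
               gs.clicks, gs.visitors, gs.revenue
        FROM group_stats gs
        WHERE gs.domain_group NOT IN (SELECT group_id FROM v_active_ratio);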

  • Incompatible group permissions in Linux - Is it a bug?

    - by Sachin
    I am on Ubuntu 11.04. I am creating another user and placing an existing user in the new user's group, hoping to be able to write in the new user's home directory.

        # uname -a
        Linux vini 2.6.38-11-generic #50-Ubuntu SMP Mon Sep 12 21:18:14 UTC 2011 i686 athlon i386 GNU/Linux
        # whoami
        sachin
        # su root
        # useradd -m -U foo           // create user foo
        # usermod -a -G foo sachin    // add user 'sachin' to group 'foo'
        # chmod 770 /home/foo/
        # exit
        # whoami
        sachin
        # cd /home/foo/
        bash: cd: /home/foo/: Permission denied
        # groups sachin
        sachin : sachin foo

    This is totally weird. Though user sachin is in group foo, and the group bits for /home/foo/ are set to rwx, sachin can't chdir to /home/foo/. I am not able to understand this. But if, at the exit step, I switch to the sachin user from root, this is what happens:

        # uname -a
        Linux vini 2.6.38-11-generic #50-Ubuntu SMP Mon Sep 12 21:18:14 UTC 2011 i686 athlon i386 GNU/Linux
        # whoami
        sachin
        # su root
        # useradd -m -U foo           // create user foo
        # usermod -a -G foo sachin    // add user 'sachin' to group 'foo'
        # chmod 770 /home/foo/
        # su sachin
        # whoami
        sachin
        # cd /home/foo/
        # ls
        examples.desktop

    Now, whatever is happening here is totally incomprehensible. Does su sachin inherit some permissions from the root user at this step? Any explanations would be much appreciated.

  • Date range/query problem..

    - by Simon
    Am hoping someone can help me out a bit with date ranges... I have a table with 3 fields: id, datestart, dateend. I need to query this to find out if a pair of dates from a form conflicts with an existing entry, i.e.

        table entry 1: 2010-12-01, 2010-12-09
        from the form: 2010-12-08, 2010-12-15

        SELECT id FROM date_table WHERE '2010-12-02' BETWEEN datestart AND dateend;

    That returns the id that I want, but what I would like to do is take the date range from the form and run a query similar to what I have got that takes both form dates (2010-12-08, 2010-12-15) and checks the table to ensure there are no conflicting date ranges. Am sat scratching my head with the problem... TIA
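
    For what it's worth, the standard overlap test needs only one comparison per bound: two ranges conflict exactly when each one starts before the other ends. A sketch using the form dates from the example above:

        -- returns the ids of all rows whose range overlaps the form's range;
        -- no rows returned means the requested dates are free
        SELECT id
        FROM date_table
        WHERE datestart <= '2010-12-15'   -- existing range starts before the form range ends
          AND dateend   >= '2010-12-08';  -- and ends after the form range starts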

  • Samba: map domain group to local one

    - by user285467
    I have a problem mapping a pure domain group to one existing on the UNIX system. When I map an NT domain account, by default Samba picks the local SID - the one that can be acquired via the command

        net getlocalsid

    instead of the SID that comes from the domain:

        net getdomainsid

    This is the behaviour that I do not understand. I can explicitly set the SID to the domain one, e.g.:

        net groupmap add sid=[DOMAIN SID]-[RID] ntgroup=[DOMAIN group] unixgroup=[UNIX group] type=l

    However the command

        getent group | grep 'DOMAIN group'

    indicates this group to be a domain one - the GID is created according to the RID backend in use, not the GID of 'UNIX group' as expected. Worth mentioning: I use winbind. The strange thing is that I already have such a mapping in place for another group, 'DOMAIN group2', which getent group reports with the GID of the local UNIX group, with all members of 'DOMAIN group2'. Now the question is how to get the same behaviour for the rest of my groups?

  • mysql_connect VS mysql_pconnect

    - by rogeriopvl
    I have a doubt here - I've searched the web and the answers seem to vary. Is it better to use mysql_pconnect over mysql_connect when connecting to a database via PHP? I read that pconnect scales much better, but on the other hand, being a persistent connection... having 10,000 connections at the same time, all persistent, doesn't seem scalable to me. Thanks in advance.

  • Change default profile directory per group

    - by Joel Coel
    Is it possible to force Windows to create profiles for members of one Active Directory group in a different folder from members of another Active Directory group? The school here uses DeepFreeze to protect public computers. In a nutshell, DeepFreeze prevents all changes to a hard drive, such that every time you restart the machine the disk is identical to what it was at the time you froze it. This is a bit different from restoring an image, in that it never really wrote changes to disk in a permanent way in the first place. This has a few advantages over images: faster recovery times, and it's easy to thaw the machine for a few minutes to perform maintenance such as Windows updates (which can even be automated). DeepFreeze also allows you to configure a "thawspace" partition, where changes are persistent across reboots. One of the weaknesses of DeepFreeze is that you end up needing to create a new profile every time you log in, unless your profile existed at the time the machine was frozen. And even then, any changes you make to your profile while working on a frozen machine are lost. As students have frequent legitimate needs to log in to our classroom machines, there is currently a lot of cleanup involved from time to time in removing their old profiles and changes, so I want to extend DeepFreeze to protect our classroom computers as well as public computers. The problem is that faculty have a real need to keep a stateful profile locally on these classroom computers. The solution I would like to use is to configure Windows via group policy (or even manually, if that's the way I'll have to do it) to place profile folders on the thawspace partition, but only for members of the faculty security group. Is this possible?

  • mySQL Optimization Suggestions

    - by Brian Schroeter
    I'm trying to optimize our MySQL configuration for our large Magento website. The reason I believe that MySQL needs to be configured further is because New Relic has shown that our SELECT queries are taking a long time (20,000+ ms) in some categories. I ran MySQLTuner 1.3.0 and got the following results... (Disclaimer: I restarted MySQL earlier after tweaking some settings, and so the results here may not be 100% accurate):

        >> MySQLTuner 1.3.0 - Major Hayden <[email protected]>
        >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >> Run with '--help' for additional options and output filtering
        [OK] Currently running supported MySQL version 5.5.37-35.0
        [OK] Operating on 64-bit architecture

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM
        [--] Data in MyISAM tables: 7G (Tables: 332)
        [--] Data in InnoDB tables: 213G (Tables: 8714)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [--] Data in MEMORY tables: 0B (Tables: 353)
        [!!] Total fragmented tables: 5492

        -------- Security Recommendations --------------------------------------------
        [!!] User '@host5.server1.autopartsnetwork.com' has no password set.
        [!!] User '@localhost' has no password set.
        [!!] User 'root@%' has no password set.

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 5h 3m 4s (5M q [317.443 qps], 42K conn, TX: 18B, RX: 2B)
        [--] Reads / Writes: 95% / 5%
        [--] Total buffers: 35.5G global + 184.5M per thread (1024 max threads)
        [!!] Maximum possible memory usage: 220.0G (174% of installed RAM)
        [OK] Slow queries: 0% (6K/5M)
        [OK] Highest usage of available connections: 5% (61/1024)
        [OK] Key buffer size / total MyISAM indexes: 512.0M/3.1G
        [OK] Key buffer hit rate: 100.0% (102M cached / 45K reads)
        [OK] Query cache efficiency: 66.9% (3M cached / 5M selects)
        [!!] Query cache prunes per day: 3486361
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 812K sorts)
        [!!] Joins performed without indexes: 1328
        [OK] Temporary tables created on disk: 11% (126K on disk / 1M total)
        [OK] Thread cache hit rate: 99% (61 created / 42K connections)
        [!!] Table cache hit rate: 19% (9K open / 49K opened)
        [OK] Open file limit used: 2% (712/25K)
        [OK] Table locks acquired immediately: 100% (5M immediate / 5M locks)
        [!!] InnoDB buffer pool / data size: 32.0G/213.4G
        [OK] InnoDB log waits: 0

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Reduce your overall MySQL memory footprint for system stability
            Enable the slow query log to troubleshoot bad queries
            Increasing the query_cache size over 128M may reduce performance
            Adjust your join queries to always utilize indexes
            Increase table_cache gradually to avoid file descriptor limits
            Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C
        Variables to adjust:
            *** MySQL's maximum memory usage is dangerously high ***
            *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 512M) [see warning above]
            join_buffer_size (> 128.0M, or always use indexes with joins)
            table_cache (> 12288)
            innodb_buffer_pool_size (>= 213G)

    My my.cnf configuration is as follows...

        [client]
        port = 3306

        [mysqld_safe]
        nice = 0

        [mysqld]
        tmpdir = /var/lib/mysql/tmp
        user = mysql
        port = 3306
        skip-external-locking
        character-set-server = utf8
        collation-server = utf8_general_ci
        event_scheduler = 0
        key_buffer = 512M
        max_allowed_packet = 64M
        thread_stack = 512K
        thread_cache_size = 512
        sort_buffer_size = 24M
        read_buffer_size = 8M
        read_rnd_buffer_size = 24M
        # for some nightly processes client sessions set the join buffer to 8 GB
        join_buffer_size = 128M
        auto-increment-increment = 1
        auto-increment-offset = 1
        myisam-recover = BACKUP
        max_connections = 1024
        # max connect errors artificially high to support behaviors of NetScaler monitors
        max_connect_errors = 999999
        concurrent_insert = 2
        connect_timeout = 5
        wait_timeout = 180
        net_read_timeout = 120
        net_write_timeout = 120
        back_log = 128
        # this table_open_cache might be too low because of MySQL bugs #16244691 and #65384
        table_open_cache = 12288
        tmp_table_size = 512M
        max_heap_table_size = 512M
        bulk_insert_buffer_size = 512M
        open-files-limit = 8192
        open-files = 1024
        query_cache_type = 1
        # large query limit supports SOAP and REST API integrations
        query_cache_limit = 4M
        # larger than 512 MB query cache size is problematic; this is typically ~60% full
        query_cache_size = 512M
        # set to true on read slaves
        read_only = false
        slow_query_log_file = /var/log/mysql/slow.log
        slow_query_log = 0
        long_query_time = 0.2
        expire_logs_days = 10
        max_binlog_size = 1024M
        binlog_cache_size = 32K
        sync_binlog = 0
        # SSD RAID10 technically has a write capacity of 10000 IOPS
        innodb_io_capacity = 400
        innodb_file_per_table
        innodb_table_locks = true
        innodb_lock_wait_timeout = 30
        # These servers have 80 CPU threads; match 1:1
        innodb_thread_concurrency = 48
        innodb_commit_concurrency = 2
        innodb_support_xa = true
        innodb_buffer_pool_size = 32G
        innodb_file_per_table
        innodb_flush_log_at_trx_commit = 1
        innodb_log_buffer_size = 2G
        skip-federated

        [mysqldump]
        quick
        quote-names
        single-transaction
        max_allowed_packet = 64M

    I have a monster of a server here to power our site because our catalog is very large (300,000 simple SKUs), and I'm just wondering if I'm missing anything that I can configure further. :-) Thanks!
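
    As an aside, two of the [!!] items above (the query cache prunes and the 19% table cache hit rate) can be probed and trialled at runtime before committing anything to my.cnf, since both variables are dynamic in MySQL 5.5. The sizes below are illustrative, not recommendations:

        -- confirm the query cache is churning (prunes climbing while the server runs)
        SHOW GLOBAL STATUS LIKE 'Qcache_lowmem_prunes';
        -- confirm table-cache churn (Opened_tables climbing steadily)
        SHOW GLOBAL STATUS LIKE 'Opened_tables';
        -- trial new sizes without a restart; revert by setting the old values back
        SET GLOBAL table_open_cache = 16384;
        SET GLOBAL query_cache_size = 256 * 1024 * 1024;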

  • Practical mysql schema advice for eCommerce store - Products & Attributes

    - by Gravy
    I am currently planning my first eCommerce application (MySQL & Laravel Framework). I have various products, which all have different attributes. Describing products very simply: some will have a manufacturer, some will not; some will have a diameter, others will have a width, height and depth, and others will have a volume.

    Option 1: Create a master products table, and separate tables for specific product types (polymorphic relations). That way, I will not have any unnecessary null fields in the products table.
    Option 2: Create a products table with all possible fields, despite the fact that there will be a lot of null values.
    Option 3: Normalise so that each attribute type has its own table.
    Option 4: Create an attributes table, as well as an attribute_values table with the value being varchar regardless of the actual data type. The products table would have a many:many relationship with the attributes table.
    Option 5: Put attributes common to all or most products in the products table, and attach attributes specific to a particular category of product to the categories table.

    My thoughts are that I would like to allow easy product filtering and sorting by these attributes. I also want the frontend to be fast, with less concern over the performance of inserting and updating product records. I'm a bit overwhelmed by the vast implementation options, and cannot find a suitable answer in terms of the best method of approach. Could somebody point me in the right direction? In an ideal world, I would like to offer the kind of functionality seen at http://www.glassesdirect.co.uk/products/ in my eCommerce store. As can be seen in the sidebar, you can select an attribute of the glasses to filter them, e.g. male / female or plastic / metal / titanium etc... Alternatively, should I just dump the MySQL relational database idea and learn MongoDB?
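
    For reference, a minimal sketch of what Option 4 could look like (table and column names here are illustrative, not taken from the question); the composite index is what makes sidebar-style filtering cheap:

        CREATE TABLE attributes (
            id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(64) NOT NULL UNIQUE      -- e.g. 'material', 'diameter'
        );

        CREATE TABLE attribute_values (
            product_id   INT UNSIGNED NOT NULL,
            attribute_id INT UNSIGNED NOT NULL,
            value        VARCHAR(255) NOT NULL,   -- varchar regardless of the real type
            PRIMARY KEY (product_id, attribute_id),
            KEY idx_filter (attribute_id, value)  -- supports "material = metal" filters
        );

        -- e.g. all products whose material is metal:
        SELECT p.*
        FROM products p
        JOIN attribute_values av ON av.product_id = p.id
        JOIN attributes a        ON a.id = av.attribute_id
        WHERE a.name = 'material' AND av.value = 'metal';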

  • MySQL – Export the Resultset to CSV file

    - by Pinal Dave
    In SQL Server, you can use the BCP command to export a result set to a CSV file. In MySQL too, you can export data from a table or result set as a CSV file in many ways. Here are two methods.

    Method 1: Use MySQL Workbench

    If you are using Workbench as a querying tool, you can make use of its Export option in the result window. Run the following code in Workbench:

        SELECT db_names FROM mysql_testing;

    The result will be shown in the result window. There is an option called "File". Click on it and it will prompt you with a window to save the result set. (A screenshot was attached to show how the File option can be used.) Choose the directory and type out the name of the file.

    Method 2: Use the INTO OUTFILE clause

    You can do the export using a query with the INTO OUTFILE clause as shown below:

        SELECT db_names FROM mysql_testing
        INTO OUTFILE 'C:/testing.csv'
        FIELDS ENCLOSED BY '"' TERMINATED BY ';' ESCAPED BY '"'
        LINES TERMINATED BY '\r\n';

    After the execution of the above code, you can find a file named testing.csv in the C drive of the server. Reference: Pinal Dave (http://blog.sqlauthority.com)

  • MySQL – Introduction to CONCAT and CONCAT_WS functions

    - by Pinal Dave
    MySQL supports two concatenation functions: CONCAT and CONCAT_WS. The CONCAT function simply concatenates all the argument values as they are:

        SELECT CONCAT('Television','Mobile','Furniture');

    The above code returns the following:

        TelevisionMobileFurniture

    If you want to concatenate them with a comma, either you need to specify the comma at the end of each value, or pass the comma as an argument along with the values:

        SELECT CONCAT('Television,','Mobile,','Furniture');
        SELECT CONCAT('Television',',','Mobile',',','Furniture');

    Both of the above return the following:

        Television,Mobile,Furniture

    However, you can avoid the extra work by using the CONCAT_WS function. It stands for "Concatenate With Separator". It is very similar to the CONCAT function, but accepts the separator as the first argument:

        SELECT CONCAT_WS(',','Television','Mobile','Furniture');

    The result is:

        Television,Mobile,Furniture

    If you want a pipe as the separator, you can use:

        SELECT CONCAT_WS('|','Television','Mobile','Furniture');

    The result is:

        Television|Mobile|Furniture

    So CONCAT_WS is very flexible in concatenating values along with a separator. Reference: Pinal Dave (http://blog.sqlauthority.com)

  • Software restriction policies set in the registry don't update Local Group Policy

    - by Jon Rhoades
    The joys of a Samba domain... First off, domain Group Policy can't be used until Samba 4 arrives. We need to set up Software Restriction Policies (SRPs) on most of the computers in our Samba domain and I would dearly like to automate this. (We are moving away from just disabling the Windows Installer.) The traditional way is to set SRPs using Local Group Policy (LGP) under Computer Configuration - Windows Settings - SRP, but this involves visiting every machine, as it can't be set in NTConfig.pol. It is possible to attempt to create the SRPs directly in the registry:

        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\Safer\CodeIdentifiers\262144\Paths\{30628f61-eb47-4d87-823b-6683a09eda87}]
        "LastModified"=hex(b):40,a2,94,09,b5,5d,ca,01
        "Description"=""
        "SaferFlags"=dword:00000000
        "ItemData"="C:\\location\\subfolder"

    The SaferFlags DWORD seems to be what turns it on or off, but although this seems to work, it does not update the Local Group Policy - SRPs still show as "No SRPs Defined". Where does the LGP store this setting - is it even in the registry? And more importantly - is there a cleverer way of setting up SRPs?

  • PHP Can't connect to MySQL on the production server

    - by Jairo Santos
    I'm having problems with connections to MySQL through a PHP script. The MySQL user is root and I added grants for root@'%' so I can connect from anywhere. Let's assume my MySQL host is "bigboy.com.br". The funny part is: from my local machine, on my test server, the script can connect to the MySQL server normally. But on the dedicated server where MySQL is running, the same PHP script gives me an "Access denied for 'root'@'bigboy.com.br'" error.
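
    One line of diagnosis worth sketching here: when the script runs on the dedicated server itself, MySQL resolves the client's host name, so the account actually matched may differ from root@'%'. The statements below (run from a session that does work; the password is a placeholder) show what MySQL thinks you are and what grants exist:

        SELECT CURRENT_USER(), USER();   -- account matched vs. account offered
        SELECT user, host FROM mysql.user WHERE user = 'root';
        -- the error names 'root'@'bigboy.com.br'; a matching grant would look like:
        GRANT ALL PRIVILEGES ON *.* TO 'root'@'bigboy.com.br' IDENTIFIED BY '<password>';
        FLUSH PRIVILEGES;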

  • Benchmark MySQL Cluster using flexAsynch: No free node id found for mysqld(API)?

    - by quanta
    I am going to benchmark MySQL Cluster using flexAsynch, following this guide. Details below:

        mkdir /usr/local/mysqlc732/
        cd /usr/local/src/mysql-cluster-gpl-7.3.2
        cmake . -DCMAKE_INSTALL_PREFIX=/usr/local/mysqlc732/ -DWITH_NDB_TEST=ON
        make
        make install

    Everything works fine until this step:

        # /usr/local/mysqlc732/bin/flexAsynch -t 1 -p 80 -l 2 -o 100 -c 100 -n
        FLEXASYNCH - Starting normal mode
        Perform benchmark of insert, update and delete transactions
        1 number of concurrent threads
        80 number of parallel operation per thread
        100 transaction(s) per round
        2 iterations
        Load Factor is 80%
        25 attributes per table
        1 is the number of 32 bit words per attribute
        Tables are with logging
        Transactions are executed with hint provided
        No force send is used, adaptive algorithm used
        Key Errors are disallowed
        Temporary Resource Errors are allowed
        Insufficient Space Errors are disallowed
        Node Recovery Errors are allowed
        Overload Errors are allowed
        Timeout Errors are allowed
        Internal NDB Errors are allowed
        User logic reported Errors are allowed
        Application Errors are disallowed
        Using table name TAB0
        NDBT_ProgramExit: 1 - Failed

    ndb_cluster.log:

        WARNING -- Failed to allocate nodeid for API at 127.0.0.1. Returned error: 'No free node id found for mysqld(API).'

    I have also recompiled with -DWITH_DEBUG=1 -DWITH_NDB_DEBUG=1. How can I run flexAsynch in debug mode?

        # /usr/local/mysqlc732/bin/flexAsynch -h
        FLEXASYNCH
        Perform benchmark of insert, update and delete transactions
        Arguments:
            -t Number of threads to start, default 1
            -p Number of parallel transactions per thread, default 32
            -o Number of transactions per loop, default 500
            -l Number of loops to run, default 1, 0=infinite
            -load_factor Number Load factor in index in percent (40 -> 99)
            -a Number of attributes, default 25
            -c Number of operations per transaction
            -s Size of each attribute, default 1 (PK is always of size 1, independent of this value)
            -simple Use simple read to read from database
            -dirty Use dirty read to read from database
            -write Use writeTuple in insert and update
            -n Use standard table names
            -no_table_create Don't create tables in db
            -temp Create table(s) without logging
            -no_hint Don't give hint on where to execute transaction coordinator
            -adaptive Use adaptive send algorithm (default)
            -force Force send when communicating
            -non_adaptive Send at a 10 millisecond interval
            -local 1 = each thread its own node, 2 = round robin on node per parallel trans, 3 = random node per parallel trans
            -ndbrecord Use NDB Record
            -r Number of extra loops
            -insert Only run inserts on standard table
            -read Only run reads on standard table
            -update Only run updates on standard table
            -delete Only run deletes on standard table
            -create_table Only run Create Table of standard table
            -drop_table Only run Drop Table on standard table
            -warmup_time Warmup Time before measurement starts
            -execution_time Execution Time where measurement is done
            -cooldown_time Cooldown time after measurement completed
            -table Number of standard table, default 0

  • How can I centralise MySQL data between 3 or more geographically separate servers?

    - by Andy Castles
    To explain the background to the question: we have a home-grown PHP application (for running online language-learning courses) running on a Linux server and using MySQL on localhost for saving user data (e.g. results of tests taken, marks of submitted work, time spent on different pages in the courses, etc).

    As we have students from different geographic locations, we currently have 3 virtual servers hosted close to those locations (Spain, UK and Hong Kong) and users are added to the server closest to them (they access via different URLs, e.g. europe.domain.com, uk.domain.com and asia.domain.com). This works, but it is an administrative nightmare, as we have to remember which server a particular user is on, and users can only connect to one server. We would like to somehow centralise the information so that all users are visible on any of the servers and users could connect to any of the 3 servers.

    The question is, what method should we use to implement this? It must be an issue that lots of people have encountered, but I haven't found anything conclusive after a fair bit of Googling around. The closest I have seen to solutions are:

        - something like master-master replication, but I have read so many posts suggesting that this is not a good idea, as things like auto_increment fields can break;
        - circular replication - this sounded perfect, but to quote from O'Reilly's High Performance MySQL, "In general, rings are brittle and best avoided".

    We're not against rewriting code in the application to make it work with whatever solution is required, but I am not sure if replication is the correct thing to use. Thanks, Andy

    P.S. I should add that we experimented with writes to a central database combined with reads from a local database, but the response time between the different servers for writing was pretty bad, and it's also important that written data is available immediately for reading, so if replication is too slow this could cause out-of-date data to be returned.

    Edit: I have been thinking about writing my own rudimentary replication script. It would involve something like giving each user a server ID to say which is his "home server"; e.g. users in Asia would be marked as having the Hong Kong server as their own. Then the replication scripts (a PHP script set to run as a cron job reasonably frequently, e.g. every 15 minutes or so) would run independently on each of the servers in the system. Each would go through the database and distribute any information about users whose "home server" is the server the script is running on to all of the other databases in the system. Each would also need to pull in new information added to any of the other databases where the "home server" flag is the server the script is running on. I would need to work out the details and build in the logic to deal with conflicts, but I think it would be possible. However, I wanted to make sure there is not an established solution for this already out there, as it seems like a problem that many people must have come across.
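
    On the auto_increment worry mentioned above: the usual mitigation cited for multi-master setups is to stagger the increment per server so that generated keys can never collide. A sketch for three servers - these are real MySQL variables, and the values are illustrative:

        -- on the Spain server:
        SET GLOBAL auto_increment_increment = 3;
        SET GLOBAL auto_increment_offset    = 1;
        -- on the UK server: same increment, offset 2
        -- on the Hong Kong server: same increment, offset 3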

  • Symlink error when installing MySQL via Homebrew

    - by Asad Syed
    Trying to install MySQL via Homebrew. The install seems to work fine, but I get an error:

        Error: The linking step did not complete successfully
        The formula built, but is not symlinked into /usr/local
        You can try again using `brew link mysql'

    Naturally, after this I ran:

        brew link mysql

    which spat out:

        Error: Could not symlink file: /usr/local/Cellar/mysql/5.5.20/include/typelib.h
        /usr/local/include is not writable. You should change its permissions.

    So I ran it with sudo and got a "cowardly refusing to brew link mysql".

  • Change the mysql.sock that the system PHP uses

    - by Mark P.
    In the past I had installed MySQL as part of XAMPP in /Applications/XAMPP/, and the PHP that the system used (e.g. from the command line) used /Applications/XAMPP/xamppfiles/var/mysql/mysql.sock to connect to MySQL. Now I have installed MySQL 5 with MacPorts and I want my system PHP to be able to use that. I think I must set PHP to use /opt/local/var/run/mysql5/mysqld.sock, but where do I set this? Is there a php.ini for CLI PHP?

  • nginx can't see MySQL

    - by user135235
    I have a fully working Joomla 2.5.6 install driven by a local MySQL server, but I'd like to test nginx to see if it's a faster web-serving experience than Apache.

        PHP 5.4.6 (PHP54w)
        CentOS 6.2
        Joomla 2.5.6
        PHP54w-fpm.i386 (FastCGI process manager)
        php -m shows: mysql & mysqli modules loaded

    nginx seems to have installed fine via yum; it can process a PHP info file via FastCGI perfectly OK (http://37.128.190.241/php.php), but when I stop Apache, start nginx instead and visit my site I get: "Database connection error (1): The MySQL adapter 'mysqli' is not available." I've tried adjusting my Joomla configuration.php to use mysql instead of mysqli, but I get the same basic error, only this time "Database connection error (1): The MySQL adapter 'mysql' is not available", of course! Can anyone think what the problem might be, please? I did try explicitly setting extension = mysqli.so and extension = mysql.so in my php.ini to try and force the issue (despite php -m showing they were both successfully loaded anyway) - no difference. I have a pretty standard nginx default.conf:

        server {
            listen 80;
            server_name www.MYDOMAIN.com;
            server_name_in_redirect off;
            access_log /var/log/nginx/localhost.access_log main;
            error_log /var/log/nginx/localhost.error_log info;
            root /var/www/html/MYROOT_DIR;
            index index.php index.html index.htm default.html default.htm;

            # Support Clean (aka Search Engine Friendly) URLs
            location / {
                try_files $uri $uri/ /index.php?q=$uri&$args;
            }

            # deny running scripts inside writable directories
            location ~* /(images|cache|media|logs|tmp)/.*\.(php|pl|py|jsp|asp|sh|cgi)$ {
                return 403;
                error_page 403 /403_error.html;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include /etc/nginx/fastcgi.conf;
            }

            # caching of files
            location ~* \.(ico|pdf|flv)$ {
                expires 1y;
            }
            location ~* \.(js|css|png|jpg|jpeg|gif|swf|xml|txt)$ {
                expires 14d;
            }
        }

    Snip of output from phpinfo under nginx:

        Server API                               FPM/FastCGI
        Virtual Directory Support                disabled
        Configuration File (php.ini) Path        /etc
        Loaded Configuration File                /etc/php.ini
        Scan this dir for additional .ini files  /etc/php.d
        Additional .ini files parsed             /etc/php.d/curl.ini, /etc/php.d/fileinfo.ini, /etc/php.d/json.ini, /etc/php.d/phar.ini, /etc/php.d/zip.ini

    Snip of output from phpinfo under Apache:

        Server API                               Apache 2.0 Handler
        Virtual Directory Support                disabled
        Configuration File (php.ini) Path        /etc
        Loaded Configuration File                /etc/php.ini
        Scan this dir for additional .ini files  /etc/php.d
        Additional .ini files parsed             /etc/php.d/curl.ini, /etc/php.d/fileinfo.ini, /etc/php.d/json.ini, /etc/php.d/mysql.ini, /etc/php.d/mysqli.ini, /etc/php.d/pdo.ini, /etc/php.d/pdo_mysql.ini, /etc/php.d/pdo_sqlite.ini, /etc/php.d/phar.ini, /etc/php.d/sqlite3.ini, /etc/php.d/zip.ini

    It seems that with Apache, PHP is loading substantially more additional .ini files, including ones relating to MySQL (mysql.ini, mysqli.ini, pdo_mysql.ini), than under nginx. Any ideas how I get nginx to also load these additional .ini files? Thanks in advance, Steve

  • Map the 'Domain Admins' group into the local Ubuntu 'admin' group

    - by Miquella
    I have configured an Ubuntu 10.04 box to connect to our domain (Windows 2003 R2) using Likewise-Open. All the users can authenticate as expected. However, the domain administrators do not have administrative privileges on the machine. After working at this for a few hours, I've determined what I think may be a solution: if I map the 'Domain Admins' group from Active Directory into the local 'admin' group, the users should get the appropriate permissions. But I have no idea how to do that. Does this even sound like the correct approach? A similar question was asked on StackOverflow and then migrated here, but it was never answered, as it was recommended to be asked here instead. Thanks in advance!

  • Not able to install mysql-server on ubuntu

    - by makdotgnu
    I'm not able to install mysql-server on Ubuntu 10.04. I had installed MySQL, but it was giving this error:

        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    So I removed mysql-server completely via Synaptic, but after this I'm not able to reinstall it. When I try to reinstall it, Synaptic freezes. How do I remove every file of MySQL and install it again?

  • mysql server overloading without error

    - by beny
    Hi, I have a serious problem on my server with the MySQL server: it overloads itself without any error in /var/log/mysqld. What steps should I take to find out the problem? My my.cnf is:

        [mysqld]
        set-variable=local-infile=0
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        old_passwords=1
        skip-bdb
        set-variable = innodb_buffer_pool_size=256M
        set-variable = innodb_additional_mem_pool_size=20M
        set-variable = innodb_log_file_size=128M
        set-variable = innodb_log_buffer_size=8M
        innodb_data_file_path = ibdata1:1000M:autoextend

    Please help, thx
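
    Before touching the config, a few statements usually narrow down what is eating the server while the overload is happening - a sketch of the standard first checks:

        SHOW FULL PROCESSLIST;                   -- what is running right now, and for how long
        SHOW ENGINE INNODB STATUS\G              -- lock waits, buffer pool activity, deadlocks
        SHOW GLOBAL STATUS LIKE 'Threads_%';     -- connected vs. actually running threads
        SHOW GLOBAL STATUS LIKE 'Slow_queries';  -- rising fast? enable the slow query log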

  • Cloning a VM to add a new MySQL Slave

    - by Ben Holness
    I am in the process of adding a new slave to a replicated MySQL setup. All of the slave nodes are virtual machines. If I clone one of the nodes to a new VM, then start it with no networking, stop MySQL, change the server-id in my.cnf to a new id, and then restart MySQL and networking, will it all work correctly, or will MySQL get confused because it used to have a different server id?

        OS: Ubuntu 10.10
        VM Platform: VMware 5
        MySQL: Server version: 5.1.49-1ubuntu8.1-log (Ubuntu)
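
    If it helps, the clone's replication identity can be verified and re-pointed explicitly after boot rather than trusting the state inherited from the source VM - a sketch, where the master host, log file and position are placeholders to be read off the master:

        SHOW VARIABLES LIKE 'server_id';   -- confirm the edited my.cnf value took effect
        STOP SLAVE;
        CHANGE MASTER TO
            MASTER_HOST     = 'master.example.com',
            MASTER_LOG_FILE = 'mysql-bin.000123',
            MASTER_LOG_POS  = 4;
        START SLAVE;
        SHOW SLAVE STATUS\G   -- both Slave_IO_Running and Slave_SQL_Running should say Yes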

  • What is the fastest method to restore MySQL replication?

    - by dwhere
    I have a MySQL (5.1) master-slave replication pair, and replication to the slave has failed. It failed because the master ran out of disk space and the relay logs became corrupt. The master is now back online and working properly. Since there is this error in the log, the slave process can't simply be restarted. The server has a single 40GB InnoDB database, and I would like to know the fastest method for getting the slave back in sync to minimize downtime.
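
    For reference, the commonly described resync path for an InnoDB-only slave looks like the sketch below. The log file and position are placeholders that come out of the dump header, and the mysqldump line is shown as a comment since it runs in the shell:

        -- on the master (shell): mysqldump --single-transaction --master-data=2 --all-databases > dump.sql
        -- load dump.sql on the slave, then point it at the coordinates recorded in the dump:
        STOP SLAVE;
        CHANGE MASTER TO MASTER_LOG_FILE = 'mysql-bin.000042', MASTER_LOG_POS = 107;
        START SLAVE;
        SHOW SLAVE STATUS\G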

  • mysql on ubuntu 4 is not running and is not remotely accessible

    - by user628119
    Currently I have MySQL installed on Ubuntu 4.4, and using the following commands I can see mysql and mysqld running:

        ps -ef | grep mysql
        ps -ef | grep mysqld

    But when I run netstat, I don't see mysql or port 3306 anywhere. In my.cnf I have given my IP, and the port is 3306. Also, when I run

        sudo netstat -tap | grep mysql

    I don't see anything. What commands do I need to run to get MySQL 5 listening on port 3306 on ip=x.x.x.x and remotely accessible? Looking forward to your reply.
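
    Once the daemon is actually listening on the network, remote access still needs a matching account, so both halves are worth checking. A sketch - the user, database and password are placeholders, and on very old servers bind-address lives only in my.cnf rather than in SHOW VARIABLES:

        SHOW VARIABLES LIKE 'bind_address';   -- exposed on newer versions; else check my.cnf
        -- an account allowed to connect from any host:
        GRANT ALL PRIVILEGES ON mydb.* TO 'appuser'@'%' IDENTIFIED BY '<password>';
        FLUSH PRIVILEGES;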

  • Group policy reset upon restart?

    - by sc_ray
    Is it possible for a group policy to revert to its original state upon server restart? Our servers are hosted as virtual machines on the rack. We had to restart our server for some reason, and all of a sudden we cannot remote desktop into the server any more. Pinging the server succeeds, but RDPing into it fails. My assumption is that the group policy has reverted, preventing any remote desktop connections from taking place. Is that a possibility? Since the network is managed by another group, we don't have the authority to physically look into what's going on with this particular VM. Can somebody suggest some ideas? Thanks

  • Enable group policy for everything but the SBS?

    - by Jerry Dodge
    I have created a new group policy to disable IPv6 on all machines. There is only the one default OU, no special configuration. However, this policy must not apply to the SBS itself (nor to the other DC, at another location on a different subnet), because those machines depend on IPv6. All the rest do not. I did see a recommendation to create a new OU and put those machines under it, but many other comments say that is extremely messy and not recommended - it makes maintenance hard when it comes to changing other group policies. How can I apply this single group policy to every machine except the domain controllers? PS - Yes, I understand IPv6 will soon be the new standard, but until then we have no intention of implementing it, and it is in fact causing us many issues when enabled.
