Search Results

Search found 5147 results on 206 pages for '3ds max'.

  • Can't burn 8.1G ISO onto 8.4GB DVD - "Media does not have enough free space"

    - by Max Williams
    I'm trying to burn a DVD on a Mac with an external (FireWire-connected) DVD drive. I'm checking the size of the ISO like this:

        DVD-4:dvd_files macbook$ ls -l /tmp/hybrid.iso
        -rw-r--r--  1 macbook  wheel  8700884992 Aug 22 10:57 /tmp/hybrid.iso
        DVD-4:dvd_files macbook$ ls -lh /tmp/hybrid.iso
        -rw-r--r--  1 macbook  wheel  8.1G Aug 22 10:57 /tmp/hybrid.iso

    The "human-readable" size is 8.1 gig, but when I try to burn onto an 8.4G dual-layer DVD, it says "Media does not have enough free space". The definition of a "gigabyte" according to Wikipedia is 1 billion bytes, so the ISO should actually be 8.7 gig by that definition, in which case the disc definitely isn't big enough, and it's just that the -h option to ls is misleading. Is the discrepancy just due to the ls command using a different definition of "G" (e.g. 1024 MB, i.e. about 1.07 gig? This comes out as 8.103, which fits what ls is displaying)?
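
    A quick check of the arithmetic (a minimal sketch; the byte count is the one from the ls output above):

        # Python: decimal gigabytes vs. binary gibibytes for this ISO
        size_bytes = 8700884992
        print(size_bytes / 10**9)  # 8.70... - "GB" in the Wikipedia / disc-label sense
        print(size_bytes / 2**30)  # 8.10... - "G" as ls -h reports it (GiB)

    So yes: ls -h counts in GiB (1024^3 bytes), the image is 8.7 decimal GB, and it genuinely will not fit on the dual-layer disc.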

  • Do large corporations block jQuery content on web pages?

    - by Max Vernon
    We are currently redesigning our website. The company we've hired to do the redesign is advocating the use of jQuery to render the pages dynamically. Our SEO specialist is under the impression that many larger corporations may have jQuery blocked in their proxies to prevent their users from visiting sites like Facebook. Is this something you are aware of? Forgive me if this is off topic for SF.SE!

  • Customer won't provide SSH access - FTP only

    - by Max
    Eh, here is my problem: I am working in a web development agency (that's a problem, but not the real problem - read on). Most of the time I choose the live server myself when creating a new website project. But now the customer already has a "server" (10 GB on a cheapo host!) and the "admin" refuses to give me SSH access to it. But I need to access the server via shell, because many files will be transferred (I need to be able to upload and extract a tar) and I need to insert or create MySQL dumps via the command line. He argues FTP and phpMyAdmin should be enough... As far as I know the webspace was just ordered to host the website, so no security-critical apps are running there. How can I either convince the admin to give me the SSH login or tell management that we need our own server? Anyone with similar experiences? This is really annoying, as this is a very small project that should be done fast, and now one has to fight just to get the work done...

  • Am I safe on Windows if I continue like this?

    - by max
    Of all the tons of anti-malware software available for Windows all over the internet, I've never used any paid solution (I am a student, I have no money). In the last 10 years, my computers running Windows have never been hacked, compromised, or infected so badly that I had to reformat them (of course I did reformat them for other reasons). The only security program I have is Avast Home Edition, which is free, installed on my computers. It has never caused any problems; it has always detected malware, updates automatically, has an option to sandbox programs, and everything else I need. Even if I got infected, I just did a boot-time scan with it, downloaded and ran Malwarebytes, scanned Autoruns logs, checked running processes with Process Explorer, and did some other things to make sure I cleaned my computer. I am quite experienced and I've always taken basic precautions like not clicking suspicious executables and not going to sites which are suspicious according to WOT. But recently I've been doing more and more online transactions, and since it's 2012 now, I'm doubtful whether I need more security or not. Have I been just lucky, or do my computing habits obviate the need for any more (or paid) security software?

  • How to properly secure Windows Server 2008 R2 that will host SQL Server 2012?

    - by Max
    I am a .NET programmer trying to create this setup: I want this server to be inaccessible through the DMZ except for IPsec connections, and to also have a private network which will be accessible through another Windows 2008 server that will host the VPN. That is how our Windows 2003 infrastructure works, and I am trying to do the same with 2008 servers. Are there any guides or documentation that cover this scenario?

  • OpenOffice: How to disable image link updates

    - by Max Kielland
    I'm writing a user manual for a card game and there are a lot of linked images. OpenOffice is working very slowly because every time I flip to a page with linked images it starts to update them. Is it possible to tell OpenOffice NOT to update the links until I tell it to do so? I would like it to display the same snapshot it showed the last time I initiated a link update. I'm using OpenOffice v3.3.0. Thank you.

  • MongoDB replication: no primary elected

    - by Max
    I have three servers with mongod installed, running as a replica set. Suddenly the two secondaries became unavailable (the mongod process died) - I think because they were too stale. The problem is that the original PRIMARY is now a SECONDARY, and my application doesn't work because it can't connect to a PRIMARY. I mean, in what way does that help me, if the replica set can't do failover?! Am I missing something? Furthermore, I am asking myself: why did the secondaries die / why are they too stale? What can I do about it? FYI: my database is quite big (40 GB on disk).
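
    For what it's worth, a three-member replica set needs a strict majority of voting members to elect a primary, so with two members down the lone survivor steps down to SECONDARY by design; that is why failover stops rather than helps here. A minimal sketch for inspecting member states (assuming pymongo and a placeholder hostname):

        # Python/pymongo: ask one member directly for the replica-set status
        from pymongo import MongoClient

        client = MongoClient("mongodb://db1.example.com:27017",
                             directConnection=True)  # reachable even with no primary
        status = client.admin.command("replSetGetStatus")
        for member in status["members"]:
            print(member["name"], member["stateStr"])  # e.g. SECONDARY / (not reachable)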

  • IIS_IUSRS cannot access files uploaded and created by Network Service - error 401.3

    - by Max
    Let me rephrase my question, as I have investigated further. The problem: I have a PHP script that is used to upload images on my Windows Web Server 2008. The files are created in the correct directory. They are created and owned by the user Network Service, and Network Service has full access to the uploaded file. But as soon as I try to access the uploaded file (mostly an image) via HTTP, I get a 401.3 Not Authorized error. Now, if I right-click the inaccessible image and grant the IIS_IUSRS group read permissions via the Security tab, the image can be accessed! By default, IIS_IUSRS has no access at all to the uploaded file. The directory containing the image files has the correct access rights set, but each file newly uploaded to the directory lacks permissions for IIS_IUSRS. The question: how can I grant IIS_IUSRS access to newly uploaded files by default? The app pool of the website has its identity set to the default; I also tried setting it to "networkIdentity" or so, but that did not work either.

  • How can I change the default program installation directory in Windows 7?

    - by Max
    Windows 7 is installed on my C drive, which is quite small. I am very tired of instructing new programs to put their files on my larger D drive during installation; I would like to change the default drive. This article says that you can use a registry hack, but I am giving Microsoft the benefit of the doubt and naively assuming that a configuration option exists somewhere. It's 2010... do I really have to hack my registry to make a simple tweak like this? Also, there's a ServerFault question that explains how to move the "Users" directory and create a symlink, which could also work. However, at the moment I have some apps in C:\Program Files, some apps in C:\Program Files (x86), and some apps in the corresponding folders on D:\, so it would be a hassle. Also, my small OS boot drive is a 10k RPM WD Raptor, and I feel like that probably gives a speed boost to apps installed on it that need to read & write to their directories a bunch. I wonder if it actually matters.
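
    If the registry hack is indeed the only route, the values usually cited live under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion as ProgramFilesDir and ProgramFilesDir (x86). A read-only sketch to see what is currently set (note that actually changing these values is not supported by Microsoft):

        # Python: read the default program-files locations from the registry
        import winreg

        key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                             r"SOFTWARE\Microsoft\Windows\CurrentVersion")
        for name in ("ProgramFilesDir", "ProgramFilesDir (x86)"):
            value, _ = winreg.QueryValueEx(key, name)
            print(name, "=", value)  # e.g. C:\Program Files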

  • What is a good topic for a research paper on modern computer architecture?

    - by Max Schmeling
    This may not be the right place for this, but I wanted to get this question in front of some of the brightest people on the internet, so I thought I'd give it a shot. I have to write a research paper on some modern aspect of computer architecture. The subject is really not very restrictive; pretty much any recent development in computer hardware will work. I want to write it over something really interesting, but I don't have a lot of good ideas. What would make a really interesting paper?

  • How do I create a yum repo file?

    - by max
    I know there is a previously asked question, but I still have some doubts, so I'm asking again. How do I create a yum repo file? I know that in /etc/yum.repos.d/ I have to create a .repo file. Below is the pattern:

        [name]
        name=
        baseurl=
        enabled=1
        gpgcheck=1
        gpgkey=

    In the baseurl, which link should I give? I'm fully confused about this. How do I get that baseurl link? Can anyone please explain this to me clearly? I am using CentOS 6.2.
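
    As an illustration only (a sketch, not a definitive answer): for the stock CentOS 6 base repository the file might look like the following. The baseurl shown is the historical CentOS 6 mirror path - CentOS 6 is end-of-life and its packages have since moved to vault.centos.org, so verify the URL before relying on it.

        [base]
        name=CentOS-6 - Base
        baseurl=http://mirror.centos.org/centos/6/os/$basearch/
        enabled=1
        gpgcheck=1
        gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6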

  • How do I turn a Wi-Fi "hotspot" into a local wired network?

    - by Max Schmeling
    Here's the situation: In a remote "office" I have a computer with no network connection, that I need to network with when I'm at this remote office. There is a wireless network where this computer is, but no wireless adapter in the computer. I have a laptop running Windows 7 that can connect to the wireless, and the computer is running Windows Vista. What is the best way to get them both connected? I know I can buy a USB wireless adapter or something for the computer, but is there an easy way to do it with what I've got?

  • Accessing external MySQL server through "SSH tunnel" - any drawbacks?

    - by Max
    In an upcoming project I have a two-server setup: one is the application server, and the other, which already exists, runs the MySQL server with the databases I need to access. I contacted the server admin of the MySQL server, and the only way I can access the remote MySQL databases is via an "SSH tunnel". I have never done this before and had never heard of it until now, so my question: are there any drawbacks, e.g. performance-wise? Isn't it rather slow compared to directly accessing the MySQL server on its default port?
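
    For context: an SSH tunnel simply forwards a local port over the encrypted SSH connection, e.g. ssh -N -L 3307:127.0.0.1:3306 user@dbhost (hostnames, credentials and ports below are placeholders), after which the application talks to 127.0.0.1:3307 as if it were the remote server. A sketch assuming the PyMySQL driver:

        # Python: connect through the forwarded local port
        # (tunnel opened separately: ssh -N -L 3307:127.0.0.1:3306 user@dbhost)
        import pymysql

        conn = pymysql.connect(host="127.0.0.1", port=3307,
                               user="appuser", password="secret", database="appdb")
        with conn.cursor() as cur:
            cur.execute("SELECT VERSION()")
            print(cur.fetchone())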

  • Point subdomain to (sub)directory on IIS 7

    - by Max
    I have quite a newbie question, but here it is anyway: one of our customers has a domain, e.g. examplecustomer.com, which points to the customer's website on an Apache web server. Now we have another server running IIS 7, where a .NET web app will be running. This .NET app is in a subdirectory of the Windows web server, e.g. C:\inetpub\wwwroot\my_app\. What I would like to have: a subdomain like app.examplecustomer.com that points to C:\inetpub\wwwroot\my_app\ (no redirect or the like; app.examplecustomer.com is the domain the web app is using). How can I set up the Windows web server to work that way? It should still be possible to host other apps on that server too, like anotherapp.examplecustomer.com going to C:\inetpub\wwwroot\my_anotherapp\, etc.

  • Convert Java program to C

    - by imicrothinking
    I need a bit of guidance with writing a C program...a bit of quick background as to my level, I've programmed in Java previously, but this is my first time programming in C, and we've been tasked to translate a word count program from Java to C that consists of the following: Read a file from memory Count the words in the file For each occurrence of a unique word, keep a word counter variable Print out the top ten most frequent words and their corresponding occurrences Here's the source program in Java: package lab0; import java.io.File; import java.io.FileReader; import java.util.ArrayList; import java.util.Calendar; import java.util.Collections; public class WordCount { private ArrayList<WordCountNode> outputlist = null; public WordCount(){ this.outputlist = new ArrayList<WordCountNode>(); } /** * Read the file into memory. * * @param filename name of the file. * @return content of the file. * @throws Exception if the file is too large or other file related exception. */ public char[] readFile(String filename) throws Exception{ char [] result = null; File file = new File(filename); long size = file.length(); if (size > Integer.MAX_VALUE){ throw new Exception("File is too large"); } result = new char[(int)size]; FileReader reader = new FileReader(file); int len, offset = 0, size2read = (int)size; while(size2read > 0){ len = reader.read(result, offset, size2read); if(len == -1) break; size2read -= len; offset += len; } return result; } /** * Make article word by word. * * @param article the content of file to be counted. * @return string contains only letters and "'". */ private enum SPLIT_STATE {IN_WORD, NOT_IN_WORD}; /** * Go through article, find all the words and add to output list * with their count. * * @param article the content of the file to be counted. * @return words in the file and their counts. */ public ArrayList<WordCountNode> countWords(char[] article){ SPLIT_STATE state = SPLIT_STATE.NOT_IN_WORD; if(null == article) return null; char curr_ltr; int curr_start = 0; for(int i = 0; i < article.length; i++){ curr_ltr = Character.toUpperCase( article[i]); if(state == SPLIT_STATE.IN_WORD){ article[i] = curr_ltr; if ((curr_ltr < 'A' || curr_ltr > 'Z') && curr_ltr != '\'') { article[i] = ' '; //printf("\nthe word is %s\n\n",curr_start); if(i - curr_start < 0){ System.out.println("i = " + i + " curr_start = " + curr_start); } addWord(new String(article, curr_start, i-curr_start)); state = SPLIT_STATE.NOT_IN_WORD; } }else{ if (curr_ltr >= 'A' && curr_ltr <= 'Z') { curr_start = i; article[i] = curr_ltr; state = SPLIT_STATE.IN_WORD; } } } return outputlist; } /** * Add the word to output list. */ public void addWord(String word){ int pos = dobsearch(word); if(pos >= outputlist.size()){ outputlist.add(new WordCountNode(1L, word)); }else{ WordCountNode tmp = outputlist.get(pos); if(tmp.getWord().compareTo(word) == 0){ tmp.setCount(tmp.getCount() + 1); }else{ outputlist.add(pos, new WordCountNode(1L, word)); } } } /** * Search the output list and return the position to put word. * @param word is the word to be put into output list. * @return position in the output list to insert the word. 
*/ public int dobsearch(String word){ int cmp, high = outputlist.size(), low = -1, next; // Binary search the array to find the key while (high - low > 1) { next = (high + low) / 2; // all in upper case cmp = word.compareTo((outputlist.get(next)).getWord()); if (cmp == 0) return next; else if (cmp < 0) high = next; else low = next; } return high; } public static void main(String args[]){ // handle input if (args.length == 0){ System.out.println("USAGE: WordCount <filename> [Top # of results to display]\n"); System.exit(1); } String filename = args[0]; int dispnum; try{ dispnum = Integer.parseInt(args[1]); }catch(Exception e){ dispnum = 10; } long start_time = Calendar.getInstance().getTimeInMillis(); WordCount wordcount = new WordCount(); System.out.println("Wordcount: Running..."); // read file char[] input = null; try { input = wordcount.readFile(filename); } catch (Exception e) { // TODO Auto-generated catch block e.printStackTrace(); System.exit(1); } // count all word ArrayList<WordCountNode> result = wordcount.countWords(input); long end_time = Calendar.getInstance().getTimeInMillis(); System.out.println("wordcount: completed " + (end_time - start_time)/1000000 + "." + (end_time - start_time)%1000000 + "(s)"); System.out.println("wordsort: running ..."); start_time = Calendar.getInstance().getTimeInMillis(); Collections.sort(result); end_time = Calendar.getInstance().getTimeInMillis(); System.out.println("wordsort: completed " + (end_time - start_time)/1000000 + "." + (end_time - start_time)%1000000 + "(s)"); Collections.reverse(result); System.out.println("\nresults (TOP "+ dispnum +" from "+ result.size() +"):\n" ); // print out result String str ; for (int i = 0; i < result.size() && i < dispnum; i++){ if(result.get(i).getWord().length() > 15) str = result.get(i).getWord().substring(0, 14); else str = result.get(i).getWord(); System.out.println(str + " - " + result.get(i).getCount()); } } public class WordCountNode implements Comparable{ private String word; private long count; public WordCountNode(long count, String word){ this.count = count; this.word = word; } public String getWord() { return word; } public void setWord(String word) { this.word = word; } public long getCount() { return count; } public void setCount(long count) { this.count = count; } public int compareTo(Object arg0) { // TODO Auto-generated method stub WordCountNode obj = (WordCountNode)arg0; if( count - obj.getCount() < 0) return -1; else if( count - obj.getCount() == 0) return 0; else return 1; } } } Here's my attempt (so far) in C: #include <stdio.h> #include <stdlib.h> #include <stdbool.h> #include <string.h> // Read in a file FILE *readFile (char filename[]) { FILE *inputFile; inputFile = fopen (filename, "r"); if (inputFile == NULL) { printf ("File could not be opened.\n"); exit (EXIT_FAILURE); } return inputFile; } // Return number of words in an array int wordCount (FILE *filePointer, char filename[]) {//, char *words[]) { // count words int count = 0; char temp; while ((temp = getc(filePointer)) != EOF) { //printf ("%c", temp); if ((temp == ' ' || temp == '\n') && (temp != '\'')) count++; } count += 1; // counting method uses space AFTER last character in word - the last space // of the last character isn't counted - off by one error // close file fclose (filePointer); return count; } // Print out the frequencies of the 10 most frequent words in the console int main (int argc, char *argv[]) { /* Step 1: Read in file and check for errors */ FILE *filePointer; filePointer = readFile (argv[1]); /* Step 
2: Do a word count to prep for array size */ int count = wordCount (filePointer, argv[1]); printf ("Number of words is: %i\n", count); /* Step 3: Create a 2D array to store words in the file */ // open file to reset marker to beginning of file filePointer = fopen (argv[1], "r"); // store words in character array (each element in array = consecutive word) char allWords[count][100]; // 100 is an arbitrary size - max length of word int i,j; char temp; for (i = 0; i < count; i++) { for (j = 0; j < 100; j++) { // labels are used with goto statements, not loops in C temp = getc(filePointer); if ((temp == ' ' || temp == '\n' || temp == EOF) && (temp != '\'') ) { allWords[i][j] = '\0'; break; } else { allWords[i][j] = temp; } printf ("%c", allWords[i][j]); } printf ("\n"); } // close file fclose (filePointer); /* Step 4: Use a simple selection sort algorithm to sort 2D char array */ // PStep 1: Compare two char arrays, and if // (a) c1 > c2, return 2 // (b) c1 == c2, return 1 // (c) c1 < c2, return 0 qsort(allWords, count, sizeof(char[][]), pstrcmp); /* int k = 0, l = 0, m = 0; char currentMax, comparedElement; int max; // the largest element in the current 2D array int elementToSort = 0; // elementToSort determines the element to swap with starting from the left // Outer a iterates through number of swaps needed for (k = 0; k < count - 1; k++) { // times of swaps max = k; // max element set to k // Inner b iterates through successive elements to fish out the largest element for (m = k + 1; m < count - k; m++) { currentMax = allWords[k][l]; comparedElement = allWords[m][l]; // Inner c iterates through successive chars to set the max vars to the largest for (l = 0; (currentMax != '\0' || comparedElement != '\0'); l++) { if (currentMax > comparedElement) break; else if (currentMax < comparedElement) { max = m; currentMax = allWords[m][l]; break; } else if (currentMax == comparedElement) continue; } } // After max (count and string) is determined, perform swap with temp variable char swapTemp[1][20]; int y = 0; do { swapTemp[0][y] = allWords[elementToSort][y]; allWords[elementToSort][y] = allWords[max][y]; allWords[max][y] = swapTemp[0][y]; } while (swapTemp[0][y++] != '\0'); elementToSort++; } */ int a, b; for (a = 0; a < count; a++) { for (b = 0; (temp = allWords[a][b]) != '\0'; b++) { printf ("%c", temp); } printf ("\n"); } // Copy rows to different array and print results /* char arrayCopy [count][20]; int ac, ad; char tempa; for (ac = 0; ac < count; ac++) { for (ad = 0; (tempa = allWords[ac][ad]) != '\0'; ad++) { arrayCopy[ac][ad] = tempa; printf("%c", arrayCopy[ac][ad]); } printf("\n"); } */ /* Step 5: Create two additional arrays: (a) One in which each element contains unique words from char array (b) One which holds the count for the corresponding word in the other array */ /* Step 6: Sort the count array in decreasing order, and print the corresponding array element as well as word count in the console */ return 0; } // Perform housekeeping tasks like freeing up memory and closing file I'm really stuck on the selection sort algorithm. I'm currently using 2D arrays to represent strings, and that worked out fine, but when it came to sorting, using three level nested loops didn't seem to work, I tried to use qsort instead, but I don't fully understand that function as well. Constructive feedback and criticism greatly welcome (...and needed)!
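
    Two hints on the C attempt, offered as observations rather than a finished fix: getc() returns an int, so storing its result in a char means the EOF check can misfire; and the qsort call needs the row size plus a comparator - something like qsort(allWords, count, sizeof(allWords[0]), pstrcmp), where pstrcmp casts its const void* arguments to const char* and calls strcmp. For reference, here is the end-to-end behavior the program is after, sketched in Python (input.txt is a hypothetical file name):

        # Python: the target behavior - print the ten most frequent words
        import re
        from collections import Counter

        with open("input.txt") as f:
            text = f.read().upper()
        # letters plus apostrophes, mirroring the Java splitter
        words = re.findall(r"[A-Z']+", text)
        for word, count in Counter(words).most_common(10):
            print(word, "-", count)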

  • High-load MySQL on Debian server stops every day. Why?

    - by Oleg Abrazhaev
    I have Debian server with 32 gb memory. And there is apache2, memcached and nginx on this server. Memory load always on maximum. Only 500m free. Most memory leak do MySql. Apache only 70 clients configured, other services small memory usage. When mysql use all memory it stops. And nothing works, need mysql reboot. Mysql configured use maximum 24 gb memory. I have hight weight InnoDB bases. (400000 rows, 30 gb). And on server multithread daemon, that makes many inserts in this tables, thats why InnoDB. There is my mysql config. [mysqld] # # * Basic Settings # default-time-zone = "+04:00" user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp language = /usr/share/mysql/english skip-external-locking default-time-zone='Europe/Moscow' # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. # # * Fine Tuning # #low_priority_updates = 1 concurrent_insert = ALWAYS wait_timeout = 600 interactive_timeout = 600 #normal key_buffer_size = 2024M #key_buffer_size = 1512M #70% hot cache key_cache_division_limit= 70 #16-32 max_allowed_packet = 32M #1-16M thread_stack = 8M #40-50 thread_cache_size = 50 #orderby groupby sort sort_buffer_size = 64M #same myisam_sort_buffer_size = 400M #temp table creates when group_by tmp_table_size = 3000M #tables in memory max_heap_table_size = 3000M #on disk open_files_limit = 10000 table_cache = 10000 join_buffer_size = 5M # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #myisam_use_mmap = 1 max_connections = 200 thread_concurrency = 8 # # * Query Cache Configuration # #more ignored query_cache_limit = 50M query_cache_size = 210M #on query cache query_cache_type = 1 # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. #log = /var/log/mysql/mysql.log # # Error logging goes to syslog. This is a Debian improvement :) # # Here you can see queries with especially long duration log_slow_queries = /var/log/mysql/mysql-slow.log long_query_time = 1 log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log server-id = 1 log-bin = /var/lib/mysql/mysql-bin #replicate-do-db = gate log-bin-index = /var/lib/mysql/mysql-bin.index log-error = /var/lib/mysql/mysql-bin.err relay-log = /var/lib/mysql/relay-bin relay-log-info-file = /var/lib/mysql/relay-bin.info relay-log-index = /var/lib/mysql/relay-bin.index binlog_do_db = 24avia expire_logs_days = 10 max_binlog_size = 100M read_buffer_size = 4024288 innodb_buffer_pool_size = 5000M innodb_flush_log_at_trx_commit = 2 innodb_thread_concurrency = 8 table_definition_cache = 2000 group_concat_max_len = 16M #binlog_do_db = gate #binlog_ignore_db = include_database_name # # * BerkeleyDB # # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12. #skip-bdb # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # You might want to disable InnoDB to shrink the mysqld process by circa 100MB. #skip-innodb # # * Security Features # # Read the manual, too, if you want chroot! 
# chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 500M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 32M key_buffer_size = 512M # # * NDB Cluster # # See /usr/share/doc/mysql-server-*/README.Debian for more information. # # The following configuration is read by the NDB Data Nodes (ndbd processes) # not from the NDB Management Nodes (ndb_mgmd processes). # # [MYSQL_CLUSTER] # ndb-connectstring=127.0.0.1 # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/ Please, help me make it stable. Memory used /etc/mysql # free total used free shared buffers cached Mem: 32930800 32766424 164376 0 139208 23829196 -/+ buffers/cache: 8798020 24132780 Swap: 33553328 44660 33508668 Maybe my problem not in memory, but MySQL stops every day. As you can see, cache memory free 24 gb. Thank to Michael Hampton? for correction. Load overage on server 3.5. Maybe hdd or another problem? Maybe my config not optimal for 30gb InnoDB ? I'm already try mysqltuner and tunung-primer.sh , but they marked all green. Mysqltuner output mysqltuner >> MySQLTuner 1.0.1 - Major Hayden <[email protected]> >> Bug reports, feature requests, and downloads at http://mysqltuner.com/ >> Run with '--help' for additional options and output filtering -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.5.24-9-log [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 112G (Tables: 1528) [--] Data in InnoDB tables: 39G (Tables: 340) [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17) [!!] Total fragmented tables: 344 -------- Performance Metrics ------------------------------------------------- [--] Up for: 8h 18m 33s (14M q [478.333 qps], 259K conn, TX: 9B, RX: 5B) [--] Reads / Writes: 84% / 16% [--] Total buffers: 10.5G global + 81.1M per thread (200 max threads) [OK] Maximum possible memory usage: 26.3G (83% of installed RAM) [OK] Slow queries: 1% (259K/14M) [!!] Highest connection usage: 100% (201/200) [OK] Key buffer size / total MyISAM indexes: 1.5G/5.6G [OK] Key buffer hit rate: 100.0% (6B cached / 1M reads) [OK] Query cache efficiency: 74.3% (8M cached / 11M selects) [OK] Query cache prunes per day: 0 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 247K sorts) [!!] Joins performed without indexes: 106025 [!!] Temporary tables created on disk: 49% (351K on disk / 715K total) [OK] Thread cache hit rate: 99% (249 created / 259K connections) [!!] Table cache hit rate: 15% (2K open / 13K opened) [OK] Open file limit used: 15% (3K/20K) [OK] Table locks acquired immediately: 99% (4M immediate / 4M locks) [!!] 
InnoDB data size / buffer pool: 39.4G/5.9G -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance MySQL started within last 24 hours - recommendations may be inaccurate Reduce or eliminate persistent connections to reduce connection usage Adjust your join queries to always utilize indexes Temporary table size is already large - reduce result set size Reduce your SELECT DISTINCT queries without LIMIT clauses Increase table_cache gradually to avoid file descriptor limits Variables to adjust: max_connections (> 200) wait_timeout (< 600) interactive_timeout (< 600) join_buffer_size (> 5.0M, or always use indexes with joins) table_cache (> 10000) innodb_buffer_pool_size (>= 39G) Mysql primer output -- MYSQL PERFORMANCE TUNING PRIMER -- - By: Matthew Montgomery - MySQL Version 5.5.24-9-log x86_64 Uptime = 0 days 8 hrs 20 min 50 sec Avg. qps = 478 Total Questions = 14369568 Threads Connected = 16 Warning: Server has not been running for at least 48hrs. It may not be safe to use these recommendations To find out more information on how each of these runtime variables effects performance visit: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html Visit http://www.mysql.com/products/enterprise/advisors.html for info about MySQL's Enterprise Monitoring and Advisory Service SLOW QUERIES The slow query log is enabled. Current long_query_time = 1.000000 sec. You have 260626 out of 14369701 that take longer than 1.000000 sec. to complete Your long_query_time seems to be fine BINARY UPDATE LOG The binary update log is enabled Binlog sync is not enabled, you could loose binlog records during a server crash WORKER THREADS Current thread_cache_size = 50 Current threads_cached = 45 Current threads_per_sec = 0 Historic threads_per_sec = 0 Your thread_cache_size is fine MAX CONNECTIONS Current max_connections = 200 Current threads_connected = 11 Historic max_used_connections = 201 The number of used connections is 100% of the configured maximum. 
You should raise max_connections INNODB STATUS Current InnoDB index space = 214 M Current InnoDB data space = 39.40 G Current InnoDB buffer pool free = 0 % Current innodb_buffer_pool_size = 5.85 G Depending on how much space your innodb indexes take up it may be safe to increase this value to up to 2 / 3 of total system memory MEMORY USAGE Max Memory Ever Allocated : 23.46 G Configured Max Per-thread Buffers : 15.84 G Configured Max Global Buffers : 7.54 G Configured Max Memory Limit : 23.39 G Physical Memory : 31.40 G Max memory limit seem to be within acceptable norms KEY BUFFER Current MyISAM index space = 5.61 G Current key_buffer_size = 1.47 G Key cache miss rate is 1 : 5578 Key buffer free ratio = 77 % Your key_buffer_size seems to be fine QUERY CACHE Query cache is enabled Current query_cache_size = 200 M Current query_cache_used = 101 M Current query_cache_limit = 50 M Current Query cache Memory fill ratio = 50.59 % Current query_cache_min_res_unit = 4 K MySQL won't cache query results that are larger than query_cache_limit in size SORT OPERATIONS Current sort_buffer_size = 64 M Current read_rnd_buffer_size = 256 K Sort buffer seems to be fine JOINS Current join_buffer_size = 5.00 M You have had 106606 queries where a join could not use an index properly You have had 8 joins without keys that check for key usage after each row join_buffer_size >= 4 M This is not advised You should enable "log-queries-not-using-indexes" Then look for non indexed joins in the slow query log. OPEN FILES LIMIT Current open_files_limit = 20210 files The open_files_limit should typically be set to at least 2x-3x that of table_cache if you have heavy MyISAM usage. Your open_files_limit value seems to be fine TABLE CACHE Current table_open_cache = 10000 tables Current table_definition_cache = 2000 tables You have a total of 1910 tables You have 2151 open tables. The table_cache value seems to be fine TEMP TABLES Current max_heap_table_size = 2.92 G Current tmp_table_size = 2.92 G Of 366426 temp tables, 49% were created on disk Perhaps you should increase your tmp_table_size and/or max_heap_table_size to reduce the number of disk-based temporary tables Note! BLOB and TEXT columns are not allow in memory tables. If you are using these columns raising these values might not impact your ratio of on disk temp tables. TABLE SCANS Current read_buffer_size = 3 M Current table scan ratio = 2846 : 1 read_buffer_size seems to be fine TABLE LOCKING Current Lock Wait ratio = 1 : 185 You may benefit from selective use of InnoDB. If you have long running SELECT's against MyISAM tables and perform frequent updates consider setting 'low_priority_updates=1'

  • Ray-box Intersection Theory

    - by Myx
    Hello: I wish to determine the intersection point between a ray and a box. The box is defined by its min 3D coordinate and max 3D coordinate, and the ray is defined by its origin and the direction in which it points. Currently, I form a plane for each face of the box and intersect the ray with each plane. If the ray intersects the plane, then I check whether or not the intersection point is actually on the surface of the box. If so, I check whether it is the closest intersection for this ray and return the closest one. The way I check whether the plane-intersection point is on the box surface itself is through this function:

        bool PointOnBoxFace(R3Point point, R3Point corner1, R3Point corner2)
        {
            double min_x = min(corner1.X(), corner2.X());
            double max_x = max(corner1.X(), corner2.X());
            double min_y = min(corner1.Y(), corner2.Y());
            double max_y = max(corner1.Y(), corner2.Y());
            double min_z = min(corner1.Z(), corner2.Z());
            double max_z = max(corner1.Z(), corner2.Z());

            if(point.X() >= min_x && point.X() <= max_x &&
               point.Y() >= min_y && point.Y() <= max_y &&
               point.Z() >= min_z && point.Z() <= max_z)
                return true;
            return false;
        }

    where corner1 is one corner of the rectangle for that box face and corner2 is the opposite corner. My implementation works most of the time, but sometimes it gives me the wrong intersection. I was wondering if the way I'm checking whether the intersection point is on the box is correct, or if I should use some other algorithm. Thanks.
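
    The usual alternative is the "slab" method, which avoids per-face planes and most of their edge cases: intersect the ray's parameter range against each axis-aligned slab, and the box is hit iff the ranges still overlap with t >= 0. A sketch (plain Python, tuples standing in for the 3D point types):

        # Python: ray-AABB intersection via the slab method
        def ray_box(origin, direction, box_min, box_max, eps=1e-12):
            t_min, t_max = 0.0, float("inf")       # clamp to t >= 0: a ray, not a line
            for o, d, lo, hi in zip(origin, direction, box_min, box_max):
                if abs(d) < eps:                   # ray parallel to this slab
                    if o < lo or o > hi:
                        return None                # outside the slab: no hit
                else:
                    t1, t2 = (lo - o) / d, (hi - o) / d
                    t_min = max(t_min, min(t1, t2))
                    t_max = min(t_max, max(t1, t2))
                    if t_min > t_max:
                        return None                # slab ranges no longer overlap
            return t_min                           # distance to the entry point

        print(ray_box((0, 0, -5), (0, 0, 1), (-1, -1, -1), (1, 1, 1)))  # 4.0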

  • Pseudocode help

    - by vatsag
    Hello all. I've been banging my head for a few hours trying to realise this particular logic. My task is to have a master list ListM whose elements are themselves lists, in the form of ListA (of type A), ListB (of type B) and ListC (of some generic type C). The lists ListA and ListB can also contain elements of the generic type C. Coming to the preconditions:

    1. ListA can contain only 50 elements of type A; the rest of its elements can be of generic type C.
    2. ListB can contain only 50 elements of type B; the rest of its elements can be of generic type C.
    3. Also, these lists (ListA or ListB) can take at max 100 elements.
    4. ListC can contain elements of generic type C and can take at max 100 elements.

    For example: if I have 200 elements of type A, 200 elements of type B and 600 elements of type C, then I should be able to get 4 lists of type ListA (because each ListA can contain at max 50 elements of type A; the remaining 50 are type C), 4 lists of type ListB (each with 50 elements of type B, the remaining 50 being type C), and 2 lists of type ListC (100 elements each). I shall be glad to explain it again if my explanation is confusing. Can anyone suggest a good way of implementing such a requirement, in the form of pseudocode? See the sketch below for one reading. Thanks in advance, VATSAG
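
    A greedy packing reading of the requirement, as a sketch (all names invented; Python stands in for the pseudocode):

        # Python: pack typed elements into capacity-limited lists
        from collections import Counter

        def pack(a_items, b_items, c_items, typed_cap=50, list_cap=100):
            c = list(c_items)
            master = []
            for items, label in ((list(a_items), "ListA"), (list(b_items), "ListB")):
                while items:
                    # up to 50 typed elements, then pad with generic C up to 100 total
                    typed = [items.pop() for _ in range(min(typed_cap, len(items)))]
                    fill = [c.pop() for _ in range(min(list_cap - len(typed), len(c)))]
                    master.append((label, typed + fill))
            while c:  # leftover generic elements go into ListC batches of 100
                master.append(("ListC", [c.pop() for _ in range(min(list_cap, len(c)))]))
            return master

        lists = pack(range(200), range(200), range(600))
        print(Counter(label for label, _ in lists))  # ListA: 4, ListB: 4, ListC: 2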

  • SQL Server PIVOT with multiple X-axis columns

    - by HeavenCore
    Take the following example data:

        Payroll  Forename  Surname  Month  Year  Amount
        0000001  James     Bond     3      2011  144.00
        0000001  James     Bond     6      2012  672.00
        0000001  James     Bond     7      2012  240.00
        0000001  James     Bond     8      2012  1744.50
        0000002  Elvis     Presley  3      2011  1491.00
        0000002  Elvis     Presley  6      2012  189.00
        0000002  Elvis     Presley  7      2012  1816.50
        0000002  Elvis     Presley  8      2012  1383.00

    How would I PIVOT this on the Year + Month (e.g. 201210) but preserve Payroll, Forename & Surname as separate columns? For example, the above would become:

        Payroll  Forename  Surname  201103   201206  201207   201208
        0000001  James     Bond     144.00   672.00  240.00   1744.50
        0000002  Elvis     Presley  1491.00  189.00  1816.50  1383.00

    I'm assuming that because the Year + Month names can change, I will need to employ dynamic SQL + PIVOT. I had a go but couldn't even get the code to parse, never mind run - any help would be most appreciated! Edit: what I have so far:

        INSERT INTO #tbl_RawDateBuffer
                ( PayrollNumber, Surname, Forename, [Month], [Year], AmountPayable )
        SELECT  PayrollNumber, Surname, Forename, [Month], [Year], AmountPayable
        FROM    RawData
        WHERE   [Max] > 1500

        DECLARE @Columns AS NVARCHAR(MAX)
        DECLARE @StrSQL AS NVARCHAR(MAX)

        SET @Columns = STUFF((SELECT DISTINCT ','
                    + QUOTENAME(CONVERT(VARCHAR(4), c.[Year])
                    + RIGHT('00' + CONVERT(VARCHAR(2), c.[Month]), 2))
               FROM #tbl_RawDateBuffer c
               FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '')

        SET @StrSQL = 'SELECT PayrollNumber, ' + @Columns + '
        from (
            select PayrollNumber
                 , CONVERT(VARCHAR(4), [Year]) + RIGHT(''00'' + CONVERT(VARCHAR(2), [Month]), 2) dt
            from #tbl_RawDateBuffer
        ) x
        pivot
        (
            sum(AmountPayable)
            for dt in (' + @Columns + ')
        ) p
        '

        EXECUTE (@StrSQL)

        DROP TABLE #tbl_RawDateBuffer

  • Project Euler Question 14 (Collatz Problem)

    - by paradox
    The following iterative sequence is defined for the set of positive integers: n → n/2 (n is even); n → 3n + 1 (n is odd). Using the rule above and starting with 13, we generate the following sequence: 13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1. It can be seen that this sequence (starting at 13 and finishing at 1) contains 10 terms. Although it has not been proved yet (Collatz Problem), it is thought that all starting numbers finish at 1. Which starting number, under one million, produces the longest chain? NOTE: Once the chain starts the terms are allowed to go above one million. I tried coding a solution to this in C using the brute-force method. However, it seems that my program stalls when trying to calculate 113383. Please advise :)

        #include <stdio.h>
        #define LIMIT 1000000

        int iteration(int value) {
            if (value % 2 == 0)
                return (value / 2);
            else
                return (3 * value + 1);
        }

        int count_iterations(int value) {
            int count = 1;
            //printf("%d\n", value);
            while (value != 1) {
                value = iteration(value);
                //printf("%d\n", value);
                count++;
            }
            return count;
        }

        int main() {
            int iteration_count = 0, max = 0;
            int i, count;
            for (i = 1; i < LIMIT; i++) {
                printf("Current iteration : %d\n", i);
                iteration_count = count_iterations(i);
                if (iteration_count > max) {
                    max = iteration_count;
                    count = i;
                }
            }
            //iteration_count = count_iterations(113383);
            printf("Count = %d\ni = %d\n", max, count);
        }
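
    The stall at 113383 is almost certainly signed-integer overflow: the chain for 113383 climbs past INT_MAX (2147483647), so 3 * value + 1 wraps around in a 32-bit int and the loop never reaches 1. Declaring value as long long (or unsigned long long) in the C code avoids it. A quick check of the peak value, where Python's unbounded integers make the overflow visible (sketch):

        # Python: peak intermediate value of the Collatz chain for 113383
        def peak(n):
            highest = n
            while n != 1:
                n = n // 2 if n % 2 == 0 else 3 * n + 1
                highest = max(highest, n)
            return highest

        print(peak(113383))              # far above what a 32-bit int can hold
        print(peak(113383) > 2**31 - 1)  # True - hence the wraparound in C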

  • Java anagram recursion: List<List<String>> only storing empty List<String>

    - by Riff Rafffer
    Hi. In this recursion method I am trying to find all anagrams and add them to a List, but when I run this code it just returns a lot of empty Lists.

        private List<List<String>> findAnagrams(LetterInventory words, ArrayList<String> anagram,
                int max, Map<String, LetterInventory> smallDict, int level, List<List<String>> result) {
            ArrayList<String> solvedWord = new ArrayList<String>();
            LetterInventory shell;
            LetterInventory shell2;
            if (level < max || max == 0) {
                Iterator<String> it = smallDict.keySet().iterator();
                while (it.hasNext()) {
                    String k = it.next();
                    shell = new LetterInventory(k);
                    shell2 = words;
                    if (shell2.subtract(shell) != null) {
                        anagram.add(k);
                        shell2 = words.subtract(shell);
                        if (shell2.isEmpty()) {
                            //System.out.println(anagram.toString()); it prints off fine here
                            result.add(anagram); // but doesnt add here
                        } else
                            findAnagrams(shell2, anagram, max, smallDict, level + 1, result);
                        anagram.remove(anagram.size() - 1);
                    }
                }
            }
            return results;
        }
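
    The giveaway is result.add(anagram): it stores a reference to the single mutable list that the backtracking step (anagram.remove(anagram.size() - 1)) later empties again, so every stored entry ends up empty. Storing a copy - result.add(new ArrayList<String>(anagram)) - should fix it. The aliasing is easy to reproduce (a sketch in Python, where lists alias the same way):

        # Python: storing a reference vs. a copy of a mutable list
        result, anagram = [], []
        anagram.append("cat")
        result.append(anagram)         # reference, like result.add(anagram)
        anagram.pop()                  # the backtracking step empties it...
        print(result)                  # [[]] - the stored entry is empty too
        anagram.append("act")
        result.append(list(anagram))   # copy, like new ArrayList<>(anagram)
        anagram.pop()
        print(result)                  # [[], ['act']] - the copy survives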

  • SQL Server 2005, wide indexes, computed columns, and sargable queries

    - by luksan
    In my database, assume we have a table defined as follows:

        CREATE TABLE [Chemical](
            [ChemicalId] int NOT NULL IDENTITY(1,1) PRIMARY KEY,
            [Name] nvarchar(max) NOT NULL,
            [Description] nvarchar(max) NULL
        )

    The value for Name can be very large, so we must use nvarchar(max). Unfortunately, we want to create an index on this column, but nvarchar(max) is not supported inside an index. So we create the following computed column and associated index based upon it:

        ALTER TABLE [Chemical] ADD [Name_Indexable] AS LEFT([Name], 20)
        CREATE INDEX [IX_Name] ON [Chemical]([Name_Indexable]) INCLUDE([Name])

    The index will not be unique, but we can enforce uniqueness via a trigger. If we perform the following query, the execution plan results in an index scan, which is not what we want:

        SELECT [ChemicalId], [Name], [Description]
        FROM [Chemical]
        WHERE [Name]='[1,1''-Bicyclohexyl]-2-carboxylic acid, 4'',5-dihydroxy-2'',3-dimethyl-5'',6-bis[(1-oxo-2-propen-1-yl)oxy]-, methyl ester'

    However, if we modify the query to make it "sargable" by adding a predicate on the computed column, the execution plan results in an index seek, which is what we want:

        SELECT [ChemicalId], [Name], [Description]
        FROM [Chemical]
        WHERE [Name_Indexable]='[1,1''-Bicyclohexyl]-'
        AND [Name]='[1,1''-Bicyclohexyl]-2-carboxylic acid, 4'',5-dihydroxy-2'',3-dimethyl-5'',6-bis[(1-oxo-2-propen-1-yl)oxy]-, methyl ester'

    Is this a good solution if we control the format of all queries executed against the database via our middle tier? Is there a better way? Is this a major kludge? Should we be using full-text indexing?

  • How to get the real bounds with Google Maps when fully zoomed out

    - by brad
    I have a map that shows location points based on the bounds of the map: any time the map is moved or zoomed, I find the bounds and query for locations that fall within them. Unfortunately, I'm unable to display all my locations when fully zoomed out. The reason: Google Maps reports the min/max longitude as whatever is at the edge of the map, but if you zoom out enough, you can get a longitudinal range that excludes visible locations. For instance, zoom your map so that you see North America twice, on the far left and far right. The min/max longitudes are around -36.5625 to 170.15625, but this almost completely excludes North America, which lies in the -180 to -60 range. Obviously this is bothersome, as you can actually see the continent North America (twice), but when I query for locations in the range Google Maps gives me, North America isn't returned. My code for finding the min/max longitude is:

        bounds = gmap.getBounds();
        min_lng = bounds.getSouthWest().lng()
        max_lng = bounds.getNorthEast().lng()

    Has anyone encountered this, and can anyone suggest a workaround? Off the top of my head I can only think of a hack: checking the zoom level and hardcoding the min/max to -180/180 if necessary, which is definitely unacceptable.
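
    One workaround (a sketch, independent of the Maps API): treat the longitude filter as either one range or two, splitting when the interval wraps the antimeridian; and when the viewport is 360 degrees or wider - the world visible more than once - the only meaningful filter is the full -180..180 range, which is essentially the zoom-level check the question mentions:

        # Python: turn a possibly-wrapped longitude interval into queryable ranges
        def lng_ranges(min_lng, max_lng, whole_world=False):
            if whole_world:
                return [(-180.0, 180.0)]          # map shows >= 360 degrees of longitude
            if min_lng <= max_lng:
                return [(min_lng, max_lng)]       # ordinary interval
            return [(min_lng, 180.0), (-180.0, max_lng)]  # wraps the antimeridian

        print(lng_ranges(-36.5625, 170.15625))    # one range, as reported by the API
        print(lng_ranges(170.0, -36.0))           # two ranges after the split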

  • Ruby comma operator and step question

    - by ryan_m
    So, I'm trying to learn Ruby by doing some Project Euler questions, and I've run into a couple of things I can't explain, with the comma "operator" in the middle of both. I haven't been able to find good documentation - maybe I'm just not using Google as I should, but good Ruby documentation seems a little sparse...

    1: How do you describe how this is working? The first snippet is the Ruby code I don't understand; the second is the code I wrote that does the same thing, but only after painstakingly tracing the first:

        # what is this doing?
        cur, nxt = nxt, cur + nxt

        # this, apparently, but how to describe the above?
        nxt = cur + nxt
        cur = nxt - cur

    2: In the following example, how do you describe what the line with 'step' is doing? From what I can gather, the step command works like (range).step(step_size), but this seems to be doing (starting_point).step(ending_point, step_size). Am I right with this assumption? Where do I find good documentation of this?

        #/usr/share/doc/ruby1.9.1-examples/examples/sieve.rb
        # sieve of Eratosthenes
        max = Integer(ARGV.shift || 100)
        sieve = []
        for i in 2 .. max
          sieve[i] = i
        end
        for i in 2 .. Math.sqrt(max)
          next unless sieve[i]
          (i*i).step(max, i) do |j|
            sieve[j] = nil
          end
        end
        puts sieve.compact.join(", ")
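
    For what it's worth, Ruby calls the first construct parallel (multiple) assignment rather than a comma operator: the entire right-hand side is evaluated before anything is assigned, which is why cur + nxt still sees the old cur. And yes, Integer#step(limit, step) counts from the receiver up to limit in increments of step. Python's tuple assignment behaves the same way (sketch):

        # Python: parallel assignment evaluates the RHS before binding, as in Ruby
        cur, nxt = 1, 1
        for _ in range(5):
            cur, nxt = nxt, cur + nxt  # old cur/nxt are used on the right
        print(cur, nxt)                # 8 13 - a Fibonacci step with no temp variable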
