Search Results

Search found 9788 results on 392 pages for 'character limit'.

  • JavaFX data bind not working with file read

    - by rjha94
    Hi I am using a very simple JavaFx client to upload files. I read the file in chunks of 1 MB (using Byte Buffer) and upload using multi part POST to a PHP script. I want to update the progress bar of my client to show progress after each chunk is uploaded. The calculations for upload progress look correct but the progress bar is not updated. I am using bind keyword. I am no JavaFx expert and I am hoping some one can point out my mistake. I am writing this client to fix the issues posted here (http://stackoverflow.com/questions/2447837/upload-1gb-files-using-chunking-in-php) /* * Main.fx * * Created on Mar 16, 2010, 1:58:32 PM */ package webgloo; import javafx.stage.Stage; import javafx.scene.Scene; import javafx.scene.layout.VBox; import javafx.geometry.VPos; import javafx.scene.control.Button; import javafx.scene.control.Label; import javafx.scene.layout.HBox; import javafx.scene.layout.LayoutInfo; import javafx.scene.text.Font; import javafx.scene.control.ProgressBar; import java.io.FileInputStream; /** * @author rajeev jha */ var totalBytes:Float = 1; var bytesWritten:Float = 0; var progressUpload:Float; var uploadURI = "http://www.test1.com/test/receiver.php"; var postMax = 1024000 ; function uploadFile(inputFile: java.io.File) { totalBytes = inputFile.length(); bytesWritten = 1; println("To-Upload - {totalBytes}"); var is = new FileInputStream(inputFile); var fc = is.getChannel(); //1 MB byte buffer var chunkCount = 0; var bb = java.nio.ByteBuffer.allocate(postMax); while(fc.read(bb) >= 0){ println("loop:start"); bb.flip(); var limit = bb.limit(); var bytes = GigaFileUploader.getBufferBytes(bb.array(), limit); var content = GigaFileUploader.createPostContent(inputFile.getName(), bytes); GigaFileUploader.upload(uploadURI, content); bytesWritten = bytesWritten + limit ; progressUpload = 1.0 * bytesWritten / totalBytes ; println("Progress is - {progressUpload}"); chunkCount++; bb.clear(); println("loop:end"); } } var label = Label { font: Font { size: 12 } text: bind "Uploaded - {bytesWritten * 100 / (totalBytes)}%" layoutInfo: LayoutInfo { vpos: VPos.CENTER maxWidth: 120 minWidth: 120 width: 120 height: 30 } } def jFileChooser = new javax.swing.JFileChooser(); jFileChooser.setApproveButtonText("Upload"); var button = Button { text: "Upload" layoutInfo: LayoutInfo { width: 100 height: 30 } action: function () { var outputFile = jFileChooser.showOpenDialog(null); if (outputFile == javax.swing.JFileChooser.APPROVE_OPTION) { uploadFile(jFileChooser.getSelectedFile()); } } } var hBox = HBox { spacing: 10 content: [label, button] } var progressBar = ProgressBar { progress: bind progressUpload layoutInfo: LayoutInfo { width: 240 height: 30 } } var vBox = VBox { spacing: 10 content: [hBox, progressBar] layoutX: 10 layoutY: 10 } Stage { title: "Upload File" width: 270 height: 120 scene: Scene { content: [vBox] } resizable: false }
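
    For reference, here is a minimal sketch in Python (not the poster's JavaFX; the URL comes from the question, but the requests library and the "file" form-field name are assumptions) of the same chunked-upload-with-progress technique: read the file in roughly 1 MB pieces, POST each piece, and report progress after every chunk. In the JavaFX client the likely culprit is that the upload loop blocks the UI thread, so the bound label and progress bar never get a chance to repaint.

      import os
      import requests  # assumption: the requests library is available

      CHUNK_SIZE = 1024 * 1000                      # ~1 MB, matching postMax in the post
      UPLOAD_URI = "http://www.test1.com/test/receiver.php"  # from the question

      def upload_file(path, on_progress):
          total = os.path.getsize(path)
          sent = 0
          with open(path, "rb") as fh:
              while True:
                  chunk = fh.read(CHUNK_SIZE)
                  if not chunk:
                      break
                  # one multipart POST per chunk; "file" is an assumed field name
                  requests.post(UPLOAD_URI, files={"file": (os.path.basename(path), chunk)})
                  sent += len(chunk)
                  on_progress(sent / total)         # report progress after each chunk

      upload_file("big.iso", lambda p: print(f"uploaded {p:.1%}"))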

    Read the article

  • Terminal services and memory limits

    - by Mark Wassell
    Is there a way in Terminal Services to set limits on memory-related parameters for a process, for example the working set size and, possibly, if it makes sense, the total virtual memory allocation for the session? To turn the question around: we have an application that cannot allocate as much virtual memory when running on a terminal server as it can on a desktop PC (both of which I would expect to limit user-mode address space to 2 GB), and I was wondering whether there is an additional limit for processes or users on a terminal server. Perhaps even 2 GB per user rather than per process.

    Read the article

  • Failed to Upload a File

    - by CrazyNick
    User 'X' is the site collection owner. When he tries to upload a 500 KB file into a document library, he gets the error "The server has aborted your upload. The files selected may exceed the server's upload size limit. If you are transfering a large group of files, try uploading fewer at a time." However, web application owners are able to upload the same file. What could the issue be? Any thoughts? For reference: the per-file upload size limit is 5 MB, the site quota template is set to 50 MB, and the used site quota is 10 MB.

    Read the article

  • Why would you ever set MaxKeepAliveRequests to anything but unlimited?

    - by Jonathon Reinhart
    Apache's KeepAliveTimeout exists to close a keep-alive connection if a new request is not issued within a given period of time. Provided the user does not close his browser/tab, this timeout (usually 5-15 seconds) is what eventually closes most keep-alive connections, and prevents server resources from being wasted by holding on to connections indefinitely. Now the MaxKeepAliveRequests directive puts a limit on the number of HTTP requests that a single TCP connection (left open due to KeepAlive) will serve. Setting this to 0 means an unlimited number of requests are allowed. Why would you ever set this to anything but "unlimited"? Provided a client is still actively making requests, what harm is there in letting them happen on the same keep-alive connection? Once the limit is reached, the requests still come in, just on a new connection. The way I see it, there is no point in ever limiting this. What am I missing?
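
    To make the keep-alive mechanics concrete, here is a small sketch (Python standard library; the host is a placeholder) of what the directive limits: several HTTP requests travelling over a single TCP connection. MaxKeepAliveRequests caps how many such requests Apache will answer before closing that socket, while KeepAliveTimeout closes it after idle time.

      import http.client

      conn = http.client.HTTPConnection("www.example.com", 80, timeout=10)
      for i in range(5):
          conn.request("GET", "/")                  # reuses the same TCP connection
          resp = conn.getresponse()
          resp.read()                               # drain the body so the socket can be reused
          print(i, resp.status, resp.getheader("Connection"))
      conn.close()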

    Read the article

  • Postgres error messages in Apache error log

    - by Warren Pena
    I'm running Apache 2.2, PHP 5.2, and Postgres 8.2 on Windows, and I'm seeing something funky in Apache's error.log file. Occasionally, I'll see the message "row number -1 is out of range 0..-1" pop up over and over. Unlike all the other lines in that log file, there's no timestamp or log level. Just that exact string. Googling around, it appears that message is, character for character, a common Postgres error message, but is not an Apache error. I've seen it happen multiple times, and on several different servers. I can't seem to reproduce it, though. I've tried throwing all sorts of error ridden database queries and result set inquiries at Postgres via PHP, and none of them seem to trigger that line being written to the log file. Is it possible for Postgres errors to be ending up in my Apache log file, and if so, how? What would trigger an error message like that? Thanks!

    Read the article

  • High-load MySQL on Debian server stops every day. Why?

    - by Oleg Abrazhaev
    I have Debian server with 32 gb memory. And there is apache2, memcached and nginx on this server. Memory load always on maximum. Only 500m free. Most memory leak do MySql. Apache only 70 clients configured, other services small memory usage. When mysql use all memory it stops. And nothing works, need mysql reboot. Mysql configured use maximum 24 gb memory. I have hight weight InnoDB bases. (400000 rows, 30 gb). And on server multithread daemon, that makes many inserts in this tables, thats why InnoDB. There is my mysql config. [mysqld] # # * Basic Settings # default-time-zone = "+04:00" user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp language = /usr/share/mysql/english skip-external-locking default-time-zone='Europe/Moscow' # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. # # * Fine Tuning # #low_priority_updates = 1 concurrent_insert = ALWAYS wait_timeout = 600 interactive_timeout = 600 #normal key_buffer_size = 2024M #key_buffer_size = 1512M #70% hot cache key_cache_division_limit= 70 #16-32 max_allowed_packet = 32M #1-16M thread_stack = 8M #40-50 thread_cache_size = 50 #orderby groupby sort sort_buffer_size = 64M #same myisam_sort_buffer_size = 400M #temp table creates when group_by tmp_table_size = 3000M #tables in memory max_heap_table_size = 3000M #on disk open_files_limit = 10000 table_cache = 10000 join_buffer_size = 5M # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #myisam_use_mmap = 1 max_connections = 200 thread_concurrency = 8 # # * Query Cache Configuration # #more ignored query_cache_limit = 50M query_cache_size = 210M #on query cache query_cache_type = 1 # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. #log = /var/log/mysql/mysql.log # # Error logging goes to syslog. This is a Debian improvement :) # # Here you can see queries with especially long duration log_slow_queries = /var/log/mysql/mysql-slow.log long_query_time = 1 log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log server-id = 1 log-bin = /var/lib/mysql/mysql-bin #replicate-do-db = gate log-bin-index = /var/lib/mysql/mysql-bin.index log-error = /var/lib/mysql/mysql-bin.err relay-log = /var/lib/mysql/relay-bin relay-log-info-file = /var/lib/mysql/relay-bin.info relay-log-index = /var/lib/mysql/relay-bin.index binlog_do_db = 24avia expire_logs_days = 10 max_binlog_size = 100M read_buffer_size = 4024288 innodb_buffer_pool_size = 5000M innodb_flush_log_at_trx_commit = 2 innodb_thread_concurrency = 8 table_definition_cache = 2000 group_concat_max_len = 16M #binlog_do_db = gate #binlog_ignore_db = include_database_name # # * BerkeleyDB # # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12. #skip-bdb # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # You might want to disable InnoDB to shrink the mysqld process by circa 100MB. #skip-innodb # # * Security Features # # Read the manual, too, if you want chroot! 
# chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 500M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 32M key_buffer_size = 512M # # * NDB Cluster # # See /usr/share/doc/mysql-server-*/README.Debian for more information. # # The following configuration is read by the NDB Data Nodes (ndbd processes) # not from the NDB Management Nodes (ndb_mgmd processes). # # [MYSQL_CLUSTER] # ndb-connectstring=127.0.0.1 # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/ Please, help me make it stable. Memory used /etc/mysql # free total used free shared buffers cached Mem: 32930800 32766424 164376 0 139208 23829196 -/+ buffers/cache: 8798020 24132780 Swap: 33553328 44660 33508668 Maybe my problem not in memory, but MySQL stops every day. As you can see, cache memory free 24 gb. Thank to Michael Hampton? for correction. Load overage on server 3.5. Maybe hdd or another problem? Maybe my config not optimal for 30gb InnoDB ? I'm already try mysqltuner and tunung-primer.sh , but they marked all green. Mysqltuner output mysqltuner >> MySQLTuner 1.0.1 - Major Hayden <[email protected]> >> Bug reports, feature requests, and downloads at http://mysqltuner.com/ >> Run with '--help' for additional options and output filtering -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.5.24-9-log [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 112G (Tables: 1528) [--] Data in InnoDB tables: 39G (Tables: 340) [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17) [!!] Total fragmented tables: 344 -------- Performance Metrics ------------------------------------------------- [--] Up for: 8h 18m 33s (14M q [478.333 qps], 259K conn, TX: 9B, RX: 5B) [--] Reads / Writes: 84% / 16% [--] Total buffers: 10.5G global + 81.1M per thread (200 max threads) [OK] Maximum possible memory usage: 26.3G (83% of installed RAM) [OK] Slow queries: 1% (259K/14M) [!!] Highest connection usage: 100% (201/200) [OK] Key buffer size / total MyISAM indexes: 1.5G/5.6G [OK] Key buffer hit rate: 100.0% (6B cached / 1M reads) [OK] Query cache efficiency: 74.3% (8M cached / 11M selects) [OK] Query cache prunes per day: 0 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 247K sorts) [!!] Joins performed without indexes: 106025 [!!] Temporary tables created on disk: 49% (351K on disk / 715K total) [OK] Thread cache hit rate: 99% (249 created / 259K connections) [!!] Table cache hit rate: 15% (2K open / 13K opened) [OK] Open file limit used: 15% (3K/20K) [OK] Table locks acquired immediately: 99% (4M immediate / 4M locks) [!!] 
InnoDB data size / buffer pool: 39.4G/5.9G -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance MySQL started within last 24 hours - recommendations may be inaccurate Reduce or eliminate persistent connections to reduce connection usage Adjust your join queries to always utilize indexes Temporary table size is already large - reduce result set size Reduce your SELECT DISTINCT queries without LIMIT clauses Increase table_cache gradually to avoid file descriptor limits Variables to adjust: max_connections (> 200) wait_timeout (< 600) interactive_timeout (< 600) join_buffer_size (> 5.0M, or always use indexes with joins) table_cache (> 10000) innodb_buffer_pool_size (>= 39G) Mysql primer output -- MYSQL PERFORMANCE TUNING PRIMER -- - By: Matthew Montgomery - MySQL Version 5.5.24-9-log x86_64 Uptime = 0 days 8 hrs 20 min 50 sec Avg. qps = 478 Total Questions = 14369568 Threads Connected = 16 Warning: Server has not been running for at least 48hrs. It may not be safe to use these recommendations To find out more information on how each of these runtime variables effects performance visit: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html Visit http://www.mysql.com/products/enterprise/advisors.html for info about MySQL's Enterprise Monitoring and Advisory Service SLOW QUERIES The slow query log is enabled. Current long_query_time = 1.000000 sec. You have 260626 out of 14369701 that take longer than 1.000000 sec. to complete Your long_query_time seems to be fine BINARY UPDATE LOG The binary update log is enabled Binlog sync is not enabled, you could loose binlog records during a server crash WORKER THREADS Current thread_cache_size = 50 Current threads_cached = 45 Current threads_per_sec = 0 Historic threads_per_sec = 0 Your thread_cache_size is fine MAX CONNECTIONS Current max_connections = 200 Current threads_connected = 11 Historic max_used_connections = 201 The number of used connections is 100% of the configured maximum. 
You should raise max_connections INNODB STATUS Current InnoDB index space = 214 M Current InnoDB data space = 39.40 G Current InnoDB buffer pool free = 0 % Current innodb_buffer_pool_size = 5.85 G Depending on how much space your innodb indexes take up it may be safe to increase this value to up to 2 / 3 of total system memory MEMORY USAGE Max Memory Ever Allocated : 23.46 G Configured Max Per-thread Buffers : 15.84 G Configured Max Global Buffers : 7.54 G Configured Max Memory Limit : 23.39 G Physical Memory : 31.40 G Max memory limit seem to be within acceptable norms KEY BUFFER Current MyISAM index space = 5.61 G Current key_buffer_size = 1.47 G Key cache miss rate is 1 : 5578 Key buffer free ratio = 77 % Your key_buffer_size seems to be fine QUERY CACHE Query cache is enabled Current query_cache_size = 200 M Current query_cache_used = 101 M Current query_cache_limit = 50 M Current Query cache Memory fill ratio = 50.59 % Current query_cache_min_res_unit = 4 K MySQL won't cache query results that are larger than query_cache_limit in size SORT OPERATIONS Current sort_buffer_size = 64 M Current read_rnd_buffer_size = 256 K Sort buffer seems to be fine JOINS Current join_buffer_size = 5.00 M You have had 106606 queries where a join could not use an index properly You have had 8 joins without keys that check for key usage after each row join_buffer_size >= 4 M This is not advised You should enable "log-queries-not-using-indexes" Then look for non indexed joins in the slow query log. OPEN FILES LIMIT Current open_files_limit = 20210 files The open_files_limit should typically be set to at least 2x-3x that of table_cache if you have heavy MyISAM usage. Your open_files_limit value seems to be fine TABLE CACHE Current table_open_cache = 10000 tables Current table_definition_cache = 2000 tables You have a total of 1910 tables You have 2151 open tables. The table_cache value seems to be fine TEMP TABLES Current max_heap_table_size = 2.92 G Current tmp_table_size = 2.92 G Of 366426 temp tables, 49% were created on disk Perhaps you should increase your tmp_table_size and/or max_heap_table_size to reduce the number of disk-based temporary tables Note! BLOB and TEXT columns are not allow in memory tables. If you are using these columns raising these values might not impact your ratio of on disk temp tables. TABLE SCANS Current read_buffer_size = 3 M Current table scan ratio = 2846 : 1 read_buffer_size seems to be fine TABLE LOCKING Current Lock Wait ratio = 1 : 185 You may benefit from selective use of InnoDB. If you have long running SELECT's against MyISAM tables and perform frequent updates consider setting 'low_priority_updates=1'
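
    The 26.3 GB "maximum possible memory usage" that mysqltuner reports follows from the posted config: the global buffers plus the per-thread buffers multiplied by max_connections. A rough sketch of that arithmetic (values in MB, taken from the config above; the formula is approximate):

      # global buffers: key_buffer + innodb_buffer_pool + query_cache + max in-memory tmp table
      global_mb = 2024 + 5000 + 210 + 3000
      # per-thread buffers: sort + join + read + read_rnd buffers + thread_stack
      per_thread_mb = 64 + 5 + 4 + 0.25 + 8
      max_connections = 200

      worst_case_gb = (global_mb + per_thread_mb * max_connections) / 1024
      print(round(worst_case_gb, 1), "GB")          # roughly the 26 GB figure the tuner warns about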

    Read the article

  • htaccess Redirect / RedirectMatch with URLs that contain Special / Encoded Characters

    - by dSquared
    I'm currently in the process of applying a variety of 301 redirects in an .htaccess file for a website that recently changed its structure. Everything is working as expected, except for URLs that contain special characters, for these I am getting 404 errors. For example the following directives that have a registered trademark symbol (®) bring up 404 pages: RedirectMatch 301 ^/directory/link-with®-special-character(/)?$ somelink.com RedirectMatch 301 ^/directory/link-with%c2%ae-special-character(/)?$ somelink.com I've also tried using Redirect, RewriteRule and surrounding the urls with double quotes and nothing seems to work. Does anyone know what might be happening or the proper way to handle these types of directives? Any help is greatly appreciated.
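
    A quick way to compare the two spellings of that URL is to percent-encode and decode them. The sketch below (Python) only illustrates the round trip; %c2%ae is the UTF-8 percent-encoded form of the ® character, which is probably why one of the two patterns never matches the form Apache actually compares against.

      from urllib.parse import quote, unquote

      print(quote("link-with®-special-character"))         # link-with%C2%AE-special-character
      print(unquote("link-with%c2%ae-special-character"))  # link-with®-special-character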

    Read the article

  • OpenVPN bandwidth restrictions and CPU power needed

    - by user197664
    In OpenVPN, is there a way to set a maximum limit on data and speed per user? Say a user "reptar" is abusing the VPN and I want to limit his or her speed and/or data; how would one go about doing this? Would I need to know the IP address of the abuser? Also, I have seen articles around the internet about turning a Raspberry Pi into a VPN server. If I did that, how many users would the device be able to handle at a given time? I believe it has 512 MB of RAM and clocks at around 700 MHz.

    Read the article

  • $HTTP_HOST multiple rewrite

    - by nrivoli
    I have Apache 2.2 on Ubuntu 12. I want to load a different environment based on HTTP_HOST. This works for the first domain only; after that I get error 500, "Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.", and I am not able to see my error... RewriteEngine on RewriteBase / RewriteCond %{HTTPS} off RewriteRule ^ - [F] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{HTTP_HOST} ^server-dev.domain.tld$ [NC] RewriteRule ^(.*)$ /api/dev.php/$1 [L] RewriteCond %{HTTP_HOST} ^server-qa.domain.tld$ [NC] RewriteRule ^(.*)$ /api/qa.php/$1 [L] #RewriteCond %{HTTP_HOST} ^localhost$ [NC] #RewriteRule ^(.*)$ /api/localhost.php/$1 [L] I am accessing https://server-dev.domain.tl/api/whatever and https://server-qa.domain.tl/api/whatever

    Read the article

  • Google Search Appliance: Limiting Number of Results

    - by senfo
    I am attempting to limit the number of results that are displayed as a result of dynamic result clustering on the Google Search Appliance. I've looked through the XSLT, but I've only come across the following two user-modifiable options: <!-- *** dyanmic result cluster options *** --> <xsl:variable name="show_res_clusters">1</xsl:variable> <xsl:variable name="res_cluster_position">right</xsl:variable> Are there more options that I'm unaware of that I could use to limit the results? Is there another way that I'm missing?

    Read the article

  • Query doesn't use a covering-index when applicable

    - by Dor
    I've downloaded the employees database and executed some queries for benchmarking purposes. Then I noticed that one query didn't use a covering index, although there was a corresponding index that I created earlier. Only when I added a FORCE INDEX clause to the query, it used a covering index. I've uploaded two files, one is the executed SQL queries and the other is the results. Can you tell why the query uses a covering-index only when a FORCE INDEX clause is added? The EXPLAIN shows that in both cases, the index dept_no_from_date_idx is being used anyway. To adapt myself to the standards of SO, I'm also writing the content of the two files here: The SQL queries: USE employees; /* Creating an index for an index-covered query */ CREATE INDEX dept_no_from_date_idx ON dept_emp (dept_no, from_date); /* Show `dept_emp` table structure, indexes and generic data */ SHOW TABLE STATUS LIKE "dept_emp"; DESCRIBE dept_emp; SHOW KEYS IN dept_emp; /* The EXPLAIN shows that the subquery doesn't use a covering-index */ EXPLAIN SELECT SQL_NO_CACHE * FROM dept_emp INNER JOIN ( /* The subquery should use a covering index, but isn't */ SELECT SQL_NO_CACHE emp_no, dept_no FROM dept_emp WHERE dept_no="d001" ORDER BY from_date DESC LIMIT 20000,50 ) AS `der` USING (`emp_no`, `dept_no`); /* The EXPLAIN shows that the subquery DOES use a covering-index, thanks to the FORCE INDEX clause */ EXPLAIN SELECT SQL_NO_CACHE * FROM dept_emp INNER JOIN ( /* The subquery use a covering index */ SELECT SQL_NO_CACHE emp_no, dept_no FROM dept_emp FORCE INDEX(dept_no_from_date_idx) WHERE dept_no="d001" ORDER BY from_date DESC LIMIT 20000,50 ) AS `der` USING (`emp_no`, `dept_no`); The results: -------------- /* Creating an index for an index-covered query */ CREATE INDEX dept_no_from_date_idx ON dept_emp (dept_no, from_date) -------------- Query OK, 331603 rows affected (33.95 sec) Records: 331603 Duplicates: 0 Warnings: 0 -------------- /* Show `dept_emp` table structure, indexes and generic data */ SHOW TABLE STATUS LIKE "dept_emp" -------------- +----------+--------+---------+------------+--------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+-----------------+----------+----------------+---------+ | Name | Engine | Version | Row_format | Rows | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time | Update_time | Check_time | Collation | Checksum | Create_options | Comment | +----------+--------+---------+------------+--------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+-----------------+----------+----------------+---------+ | dept_emp | InnoDB | 10 | Compact | 331883 | 36 | 12075008 | 0 | 21544960 | 29360128 | NULL | 2010-05-04 13:07:49 | NULL | NULL | utf8_general_ci | NULL | | | +----------+--------+---------+------------+--------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+-----------------+----------+----------------+---------+ 1 row in set (0.47 sec) -------------- DESCRIBE dept_emp -------------- +-----------+---------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +-----------+---------+------+-----+---------+-------+ | emp_no | int(11) | NO | PRI | NULL | | | dept_no | char(4) | NO | PRI | NULL | | | from_date | date | NO | | NULL | | | 
to_date | date | NO | | NULL | | +-----------+---------+------+-----+---------+-------+ 4 rows in set (0.05 sec) -------------- SHOW KEYS IN dept_emp -------------- +----------+------------+-----------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+ | Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | +----------+------------+-----------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+ | dept_emp | 0 | PRIMARY | 1 | emp_no | A | 331883 | NULL | NULL | | BTREE | | | dept_emp | 0 | PRIMARY | 2 | dept_no | A | 331883 | NULL | NULL | | BTREE | | | dept_emp | 1 | emp_no | 1 | emp_no | A | 331883 | NULL | NULL | | BTREE | | | dept_emp | 1 | dept_no | 1 | dept_no | A | 7 | NULL | NULL | | BTREE | | | dept_emp | 1 | dept_no_from_date_idx | 1 | dept_no | A | 13 | NULL | NULL | | BTREE | | | dept_emp | 1 | dept_no_from_date_idx | 2 | from_date | A | 165941 | NULL | NULL | | BTREE | | +----------+------------+-----------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+ 6 rows in set (0.23 sec) -------------- /* The EXPLAIN shows that the subquery doesn't use a covering-index */ EXPLAIN SELECT SQL_NO_CACHE * FROM dept_emp INNER JOIN ( /* The subquery should use a covering index, but isn't */ SELECT SQL_NO_CACHE emp_no, dept_no FROM dept_emp WHERE dept_no="d001" ORDER BY from_date DESC LIMIT 20000,50 ) AS `der` USING (`emp_no`, `dept_no`) -------------- +----+-------------+------------+--------+----------------------------------------------+-----------------------+---------+------------------------+-------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+------------+--------+----------------------------------------------+-----------------------+---------+------------------------+-------+-------------+ | 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 50 | | | 1 | PRIMARY | dept_emp | eq_ref | PRIMARY,emp_no,dept_no,dept_no_from_date_idx | PRIMARY | 16 | der.emp_no,der.dept_no | 1 | | | 2 | DERIVED | dept_emp | ref | dept_no,dept_no_from_date_idx | dept_no_from_date_idx | 12 | | 21402 | Using where | +----+-------------+------------+--------+----------------------------------------------+-----------------------+---------+------------------------+-------+-------------+ 3 rows in set (0.09 sec) -------------- /* The EXPLAIN shows that the subquery DOES use a covering-index, thanks to the FORCE INDEX clause */ EXPLAIN SELECT SQL_NO_CACHE * FROM dept_emp INNER JOIN ( /* The subquery use a covering index */ SELECT SQL_NO_CACHE emp_no, dept_no FROM dept_emp FORCE INDEX(dept_no_from_date_idx) WHERE dept_no="d001" ORDER BY from_date DESC LIMIT 20000,50 ) AS `der` USING (`emp_no`, `dept_no`) -------------- +----+-------------+------------+--------+----------------------------------------------+-----------------------+---------+------------------------+-------+--------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+------------+--------+----------------------------------------------+-----------------------+---------+------------------------+-------+--------------------------+ | 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 50 | | | 1 | PRIMARY 
| dept_emp | eq_ref | PRIMARY,emp_no,dept_no,dept_no_from_date_idx | PRIMARY | 16 | der.emp_no,der.dept_no | 1 | | | 2 | DERIVED | dept_emp | ref | dept_no_from_date_idx | dept_no_from_date_idx | 12 | | 37468 | Using where; Using index | +----+-------------+------------+--------+----------------------------------------------+-----------------------+---------+------------------------+-------+--------------------------+ 3 rows in set (0.05 sec) Bye

    Read the article

  • size of extent on LVM2

    - by piotrek
    In LVM1 there was a limit of 65k extents, so the extent size had to be chosen carefully as a trade-off between wasted space on partitions (too big an extent) and the maximum possible size of a logical volume (too small an extent). In LVM2 (according to http://docstore.mik.ua/manuals/hp-ux/en/5992-4589/apa.html) the limit is ~16 million extents, so the default size of 4 MB gives ~60 TB of LV size. So is there any point in making the extent larger than 4-16 MB on a desktop? Is there any performance degradation or other cost to having a large number of extents?
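
    The size ceiling is simply extent size times maximum extent count, so the trade-off can be checked in a couple of lines (a sketch; the 65k and ~16 million figures are the limits cited above):

      MIB = 1024 ** 2

      def max_lv_size(extent_mib, max_extents):
          return extent_mib * MIB * max_extents

      print(max_lv_size(4, 65536) / 1024**4, "TiB")         # LVM1: 4 MiB extents -> 0.25 TiB
      print(max_lv_size(4, 16 * 1024**2) / 1024**4, "TiB")  # LVM2: 4 MiB extents -> 64 TiB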

    Read the article

  • Keep printed documents on Windows Server 2008 R2 Print Server

    - by MadBoy
    I've set up Windows 2008 R2 as a print server. I have checked the Keep printed documents option for all printers and it works fine: users print their stuff and I can see what they are doing. The problem is that everyone sees all documents that are being printed, which is not always the best idea. Is there a way to:
    1. Limit print jobs so they are only seen by the people who printed them and by admins.
    2. Limit print jobs so they are only seen on the server (from within Server Manager), so that print jobs disappear from the user queue when the job is done (but admins are still able to see them and track what was printed and when, for reporting purposes).
    3. Create some kind of access level list so that some people can see everything getting printed, some people see only their own print jobs, and some people see nothing :-)

    Read the article

  • How to copy files from shadow copy with long source path

    - by Jake
    The files and folders in my shared network drive (set up with DFS) were mass deleted. Currently I am trying to recover the files from the shadow copy "Previous Versions". The problem is that thousands of files are deeply nested with long paths, making the file path too long; when copying, Windows shows the dialog "Source Path Too Long". My guess is that the file paths just barely fit within the limit when saved to the network drive, but the shadow copy service appends the date and time to the folders, so the path character limit is exceeded. How else can I copy the files from the shadow copy?
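
    One workaround worth sketching: Windows APIs accept paths longer than MAX_PATH when they are given the \\?\ extended-length prefix, and Python passes such paths straight through. The snippet below is only a sketch under that assumption; the shadow-copy path shown is a made-up example, and the real one would come from the Previous Versions UI or vssadmin.

      import shutil

      # \\?\ tells Windows to skip the 260-character MAX_PATH check;
      # UNC paths take the form \\?\UNC\server\share\...
      src = r"\\?\UNC\fileserver\share\@GMT-2012.06.01-12.00.00\very\deep\folder"  # hypothetical
      dst = r"\\?\D:\restore\very\deep\folder"

      shutil.copytree(src, dst)   # copies the tree without tripping the path-length dialog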

    Read the article

  • Need Recommendations: Network Software and Hardware Setup for small firm

    - by Rogue
    I will be starting a small graphic design firm soon, with 20 employees, and therefore need software to manage the network. I have bought a bulk license of Windows 7. I have a spare computer which can act as a server if necessary, but it's an ancient Dell machine (Pentium III). If required I would purchase an extra machine, but I would like to avoid unnecessary costs at start-up. The main functions I would like to perform are:
    - Monitor/control network traffic and internet usage, and restrict access to certain websites
    - Alerts when certain software is accessed, and when someone tries to tamper with privileges
    - Ability to view the desktop of any computer at any given time
    - Limit access to certain hardware like USB ports, etc.
    - Limit access to folders on the computer
    - Log/report of all actions, including keystrokes, performed on any computer
    - Local network chat and talk client
    - Collaboration and work logs
    Is there any software available to do all of the above, and is any additional hardware required besides network switches, network cards and CAT5e cables? Any other recommendations besides the above-mentioned hardware setup?

    Read the article

  • Oracle account locked

    - by Priya
    Hi all, I had a user in my Oracle DB with some password 'x' for some time. Without notifying my team, I changed the password to 'y'. My team members then tried to connect to the machine with the old password 'x', and as the failed-login limit was set, the user account got locked. I know how to set the resource limit for the login. It would be helpful if anyone could help me find out who has tried to connect to the DB; as an administrator I would like to see where the connections came from. Thanks in advance. Priya.R

    Read the article

  • iptables logging to a different file via syslog-ng

    - by rahrahruby
    I have the following configuration in my iptables and syslog-ng files:
    IPTABLES
    -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 222 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 3306 -j ACCEPT
    -A INPUT -j DROP
    -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
    SYSLOG-NG
    destination d_iptables { file("/var/log/iptables/iptables.log"); };
    filter f_iptables { facility(kern) and match("IN=" value("MESSAGE")) and match("OUT=" value("MESSAGE")); };
    filter f_messages { level(info,notice,warn) and not facility(auth,authpriv,cron,daemon,mail,news) and not filter(f_iptables); };
    log { source(s_src); filter(f_iptables); destination(d_iptables); };
    I restart syslog-ng and the log is not written.

    Read the article

  • Hebrew (utf8) characters in windows cmd console

    - by epeleg
    I previously asked this question: utf8 hebrew on mysql console on debian (via putty on windows). I managed to get it working by starting mysql with --default-character-set=utf8 and setting PuTTY to show UTF-8 as well. Now I need to do the same, but on a Windows server. The data is the same, but when I start mysql with --default-character-set=utf8 I see multiple garbage characters where I am supposed to see Hebrew. I think the problem is with the setup of the Windows cmd console, which does not properly display UTF-8. Any ideas?

    Read the article

  • Cisco IOS ACL types

    - by cjavapro
    The built-in command help lists the access list types by number range:
    router1(config)#access-list ?
      <1-99>            IP standard access list
      <100-199>         IP extended access list
      <1100-1199>       Extended 48-bit MAC address access list
      <1300-1999>       IP standard access list (expanded range)
      <200-299>         Protocol type-code access list
      <2000-2699>       IP extended access list (expanded range)
      <700-799>         48-bit MAC address access list
      dynamic-extended  Extend the dynamic ACL absolute timer
      rate-limit        Simple rate-limit specific access list
    router1(config)#
    What are each of the types? Can multiple types of ACLs be applied to a given interface?

    Read the article

  • Most efficient way to set up a game server

    - by alex bowers
    I'm running a PHP-based game that is predicted to have over 45 million members by the end of this year (2011); currently we are at 7.5 million. The game runs on Facebook and I am in desperate need of help getting this game server as efficient and as powerful as possible. It is a dedicated server with the following specs: an Intel i7 920 processor (4 cores, 2 threads each, at 2.66 GHz), a Gigabit Ethernet NIC, 12 GB of RAM and 4 x 1 TB hard disks. It has Apache, cPanel, phpMyAdmin, several Apache mods and MySQL installed. The game also issues 47 MySQL calls per second per user. Are there any alternatives to the above that could be faster, more efficient, etc.? I don't mind having to recode the game to fit, as long as it maximises our upper limit of members in the game. Also, is there a way to tell what our maximum limit on players, database calls, etc. is? Thank you, hope you guys can help :)
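
    On the "is there a way to tell what our maximum limit is" part, the usual approach is to benchmark the database for sustainable queries per second and divide by the per-user load quoted above. A sketch of that arithmetic (the QPS budget is a placeholder; only the 47 calls/sec/user figure comes from the post):

      calls_per_user_per_sec = 47        # from the post
      sustainable_qps = 5000             # placeholder: measure with a benchmark or a replay of real traffic

      max_concurrent_players = sustainable_qps // calls_per_user_per_sec
      print(max_concurrent_players)      # ~106 concurrent players at that assumed budget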

    Read the article

  • iptables & allowed port refusing connection

    - by marfarma
    Can you see what I'm doing wrong? On Ubuntu Server 9.1, I'm attempting to allow traffic on port 1143 for a non-privileged IMAP host. Connection is refused when testing with telnet example.com 1143 but connection is allowed testing with telnet example.com 80 from my pc to remote internet hosted server. Both rules appear identical and are located near each other with no rules rejecting connections intervening in the rules file. I can't figure it out. iptables -L returns this: Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere REJECT all -- anywhere 127.0.0.0/8 reject-with icmp-port-unreachable ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED ACCEPT tcp -- anywhere anywhere tcp dpt:www ACCEPT tcp -- anywhere anywhere tcp dpt:https ACCEPT tcp -- anywhere anywhere tcp dpt:http-alt ACCEPT tcp -- anywhere anywhere tcp dpt:7070 ACCEPT tcp -- anywhere anywhere tcp dpt:1143 ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh ACCEPT icmp -- anywhere anywhere icmp echo-request LOG all -- anywhere anywhere limit: avg 5/min burst 5 LOG level debug prefix `iptables denied: ' REJECT all -- anywhere anywhere reject-with icmp-port-unreachable Chain FORWARD (policy ACCEPT) target prot opt source destination REJECT all -- anywhere anywhere reject-with icmp-port-unreachable Chain OUTPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere and my rules file contains this: # Generated by iptables-save v1.4.4 on Wed May 26 19:08:34 2010 *nat :PREROUTING ACCEPT [3556:217296] :POSTROUTING ACCEPT [6909:414847] :OUTPUT ACCEPT [6909:414847] -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8080 -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8080 COMMIT # Completed on Wed May 26 19:08:34 2010 # Generated by iptables-save v1.4.4 on Wed May 26 19:08:34 2010 *filter :INPUT ACCEPT [1:52] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [1:212] -A INPUT -i lo -j ACCEPT -A INPUT -d 127.0.0.0/8 ! -i lo -j REJECT --reject-with icmp-port-unreachable -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT -A INPUT -p tcp -m tcp --dport 8080 -j ACCEPT -A INPUT -p tcp -m tcp --dport 7070 -j ACCEPT -A INPUT -p tcp -m tcp --dport 1143 -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7 -A INPUT -j REJECT --reject-with icmp-port-unreachable -A FORWARD -j REJECT --reject-with icmp-port-unreachable -A OUTPUT -j ACCEPT COMMIT # Completed on Wed May 26 19:08:34 2010

    Read the article

  • force https with apache before .htpasswd

    - by johnlai2004
    I have this in my .htaccess file:
    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule ^(.*)$ https://www.myweb.com/phpmyadmin$1 [R,L]
    AuthUserFile /var/www/myweb/.htpasswd
    AuthGroupFile /dev/null
    AuthName "Sovereign Databases"
    AuthType Basic
    <Limit GET>
    require valid-user
    </Limit>
    But every time I go to http://www.myweb.com/phpmyadmin, the .htpasswd prompts me for credentials BEFORE I'm redirected to https://www.myweb.com/phpmyadmin. After I type in my username and password, I get redirected to https://www.myweb.com/phpmyadmin. The problem is that I don't want anyone to submit their username and password unencrypted via http. How do I force people to log in via the https version even if they typed in the http version?

    Read the article

  • Parameters for selecting the operating system, memory and processor for an embedded system?

    - by James
    I am developing embedded real-time system software (in C). I have designed the software architecture: we know the various objects required, the interactions between them, and the IPC communication between tasks. Based on this information, I need to decide on the operating system (RTOS), the microprocessor and the memory size requirements. (Most likely I would be using Quadros, as the client has suggested it based on their prior experience in similar projects.) But I am confused about which one to begin with, since the choice of one could affect the selection of the others. Could you also guide me on the parameters to consider when estimating the memory requirements from the software design (lower limit and upper limit of the memory requirement)? (Component cost can be ignored for this evaluation.)
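
    One way to get the lower and upper memory bounds asked about here is to inventory the design artefacts that are already known (tasks, stacks, IPC buffers, static data) and add a headroom factor. A sketch with entirely hypothetical numbers:

      # All figures below are placeholders; real values come from the linker map and the RTOS vendor docs.
      tasks = {                      # name: (stack_bytes, IPC/queue buffer bytes)
          "sensor_task":  (2048, 512),
          "control_task": (4096, 1024),
          "comms_task":   (8192, 4096),
      }
      code_and_const = 96 * 1024     # estimated .text + .rodata from a trial build
      static_data    = 16 * 1024     # estimated .data + .bss
      rtos_overhead  = 12 * 1024     # kernel objects, TCBs, timers (vendor-dependent)

      lower = code_and_const + static_data + rtos_overhead + sum(s + q for s, q in tasks.values())
      upper = int(lower * 2.0)       # headroom for worst-case stacks, fragmentation and growth
      print(f"lower bound ~{lower // 1024} KB, upper bound ~{upper // 1024} KB")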

    Read the article

  • Insert text into a div from a popup form and rewrite the inserted text with JavaScript

    - by kuswantin
    So I read some related questions here before I asked, but can't find the answer to my problem. Hope some javascript master here can find this and bring the light to me. I created another button for nicEdit, a video button and wanted to insert some formatted text into the editor DIV (note: nicEdit has inline DIV, no iframe, no textarea). This is my button, recreated from the image button: var nicVideoOptions = { buttons : { 'video' : {name : __('Insert Video'), type : 'nicEditorVideoButton'} //, tags : ['VIDEO:'] }, iconFiles : {'video' : '../movie.png'} }; var nicEditorVideoButton = nicEditorAdvancedButton.extend({ addPane : function() { this.vi = this.ne.selectedInstance.selElm().parentTag('A'); this.addForm({ '' : {type : 'title', txt : 'Insert Video URL'}, 'href' : {type : 'text', txt : 'URL', 'value' : 'http://', style : {width: '150px'}} },this.vi); }, submit : function(e) { var vidVal = this.inputs['href'].value; if(vidVal == "" || vidVal == "http://") { alert("Enter the video url"); return false; } this.removePane(); if(!this.vi) { var tmp = 'javascript:nicVidTemp();'; this.ne.nicCommand("insertVideo",tmp); // still nothing works //this.vi = this.findElm('VIDEO:','href',tmp); //this.vi = this.setContent('[video:' + this.inputs['href'].value + ']'); //nicEditors.findEditor('edit-comment').setContent('<strong>Some HTML</strong> here'); //this.vi = this.setContent('<strong>Some HTML</strong> here'); insertAtCaret(this.ne.selectedInstance, vidVal); } if(this.vi) { // still nothing works //this.vi.setAttributes({ //vidVal : this.inputs['href'].value //}); //this.vi = this.setContent('[video:' + this.inputs['href'].value + ']'); //this.vi = this.setContent('<strong>Some HTML</strong> here'); } } }); nicEditors.registerPlugin(nicPlugin,nicVideoOptions); The button is there, the form poput like the image button, so it's okay. But can't insert the text into the DIV. The final output will be taken from this: ('[video:' + this.inputs['href'].value + ']') and displayed in the editor DIV as is: [video:http//some.com/video-url] As you see, I am blindly touching everything :) And this insertion is taken from: http://www.scottklarr.com/topic/425/how-to-insert-text-into-a-textarea-where-the-cursor-is/ function insertAtCaret(areaId,text) { var txtarea = document.getElementById(areaId); var scrollPos = txtarea.scrollTop; var strPos = 0; var br = ((txtarea.selectionStart || txtarea.selectionStart == '0') ? "ff" : (document.selection ? "ie" : false ) ); if (br == "ie") { txtarea.focus(); var range = document.selection.createRange(); range.moveStart ('character', -txtarea.value.length); strPos = range.text.length; } else if (br == "ff") strPos = txtarea.selectionStart; var front = (txtarea.value).substring(0,strPos); var back = (txtarea.value).substring(strPos,txtarea.value.length); txtarea.value=front+text+back; strPos = strPos + text.length; if (br == "ie") { txtarea.focus(); var range = document.selection.createRange(); range.moveStart ('character', -txtarea.value.length); range.moveStart ('character', strPos); range.moveEnd ('character', 0); range.select(); } else if (br == "ff") { txtarea.selectionStart = strPos; txtarea.selectionEnd = strPos; txtarea.focus(); } txtarea.scrollTop = scrollPos; } The flow: I click the button, a form popouts, fill the input text box, hit the query button and the text should appear in the editor DIV. I hope I can make myself clear. Any help would be very much appreaciated Thanks

    Read the article
