Search Results

Search found 21466 results on 859 pages for 'current buffer'.


  • Western Digital Caviar Black: EXT4-fs error [migrated]

    - by azat
    Recently I upgraded the HDD in my desktop machine and bought a WD Caviar Black. But after I formatted it, copied my data over (using dd), and fixed the partition sizes, I get the following errors in kern.log:

        Aug 27 16:04:35 home-spb kernel: [148265.326264] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 9054, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:07:11 home-spb kernel: [148421.493483] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 9045, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:17 home-spb kernel: [148546.481693] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 10299, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:17 home-spb kernel: [148546.487147] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        Aug 27 16:09:42 home-spb kernel: [148572.258711] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 4345, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:42 home-spb kernel: [148572.277591] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        Aug 27 16:09:42 home-spb kernel: [148572.278202] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 4344, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:42 home-spb kernel: [148572.284760] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        Aug 27 16:09:42 home-spb kernel: [148572.291983] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 9051, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:42 home-spb kernel: [148572.297495] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        Aug 27 16:09:42 home-spb kernel: [148572.297916] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 9050, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:42 home-spb kernel: [148572.297940] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        Aug 27 16:09:42 home-spb kernel: [148572.303213] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 4425, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:42 home-spb kernel: [148572.312127] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        Aug 27 16:09:42 home-spb kernel: [148572.312487] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 4424, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:42 home-spb kernel: [148572.317858] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        Aug 27 16:09:42 home-spb kernel: [148572.322231] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 4336, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:42 home-spb kernel: [148572.326250] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        Aug 27 16:09:42 home-spb kernel: [148572.326599] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 4335, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:42 home-spb kernel: [148572.332397] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        Aug 27 16:09:42 home-spb kernel: [148572.341957] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 5764, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:42 home-spb kernel: [148572.350709] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        Aug 27 16:09:42 home-spb kernel: [148572.351127] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 5763, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:42 home-spb kernel: [148572.355916] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        Aug 27 16:09:43 home-spb kernel: [148572.401055] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 10063, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:43 home-spb kernel: [148572.404357] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        Aug 27 16:09:43 home-spb kernel: [148572.414699] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 10073, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:43 home-spb kernel: [148572.420411] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        Aug 27 16:09:43 home-spb kernel: [148572.493933] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 9059, 32254 clusters in bitmap, 32258 in gd
        Aug 27 16:09:43 home-spb kernel: [148572.493956] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.

    One time the machine rebooted on its own; when I turned it back on, it ran fsck on /dev/sdc2, fixed some errors, and some files on /dev/sdc2 went missing. I've checked /dev/sdc2 for bad blocks and it doesn't have any (using e2fsck -c /dev/sdc2). Here is the output of fsck: http://pastebin.com/D5LmLVBY What else can I do to understand what's wrong here? BTW, there are no messages like that for /dev/sdc1 in kern.log. Linux version: 3.3.0. Distribution: Debian wheezy.

    Read the article

  • High-load MySQL on a Debian server stops every day. Why?

    - by Oleg Abrazhaev
    I have a Debian server with 32 GB of memory, and apache2, memcached, and nginx run on this server. Memory load is always at the maximum, with only 500 MB free; most of the memory goes to MySQL. Apache is configured for only 70 clients, and the other services use little memory. When MySQL uses up all the memory it stops, and nothing works until MySQL is restarted. MySQL is configured to use at most 24 GB of memory. I have heavyweight InnoDB databases (400000 rows, 30 GB), and a multithreaded daemon on the server makes many inserts into these tables; that's why they are InnoDB. Here is my MySQL config:

        [mysqld]
        #
        # * Basic Settings
        #
        default-time-zone = "+04:00"
        user            = mysql
        pid-file        = /var/run/mysqld/mysqld.pid
        socket          = /var/run/mysqld/mysqld.sock
        port            = 3306
        basedir         = /usr
        datadir         = /var/lib/mysql
        tmpdir          = /tmp
        language        = /usr/share/mysql/english
        skip-external-locking
        default-time-zone='Europe/Moscow'
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        #
        # * Fine Tuning
        #
        #low_priority_updates = 1
        concurrent_insert = ALWAYS
        wait_timeout = 600
        interactive_timeout = 600
        #normal
        key_buffer_size = 2024M
        #key_buffer_size = 1512M
        #70% hot cache
        key_cache_division_limit= 70
        #16-32
        max_allowed_packet = 32M
        #1-16M
        thread_stack = 8M
        #40-50
        thread_cache_size = 50
        #orderby groupby sort
        sort_buffer_size = 64M
        #same
        myisam_sort_buffer_size = 400M
        #temp table creates when group_by
        tmp_table_size = 3000M
        #tables in memory
        max_heap_table_size = 3000M
        #on disk
        open_files_limit = 10000
        table_cache = 10000
        join_buffer_size = 5M
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        #myisam_use_mmap = 1
        max_connections = 200
        thread_concurrency = 8
        #
        # * Query Cache Configuration
        #
        #more ignored
        query_cache_limit = 50M
        query_cache_size = 210M
        #on query cache
        query_cache_type = 1
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        #log = /var/log/mysql/mysql.log
        #
        # Error logging goes to syslog. This is a Debian improvement :)
        #
        # Here you can see queries with especially long duration
        log_slow_queries = /var/log/mysql/mysql-slow.log
        long_query_time = 1
        log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id = 1
        #log_bin = /var/log/mysql/mysql-bin.log
        server-id = 1
        log-bin = /var/lib/mysql/mysql-bin
        #replicate-do-db = gate
        log-bin-index = /var/lib/mysql/mysql-bin.index
        log-error = /var/lib/mysql/mysql-bin.err
        relay-log = /var/lib/mysql/relay-bin
        relay-log-info-file = /var/lib/mysql/relay-bin.info
        relay-log-index = /var/lib/mysql/relay-bin.index
        binlog_do_db = 24avia
        expire_logs_days = 10
        max_binlog_size = 100M
        read_buffer_size = 4024288
        innodb_buffer_pool_size = 5000M
        innodb_flush_log_at_trx_commit = 2
        innodb_thread_concurrency = 8
        table_definition_cache = 2000
        group_concat_max_len = 16M
        #binlog_do_db = gate
        #binlog_ignore_db = include_database_name
        #
        # * BerkeleyDB
        #
        # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12.
        #skip-bdb
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        # You might want to disable InnoDB to shrink the mysqld process by circa 100MB.
        #skip-innodb
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 500M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 32M
        key_buffer_size = 512M
        #
        # * NDB Cluster
        #
        # See /usr/share/doc/mysql-server-*/README.Debian for more information.
        #
        # The following configuration is read by the NDB Data Nodes (ndbd processes)
        # not from the NDB Management Nodes (ndb_mgmd processes).
        #
        # [MYSQL_CLUSTER]
        # ndb-connectstring=127.0.0.1
        #
        # * IMPORTANT: Additional settings that can override those from this file!
        # The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/

    Please help me make it stable. Memory usage:

        /etc/mysql # free
                     total       used       free     shared    buffers     cached
        Mem:      32930800   32766424     164376          0     139208   23829196
        -/+ buffers/cache:    8798020   24132780
        Swap:     33553328      44660   33508668

    Maybe my problem is not memory after all, but MySQL still stops every day. As you can see, 24 GB of that is just cache. Thanks to Michael Hampton for the correction. Load average on the server is 3.5. Maybe it's the HDD, or another problem? Maybe my config is not optimal for 30 GB of InnoDB? I have already tried mysqltuner and tuning-primer.sh, but they mark everything green. Mysqltuner output:

        >>  MySQLTuner 1.0.1 - Major Hayden <[email protected]>
        >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >>  Run with '--help' for additional options and output filtering
        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.5.24-9-log
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 112G (Tables: 1528)
        [--] Data in InnoDB tables: 39G (Tables: 340)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [!!] Total fragmented tables: 344
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 8h 18m 33s (14M q [478.333 qps], 259K conn, TX: 9B, RX: 5B)
        [--] Reads / Writes: 84% / 16%
        [--] Total buffers: 10.5G global + 81.1M per thread (200 max threads)
        [OK] Maximum possible memory usage: 26.3G (83% of installed RAM)
        [OK] Slow queries: 1% (259K/14M)
        [!!] Highest connection usage: 100%  (201/200)
        [OK] Key buffer size / total MyISAM indexes: 1.5G/5.6G
        [OK] Key buffer hit rate: 100.0% (6B cached / 1M reads)
        [OK] Query cache efficiency: 74.3% (8M cached / 11M selects)
        [OK] Query cache prunes per day: 0
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 247K sorts)
        [!!] Joins performed without indexes: 106025
        [!!] Temporary tables created on disk: 49% (351K on disk / 715K total)
        [OK] Thread cache hit rate: 99% (249 created / 259K connections)
        [!!] Table cache hit rate: 15% (2K open / 13K opened)
        [OK] Open file limit used: 15% (3K/20K)
        [OK] Table locks acquired immediately: 99% (4M immediate / 4M locks)
        [!!] InnoDB data size / buffer pool: 39.4G/5.9G
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Reduce or eliminate persistent connections to reduce connection usage
            Adjust your join queries to always utilize indexes
            Temporary table size is already large - reduce result set size
            Reduce your SELECT DISTINCT queries without LIMIT clauses
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            max_connections (> 200)
            wait_timeout (< 600)
            interactive_timeout (< 600)
            join_buffer_size (> 5.0M, or always use indexes with joins)
            table_cache (> 10000)
            innodb_buffer_pool_size (>= 39G)

    Mysql primer output:

        -- MYSQL PERFORMANCE TUNING PRIMER --
        - By: Matthew Montgomery -

        MySQL Version 5.5.24-9-log x86_64

        Uptime = 0 days 8 hrs 20 min 50 sec
        Avg. qps = 478
        Total Questions = 14369568
        Threads Connected = 16

        Warning: Server has not been running for at least 48hrs.
        It may not be safe to use these recommendations

        To find out more information on how each of these
        runtime variables effects performance visit:
        http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html
        Visit http://www.mysql.com/products/enterprise/advisors.html
        for info about MySQL's Enterprise Monitoring and Advisory Service

        SLOW QUERIES
        The slow query log is enabled.
        Current long_query_time = 1.000000 sec.
        You have 260626 out of 14369701 that take longer than 1.000000 sec. to complete
        Your long_query_time seems to be fine

        BINARY UPDATE LOG
        The binary update log is enabled
        Binlog sync is not enabled, you could loose binlog records during a server crash

        WORKER THREADS
        Current thread_cache_size = 50
        Current threads_cached = 45
        Current threads_per_sec = 0
        Historic threads_per_sec = 0
        Your thread_cache_size is fine

        MAX CONNECTIONS
        Current max_connections = 200
        Current threads_connected = 11
        Historic max_used_connections = 201
        The number of used connections is 100% of the configured maximum.
        You should raise max_connections

        INNODB STATUS
        Current InnoDB index space = 214 M
        Current InnoDB data space = 39.40 G
        Current InnoDB buffer pool free = 0 %
        Current innodb_buffer_pool_size = 5.85 G
        Depending on how much space your innodb indexes take up it may be safe
        to increase this value to up to 2 / 3 of total system memory

        MEMORY USAGE
        Max Memory Ever Allocated : 23.46 G
        Configured Max Per-thread Buffers : 15.84 G
        Configured Max Global Buffers : 7.54 G
        Configured Max Memory Limit : 23.39 G
        Physical Memory : 31.40 G
        Max memory limit seem to be within acceptable norms

        KEY BUFFER
        Current MyISAM index space = 5.61 G
        Current key_buffer_size = 1.47 G
        Key cache miss rate is 1 : 5578
        Key buffer free ratio = 77 %
        Your key_buffer_size seems to be fine

        QUERY CACHE
        Query cache is enabled
        Current query_cache_size = 200 M
        Current query_cache_used = 101 M
        Current query_cache_limit = 50 M
        Current Query cache Memory fill ratio = 50.59 %
        Current query_cache_min_res_unit = 4 K
        MySQL won't cache query results that are larger than query_cache_limit in size

        SORT OPERATIONS
        Current sort_buffer_size = 64 M
        Current read_rnd_buffer_size = 256 K
        Sort buffer seems to be fine

        JOINS
        Current join_buffer_size = 5.00 M
        You have had 106606 queries where a join could not use an index properly
        You have had 8 joins without keys that check for key usage after each row
        join_buffer_size >= 4 M
        This is not advised
        You should enable "log-queries-not-using-indexes"
        Then look for non indexed joins in the slow query log.

        OPEN FILES LIMIT
        Current open_files_limit = 20210 files
        The open_files_limit should typically be set to at least 2x-3x
        that of table_cache if you have heavy MyISAM usage.
        Your open_files_limit value seems to be fine

        TABLE CACHE
        Current table_open_cache = 10000 tables
        Current table_definition_cache = 2000 tables
        You have a total of 1910 tables
        You have 2151 open tables.
        The table_cache value seems to be fine

        TEMP TABLES
        Current max_heap_table_size = 2.92 G
        Current tmp_table_size = 2.92 G
        Of 366426 temp tables, 49% were created on disk
        Perhaps you should increase your tmp_table_size and/or max_heap_table_size
        to reduce the number of disk-based temporary tables
        Note! BLOB and TEXT columns are not allow in memory tables.
        If you are using these columns raising these values might not impact your
        ratio of on disk temp tables.

        TABLE SCANS
        Current read_buffer_size = 3 M
        Current table scan ratio = 2846 : 1
        read_buffer_size seems to be fine

        TABLE LOCKING
        Current Lock Wait ratio = 1 : 185
        You may benefit from selective use of InnoDB.
        If you have long running SELECT's against MyISAM tables and perform
        frequent updates consider setting 'low_priority_updates=1'
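
    Reading those numbers together (39 GB of InnoDB data against a 5.9 GB buffer pool, 3 GB in-memory temp tables allowed per connection, 64 MB sort buffers times 200 connections), one hedged starting point might be the fragment below. The exact values are assumptions to be tuned against this workload and the MyISAM/InnoDB split, not a drop-in fix:

        [mysqld]
        # Per-thread buffers multiply by max_connections; keep them small.
        sort_buffer_size        = 2M
        join_buffer_size        = 2M
        read_buffer_size        = 1M
        # 3 GB of in-memory temp table per connection is risky on a 32 GB box.
        tmp_table_size          = 256M
        max_heap_table_size     = 256M
        # The hot working set is InnoDB (39 GB); give it the memory freed
        # above rather than the MyISAM key buffer.
        innodb_buffer_pool_size = 12G
        key_buffer_size         = 1G
        # Headroom over the observed peak of 201 concurrent connections.
        max_connections         = 300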

    Read the article

  • How to write a decent process filter?

    - by konr
    Hi there, I'm building a program that communicates with Emacs, and one of the challenges I'm facing is writing Emacs's process filter function. Its input string is a series of s-expressions to be evaluated. Here is a sample:

        (gimme-append-to-buffer "25 - William Christie dir, Les Arts Florissants - Scene 2. Prelude - Les Arts Florissants\n")
        (gimme-append-to-buffer "26 - William Christie dir, Les Arts Florissants - Cybele: 'Je Veux Joindre' - Les Arts Florissants\n")
        (gimme-append-to-buffer "27 - William Christie dir, Les Arts Florissants - Scene 3. Cybele: 'Tu T'Etonnes, Melisse' - Les Arts Florissants\n")
        (gimme-append-to-buffer "28 - William Christie dir, Les Arts Florissants - Cybele: 'Que Les Plus Doux Zephyrs'. Scene 4. - Les Arts Florissants\n")
        (gimme-append-to-buffer "29 - William Christie dir, Les Arts Florissants - Entree Des Nations - Les Arts Florissants\n")
        (gimme-append-to-buffer "30 - William Christie dir, Les Arts Florissants - Entree Des Zephyrs - Les Arts Florissants\n")
        (gimme-append-to-buffer "31 - William Christie dir, Les Arts Florissants - Choeur Des Nations' 'Que Devant Vous' - Les Arts Florissants\n")
        (gimme-append-to-buffer "32 - William Christie dir, Les Arts Florissants - Atys: 'Indigne Que Je Suis' - Les Arts Florissants\n")
        (gimme-append-to-buffer "33 - William Christie dir, Les Arts Florissants - Reprise Du Choeur Des Nations : 'Que Devant Nous' - Les Arts Florissants\n")
        (gimme-append-to-buffer "34 - William Christie dir, Les Arts Florissants - Reprise De L'Air Des Zephyrs - Les Arts Florissants\n")

    The first problem I've faced is that the string is somehow not fully formed when the function is called, so writing something like (mapcar 'eval (format "(%s)" input-string)) won't work. To deal with this first problem, I was using a loop. The full function I wrote is:

        (defun eval-all-sexps (s)
          (loop for x = (ignore-errors (read-from-string s))
                then (ignore-errors (read-from-string (substring s position)))
                while x
                summing (or (cdr x) 0) into position
                doing (eval (car x))))

    Now the second problem that showed up is that the function is called twice with a somewhat large input: first with valid but partial content, then with what looks like pieces of the remaining data. To solve this problem, I'm considering using a junk variable to hold whatever remains from a loop and then concatenating it to the input of the next call, but I was wondering if you have any other suggestions on how to deal with such a problem more elegantly. Thanks!
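
    For reference, a minimal sketch of the leftover-variable approach, assuming the process-filter and variable names (they are hypothetical, not from the program above). read-from-string signals end-of-file on a partial s-expression, which is exactly the point to stop and save the tail:

        (defvar gimme-filter-leftover ""
          "Unparsed tail of the previous process output chunk.")

        (defun gimme-process-filter (proc output)
          "Evaluate every complete s-expression in OUTPUT, buffering partial input."
          (let ((data (concat gimme-filter-leftover output))
                (pos 0))
            (condition-case nil
                ;; Read and eval sexps until `read-from-string' hits the
                ;; incomplete expression at the end of DATA.
                (while t
                  (let ((parsed (read-from-string data pos)))
                    (eval (car parsed))
                    (setq pos (cdr parsed))))
              (end-of-file nil))          ; partial sexp: stop reading
            ;; Keep whatever is left for the next call.
            (setq gimme-filter-leftover (substring data pos))))

    Installed with (set-process-filter proc 'gimme-process-filter), this handles both the mid-sexp truncation and the split delivery, since any number of calls just keep appending to the leftover.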

    Read the article

  • ctypes and pointer manipulation

    - by Chris
    I am dealing with image buffers, and I want to be able to access data a few lines into my image for analysis with a C library. I have created my 8-bit pixel buffer in Python using create_string_buffer. Is there a way to get a pointer to a location within that buffer without creating a new buffer? My goal is to analyze and change data within the buffer in chunks, without having to do a lot of buffer creation and data copying. In this case, ultimately, the C library is doing all the manipulation of the buffer, so I don't actually have to change values within the buffer using Python. I just need to give my C function access to data within the buffer.
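
    A sketch of two ways to do this with no copying; "libimage.so", its analyze() function, and the image geometry are hypothetical stand-ins for the real library and buffer:

        import ctypes

        lib = ctypes.CDLL("./libimage.so")          # the C analysis library
        width, height = 640, 480
        buf = ctypes.create_string_buffer(width * height)   # 8-bit pixels

        line = 10                    # start the analysis a few lines in
        offset = line * width

        # byref() takes an optional byte offset, made for exactly this case:
        lib.analyze(ctypes.byref(buf, offset), width)

        # Equivalent, when the C side wants a typed pointer:
        ptr = ctypes.cast(ctypes.addressof(buf) + offset,
                          ctypes.POINTER(ctypes.c_ubyte))
        lib.analyze(ptr, width)

    Both forms hand the C function the address of buf plus the offset, so the library reads and writes the original buffer in place.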

    Read the article

  • BlackBerry Player, custom data source

    - by Alex
    Hello. I must create a custom media player within the application, with support for mp3 and wav files. I read in the documentation that I can't seek or get the media file duration without a custom data source. I checked the demo in JDE 4.6, but I still have problems: I can't get the duration, and it returns much more than expected, so I'm sure I broke something while modifying the code to read the mp3 file locally from the filesystem. Can somebody tell me what I did wrong? (I can hear the mp3, so the player plays it correctly from start to end.) I must support OS >= 4.6. Thank you. Here is my modified data source, LimitedRateStreamingSource.java: * Copyright © 1998-2009 Research In Motion Ltd. Note: For the sake of simplicity, this sample application may not leverage resource bundles and resource strings. However, it is STRONGLY recommended that application developers make use of the localization features available within the BlackBerry development platform to ensure a seamless application experience across a variety of languages and geographies. For more information on localizing your application, please refer to the BlackBerry Java Development Environment Development Guide associated with this release. */ package com.halcyon.tawkwidget.model; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import javax.microedition.io.Connector; import javax.microedition.io.file.FileConnection; import javax.microedition.media.Control; import javax.microedition.media.protocol.ContentDescriptor; import javax.microedition.media.protocol.DataSource; import javax.microedition.media.protocol.SourceStream; import net.rim.device.api.io.SharedInputStream; /** * The data source used by the BufferedPlayback's media player. / public final class LimitedRateStreamingSource extends DataSource { /* The max size to be read from the stream at one time. */ private static final int READ_CHUNK = 512; // bytes /** A reference to the field which displays the load status. */ //private TextField _loadStatusField; /** A reference to the field which displays the player status. */ //private TextField _playStatusField; /** * The minimum number of bytes that must be buffered before the media file * will begin playing. */ private int _startBuffer = 200000; /** The maximum size (in bytes) of a single read. */ private int _readLimit = 32000; /** * The minimum forward byte buffer which must be maintained in order for * the video to keep playing. If the forward buffer falls below this * number, the playback will pause until the buffer increases. */ private int _pauseBytes = 64000; /** * The minimum forward byte buffer required to resume * playback after a pause. */ private int _resumeBytes = 128000; /** The stream connection over which media content is passed. */ //private ContentConnection _contentConnection; private FileConnection _fileConnection; /** An input stream shared between several readers. */ private SharedInputStream _readAhead; /** A stream to the buffered resource. */ private LimitedRateSourceStream _feedToPlayer; /** The MIME type of the remote media file. */ private String _forcedContentType; /** A counter for the total number of buffered bytes */ private volatile int _totalRead; /** A flag used to tell the connection thread to stop */ private volatile boolean _stop; /** * A flag used to indicate that the initial buffering is complete. In * other words, that the current buffer is larger than the defined start * buffer size.
*/ private volatile boolean _bufferingComplete; /** A flag used to indicate that the remote file download is complete. */ private volatile boolean _downloadComplete; /** The thread which retrieves the remote media file. */ private ConnectionThread _loaderThread; /** The local save file into which the remote file is written. */ private FileConnection _saveFile; /** A stream for the local save file. */ private OutputStream _saveStream; /** * Constructor. * @param locator The locator that describes the DataSource. */ public LimitedRateStreamingSource(String locator) { super(locator); } /** * Open a connection to the locator. * @throws IOException */ public void connect() throws IOException { //Open the connection to the remote file. _fileConnection = (FileConnection)Connector.open(getLocator(), Connector.READ); //Cache a reference to the locator. String locator = getLocator(); //Report status. System.out.println("Loading: " + locator); //System.out.println("Size: " + _contentConnection.getLength()); System.out.println("Size: " + _fileConnection.totalSize()); //The name of the remote file begins after the last forward slash. int filenameStart = locator.lastIndexOf('/'); //The file name ends at the first instance of a semicolon. int paramStart = locator.indexOf(';'); //If there is no semicolon, the file name ends at the end of the line. if (paramStart < 0) { paramStart = locator.length(); } //Extract the file name. String filename = locator.substring(filenameStart, paramStart); System.out.println("Filename: " + filename); //Open a local save file with the same name as the remote file. _saveFile = (FileConnection) Connector.open("file:///SDCard/blackberry/music" + filename, Connector.READ_WRITE); //If the file doesn't already exist, create it. if (!_saveFile.exists()) { _saveFile.create(); } System.out.println("---------- 1"); //Open the file for writing. _saveFile.setReadable(true); //Open a shared input stream to the local save file to //allow many simultaneous readers. SharedInputStream fileStream = SharedInputStream.getSharedInputStream(_saveFile.openInputStream()); //Begin reading at the beginning of the file. fileStream.setCurrentPosition(0); System.out.println("---------- 2"); //If the local file is smaller than the remote file... if (_saveFile.fileSize() < _fileConnection.totalSize()) { System.out.println("---------- 3"); //Did not get the entire file, set the system to try again. _saveFile.setWritable(true); System.out.println("---------- 4"); //A non-null save stream is used as a flag later to indicate that //the file download was incomplete. _saveStream = _saveFile.openOutputStream(); System.out.println("---------- 5"); //Use a new shared input stream for buffered reading. _readAhead = SharedInputStream.getSharedInputStream(_fileConnection.openInputStream()); System.out.println("---------- 6"); } else { //The download is complete. System.out.println("---------- 7"); _downloadComplete = true; //We can use the initial input stream to read the buffered media. _readAhead = fileStream; System.out.println("---------- 8"); //We can close the remote connection. _fileConnection.close(); System.out.println("---------- 9"); } if (_forcedContentType != null) { //Use the user-defined content type if it is set. System.out.println("---------- 10"); _feedToPlayer = new LimitedRateSourceStream(_readAhead, _forcedContentType); System.out.println("---------- 11"); } else { System.out.println("---------- 12"); //Otherwise, use the MIME types of the remote file. 
// _feedToPlayer = new LimitedRateSourceStream(_readAhead, _fileConnection)); } System.out.println("---------- 13"); } /** * Destroy and close all existing connections. */ public void disconnect() { try { if (_saveStream != null) { //Destroy the stream to the local save file. _saveStream.close(); _saveStream = null; } //Close the local save file. _saveFile.close(); if (_readAhead != null) { //Close the reader stream. _readAhead.close(); _readAhead = null; } //Close the remote file connection. _fileConnection.close(); //Close the stream to the player. _feedToPlayer.close(); } catch (Exception e) { System.err.println(e.getMessage()); } } /** * Returns the content type of the remote file. * @return The content type of the remote file. */ public String getContentType() { return _feedToPlayer.getContentDescriptor().getContentType(); } /** * Returns a stream to the buffered resource. * @return A stream to the buffered resource. */ public SourceStream[] getStreams() { return new SourceStream[] { _feedToPlayer }; } /** * Starts the connection thread used to download the remote file. */ public void start() throws IOException { //If the save stream is null, we have already completely downloaded //the file. if (_saveStream != null) { //Open the connection thread to finish downloading the file. _loaderThread = new ConnectionThread(); _loaderThread.start(); } } /** * Stop the connection thread. */ public void stop() throws IOException { //Set the boolean flag to stop the thread. _stop = true; } /** * @see javax.microedition.media.Controllable#getControl(String) */ public Control getControl(String controlType) { // No implemented Controls. return null; } /** * @see javax.microedition.media.Controllable#getControls() */ public Control[] getControls() { // No implemented Controls. return null; } /** * Force the lower level stream to a given content type. Must be called * before the connect function in order to work. * @param contentType The content type to use. */ public void setContentType(String contentType) { _forcedContentType = contentType; } /** * A stream to the buffered media resource. */ private final class LimitedRateSourceStream implements SourceStream { /** A stream to the local copy of the remote resource. */ private SharedInputStream _baseSharedStream; /** Describes the content type of the media file. */ private ContentDescriptor _contentDescriptor; /** * Constructor. Creates a LimitedRateSourceStream from * the given InputStream. * @param inputStream The input stream used to create a new reader. * @param contentType The content type of the remote file. */ LimitedRateSourceStream(InputStream inputStream, String contentType) { System.out.println("[LimitedRateSoruceStream]---------- 1"); _baseSharedStream = SharedInputStream.getSharedInputStream(inputStream); System.out.println("[LimitedRateSoruceStream]---------- 2"); _contentDescriptor = new ContentDescriptor(contentType); System.out.println("[LimitedRateSoruceStream]---------- 3"); } /** * Returns the content descriptor for this stream. * @return The content descriptor for this stream. */ public ContentDescriptor getContentDescriptor() { return _contentDescriptor; } /** * Returns the length provided by the connection. * @return long The length provided by the connection. */ public long getContentLength() { return _fileConnection.totalSize(); } /** * Returns the seek type of the stream. */ public int getSeekType() { return RANDOM_ACCESSIBLE; //return SEEKABLE_TO_START; } /** * Returns the maximum size (in bytes) of a single read. 
*/ public int getTransferSize() { return _readLimit; } /** * Writes bytes from the buffer into a byte array for playback. * @param bytes The buffer into which the data is read. * @param off The start offset in array b at which the data is written. * @param len The maximum number of bytes to read. * @return the total number of bytes read into the buffer, or -1 if * there is no more data because the end of the stream has been reached. * @throws IOException */ public int read(byte[] bytes, int off, int len) throws IOException { System.out.println("[LimitedRateSoruceStream]---------- 5"); System.out.println("Read Request for: " + len + " bytes"); //Limit bytes read to our readLimit. int readLength = len; System.out.println("[LimitedRateSoruceStream]---------- 6"); if (readLength > getReadLimit()) { readLength = getReadLimit(); } //The number of available byes in the buffer. int available; //A boolean flag indicating that the thread should pause //until the buffer has increased sufficiently. boolean paused = false; System.out.println("[LimitedRateSoruceStream]---------- 7"); for (;;) { available = _baseSharedStream.available(); System.out.println("[LimitedRateSoruceStream]---------- 8"); if (_downloadComplete) { //Ignore all restrictions if downloading is complete. System.out.println("Complete, Reading: " + len + " - Available: " + available); return _baseSharedStream.read(bytes, off, len); } else if(_bufferingComplete) { if (paused && available > getResumeBytes()) { //If the video is paused due to buffering, but the //number of available byes is sufficiently high, //resume playback of the media. System.out.println("Resuming - Available: " + available); paused = false; return _baseSharedStream.read(bytes, off, readLength); } else if(!paused && (available > getPauseBytes() || available > readLength)) { //We have enough information for this media playback. if (available < getPauseBytes()) { //If the buffer is now insufficient, set the //pause flag. paused = true; } System.out.println("Reading: " + readLength + " - Available: " + available); return _baseSharedStream.read(bytes, off, readLength); } else if(!paused) { //Set pause until loaded enough to resume. paused = true; } } else { //We are not ready to start yet, try sleeping to allow the //buffer to increase. try { Thread.sleep(500); } catch (Exception e) { System.err.println(e.getMessage()); } } } } /** * @see javax.microedition.media.protocol.SourceStream#seek(long) */ public long seek(long where) throws IOException { _baseSharedStream.setCurrentPosition((int) where); return _baseSharedStream.getCurrentPosition(); } /** * @see javax.microedition.media.protocol.SourceStream#tell() */ public long tell() { return _baseSharedStream.getCurrentPosition(); } /** * Close the stream. * @throws IOException */ void close() throws IOException { _baseSharedStream.close(); } /** * @see javax.microedition.media.Controllable#getControl(String) */ public Control getControl(String controlType) { // No implemented controls. return null; } /** * @see javax.microedition.media.Controllable#getControls() */ public Control[] getControls() { // No implemented controls. return null; } } /** * A thread which downloads the remote file and writes it to the local file. */ private final class ConnectionThread extends Thread { /** * Download the remote media file, then write it to the local * file. * @see java.lang.Thread#run() */ public void run() { try { byte[] data = new byte[READ_CHUNK]; int len = 0; //Until we reach the end of the file. 
    while (-1 != (len = _readAhead.read(data))) { _totalRead += len; if (!_bufferingComplete && _totalRead > getStartBuffer()) { //We have enough of a buffer to begin playback. _bufferingComplete = true; System.out.println("Initial Buffering Complete"); } if (_stop) { //Stop reading. return; } } System.out.println("Downloading Complete"); System.out.println("Total Read: " + _totalRead); //If the downloaded data is not the same size //as the remote file, something is wrong. if (_totalRead != _fileConnection.totalSize()) { System.err.println("* Unable to Download entire file *"); } _downloadComplete = true; _readAhead.setCurrentPosition(0); //Write downloaded data to the local file. while (-1 != (len = _readAhead.read(data))) { _saveStream.write(data); } } catch (Exception e) { System.err.println(e.toString()); } } } /** * Gets the minimum forward byte buffer which must be maintained in * order for the video to keep playing. * @return The pause byte buffer. */ int getPauseBytes() { return _pauseBytes; } /** * Sets the minimum forward buffer which must be maintained in order * for the video to keep playing. * @param pauseBytes The new pause byte buffer. */ void setPauseBytes(int pauseBytes) { _pauseBytes = pauseBytes; } /** * Gets the maximum size (in bytes) of a single read. * @return The maximum size (in bytes) of a single read. */ int getReadLimit() { return _readLimit; } /** * Sets the maximum size (in bytes) of a single read. * @param readLimit The new maximum size (in bytes) of a single read. */ void setReadLimit(int readLimit) { _readLimit = readLimit; } /** * Gets the minimum forward byte buffer required to resume * playback after a pause. * @return The resume byte buffer. */ int getResumeBytes() { return _resumeBytes; } /** * Sets the minimum forward byte buffer required to resume * playback after a pause. * @param resumeBytes The new resume byte buffer. */ void setResumeBytes(int resumeBytes) { _resumeBytes = resumeBytes; } /** * Gets the minimum number of bytes that must be buffered before the * media file will begin playing. * @return The start byte buffer. */ int getStartBuffer() { return _startBuffer; } /** * Sets the minimum number of bytes that must be buffered before the * media file will begin playing. * @param startBuffer The new start byte buffer. */ void setStartBuffer(int startBuffer) { _startBuffer = startBuffer; } }

    And this is how I use it:

        LimitedRateStreamingSource source = new LimitedRateStreamingSource("file:///SDCard/music3.mp3");
        source.setContentType("audio/mpeg");
        mediaPlayer = javax.microedition.media.Manager.createPlayer(source);
        mediaPlayer.addPlayerListener(this);
        mediaPlayer.realize();
        mediaPlayer.prefetch();

    After starting, I call mediaPlayer.getDuration(), and it returns, let's say, around 24:22 (the built-in media player on the BlackBerry says the file length is 4:05). I tried to get the duration in the listener as well, and there it unfortunately returned around 64 minutes, so I'm sure something is not right inside the data source...

    Read the article

  • C# Compress Triple Byte Array

    - by Mark
    Hi. I currently have this code, which compresses byte arrays, but I need it rewritten so that it can compress three-dimensional byte arrays (byte[,,]). Thanks!

        public static byte[] Compress(byte[] buffer)
        {
            MemoryStream ms = new MemoryStream();
            GZipStream zip = new GZipStream(ms, CompressionMode.Compress, true);
            zip.Write(buffer, 0, buffer.Length);
            zip.Close();
            ms.Position = 0;
            MemoryStream outStream = new MemoryStream();
            byte[] compressed = new byte[ms.Length];
            ms.Read(compressed, 0, compressed.Length);
            byte[] gzBuffer = new byte[compressed.Length + 4];
            Buffer.BlockCopy(compressed, 0, gzBuffer, 4, compressed.Length);
            Buffer.BlockCopy(BitConverter.GetBytes(buffer.Length), 0, gzBuffer, 0, 4);
            return gzBuffer;
        }

        public static byte[] Decompress(byte[] gzBuffer)
        {
            MemoryStream ms = new MemoryStream();
            int msgLength = BitConverter.ToInt32(gzBuffer, 0);
            ms.Write(gzBuffer, 4, gzBuffer.Length - 4);
            byte[] buffer = new byte[msgLength];
            ms.Position = 0;
            GZipStream zip = new GZipStream(ms, CompressionMode.Decompress);
            zip.Read(buffer, 0, buffer.Length);
            return buffer;
        }
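
    A hedged sketch of one way to wrap the methods above: Buffer.BlockCopy works on multi-dimensional arrays of primitives, so the cube can be flattened into a byte[] without an element-by-element copy, compressed with the existing Compress/Decompress (referenced below, not redefined), and the three dimensions stored in a 12-byte header so the shape can be rebuilt:

        using System;

        static class CubeCompressor
        {
            public static byte[] CompressCube(byte[,,] cube)
            {
                // Flatten the 3-D array into a plain byte[].
                byte[] flat = new byte[cube.Length];
                Buffer.BlockCopy(cube, 0, flat, 0, cube.Length);

                byte[] body = Compress(flat);            // the method above
                byte[] packed = new byte[12 + body.Length];
                Buffer.BlockCopy(BitConverter.GetBytes(cube.GetLength(0)), 0, packed, 0, 4);
                Buffer.BlockCopy(BitConverter.GetBytes(cube.GetLength(1)), 0, packed, 4, 4);
                Buffer.BlockCopy(BitConverter.GetBytes(cube.GetLength(2)), 0, packed, 8, 4);
                Buffer.BlockCopy(body, 0, packed, 12, body.Length);
                return packed;
            }

            public static byte[,,] DecompressCube(byte[] packed)
            {
                // Recover the three dimensions from the header.
                int x = BitConverter.ToInt32(packed, 0);
                int y = BitConverter.ToInt32(packed, 4);
                int z = BitConverter.ToInt32(packed, 8);
                byte[] body = new byte[packed.Length - 12];
                Buffer.BlockCopy(packed, 12, body, 0, body.Length);

                byte[,,] cube = new byte[x, y, z];
                Buffer.BlockCopy(Decompress(body), 0, cube, 0, cube.Length);
                return cube;
            }
        }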

    Read the article

  • How do I access a secure website within a SharePoint web part?

    - by Bill
    How do I access a secure website from within a SharePoint web part? The following code works fine as a console application, but if you run it in a web part, you will get an access violation:

        WebRequest request = WebRequest.Create("https://somesecuresite.com");
        WebResponse firstResponse = null;
        try
        {
            firstResponse = request.GetResponse();
        }
        catch (WebException ex)
        {
            writer.WriteLine("Error: " + ex.ToString());
            return;
        }

    If you access a non-secure site, it also works. Any ideas?

        Error: System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a receive. ---> System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
           at System.Net.UnsafeNclNativeMethods.NativePKI.CertVerifyCertificateChainPolicy(IntPtr policy, SafeFreeCertChain chainContext, ChainPolicyParameter& cpp, ChainPolicyStatus& ps)
           at System.Net.PolicyWrapper.VerifyChainPolicy(SafeFreeCertChain chainContext, ChainPolicyParameter& cpp)
           at System.Net.Security.SecureChannel.VerifyRemoteCertificate(RemoteCertValidationCallback remoteCertValidationCallback)
           at System.Net.Security.SslState.CompleteHandshake()
           at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult)
           at System.Net.TlsStream.CallProcessAuthentication(Object state)
           at System.Threading.ExecutionContext.runTryCode(Object userData)
           at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
           at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
           at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
           at System.Net.TlsStream.ProcessAuthentication(LazyAsyncResult result)
           at System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.PooledStream.Write(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.ConnectStream.WriteHeaders(Boolean async)
           --- End of inner exception stack trace ---
           at System.Net.HttpWebRequest.GetResponse()

    Read the article

  • Should I reformat XP with: Quick, regular, or "the current file-system"?

    - by Julie
    When reformatting, Windows XP asks me to choose from these formatting methods (implying that ALL of them are "formatting methods"... even #3):

    1. Reformat using NTFS (quick)
    2. Reformat using NTFS
    3. Leave the current file-system intact (no changes)

    What does choice #3 really mean? Does it mean:

    A. Leave the current file-system (whatever file-system is already in use) and reformat to match it. (That is: if you currently have NTFS, reformat to that again; if you currently have FAT32, reformat to that again. In other words, reformat without changing to a different file-system, keeping the current type.)

    or...

    B. Do absolutely nothing. Don't format. Don't delete any of my files. Abort the formatting process entirely.

    Read the article

  • How can you make an emacs macro wait for cscope query results?

    - by Sudhanshu
    I am trying to write a macro which calls cscope-find-functions-calling-this-function on each and every tag in a file displayed in the *Tags List* buffer (created by the list-tags command). This should create a buffer containing a list of all functions calling a set of functions defined in a certain file. This is the sequence of keystrokes:

        1. <f11>      ;; cscope-find-functions-calling-this-function
        2. RET        ;; newline [shows results of cscope in a split window]
        3. C-x C-p    ;; mark-page
        4. C-x C-x    ;; icicle-exchange-point-and-mark
        5. <up>       ;; previous-line
        6. <end>      ;; end-of-line [region to copy has been marked]
        7. <f7>       ;; append-results-to-buffer
        8. C-x ESC O  ;; [move back to split window on the right]
        9. C-x b      ;; icicle-buffer [switch back to *Tags List* buffer]
        10. *Tags     ;; self-insert-command * 5
        11. SPC       ;; self-insert-command
        12. List*     ;; self-insert-command * 5
        13. RET       ;; newline
        14. <down>    ;; next-line [position point on next tag in the list]

    Problem: I get no results in the buffer, and I found out that's because steps 3-7 execute even before cscope prints the results of the query made in steps 1-2. I can insert a pause in the macro using C-x q, but I'd rather have the macro wait after step 2 until cscope has returned with the results, and only then continue. I suspect this is not possible through a macro; maybe a Lisp function... I'm not a Lisp expert myself. Can someone please help? Thanks!

    Details: I have Icicles installed, so by default I get the word at point in the current buffer as input in the minibuffer. F11 is bound to cscope-find-functions-calling-this-function. windmove is installed, and C-x ESC o (step 8 above) takes you to the right window. F7 is bound to append-results-to-buffer, which is defined as:

        (defun append-results-to-buffer ()
          (interactive)
          (append-to-buffer (get-buffer-create "c1") (point) (mark)))

    This function just appends the currently marked region to a buffer named "c1".
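
    One hedged, untested sketch of the waiting part: replace the raw macro with a function, and between step 2 and step 3 poll until the cscope output buffer has stopped growing. The buffer name "*cscope*" is an assumption; check what your cscope package actually names its result buffer:

        (defun my-wait-for-cscope (&optional timeout)
          "Wait until the cscope result buffer stops growing, or TIMEOUT seconds."
          (let ((last -1) (elapsed 0) (buf "*cscope*"))   ; buffer name assumed
            (while (and (< elapsed (or timeout 10))
                        (or (null (get-buffer buf))
                            (/= last (buffer-size (get-buffer buf)))))
              (when (get-buffer buf)
                (setq last (buffer-size (get-buffer buf))))
              (sit-for 0.5)     ; lets Emacs accept pending process output
              (setq elapsed (+ elapsed 0.5)))))

    Wrapping steps 1-14 in a defun that calls my-wait-for-cscope right after firing the query should give the same effect as the macro, minus the race.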

    Read the article

  • Question about compilers and how they work

    - by Marin Doric
    This is C code that frees the memory of a singly linked list. It is compiled with Visual C++ 2008, and the code works as it should:

        /* Program done, so free allocated memory */
        current = head;
        struct film *temp;
        temp = current;
        while (current != NULL)
        {
            temp = current->next;
            free(current);
            current = temp;
        }

    But I have also encountered (even in books) the same code written like this:

        /* Program done, so free allocated memory */
        current = head;
        while (current != NULL)
        {
            free(current);
            current = current->next;
        }

    If I compile that code with my VC++ 2008, the program crashes, because I am first freeing current and then assigning current->next to current. But obviously if I compile this code with some other compiler (for example, the compiler the book's author used), the program will work. So the question is: why does this code work when compiled with a specific compiler? Is it because that compiler puts instructions in the binary that remember the address of current->next even though I freed current, while my VC++ doesn't? I just want to understand how compilers work.

    Read the article

  • Help writing a getstring function

    - by volting
    I'm having some trouble writing a getstring function; this is what I have so far. Regards, V

        const char* getstring()
        {
            char *buffer;
            int i = 255;
            buffer = (char *)malloc(i*sizeof(char));
            *buffer = getchar();
            while ( *buffer != '\n' )
            {
                buffer++;
                *buffer = getchar();
            }
            *buffer = '\0';
            const char* _temp = buffer;
            return _temp;
        }

        int main()
        {
            char* temp = getstring();
            for ( ;temp++ ; *temp != '\0')
            {
                printf("%c", *temp);
            }
            return 0;
        }
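
    A sketch of one way to repair it: the code above returns a pointer to the *end* of the string (buffer itself was advanced), never checks the 255-byte capacity or EOF, and main's for loop has its condition and increment swapped. Keeping a separate index fixes all three:

        #include <stdio.h>
        #include <stdlib.h>

        char *getstring(void)
        {
            size_t cap = 256, i = 0;
            char *buffer = malloc(cap);
            int c;                       /* int, so EOF is representable */

            if (buffer == NULL)
                return NULL;
            while ((c = getchar()) != '\n' && c != EOF && i < cap - 1)
                buffer[i++] = (char)c;
            buffer[i] = '\0';
            return buffer;               /* still points at the first char */
        }

        int main(void)
        {
            char *line = getstring();
            if (line != NULL) {
                printf("%s\n", line);
                free(line);              /* caller owns the allocation */
            }
            return 0;
        }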

    Read the article

  • How to convert a list into a string?

    - by PARIJAT
    I have extracted some data from a file and want to write it to file 2, but the program says 'sequence item 1: expected string, list found'. I want to know how I can convert buffer, i.e. a list, into a string so that it can be saved in file 2.

        file = open('/ddfs/user/data/k/ktrip_01/hmm.txt','r')
        file2 = open('/ddfs/user/data/k/ktrip_01/hmm_write.txt','w')
        buffer = []
        rec = file.readlines()
        for line in rec :
            field = line.split()
            print '>',field[0]
            term = field[0]
            buffer.append(term)
            print field[1], field[2], field[6], field[12]
            term1 = field [1]
            buffer.append(term1)
            term2 = field[2]
            buffer.append[term2]
            term3 = field[6]
            buffer.append[term3]
            term4 = field[12]
            buffer.append[term4]
        file2.write(buffer)
        file.close()
        file2.close()
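
    A minimal corrected sketch: append is a method call, so it needs parentheses rather than square brackets, and file.write() wants a single string, which str.join() builds from the list. The space and newline separators are assumptions; use whatever layout file 2 needs:

        # The 'with' blocks close both files automatically.
        with open('/ddfs/user/data/k/ktrip_01/hmm.txt') as src, \
             open('/ddfs/user/data/k/ktrip_01/hmm_write.txt', 'w') as dst:
            for line in src:
                field = line.split()
                # Collect the wanted columns, then join them into one string.
                buffer = [field[0], field[1], field[2], field[6], field[12]]
                dst.write(' '.join(buffer) + '\n')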

    Read the article

  • Problem when compressing a file using VB.NET

    - by Amr Elnashar
    I have a file like "Sample.bak", and when I compress it to "Sample.zip" I lose the file extension inside the zip file. I mean, when I open the compressed file I find "Sample" without any extension. I use this code:

        Dim name As String = Path.GetFileName(filePath).Replace(".Bak", "")
        Dim source() As Byte = System.IO.File.ReadAllBytes(filePath)
        Dim compressed() As Byte = ConvertToByteArray(source)
        System.IO.File.WriteAllBytes(destination & name & ".Bak" & ".zip", compressed)

    Or this code:

        Public Sub cmdCompressFile(ByVal FileName As String)
            'Stream object that reads file contents
            Dim streamObj As Stream = New StreamReader(FileName).BaseStream
            'Allocate space in buffer according to the length of the file read
            Dim buffer(streamObj.Length) As Byte
            'Fill buffer
            streamObj.Read(buffer, 0, buffer.Length)
            streamObj.Close()
            'File Stream object used to change the extension of a file
            Dim compFile As System.IO.FileStream = File.Create(Path.ChangeExtension(FileName, "zip"))
            'GZip object that compress the file
            Dim zipStreamObj As New GZipStream(compFile, CompressionMode.Compress)
            'Write to the Stream object from the buffer
            zipStreamObj.Write(buffer, 0, buffer.Length)
            zipStreamObj.Close()
        End Sub

    Please, I need to compress the file without losing the file extension inside the compressed file. Thanks.
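
    A hedged sketch of why this happens and one fix: GZipStream only compresses a stream of bytes and never records a file name, so naming the output ".zip" doesn't make it a real ZIP archive with named entries. Assuming .NET 4.5 or later is available, ZipArchive stores the entry name explicitly (on older frameworks a library such as SharpZipLib does the same job):

        Imports System.IO
        Imports System.IO.Compression

        Public Sub CompressFile(ByVal filePath As String, ByVal destination As String)
            Dim zipPath As String = Path.Combine(destination,
                Path.GetFileNameWithoutExtension(filePath) & ".zip")
            Using archive As ZipArchive = ZipFile.Open(zipPath, ZipArchiveMode.Create)
                ' The second argument is the name stored inside the archive,
                ' so "Sample.bak" keeps its extension when extracted.
                archive.CreateEntryFromFile(filePath, Path.GetFileName(filePath))
            End Using
        End Sub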

    Read the article

  • Reading and writing in parallel

    - by Malfist
    I want to be able to read and write a large file in parallel, or if not in parallel, at least in blocks so that I don't use up so much memory. This is my current code:

        // Define memory stream which will be used to hold encrypted data.
        MemoryStream memoryStream = new MemoryStream();
        // Define cryptographic stream (always use Write mode for encryption).
        CryptoStream cryptoStream = new CryptoStream(memoryStream, encryptor, CryptoStreamMode.Write);

        // Start encrypting.
        using (BinaryReader reader = new BinaryReader(File.Open(fileIn, FileMode.Open)))
        {
            byte[] buffer = new byte[1024 * 1024];
            int read = 0;
            do
            {
                read = reader.Read(buffer, 0, buffer.Length);
                cryptoStream.Write(buffer, 0, read);
            } while (read == buffer.Length);
        }

        // Finish encrypting.
        cryptoStream.FlushFinalBlock();

        // Convert our encrypted data from a memory stream into a byte array.
        //byte[] cipherTextBytes = memoryStream.ToArray();

        // Write our memory stream to a file.
        memoryStream.Position = 0;
        using (BinaryWriter writer = new BinaryWriter(File.Open(fileOut, FileMode.Create)))
        {
            byte[] buffer = new byte[1024 * 1024];
            int read = 0;
            do
            {
                read = memoryStream.Read(buffer, 0, buffer.Length);
                writer.Write(buffer, 0, read);
            } while (read == buffer.Length);
        }

        // Close both streams.
        memoryStream.Close();
        cryptoStream.Close();

    As you can see, it reads the entire file into memory, encrypts it, then writes it out. If I happen to be encrypting files that are very large (2GB+) it tends not to work, or at the very least consumes ~97% of my memory. How could I do it in a more effective manner?
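
    A sketch of the blockwise version: point the CryptoStream at the output FileStream instead of a MemoryStream, and only the 1 MB working buffer ever lives in memory, whatever the file size. The method name and signature are illustrative:

        using System.IO;
        using System.Security.Cryptography;

        static void EncryptFile(string fileIn, string fileOut, ICryptoTransform encryptor)
        {
            using (FileStream input = File.OpenRead(fileIn))
            using (FileStream output = File.Create(fileOut))
            using (CryptoStream crypto = new CryptoStream(output, encryptor, CryptoStreamMode.Write))
            {
                byte[] buffer = new byte[1024 * 1024];
                int read;
                // Stream straight from disk, through the encryptor, to disk.
                while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                    crypto.Write(buffer, 0, read);
                crypto.FlushFinalBlock();   // emits the final padded block
            }
        }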

    Read the article

  • Problem with C function of type char pointer, can someone explain?

    - by JJ
    Find the errors in the following C function:
    char* f(int i)
    {
        int i;
        char buffer[20];
        switch ( i ) {
            1: strcpy( buffer, "string1");
            2: strcpy( buffer, "string2");
            3: strcpy( buffer, "string3");
            default: strcpy(buffer, "defaultstring");
        }
        return buffer;
    }
    This is a C function, not C++. I think it has to do with type conversion; my compiler gives a warning that the declaration of int i shadows a parameter.
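
    For reference, there are at least four errors: int i redeclares (shadows) the parameter; the switch labels are missing the case keyword; every branch falls through for lack of break; and the function returns a pointer to a local array whose storage dies when the function returns. A corrected sketch (the static buffer keeps the returned pointer valid, at the cost of making the function non-reentrant):
    #include <string.h>

    char* f(int i)
    {
        static char buffer[20];   /* static storage survives the return */
        switch (i) {
            case 1:  strcpy(buffer, "string1"); break;
            case 2:  strcpy(buffer, "string2"); break;
            case 3:  strcpy(buffer, "string3"); break;
            default: strcpy(buffer, "defaultstring"); break;
        }
        return buffer;
    }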

    Read the article

  • How to map keys in vim differently for different kinds of buffers

    - by Yogesh Arora
    The problem I am facing is that I have mapped some keys and mouse events for searching in vim while editing a file, but those mappings break the functionality of the quickfix buffer. I was wondering if it is possible to map keys depending on the buffer in which they are used. EDIT - I am adding more info for this question. Let us consider a scenario: I want to map <C-F4> to close a buffer/window. Now this behavior could depend on a number of things. If I am editing a buffer, it should just close that buffer without changing the layout of the windows; I am using the bufkill plugin for this. It does not depend on the extension of the file but on the type of buffer. I saw in the vim documentation that there are unlisted and listed buffers. So if it is a listed buffer, it should close using bufkill commands; if it is not a listed buffer, it should use the <c-w>c command to close the buffer, changing the window layout. I am new at writing vim functions/scripts; can someone help me get started on this?
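
    One way to get per-buffer behavior (a sketch, assuming the bufkill plugin provides the :BD command): mappings created with <buffer> are local to one buffer and take precedence over global ones, and the quickfix window can be targeted through its filetype, qf:
    " Global default: close the buffer but keep the window layout
    " (:BD comes from the bufkill plugin).
    nnoremap <C-F4> :BD<CR>

    " Quickfix override: a <buffer>-local mapping shadows the global one,
    " closing the window itself with CTRL-W c instead.
    autocmd FileType qf nnoremap <buffer> <C-F4> <C-w>c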

    Read the article

  • Can a list be converted into a string?

    - by PARIJAT
    Actually I have extracted some data from a file and want to write it to file 2, but the program says 'sequence item 1: expected string, list found'. I want to know how I can convert buffer, which is a list, into a string so that it can be saved in file 2. I am new to Python, please help.
    file = open('/ddfs/user/data/k/ktrip_01/hmm.txt','r')
    file2 = open('/ddfs/user/data/k/ktrip_01/hmm_write.txt','w')
    buffer = []
    rec = file.readlines()
    for line in rec:
        field = line.split()
        print '>', field[0]
        term = field[0]
        buffer.append(term)
        print field[1], field[2], field[6], field[12]
        term1 = field[1]
        buffer.append(term1)
        term2 = field[2]
        buffer.append(term2)
        term3 = field[6]
        buffer.append(term3)
        term4 = field[12]
        buffer.append(term4)
    file2.write(buffer)
    file.close()
    file2.close()
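
    A minimal sketch of the fix: file.write() takes a string, not a list, so join the collected fields into one string before writing (tab-separated with one line per record here; the field layout and paths are the ones assumed from the question):
    file = open('/ddfs/user/data/k/ktrip_01/hmm.txt', 'r')
    file2 = open('/ddfs/user/data/k/ktrip_01/hmm_write.txt', 'w')
    for line in file:
        field = line.split()
        # Collect the wanted columns for this record.
        buffer = [field[0], field[1], field[2], field[6], field[12]]
        # join() turns the list of strings into a single string.
        file2.write('\t'.join(buffer) + '\n')
    file.close()
    file2.close()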

    Read the article

  • Standard term for a thread I/O reorder buffer?

    - by Crashworks
    I have a case where many threads all concurrently generate data that is ultimately written to one long, serial file. I need to somehow serialize these writes so that the file gets written in the right order. I.e., I have an input queue of 2048 jobs j_0..j_n, each of which produces a chunk of data o_i. The jobs run in parallel on, say, eight threads, but the output blocks have to appear in the file in the same order as the corresponding input blocks — the output file has to be in the order o_0 o_1 o_2 ... The solution to this is pretty self-evident: I need some kind of buffer that accumulates and writes the output blocks in the correct order, similar to a CPU reorder buffer in Tomasulo's algorithm, or to the way that TCP reassembles out-of-order packets before passing them to the application layer. Before I go code it, I'd like to do a quick literature search to see if there are any papers that have solved this problem in a particularly clever or efficient way, since I have severe realtime and memory constraints. I can't seem to find any papers describing this, though; a Scholar search on every permutation of [threads, concurrent, reorder buffer, reassembly, io, serialize] hasn't yielded anything useful. I feel like I must just not be searching for the right terms. Is there a common academic name or keyword for this kind of pattern that I can search on?
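
    For searching: the same pattern turns up under the name "resequencer" (as in the Enterprise Integration Patterns catalogue). A minimal C++11 sketch of the idea, where workers deposit blocks under their sequence number and a single writer drains them strictly in order; a bounded variant would additionally make Put() wait while the map holds more than some capacity, which caps memory:
    #include <condition_variable>
    #include <cstddef>
    #include <map>
    #include <mutex>
    #include <string>

    class Resequencer {
    public:
        // Worker threads deposit finished blocks in any order.
        void Put(std::size_t seq, std::string block) {
            std::lock_guard<std::mutex> lock(m_);
            pending_[seq] = std::move(block);
            cv_.notify_one();
        }
        // The single writer thread calls this; it blocks until the
        // next in-order block has arrived, then hands it over.
        std::string GetNext() {
            std::unique_lock<std::mutex> lock(m_);
            cv_.wait(lock, [this] { return pending_.count(next_) != 0; });
            std::string block = std::move(pending_[next_]);
            pending_.erase(next_++);
            return block;
        }
    private:
        std::mutex m_;
        std::condition_variable cv_;
        std::map<std::size_t, std::string> pending_;  // out-of-order blocks
        std::size_t next_ = 0;                        // next sequence to emit
    };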

    Read the article

  • Why don't I need to bind my vertex buffer object before calling glDrawArrays?

    - by valmo
    I'm a bit confused why this still renders. I thought you need to bind a vertex buffer object so that glDrawArrays knows which vertex buffer to use. Here is my initialisation code:
    // Create and bind vertex array to store vertex attribute states.
    glGenVertexArraysOES(NUM_VERTEX_ARRAYS, &m_vertexArray);
    glBindVertexArrayOES(m_vertexArray);
    // Create and bind vertex buffer to store vertex data.
    glGenBuffers(NUM_VERTEX_BUFFERS, &m_vertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, m_vertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * 36, &m_vertices[0], GL_STATIC_DRAW);
    glEnableVertexAttribArray(VertexAttribPosition);
    glVertexAttribPointer(VertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(0));
    glEnableVertexAttribArray(VertexAttribNormal);
    glVertexAttribPointer(VertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 24, BUFFER_OFFSET(12));
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindVertexArrayOES(0);
    Here is my render code. I'm confused why glDrawArrays still works when I bind 0 to GL_ARRAY_BUFFER:
    glBindVertexArrayOES(m_vertexArray);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glDrawArrays(GL_TRIANGLES, 0, 36);
    glBindVertexArrayOES(0);
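
    The likely explanation: the GL_ARRAY_BUFFER binding is not itself part of VAO state. What the VAO records is whichever buffer was bound at the moment each glVertexAttribPointer call was made, so the attribute-to-buffer association is captured at init time (annotated from the question's own code):
    // It is this call, at init time, that captures the buffer:
    glBindBuffer(GL_ARRAY_BUFFER, m_vertexBuffer);          // current ARRAY_BUFFER...
    glVertexAttribPointer(VertexAttribPosition, 3, GL_FLOAT,
                          GL_FALSE, 24, BUFFER_OFFSET(0));  // ...is baked into the VAO here
    glBindBuffer(GL_ARRAY_BUFFER, 0);                       // harmless: the VAO keeps it
    So binding the VAO alone restores everything glDrawArrays needs, and binding 0 to GL_ARRAY_BUFFER afterwards changes nothing. (GL_ELEMENT_ARRAY_BUFFER, by contrast, is part of VAO state and would matter for glDrawElements.)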

    Read the article

  • Replace Javascript click event with timed event?

    - by Rik
    Hi, I've found some JavaScript code that layers photos on top of each other when you click on them. Rather than having to click, I'd like the function to run automatically every 5 seconds. How can I change this click event to a timed one?
    $('a#nextImage, #image img').click(function(event){
    Full code below. Thanks.
    <script type="text/javascript">
    $(document).ready(function(){
        $('#description').css({'display':'block'});
        $('#image img').hover(function() {
            $(this).addClass('hover');
        }, function() {
            $(this).removeClass('hover');
        });
        $('a#nextImage, #image img').click(function(event){
            event.preventDefault();
            $('#description p:first-child').css({'visibility':'hidden'});
            if($('#image img.current').next().length){
                $('#image img.current').removeClass('current').next().fadeIn('normal').addClass('current').css({'position':'absolute'});
            }else{
                $('#image img').removeClass('current').css({'display':'none'});
                $('#image img:first-child').fadeIn('normal').addClass('current').css({'position':'absolute'});
            }
            if($('#image img.current').width()>=($('#page').width()-100)){
                xPos=170;
            }else{
                do{
                    xPos = 120 + (Math.floor(Math.random()*($('#page').width()-100)));
                }while(xPos+$('#image img.current').width()>$('#page').width());
            }
            if($('#image img.current').height()>=300){
                yPos=0;
            }else{
                do{
                    yPos = Math.floor(Math.random()*300);
                }while(yPos+$('#image img.current').height()>300);
            }
            $('#image img.current').css({'left':xPos,'top':yPos});
        });
    });
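
    A sketch of the change: pull the handler body into a named function, drop event.preventDefault() (a timer has no event to cancel), and drive it with setInterval every 5000 ms. The "..." stands for the rest of the existing handler body from the question, unchanged:
    $(document).ready(function(){
        function nextImage(){
            $('#description p:first-child').css({'visibility':'hidden'});
            // ... the rest of the existing click-handler body, unchanged ...
        }
        // Run automatically every 5 seconds.
        setInterval(nextImage, 5000);
        // Optional: keep the click behavior as well.
        $('a#nextImage, #image img').click(function(event){
            event.preventDefault();
            nextImage();
        });
    });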

    Read the article

  • How do I reconfigure my GLES frame buffer after a rotation?

    - by Panda Pajama
    I am implementing interface rotation for my GLES based game for iOS, written in Xamarin.iOS with OpenTK. I am detecting the rotation by overriding WillRotate in my UIViewController, and I correctly re-setup all of my projection matrices. However, when drawing a sprite, the image looks a bit blurrier in the landscape version compared to the portrait version, as you can see in the closeups magnified 10x (portrait before rotating vs. landscape after rotating). In both cases, I'm using the same texture with the same sampler, the same shader, and the same GL state. I just changed the order of the parameters in the projection matrix, so the resulting sizes should be exactly the same pixelwise. Since this could be thought of as a window resize, I suppose that the framebuffer has to be recreated at the new size. When working on desktop apps on Direct3D11 (SharpDX), I would have to call swapChain.ResizeBuffers() to do this. I have tried setting AutoResize = true in my iPhoneOSGameView, but then the framebuffer gets clipped as I rotate the interface, and then everything disappears when rotating the interface again. I'm not doing anything strange; my framebuffer initialization is pretty vanilla:
    int scaling = (int)UIScreen.MainScreen.Scale;
    DeviceWidth = (int)UIScreen.MainScreen.Bounds.Width * scaling;
    DeviceHeight = (int)UIScreen.MainScreen.Bounds.Height * scaling;
    Size = new System.Drawing.Size((int)(DeviceWidth), (int)(DeviceHeight));
    Bounds = new System.Drawing.RectangleF(0, 0, DeviceWidth, DeviceHeight);
    Frame = new System.Drawing.RectangleF(0, 0, DeviceWidth, DeviceHeight);
    ContextRenderingApi = EAGLRenderingAPI.OpenGLES2;
    AutoResize = true;
    LayerRetainsBacking = true;
    LayerColorFormat = EAGLColorFormat.RGBA8;
    I get inconsistent results when changing Size, Bounds and Frame in my CreateFrameBuffer override, but since the documentation is so incomplete (it has nothing on Bounds and Frame), I have resorted to randomly changing stuff here and there without really knowing what is going on. There is a similar question which has no answers; however, I don't know if they're experiencing the same problem as I am. Is my supposition correct that the framebuffer needs to be recreated? If so, does anybody know how to do it correctly in OpenTK for Xamarin.iOS?

    Read the article

  • PHP, when to use iterators, how to buffer results?

    - by Jon L.
    When is it best to use Iterators in PHP, and how can they be implemented to best avoid loading all objects into memory simultaneously? Do any constructs exist in PHP so that we can queue up results of an operation for use with an Iterator, while again avoiding loading all objects into memory simultaneously? An example would be a curl HTTP request against a REST server In the case of an HTTP request that returns all results at once (a la curl), would we be better off to go with streaming results, and if so, are there any limitations or pitfalls to be aware of? If using streaming, is it better to replace curl with a PHP native stream/socket? My intention is to implement Iterators for a REST client, and separately a document ORM that I'm maintaining, but only if I can do so while gaining benefits from reduced memory usage, increased performance, etc. Thanks in advance for any responses :-)
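
    On PHP 5.5+, generators are the natural construct for this: a generator function implements Iterator and yields one item at a time, so nothing ever forces the whole result set into memory (on older PHP you would implement the Iterator interface by hand). A sketch against a hypothetical paginated REST endpoint, where fetchPage() stands in for whatever curl or stream-wrapper call fetches and decodes one page:
    <?php
    // Hypothetical helper: fetch and decode one page of results.
    function fetchPage($url, $page) {
        $json = file_get_contents($url . '?page=' . $page); // or a curl call
        return json_decode($json, true);
    }

    // Generator: yields one result at a time, never the whole set.
    function results($url) {
        $page = 1;
        do {
            $items = fetchPage($url, $page++);
            foreach ($items as $item) {
                yield $item;
            }
        } while (!empty($items));
    }

    foreach (results('https://api.example.com/docs') as $doc) {
        // process one document; memory stays flat regardless of result count
    }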

    Read the article

  • What is the primary use of Vertex Buffer Objects?

    - by sensae
    From what I've read, it seems VBOs are purely for performance. I'm working on a very rudimentary learning project in lwjgl and I'm just trying to figure out what more advanced features of the library I should be delving into, and what their use is. My understanding is that VBOs allow a person to keep vertices in VRAM while they aren't currently being drawn in a scene. In my case, I'm just drawing quads and performance probably isn't a concern at all, but I'm trying to piece together what's happening under the hood. If I'm drawing quads directly, I'm drawing from CPU memory, correct? Also, if I'm not doing any checks for visibility, does that mean I'm rendering absolutely everything in the "scene", regardless of whether it's in view? Are VBOs a way to store objects and only render what's needed?
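
    Roughly right on the first two counts: immediate-mode calls resend vertex data from CPU memory every frame, while a VBO uploads it once into driver/GPU-managed memory and draws from there. VBOs do no visibility culling by themselves, though; without your own checks, everything submitted is processed. A minimal LWJGL-style sketch (GL15 fixed-function path; class and vertex layout are illustrative):
    import static org.lwjgl.opengl.GL11.*;
    import static org.lwjgl.opengl.GL15.*;
    import java.nio.FloatBuffer;
    import org.lwjgl.BufferUtils;

    public class QuadVbo {
        private int vbo;

        // Init: upload the quad's vertices once into GPU-managed memory.
        public void init() {
            FloatBuffer quad = BufferUtils.createFloatBuffer(8);
            quad.put(new float[] {0,0, 1,0, 1,1, 0,1}).flip();
            vbo = glGenBuffers();
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, quad, GL_STATIC_DRAW);
        }

        // Per frame: draw from the VBO; no per-vertex CPU-to-GPU traffic.
        public void draw() {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(2, GL_FLOAT, 0, 0L); // offset into the bound VBO
            glDrawArrays(GL_QUADS, 0, 4);
            glDisableClientState(GL_VERTEX_ARRAY);
        }
    }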

    Read the article

  • Can an employer turn you down if you say your current work culture is bad? [closed]

    - by MansonRix
    I recently had an interview where I scored well in the first two rounds of technical interviews. The third round was the managerial round, where the interviewer asked about my experience and whether I have captured any requirements and handled or trained any teams. This went pretty well for around 50 minutes. Then came the awkward question.
    Interviewer: Why are you looking for a change?
    Me: Because I want to explore my career options.
    Interviewer: But your current company is big enough, and you can explore options over there? (This was supposedly the trap.)
    Me: Apart from that, I miss the flexibility of working with US- and Europe-based companies, as my current company is not that flexible.
    Interviewer: What exactly do you not find flexible?
    Me: The login time. Even if you are late by one second, you might have to explain. Though this is not a big problem, I would still prefer flexibility, as we are working really hard.
    Interviewer: All right. (Then a couple more questions.) Hope to see you.
    That's pretty much it. Now I called HR and they say they are yet to get feedback from the interviewer. Did I screw it up? I mean, does one really have to pretend by always saying positive things about the company and manager, and never saying anything negative?

    Read the article
