Search Results

Search found 3612 results on 145 pages for '1 21 gigawatts'.

Page 10 of 145

  • Big Data – Data Mining with Hive – What is Hive? – What is HiveQL (HQL)? – Day 15 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of the operational database in the Big Data story. In this article we will understand what Hive and HQL are in the Big Data story. Yahoo started working on Pig (we will cover it in the next blog post) for deploying applications on Hadoop; Yahoo's goal was to manage its unstructured data. Similarly, Facebook started deploying its warehouse solutions on Hadoop, which resulted in Hive. The reason for going with Hive is that traditional warehousing solutions were getting very expensive.

    What is Hive? Hive is a data warehousing infrastructure for Hadoop. Its primary responsibility is to provide data summarization, query and analysis. It supports analysis of large datasets stored in Hadoop's HDFS as well as on the Amazon S3 filesystem. The best part of Hive is that it supports SQL-like access to structured data, known as HiveQL (or HQL), as well as big data analysis with the help of MapReduce. Hive is not built to give quick responses to queries; it is built for data mining applications, which can take from several minutes to several hours to analyze the data, and that is where Hive is primarily used.

    Hive Organization: The data is organized in three different formats in Hive. Tables: they are very similar to RDBMS tables and contain rows and columns. Hive is just layered over the Hadoop File System (HDFS), hence tables are directly mapped to directories of the filesystem; it also supports tables stored in other native file systems. Partitions: a Hive table can have more than one partition; partitions are mapped to subdirectories of the table's directory. Buckets: Hive data may be further divided into buckets, which are stored as files within the partition directories in the underlying file system. Hive also has a metastore which stores all the metadata; it is a relational database containing various information related to the Hive schema (column types, owners, key-value data, statistics etc.). A MySQL database can be used here.

    What is HiveQL (HQL)? The Hive query language provides basic SQL-like operations. Here are a few of the tasks which HQL can do easily: create and manage tables and partitions; support various relational, arithmetic and logical operators; evaluate functions; download the contents of a table to a local directory, or write the results of queries to an HDFS directory. Here are examples of HQL queries (a sketch of the table layout behind such queries follows below):

        SELECT upper(name), salesprice FROM sales;
        SELECT category, count(1) FROM products GROUP BY category;

    As you can see, they are very similar to SQL queries.

    Tomorrow: In tomorrow's blog post we will discuss a very important component of the Big Data ecosystem - Pig.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
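    To make tables, partitions and buckets concrete, here is a minimal HiveQL sketch; the table layout and the sale_date column are illustrative assumptions, not part of the original post:

        -- Each sale_date value becomes a subdirectory; rows are hashed into 32 bucket files
        CREATE TABLE sales (name STRING, salesprice DOUBLE)
        PARTITIONED BY (sale_date STRING)
        CLUSTERED BY (name) INTO 32 BUCKETS;

        -- A query restricted to one partition only scans that subdirectory
        SELECT upper(name), salesprice FROM sales WHERE sale_date = '2014-01-01';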

    Read the article

  • /phpTest/zologize/axa.php? Another botnet?

    - by M132
    Staring at the log made me wonder: what is /phpTest/zologize/axa.php, and why are bots looking for it? Previously I had lots of /HNAP1/ requests. Requesting /HNAP1/ from the IPs in the log revealed that all of them were sent by Linksys routers; three months later, these requests turned out to be generated by a router worm called TheMoon. But requesting /phpTest/zologize/axa.php from these servers returns a 404 error. How did these servers get infected, and how can I protect mine?

        124.11.224.69 - - [02/Feb/2014:00:37:16 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-"
        140.113.238.121 - - [21/Feb/2014:01:24:32 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-"
        77.121.132.79 - - [22/Feb/2014:00:03:56 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-"
        142.4.201.210 - - [24/Feb/2014:21:54:33 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-"
        212.83.168.39 - - [24/Feb/2014:23:16:00 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-"
        87.117.229.210 - - [26/Feb/2014:06:34:58 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
        78.100.82.99 - - [26/Feb/2014:08:25:48 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
        198.50.205.219 - - [26/Feb/2014:09:59:11 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
        210.60.142.107 - - [27/Feb/2014:00:12:12 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
        101.109.4.73 - - [27/Feb/2014:08:50:46 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
        61.91.128.158 - - [27/Feb/2014:08:59:15 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
        201.188.41.175 - - [27/Feb/2014:11:25:42 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
        220.133.137.2 - - [27/Feb/2014:12:12:46 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
        203.156.104.88 - - [28/Feb/2014:18:11:49 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
        61.19.52.58 - - [28/Feb/2014:22:02:56 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
        84.2.92.40 - - [28/Feb/2014:23:04:17 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-"
        58.64.205.11 - - [01/Mar/2014:06:08:33 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-"
        113.61.200.151 - - [01/Mar/2014:18:25:25 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-"
        178.33.219.12 - - [03/Mar/2014:14:41:48 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-"
        74.63.220.132 - - [04/Mar/2014:01:16:44 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-"
        187.141.230.106 - - [04/Mar/2014:15:39:26 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-"
        103.22.181.146 - - [09/May/2014:17:16:56 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 502 166 "-" "-"
        176.31.200.14 - - [10/May/2014:19:52:24 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 68 "-" "-"
        124.120.92.70 - - [12/May/2014:16:19:40 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 68 "-" "-"
        219.85.198.142 - - [15/May/2014:19:21:22 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
        80.84.53.226 - - [23/May/2014:08:58:25 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
        87.213.11.165 - - [25/May/2014:06:20:27 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
        122.116.220.106 - - [25/May/2014:07:10:21 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
        58.8.128.30 - - [29/May/2014:02:43:49 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
        142.4.197.135 - - [29/May/2014:11:36:45 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
        178.32.243.65 - - [30/May/2014:01:59:53 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
        58.8.164.221 - - [30/May/2014:14:04:16 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
        140.127.182.15 - - [01/Jun/2014:14:45:40 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
        218.166.43.21 - - [01/Jun/2014:16:07:52 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
        178.32.188.140 - - [01/Jun/2014:19:11:46 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
        94.23.211.173 - - [05/Jun/2014:00:52:52 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
        120.117.105.201 - - [05/Jun/2014:04:39:39 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
        187.172.27.146 - - [05/Jun/2014:10:20:22 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
        203.195.219.91 - - [05/Jun/2014:10:53:42 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"
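    You cannot stop bots from probing, but you can make sure the probe never reaches anything executable. If the site happens to run on Apache 2.4, a minimal sketch of an explicit deny for this path (the Location block is an assumption about the setup, not something taken from the log above):

        <Location "/phpTest/zologize/axa.php">
            # Always answer this probe with a 403, regardless of what exists on disk
            Require all denied
        </Location>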

    Read the article

  • Voice Recognition Connection problem

    - by user244190
    I'm trying to work through and test a voice recognition example based on the VoiceRecognition.java sample at http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/app/VoiceRecognition.html, but when I click on the button to create the activity, I get a dialog that says "Connection problem". My manifest file includes the Internet permission, and I understand the request is passed to the Google servers. Do I need to do anything else to use this? Code below.

    UPDATE 2: Thanks to Steve, I have been able to install the USB driver and debug the app directly on my Droid. Here is the LogCat output from clicking on my mic button:

        03-08 18:36:45.686: INFO/ActivityManager(1017): Starting activity: Intent { act=android.speech.action.RECOGNIZE_SPEECH cmp=com.google.android.voicesearch/.IntentApiActivity (has extras) }
        03-08 18:36:45.686: WARN/ActivityManager(1017): Activity is launching as a new task, so cancelling activity result.
        03-08 18:36:45.787: DEBUG/NetworkLocationProvider(1017): setMinTime: 120000
        03-08 18:36:45.889: INFO/ActivityManager(1017): Displayed activity com.google.android.voicesearch/.IntentApiActivity: 135 ms (total 135 ms)
        03-08 18:36:45.905: DEBUG/NetworkLocationProvider(1017): onCellLocationChanged [802,0,0,4192,3]
        03-08 18:36:45.951: INFO/MicrophoneInputStream(1429): Starting voice recognition with audio source VOICE_RECOGNITION
        03-08 18:36:45.998: DEBUG/AudioHardwareMot(990): Codec sampling rate already 16000
        03-08 18:36:46.092: INFO/RecognitionService(1429): ssfe url=http://www.google.com/m/voice-search
        03-08 18:36:46.092: WARN/RecognitionService(1429): required parameter 'calling_package' is missing in IntentAPI request
        03-08 18:36:46.115: DEBUG/AudioHardwareMot(990): Codec sampling rate already 16000
        03-08 18:36:46.131: WARN/InputManagerService(1017): Starting input on non-focused client com.android.internal.view.IInputMethodClient$Stub$Proxy@4487d240 (uid=10090 pid=3132)
        03-08 18:36:46.131: WARN/IInputConnectionWrapper(3132): showStatusIcon on inactive InputConnection
        03-08 18:36:46.248: WARN/MediaPlayer(1429): info/warning (1, 44)
        03-08 18:36:46.334: DEBUG/dalvikvm(3206): GC freed 3682 objects / 369416 bytes in 293ms
        03-08 18:36:46.358: WARN/MediaPlayer(1429): info/warning (1, 44)
        03-08 18:36:46.412: WARN/MediaPlayer(1429): info/warning (1, 44)
        03-08 18:36:46.444: WARN/MediaPlayer(1429): info/warning (1, 44)
        03-08 18:36:46.475: WARN/MediaPlayer(1429): info/warning (1, 44)
        03-08 18:36:46.506: WARN/MediaPlayer(1429): info/warning (1, 44)
        03-08 18:36:46.514: INFO/MediaPlayer(1429): Info (1,44)
        03-08 18:36:46.514: INFO/MediaPlayer(1429): Info (1,44)
        03-08 18:36:46.514: INFO/MediaPlayer(1429): Info (1,44)
        03-08 18:36:46.514: INFO/MediaPlayer(1429): Info (1,44)
        03-08 18:36:46.514: INFO/MediaPlayer(1429): Info (1,44)
        03-08 18:36:46.514: INFO/MediaPlayer(1429): Info (1,44)

    The line that concerns me is the warning about the missing parameter 'calling_package'.

    UPDATE: OK, I was able to replace my emulator image with one from HTC that appears to come with Google Voice Search. However, now when I run from the emulator, I get an "Audio Problem" message with "Speak Again" or "Cancel" buttons. It appears to make it back to onActivityResult(), but the resultCode is 0. Here is the LogCat output:

        03-07 20:21:25.396: INFO/ActivityManager(578): Starting activity: Intent { action=android.speech.action.RECOGNIZE_SPEECH comp={com.google.android.voicesearch/com.google.android.voicesearch.RecognitionActivity} (has extras) }
        03-07 20:21:25.406: WARN/ActivityManager(578): Activity is launching as a new task, so cancelling activity result.
        03-07 20:21:25.968: WARN/ActivityManager(578): Activity pause timeout for HistoryRecord{434f7850 {com.ikonicsoft.mileagegenie/com.ikonicsoft.mileagegenie.MileageGenie}}
        03-07 20:21:26.206: WARN/AudioHardwareInterface(554): getInputBufferSize bad sampling rate: 16000
        03-07 20:21:26.256: ERROR/AudioRecord(819): Recording parameters are not supported: sampleRate 16000, channelCount 1, format 1
        03-07 20:21:26.696: INFO/ActivityManager(578): Displayed activity com.google.android.voicesearch/.RecognitionActivity: 1295 ms
        03-07 20:21:29.890: DEBUG/dalvikvm(806): threadid=3: still suspended after undo (s=1 d=1)
        03-07 20:21:29.896: INFO/dalvikvm(806): Uncaught exception thrown by finalizer (will be discarded):
        03-07 20:21:29.896: INFO/dalvikvm(806): Ljava/lang/IllegalStateException;: Finalizing cursor android.database.sqlite.SQLiteCursor@435d3c50 on ml_trackdata that has not been deactivated or closed
        03-07 20:21:29.896: INFO/dalvikvm(806): at android.database.sqlite.SQLiteCursor.finalize(SQLiteCursor.java:596)
        03-07 20:21:29.896: INFO/dalvikvm(806): at dalvik.system.NativeStart.run(Native Method)
        03-07 20:21:31.468: DEBUG/dalvikvm(806): threadid=5: still suspended after undo (s=1 d=1)
        03-07 20:21:32.436: WARN/IInputConnectionWrapper(806): showStatusIcon on inactive InputConnection

    I'm still not sure why I'm getting the connection problem on the Droid. I can use Voice Search OK. I also tried clearing the cache and data as described in some posts, but it is still not working.

        /**
         * Fire an intent to start the speech recognition activity.
         */
        private void startVoiceRecognitionActivity() {
            Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
            intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                    RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
            intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speech recognition demo");
            startActivityForResult(intent, VOICE_RECOGNITION_REQUEST_CODE);
        }

        /**
         * Handle the results from the recognition activity.
         */
        @Override
        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            if (requestCode == VOICE_RECOGNITION_REQUEST_CODE && resultCode == RESULT_OK) {
                // Fill the list view with the strings the recognizer thought it could have heard
                ArrayList<String> matches = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
                mList.setAdapter(new ArrayAdapter<String>(this,
                        android.R.layout.simple_list_item_1, matches));
            }
            super.onActivityResult(requestCode, resultCode, data);
        }
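    The LogCat warning that 'calling_package' is missing suggests one experiment: supply the calling package yourself through RecognizerIntent.EXTRA_CALLING_PACKAGE. This is a hedged sketch of that change, not a confirmed fix for the "Connection problem" dialog:

        private void startVoiceRecognitionActivity() {
            Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
            // Identify the calling app; the recognizer logged this parameter as missing
            intent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, getPackageName());
            intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                    RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
            intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speech recognition demo");
            startActivityForResult(intent, VOICE_RECOGNITION_REQUEST_CODE);
        }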

    Read the article

  • MySQL 5.5 - Lost connection to MySQL server during query

    - by bully
    I have an Ubuntu 12.04 LTS server running at a German hoster (virtualized system).

        # uname -a
        Linux ... 3.2.0-27-generic #43-Ubuntu SMP Fri Jul 6 14:25:57 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    I want to migrate a web CMS system called Contao. It's not my first migration, but it is my first migration with MySQL connection issues. The migration itself went successfully; I have the same Contao version running (it's more or less just copy/paste). For the database behind it, I did:

        apt-get install mysql-server phpmyadmin

    I set a root password and added a user for the CMS which has enough rights on its own database (and only its database) for doing the stuff it has to do. Data import via phpmyadmin worked just fine. I can access the backend of the CMS (which already needs the database). If I try to access the frontend now, I get the following error:

        Fatal error: Uncaught exception Exception with message Query error: Lost connection to MySQL server during query (<query statement here, nothing special, just a select>) thrown in /var/www/system/libraries/Database.php on line 686

    (Keep in mind: I can access MySQL with phpmyadmin and through the backend, working like a charm; it's just the frontend call causing errors.) If I spam F5 in my browser I can sometimes even kill the mysql daemon. If I run

        # mysqld --log-warnings=2

    I get this:

        ...
        120921 7:57:31 [Note] mysqld: ready for connections.
        Version: '5.5.24-0ubuntu0.12.04.1' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu)
        05:57:37 UTC - mysqld got signal 4 ;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help diagnose
        the problem, but since we have already crashed, something is definitely wrong
        and this may fail.
        key_buffer_size=16777216
        read_buffer_size=131072
        max_used_connections=1
        max_threads=151
        thread_count=1
        connection_count=1
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 346679 K bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.
        Thread pointer: 0x7f1485db3b20
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        stack_bottom = 7f1480041e60 thread_stack 0x30000
        mysqld(my_print_stacktrace+0x29)[0x7f1483b96459]
        mysqld(handle_fatal_signal+0x483)[0x7f1483a5c1d3]
        /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f1482797cb0]
        /lib/x86_64-linux-gnu/libm.so.6(+0x42e11)[0x7f14821cae11]
        mysqld(_ZN10SQL_SELECT17test_quick_selectEP3THD6BitmapILj64EEyyb+0x1368)[0x7f1483b26cb8]
        mysqld(+0x33116a)[0x7f148397916a]
        mysqld(_ZN4JOIN8optimizeEv+0x558)[0x7f148397d3e8]
        mysqld(_Z12mysql_selectP3THDPPP4ItemP10TABLE_LISTjR4ListIS1_ES2_jP8st_orderSB_S2_SB_yP13select_resultP18st_select_lex_unitP13st_select_lex+0xdd)[0x7f148397fd7d]
        mysqld(_Z13handle_selectP3THDP3LEXP13select_resultm+0x17c)[0x7f1483985d2c]
        mysqld(+0x2f4524)[0x7f148393c524]
        mysqld(_Z21mysql_execute_commandP3THD+0x293e)[0x7f14839451de]
        mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x10f)[0x7f1483948bef]
        mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1365)[0x7f148394a025]
        mysqld(_Z24do_handle_one_connectionP3THD+0x1bd)[0x7f14839ec7cd]
        mysqld(handle_one_connection+0x50)[0x7f14839ec830]
        /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f148278fe9a]
        /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f1481eba4bd]
        Trying to get some variables. Some pointers may be invalid and cause the dump to abort.
        Query (7f1464004b60): is an invalid pointer
        Connection ID (thread ID): 1
        Status: NOT_KILLED

    From /var/log/syslog:

        Sep 21 07:17:01 s16477249 CRON[23855]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Sep 21 07:18:51 s16477249 kernel: [231923.349159] type=1400 audit(1348204731.333:70): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=23946 comm="apparmor_parser"
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23990]: Upgrading MySQL tables if necessary.
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: /usr/bin/mysql_upgrade: the '--basedir' option is always ignored
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: Looking for 'mysql' as: /usr/bin/mysql
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: This installation of MySQL is already upgraded to 5.5.24, use --force if you still need to run mysql_upgrade
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[24004]: Checking for insecure root accounts.
        Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[24009]: Triggering myisam-recover for all MyISAM tables

    I'm using MyISAM tables all over, nothing with InnoDB. Starting and stopping MySQL is done via:

        sudo service mysql start
        sudo service mysql stop

    After some googling, I experimented with timeouts and the correct socket path in the /etc/mysql/my.cnf file, but nothing helped. There are some old (2008) Gentoo bugs where simply recompiling solved the problem. I already re-installed MySQL via:

        sudo apt-get remove mysql-server mysql-common
        sudo apt-get autoremove
        sudo apt-get install mysql-server

    without any results. This is the first time I'm running into this problem, and I'm not very experienced with this kind of MySQL administration. So mainly, I want to know if any of you could help me out :) Is it a MySQL bug? Is something broken in the Ubuntu repositories? Is this one of those mysterious 'use-tcp-connections-instead-of-sockets-because-there-are-problems-on-virtualized-machines-with-sockets' problems? Or am I completely on the wrong track and have just misconfigured something?
    Remember, phpmyadmin and access to the backend (which uses the database, too) work just fine. Maybe something with Apache? What can I do? Any help is appreciated, so thanks in advance :)

    Read the article

  • problem in installing binutils

    - by user3667930
    When I try to install mspgcc on Ubuntu 14.04, I get an error at "make" during the installation of binutils. These are the commands I used; please help me fix this error. Thanks in advance.

        wget http://ftpmirror.gnu.org/binutils/binutils-2.21.1a.tar.bz2
        tar xvfj binutils-2.21.1a.tar.bz2
        cd binutils-2.21.1
        patch -p1 < ../mspgcc-20120406/msp430-binutils-2.21.1a-20120406.patch
        cd ..
        mkdir -p BUILD/binutils
        cd BUILD/binutils
        ../../binutils-2.21.1/configure --target=msp430 --program-prefix="msp430-" \
            --with-mpfr-include=/usr/local/include --with-mpfr-lib=/usr/local/lib \
            --with-gmp-include=/usr/local/include --with-gmp-lib=/usr/local/lib \
            --with-mpc-include=/usr/local/include --with-mpc-lib=/usr/local/lib
        make -j 4
        sudo make install
        cd ../..

    Read the article

  • APC not working as expected?

    - by Alix Axel
    I have the following function:

        function Cache($key, $value = null, $ttl = 60)
        {
            if (isset($value) === true)
            {
                apc_store($key, $value, intval($ttl));
            }

            return apc_fetch($key);
        }

    And I'm testing it using the following code:

        Cache('ktime', time(), 3); // Store

        sleep(1);
        var_dump(Cache('ktime') . '-' . time()); echo '<hr />'; // Should Fetch

        sleep(5);
        var_dump(Cache('ktime') . '-' . time()); echo '<hr />'; // Should NOT Fetch

        sleep(1);
        var_dump(Cache('ktime') . '-' . time()); echo '<hr />'; // Should NOT Fetch

        sleep(1);
        var_dump(Cache('ktime') . '-' . time()); echo '<hr />'; // Should NOT Fetch

    And this is the output:

        string(21) "1273966771-1273966772"
        string(21) "1273966771-1273966777"
        string(21) "1273966771-1273966778"
        string(21) "1273966771-1273966779"

    Shouldn't it look like this, with the expired entry fetching as false?

        string(21) "1273966771-1273966772"
        string(11) "-1273966777"
        string(11) "-1273966778"
        string(11) "-1273966779"

    I don't understand; can anyone help me figure out this strange behavior?
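    One thing worth checking: by default APC compares TTLs against the request start time (the apc.use_request_time ini setting), so an entry stored and re-read inside one long-running request may never look expired. Independent of that, here is a minimal sketch of the same helper that only stores when a value is passed and reports a miss explicitly via apc_fetch()'s by-reference success flag:

        function Cache($key, $value = null, $ttl = 60)
        {
            if ($value !== null)
            {
                // Store only when the caller actually supplied a value
                apc_store($key, $value, (int) $ttl);
            }

            $hit = false;
            $result = apc_fetch($key, $hit);

            // $hit distinguishes a cached false from "not in the cache"
            return $hit ? $result : false;
        }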

    Read the article

  • Error during Time Machine backups on OS X Lion

    - by user92401
    After I turn on my machine, the first couple of Time Machine backups seem to go OK, but after about an hour I get this error:

        Unable to complete backup. An error occurred while creating the backup folder.
        Latest successful backup: 7/31/11 at 12:32 PM

    I'm running 10.7. Time Machine is backing up an internal HD to an external USB HD. I've already run Disk Utility to repair the Time Machine partition. It's a relatively new hard drive and didn't have any issues. Here's what I've found in the Console's log filtered for backupd:

        7/31/11 12:31:21.223 PM com.apple.backupd: Starting standard backup
        7/31/11 12:31:21.447 PM com.apple.backupd: Backing up to: /Volumes/MyMac TM Backup/Backups.backupdb
        7/31/11 12:31:29.146 PM com.apple.backupd: 983.7 MB required (including padding), 391.90 GB available
        7/31/11 12:32:19.471 PM com.apple.backupd: Copied 3156 files (36.0 MB) from volume Macintosh HD.
        7/31/11 12:32:20.017 PM com.apple.backupd: Copied 3173 files (36.0 MB) from volume LI.
        7/31/11 12:32:20.136 PM com.apple.backupd: 934.8 MB required (including padding), 391.86 GB available
        7/31/11 12:32:54.755 PM com.apple.backupd: Copied 916 files (117.8 MB) from volume Macintosh HD.
        7/31/11 12:32:54.894 PM com.apple.backupd: Copied 933 files (117.8 MB) from volume LI.
        7/31/11 12:32:55.937 PM com.apple.backupd: Starting post-backup thinning
        7/31/11 12:32:55.937 PM com.apple.backupd: No post-back up thinning needed: no expired backups exist
        7/31/11 12:32:55.960 PM com.apple.backupd: Backup completed successfully.
        7/31/11 1:21:28.624 PM com.apple.backupd: Starting standard backup
        7/31/11 1:21:28.631 PM com.apple.backupd: Backing up to: /Volumes/MyMac TM Backup/Backups.backupdb
        7/31/11 1:21:28.682 PM com.apple.backupd: Error: (22) setxattr for key:com.apple.backupd.HostUUID path:/Volumes/MyMac TM Backup/Backups.backupdb/Will’s Mac Pro size:37
        7/31/11 1:21:28.683 PM com.apple.backupd: Error: (22) setxattr for key:com.apple.backupd.HostUUID path:/Volumes/MyMac TM Backup/Backups.backupdb/Will’s Mac Pro size:37
        7/31/11 1:21:38.694 PM com.apple.backupd: Backup failed with error: 2

    Read the article

  • Java Logger API

    - by Koppar
    This is more of a tip than a technical write-up, and it serves as a quick intro for newbies. The logger API helps to diagnose application-level or JDK-level issues at runtime. There are 7 levels which decide the detail of logging (SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST). It's best to start with the highest level and, as we narrow down, use more detailed logging for a specific area. SEVERE is the highest and FINEST is the lowest. This may not make sense until we understand some jargon. The Logger class provides the ability to stream messages to an output stream in a format that can be controlled by the user. What this translates to is: I can create a logger with this simple invocation and use it to add debug messages in my class:

        import java.util.logging.*;

        private static final Logger focusLog =
            Logger.getLogger("java.awt.focus.KeyboardFocusManager");

        if (focusLog.isLoggable(Level.FINEST)) {
            focusLog.log(Level.FINEST, "Calling peer setCurrentFocusOwner");
        }

    LogManager acts like a bookkeeper, and all getLogger calls are forwarded to LogManager. The LogManager itself is a singleton class object which gets statically initialized on JVM start-up; more on this later. If there is no existing logger with the given name, a new one is created. If there is one (and it has not yet been GC'ed), the existing Logger object is returned. By default, a root logger is created on JVM start-up. All anonymous loggers are made children of the root logger; named loggers have a hierarchy as per their name resolution. E.g. java.awt.focus is the parent logger of java.awt.focus.KeyboardFocusManager. Before logging any message, the logger checks the log level specified. If null is specified, the log level of the parent logger is used. However, if the log level is OFF, no log messages are written, irrespective of the parent's log level.

    All the messages posted to the Logger are handled as LogRecord objects (i.e. focusLog.log would create a new LogRecord object with the log level and message as its data members). The level of logging and the thread number are also tracked. The LogRecord is passed on to all the registered Handlers. A Handler is basically a means to output the messages; the output may be redirected to a log file, the console, or a network logging service. The Handler classes use the LogManager properties to set filters and formatters. During initialization on JVM start-up, LogManager looks for a logging.properties file in jre/lib and sets the properties if the file is provided. An alternate location for the properties file can be specified by setting the java.util.logging.config.file system property. This can be set in Java Control Panel -> Java -> Runtime parameters as -Djava.util.logging.config.file = <mylogfile>, or passed as a command line parameter: java -Djava.util.logging.config.file = C:/Sunita/myLog

    The redirection of logging depends on what is registered as a handler with the JVM in the properties file. java.util.logging.ConsoleHandler sends the output to System.err and java.util.logging.FileHandler sends the output to a file; the file name of the log file can also be specified. If you prefer XML format output, in the configuration file set java.util.logging.FileHandler.formatter = java.util.logging.XMLFormatter, and if you prefer simple text, set java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter

    Below is the default logging configuration file:

        ############################################################
        # Default Logging Configuration File
        # You can use a different file by specifying a filename
        # with the java.util.logging.config.file system property.
        # For example java -Djava.util.logging.config.file=myfile
        ############################################################

        ############################################################
        # Global properties
        ############################################################

        # "handlers" specifies a comma separated list of log Handler
        # classes. These handlers will be installed during VM startup.
        # Note that these classes must be on the system classpath.
        # By default we only configure a ConsoleHandler, which will only
        # show messages at the INFO and above levels.
        handlers= java.util.logging.ConsoleHandler

        # To also add the FileHandler, use the following line instead.
        #handlers= java.util.logging.FileHandler, java.util.logging.ConsoleHandler

        # Default global logging level.
        # This specifies which kinds of events are logged across
        # all loggers. For any given facility this global level
        # can be overriden by a facility specific level
        # Note that the ConsoleHandler also has a separate level
        # setting to limit messages printed to the console.
        .level= INFO

        ############################################################
        # Handler specific properties.
        # Describes specific configuration info for Handlers.
        ############################################################

        # default file output is in user's home directory.
        java.util.logging.FileHandler.pattern = %h/java%u.log
        java.util.logging.FileHandler.limit = 50000
        java.util.logging.FileHandler.count = 1
        java.util.logging.FileHandler.formatter = java.util.logging.XMLFormatter

        # Limit the message that are printed on the console to INFO and above.
        java.util.logging.ConsoleHandler.level = INFO
        java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter

        ############################################################
        # Facility specific properties.
        # Provides extra control for each logger.
        ############################################################

        # For example, set the com.xyz.foo logger to only log SEVERE
        # messages:
        com.xyz.foo.level = SEVERE

    Since I primarily use this method to track focus issues, here is how I get detailed AWT focus related logging: just set the logger name to java.awt.focus.level=FINEST and change the default log level to FINEST. Below is a basic sample program. The sample programs are from http://www2.cs.uic.edu/~sloan/CLASSES/java/ and have been modified to illustrate the logging API. By changing the .level property in the logging.properties file, one can control the output written to the logs. To play around with the example, try changing the levels in the logging.properties file and notice the difference in messages going to the log file.

    Example

    --------KeyboardReader.java-------------------------------------------------------------------------------------

        import java.io.*;
        import java.util.*;
        import java.util.logging.*;

        public class KeyboardReader {
            private static final Logger mylog = Logger.getLogger("samples.input");

            public static void main(String[] args) throws java.io.IOException {
                String s1;
                String s2;
                double num1, num2, product;

                // set up the buffered reader to read from the keyboard
                BufferedReader br = new BufferedReader(new InputStreamReader(System.in));

                System.out.println("Enter a line of input");
                s1 = br.readLine();

                if (mylog.isLoggable(Level.SEVERE)) {
                    mylog.log(Level.SEVERE, "The line entered is " + s1);
                }
                if (mylog.isLoggable(Level.INFO)) {
                    mylog.log(Level.INFO, "The line has " + s1.length() + " characters");
                }
                if (mylog.isLoggable(Level.FINE)) {
                    mylog.log(Level.FINE, "Breaking the line into tokens we get:");
                }

                int numTokens = 0;
                StringTokenizer st = new StringTokenizer(s1);
                while (st.hasMoreTokens()) {
                    s2 = st.nextToken();
                    numTokens++;
                    if (mylog.isLoggable(Level.FINEST)) {
                        mylog.log(Level.FINEST, " Token " + numTokens + " is: " + s2);
                    }
                }
            }
        }

    ----------MyFileReader.java----------------------------------------------------------------------------------------

        import java.io.*;
        import java.util.*;
        import java.util.logging.*;

        public class MyFileReader extends KeyboardReader {
            private static final Logger mylog = Logger.getLogger("samples.input.file");

            public static void main(String[] args) throws java.io.IOException {
                String s1;
                String s2;

                // set up the buffered reader to read from the file
                BufferedReader br = new BufferedReader(new FileReader("MyFileReader.txt"));
                s1 = br.readLine();

                if (mylog.isLoggable(Level.SEVERE)) {
                    mylog.log(Level.SEVERE, "ATTN The line is " + s1);
                }
                if (mylog.isLoggable(Level.INFO)) {
                    mylog.log(Level.INFO, "The line has " + s1.length() + " characters");
                }
                if (mylog.isLoggable(Level.FINE)) {
                    mylog.log(Level.FINE, "Breaking the line into tokens we get:");
                }

                int numTokens = 0;
                StringTokenizer st = new StringTokenizer(s1);
                while (st.hasMoreTokens()) {
                    s2 = st.nextToken();
                    numTokens++;
                    if (mylog.isLoggable(Level.FINEST)) {
                        mylog.log(Level.FINEST, "Breaking the line into tokens we get:");
                        mylog.log(Level.FINEST, " Token " + numTokens + " is: " + s2);
                    }
                } // end of while
            } // end of main
        } // end of class

    ----------MyFileReader.txt------------------------------------------------------------------------------------------

        My first logging example

    -------logging.properties-------------------------------------------------------------------------------------------

        handlers= java.util.logging.ConsoleHandler, java.util.logging.FileHandler
        .level= FINEST
        java.util.logging.FileHandler.pattern = java%u.log
        java.util.logging.FileHandler.limit = 50000
        java.util.logging.FileHandler.count = 1
        java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter
        java.util.logging.ConsoleHandler.level = FINEST
        java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
        java.awt.focus.level=ALL

    ------Output log-------------------------------------------------------------------------------------------

        May 21, 2012 11:44:55 AM MyFileReader main
        SEVERE: ATTN The line is My first logging example
        May 21, 2012 11:44:55 AM MyFileReader main
        INFO: The line has 24 characters
        May 21, 2012 11:44:55 AM MyFileReader main
        FINE: Breaking the line into tokens we get:
        May 21, 2012 11:44:55 AM MyFileReader main
        FINEST: Breaking the line into tokens we get:
        May 21, 2012 11:44:55 AM MyFileReader main
        FINEST: Token 1 is: My
        May 21, 2012 11:44:55 AM MyFileReader main
        FINEST: Breaking the line into tokens we get:
        May 21, 2012 11:44:55 AM MyFileReader main
        FINEST: Token 2 is: first
        May 21, 2012 11:44:55 AM MyFileReader main
        FINEST: Breaking the line into tokens we get:
        May 21, 2012 11:44:55 AM MyFileReader main
        FINEST: Token 3 is: logging
        May 21, 2012 11:44:55 AM MyFileReader main
        FINEST: Breaking the line into tokens we get:
        May 21, 2012 11:44:55 AM MyFileReader main
        FINEST: Token 4 is: example

    Invocation command:

        "C:\Program Files (x86)\Java\jdk1.6.0_29\bin\java.exe" -Djava.util.logging.config.file=logging.properties MyFileReader

    References: Further technical details are available here:
        http://docs.oracle.com/javase/1.4.2/docs/guide/util/logging/overview.html#1.0
        http://docs.oracle.com/javase/1.4.2/docs/api/java/util/logging/package-summary.html
        http://www2.cs.uic.edu/~sloan/CLASSES/java/
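    As a complement to the properties-file approach above, handlers and levels can also be wired up programmatically; a minimal sketch (the logger name and output file name are just illustrations):

        import java.io.IOException;
        import java.util.logging.*;

        public class FocusLogSetup {
            public static void main(String[] args) throws IOException {
                // The logger this article uses for AWT focus tracing
                Logger focusLog = Logger.getLogger("java.awt.focus");
                focusLog.setLevel(Level.FINEST);

                // Write human-readable records to a file alongside the console output
                FileHandler fh = new FileHandler("focus.log");
                fh.setFormatter(new SimpleFormatter());
                fh.setLevel(Level.FINEST);
                focusLog.addHandler(fh);

                focusLog.log(Level.FINEST, "programmatic logging is configured");
            }
        }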

    Read the article

  • C# acting weird when reading in values from a file to an array

    - by Whitey
    This is the structure of my file:

        1111111111111111111111111
        2222222222222222222222222
        3333333333333333333333333
        4444444444444444444444444
        5555555555555555555555555
        6666666666666666666666666
        7777777777777777777777777
        8888888888888888888888888
        9999999999999999999999999
        0000000000000000000000000
        0000000000000000000000000
        0000000000000000000000000
        0000000000000000000000000
        0000000000000000000000000

    And this is the code I'm using to read it into an array:

        using (StreamReader reader = new StreamReader(mapPath))
        {
            string line;
            for (int i = 0; i < iMapHeight; i++)
            {
                if ((line = reader.ReadLine()) != null)
                {
                    for (int j = 0; j < iMapWidth; j++)
                    {
                        iMap[i, j] = line[j];
                    }
                }
            }
        }

    I have done some debugging, and line[j] correctly iterates through each character in the currently read line. The problem lies with iMap[i, j]. After this block of code executes, this is the contents of iMap (condensed here, since every element in a given row holds the same value):

        - iMap {int[14, 25]} int[,]
          [0, 0] through [0, 24]    49 int
          [1, 0] through [1, 24]    50 int
          [2, 0] through [2, 24]    51 int
          [3, 0] through [3, 24]    52 int
          [4, 0] through [4, 24]    53 int
          [5, 0] through [5, 24]    54 int
          [6, 0] through [6, 24]    55 int
          [7, 0] through [7, 24]    56 int
          [8, 0] through [8, 24]    57 int
          [9, 0] through [13, 24]   48 int

    I have no idea where it's getting these values from. Does anyone have an explanation? Thanks :)
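    The values are not mysterious: indexing a string yields a char, and assigning a char to an int stores its character code, so '1' is 49, '9' is 57, and '0' is 48. A minimal sketch of the usual fix, subtracting the code of '0' (hedged, assuming the file only ever contains digit characters):

        // line[j] is a char such as '1' (code 49); subtracting '0' (code 48) yields the int 1
        iMap[i, j] = line[j] - '0';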

    Read the article

  • custom route not working on windows

    - by Michael Closson
    My Windows laptop is directly connected to 192.168.1.0/24 (wireless LAN). I access 10.21.0.0/16 through a router that is connected to both networks. The routing works fine with this configuration. I have a VPN that connects to 10.0.0.0/8. The VPN network doesn't actually use any IPs in the 10.21.0.0/16 range, so I should be able to configure my routing table to route all the 10.21.0.0/16 IPs through the wireless LAN, and all other 10.0.0.0/8 traffic through the VPN. My understanding is that I can do this if the metric for the 10.21.0.0 route is lower than that of the 10.0.0.0 route. The VPN (10.0.0.0) is automatically assigned metric 20. I have manually assigned the WLAN a metric of 1. I manually add an entry to the routing table with this command:

        route add 10.21.0.0 mask 255.255.0.0 192.168.1.201 metric 1

    The route is then assigned a metric of 2 (which is expected). The problem is that it doesn't work. I can't ping any machine on the 10.21.0.0 network, but I can access other stuff on 10.0.0.0, and I can also access stuff on 192.168.1.0. To debug this I've done the following:

    - Run tcpdump on the router (192.168.1.201). I can verify that no packets for 10.21.0.0 arrive on that interface.
    - Disable iptables on the router.
    - Disable the Windows firewall.
    - Run wireshark on my laptop, to try and see which interface the ping requests go to. But I can't see them go anywhere! The ping command doesn't receive any 'destination unreachable' messages.

    Here is the relevant section of the routing table:

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway        Interface   Metric
                  0.0.0.0          0.0.0.0    192.168.1.201    192.168.1.18        2
                 10.0.0.0        255.0.0.0          On-link     10.55.44.203       20
                10.21.0.0      255.255.0.0    192.168.1.201    192.168.1.18        2

    Read the article

  • ntpdate works, but ntpd can't synchronize

    - by dafydd
    This is on RHEL 5.5. First, ntpdate to the remote host works:

        $ ntpdate XXX.YYY.4.21
        24 Oct 16:01:17 ntpdate[5276]: adjust time server XXX.YYY.4.21 offset 0.027291 sec

    Second, here are the server lines in my /etc/ntp.conf. All restrict lines have been commented out for troubleshooting.

        server 127.127.1.0
        server XXX.YYY.4.21

    I execute service ntpd start and check with ntpq:

        $ ntpq
        ntpq> peer
             remote           refid      st t when poll reach   delay   offset  jitter
        ==============================================================================
        *LOCAL(0)        .LOCL.           5 l   36   64  377    0.000    0.000   0.001
         timeserver.doma .LOCL.           1 u   39  128  377    0.489   51.261  58.975
        ntpq> opeer
             remote           local      st t when poll reach   delay   offset    disp
        ==============================================================================
        *LOCAL(0)        127.0.0.1        5 l   40   64  377    0.000    0.000   0.001
         timeserver.doma XXX.YYY.22.169   1 u   43  128  377    0.489   51.261  58.975

    XXX.YYY.22.169 is the address of the host I'm working on. A reverse lookup on the IP address in my ntp.conf file confirms that the ntpq output is correctly naming the remote server. However, as you can see, it appears to just roll over to my .LOCL. time server. Also, ntptrace just returns the local time server, and ntptrace XXX.YYY.4.21 times out.

        $ ntptrace
        localhost.localdomain: stratum 6, offset 0.000000, synch distance 0.948181
        $ ntptrace XXX.YYY.4.21
        XXX.YYY.4.21: timed out, nothing received
        ***Request timed out

    This looks like my ntp daemon is just querying itself. I am considering the possibility that the router-I-don't-control between my test network time server and the corporate network time server is blocking on source port. (I think ntpdate sends from port 123, which gets it around that filter, and is why I can't use it while ntpd is running.) I have email in to the network folks to check that. Finally, telnet XXX.YYY.4.21 123 never times out or completes a connection. The questions:

    - What am I missing here? What else can I check to try to figure out where this connection is failing?
    - Would strace ntptrace XXX.YYY.4.21 show me the source port ntptrace sends from? I can deconstruct most strace calls, but I can't figure out the location of that datum.
    - If I can't directly examine the gateway router between my test network and the time server, how might I build evidence that it's responsible for these disconnections? Alternately, how might I rule it out?

    Read the article

  • Problem obfuscating struts2 webapplication using Proguard / Tomcat SEVERE: Error filterStart ... org

    - by Xinus
    I am trying to obfuscate a Struts 2 web application using ProGuard. I have separated all action and servlet classes out into a separate jar file which does not take part in obfuscation; I am obfuscating everything else. But for some reason Tomcat gives a weird error, as follows:

        Apr 21, 2010 11:22:44 AM org.apache.coyote.http11.Http11Protocol init
        INFO: Initializing Coyote HTTP/1.1 on http-8080
        Apr 21, 2010 11:22:44 AM org.apache.catalina.startup.Catalina load
        INFO: Initialization processed in 607 ms
        Apr 21, 2010 11:22:44 AM org.apache.catalina.core.StandardService start
        INFO: Starting service Catalina
        Apr 21, 2010 11:22:44 AM org.apache.catalina.core.StandardEngine start
        INFO: Starting Servlet Engine: Apache Tomcat/6.0.16
        Enter the interceptor
        Apr 21, 2010 11:22:45 AM org.apache.catalina.core.StandardContext start
        SEVERE: Error filterStart
        Apr 21, 2010 11:22:45 AM org.apache.catalina.core.StandardContext start
        SEVERE: Context [/DataSubmissionToolFinal] startup failed due to previous errors
        log4j:ERROR LogMananger.repositorySelector was null likely due to error in class reloading, using NOPLoggerRepository.

    I do not have any idea what's happening behind the scenes. Any suggestions?
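    For what it's worth, "SEVERE: Error filterStart" usually means Tomcat could not load or initialize a servlet filter, and after obfuscation that often points at framework classes that Struts reaches via reflection having been renamed or stripped. A hedged ProGuard sketch that keeps those classes intact (the package names assume a standard Struts 2 distribution):

        # Keep Struts 2 and XWork classes; they are located reflectively at startup
        -keep class org.apache.struts2.** { *; }
        -keep class com.opensymphony.xwork2.** { *; }

        # Keep anything referenced by name from web.xml, such as the Struts filter
        -keep class * implements javax.servlet.Filter { *; }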

    Read the article

  • Why are my Opteron cores running at only 75% capacity each? (25% CPU idle)

    - by Tim Cooper
    We've just taken delivery of a powerful 32-core AMD Opteron server with 128 GB of memory. We have 2 x 6272 CPUs with 16 cores each. We are running a big, long-running Java task on 30 threads. We have the NUMA optimisations for Linux and Java turned on. Our Java threads are mainly using objects that are private to each thread, sometimes reading memory that other threads will be reading, and very occasionally writing or locking shared objects. We can't explain why the CPU cores are 25% idle. Below is a dump of "top":

        top - 23:06:38 up 1 day, 23 min, 3 users, load average: 10.84, 10.27, 9.62
        Tasks: 676 total, 1 running, 675 sleeping, 0 stopped, 0 zombie
        Cpu(s): 64.5%us, 1.3%sy, 0.0%ni, 32.9%id, 1.3%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 132138168k total, 131652664k used, 485504k free, 92340k buffers
        Swap: 5701624k total, 230252k used, 5471372k free, 13444344k cached
        ...
        top - 22:37:39 up 23:54, 3 users, load average: 7.83, 8.70, 9.27
        Tasks: 678 total, 1 running, 677 sleeping, 0 stopped, 0 zombie
        Cpu0 : 75.8%us, 2.0%sy, 0.0%ni, 22.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu1 : 77.2%us, 1.3%sy, 0.0%ni, 21.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu2 : 77.3%us, 1.0%sy, 0.0%ni, 21.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu3 : 77.8%us, 1.0%sy, 0.0%ni, 21.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu4 : 76.9%us, 2.0%sy, 0.0%ni, 21.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu5 : 76.3%us, 2.0%sy, 0.0%ni, 21.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu6 : 12.6%us, 3.0%sy, 0.0%ni, 84.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu7 : 8.6%us, 2.0%sy, 0.0%ni, 89.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu8 : 77.0%us, 2.0%sy, 0.0%ni, 21.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu9 : 77.0%us, 2.0%sy, 0.0%ni, 21.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu10 : 77.6%us, 1.7%sy, 0.0%ni, 20.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu11 : 75.7%us, 2.0%sy, 0.0%ni, 21.4%id, 1.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu12 : 76.6%us, 2.3%sy, 0.0%ni, 21.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu13 : 76.6%us, 2.3%sy, 0.0%ni, 21.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu14 : 76.2%us, 2.6%sy, 0.0%ni, 15.9%id, 5.3%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu15 : 76.6%us, 2.0%sy, 0.0%ni, 21.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu16 : 73.6%us, 2.6%sy, 0.0%ni, 23.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu17 : 74.5%us, 2.3%sy, 0.0%ni, 23.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu18 : 73.9%us, 2.3%sy, 0.0%ni, 23.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu19 : 72.9%us, 2.6%sy, 0.0%ni, 24.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu20 : 72.8%us, 2.6%sy, 0.0%ni, 24.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu21 : 72.7%us, 2.3%sy, 0.0%ni, 25.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu22 : 72.5%us, 2.6%sy, 0.0%ni, 24.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu23 : 73.0%us, 2.3%sy, 0.0%ni, 24.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu24 : 74.7%us, 2.7%sy, 0.0%ni, 22.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu25 : 74.5%us, 2.6%sy, 0.0%ni, 22.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu26 : 73.7%us, 2.0%sy, 0.0%ni, 24.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu27 : 74.1%us, 2.3%sy, 0.0%ni, 23.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu28 : 74.1%us, 2.3%sy, 0.0%ni, 23.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu29 : 74.0%us, 2.0%sy, 0.0%ni, 24.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu30 : 73.2%us, 2.3%sy, 0.0%ni, 24.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu31 : 73.1%us, 2.0%sy, 0.0%ni, 24.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 132138168k total, 131711704k used, 426464k free, 88336k buffers
        Swap: 5701624k total, 229572k used, 5472052k free, 13745596k cached

          PID USER     PR NI  VIRT  RES  SHR S   %CPU %MEM    TIME+ COMMAND
        13865 root     20  0  122g 112g 3.1g S 2334.3 89.6 20726:49 java
        27139 jayen    20  0 15428 1728  952 S    2.6  0.0  0:04.21 top
        27161 sysadmin 20  0 15428 1712  940 R    1.0  0.0  0:00.28 top
           33 root     20  0     0    0    0 S    0.3  0.0  0:06.24 ksoftirqd/7
          131 root     20  0     0    0    0 S    0.3  0.0  0:09.52 events/0
         1858 root     20  0     0    0    0 S    0.3  0.0  1:35.14 kondemand/0

    A dump of the Java stack confirms that none of the threads are anywhere near the few places where locks are used, nor are they anywhere near any disk or network I/O. I had trouble finding a clear explanation of what 'top' means by "idle" versus "wait", but I get the impression that "idle" means "no more threads that need to be run", and this doesn't make sense in our case: we're using Executors.newFixedThreadPool(30), there are a large number of tasks pending, and each task lasts for 10 seconds or so. I suspect that the explanation requires a good understanding of NUMA. Is the "idle" state what you see when a CPU is waiting for a non-local memory access? If not, then what is the explanation?

    Read the article

  • How to implement an EventHandler to update controls

    - by Bill
    May I ask for help with the following? I am attempting to connect to and control three pieces of household electronic equipment by computer through a GlobalCache GC-100 and iTach. As you will see in the following code, I created a class instance of GlobalCacheAdaptor that communicates with each piece of equipment. Although the code seems to work well in controlling the equipment, I am having trouble updating controls with the feedback from the equipment. The procedure ReaderThreadProc captures the feedback; however, I don't know how to update the associated TextBox with it. I believe that I need to create an EventHandler to notify the TextBox of the available update; however, I am uncertain how such an EventHandler would be implemented. Any help would be greatly appreciated.

        using System;
        using System.IO;
        using System.Net;
        using System.Net.Sockets;
        using System.Threading;
        using System.Windows.Forms;

        namespace WindowsFormsApplication1
        {
            public partial class Form1 : Form
            {
                // Create three new instances of GlobalCacheAdaptor and connect.
                // GC-100 (Elan)       192.168.1.70 4998
                // GC-100 (TuneSuite)  192.168.1.70 5000
                // GC iTach (Lighting) 192.168.1.71 4999
                private GlobalCacheAdaptor elanGlobalCacheAdaptor;
                private GlobalCacheAdaptor tuneSuiteGlobalCacheAdaptor;
                private GlobalCacheAdaptor lutronGlobalCacheAdaptor;

                public Form1()
                {
                    InitializeComponent();

                    elanGlobalCacheAdaptor = new GlobalCacheAdaptor();
                    elanGlobalCacheAdaptor.ConnectToDevice(IPAddress.Parse("192.168.1.70"), 4998);

                    tuneSuiteGlobalCacheAdaptor = new GlobalCacheAdaptor();
                    tuneSuiteGlobalCacheAdaptor.ConnectToDevice(IPAddress.Parse("192.168.1.70"), 5000);

                    lutronGlobalCacheAdaptor = new GlobalCacheAdaptor();
                    lutronGlobalCacheAdaptor.ConnectToDevice(IPAddress.Parse("192.168.1.71"), 4999);

                    elanTextBox.Text = elanGlobalCacheAdaptor._line;
                    tuneSuiteTextBox.Text = tuneSuiteGlobalCacheAdaptor._line;
                    lutronTextBox.Text = lutronGlobalCacheAdaptor._line;
                }

                private void btnZoneOnOff_Click(object sender, EventArgs e)
                {
                    elanGlobalCacheAdaptor.SendMessage("sendir,4:3,1,40000,4,1,21,181,21,181,21,181,21,181,21,181,21,181,21,181,21,181,21,181,21,181,21,181,21,800" + Environment.NewLine);
                }

                private void btnSourceInput1_Click(object sender, EventArgs e)
                {
                    elanGlobalCacheAdaptor.SendMessage("sendir,4:3,1,40000,1,1,20,179,20,179,20,179,20,179,20,179,20,179,20,179,20,278,20,179,20,179,20,179,20,780" + Environment.NewLine);
                }

                private void btnSystemOff_Click(object sender, EventArgs e)
                {
                    elanGlobalCacheAdaptor.SendMessage("sendir,4:3,1,40000,1,1,20,184,20,184,20,184,20,184,20,184,20,286,20,286,20,286,20,184,20,184,20,184,20,820" + Environment.NewLine);
                }

                private void btnLightOff_Click(object sender, EventArgs e)
                {
                    lutronGlobalCacheAdaptor.SendMessage("sdl,14,0,0,S2\x0d");
                }

                private void btnLightOn_Click(object sender, EventArgs e)
                {
                    lutronGlobalCacheAdaptor.SendMessage("sdl,14,100,0,S2\x0d");
                }

                private void btnChannel31_Click(object sender, EventArgs e)
                {
                    tuneSuiteGlobalCacheAdaptor.SendMessage("\xB8\x4D\xB5\x33\x31\x00\x30\x21\xB8\x0D");
                }

                private void btnChannel30_Click(object sender, EventArgs e)
                {
                    tuneSuiteGlobalCacheAdaptor.SendMessage("\xB8\x4D\xB5\x33\x30\x00\x30\x21\xB8\x0D");
                }
            }
        }

        public class GlobalCacheAdaptor
        {
            public Socket _multicastListener;
            public string _preferredDeviceID;
            public IPAddress _deviceAddress;
            public Socket _deviceSocket;
            public StreamWriter _deviceWriter;
            public bool _isConnected;
            public int _port;
            public IPAddress _address;
            public string _line;

            public GlobalCacheAdaptor() { }

            public static readonly GlobalCacheAdaptor Instance = new GlobalCacheAdaptor();

            public bool IsListening { get { return _multicastListener != null; } }

            public GlobalCacheAdaptor ConnectToDevice(IPAddress address, int port)
            {
                if (_deviceSocket != null) _deviceSocket.Close();
                try
                {
                    _port = port;
                    _address = address;
                    _deviceSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
                    _deviceSocket.Connect(new IPEndPoint(address, port));
                    _deviceAddress = address;

                    var stream = new NetworkStream(_deviceSocket);
                    var reader = new StreamReader(stream);
                    var writer = new StreamWriter(stream) { NewLine = "\r", AutoFlush = true };
                    _deviceWriter = writer;
                    writer.WriteLine("getdevices");

                    var readerThread = new Thread(ReaderThreadProc) { IsBackground = true };
                    readerThread.Start(reader);

                    _isConnected = true;
                    return Instance;
                }
                catch
                {
                    DisconnectFromDevice();
                    MessageBox.Show("ConnectToDevice Error.");
                    throw;
                }
            }

            public void SendMessage(string message)
            {
                try
                {
                    var stream = new NetworkStream(_deviceSocket);
                    var reader = new StreamReader(stream);
                    var writer = new StreamWriter(stream) { NewLine = "\r", AutoFlush = true };
                    _deviceWriter = writer;
                    writer.WriteLine(message);

                    var readerThread = new Thread(ReaderThreadProc) { IsBackground = true };
                    readerThread.Start(reader);
                }
                catch
                {
                    MessageBox.Show("SendMessage() Error.");
                }
            }

            public void DisconnectFromDevice()
            {
                if (_deviceSocket != null)
                {
                    try
                    {
                        _deviceSocket.Close();
                        _isConnected = false;
                    }
                    catch
                    {
                        MessageBox.Show("DisconnectFromDevice Error.");
                    }
                    _deviceSocket = null;
                }
                _deviceWriter = null;
                _deviceAddress = null;
            }

            private void ReaderThreadProc(object state)
            {
                var reader = (StreamReader)state;
                try
                {
                    while (true)
                    {
                        var line = reader.ReadLine();
                        if (line == null) break;
                        _line = _line + line + Environment.NewLine;
                    }
                    // Need to create EventHandler to notify the TextBoxes to update with _line
                }
                catch
                {
                    MessageBox.Show("ReaderThreadProc Error.");
                }
            }
        }
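    One way to get the feedback out of ReaderThreadProc and into the TextBoxes is to raise a .NET event from the reader thread and let the form marshal the update onto the UI thread with Control.BeginInvoke, since WinForms controls may only be touched from the thread that created them. A minimal sketch under those assumptions (the event name LineReceived is illustrative, not part of the original code):

        // In GlobalCacheAdaptor: declare an event and raise it whenever a line arrives.
        public event Action<string> LineReceived;

        private void ReaderThreadProc(object state)
        {
            var reader = (StreamReader)state;
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                _line = _line + line + Environment.NewLine;
                var handler = LineReceived;   // copy to guard against unsubscription races
                if (handler != null)
                    handler(line);            // note: still running on the background thread here
            }
        }

        // In Form1's constructor: subscribe, and hop onto the UI thread
        // before touching the control.
        elanGlobalCacheAdaptor.LineReceived += line =>
            elanTextBox.BeginInvoke((Action)(() => elanTextBox.AppendText(line + Environment.NewLine)));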

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx

    Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure, or we can store our VHD files in blob storage and mount them as hard drives in our cloud service.

    If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which is the more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very powerful for uploading large files: we can upload blocks in parallel and provide a pause-resume feature.

    There are many documents, articles and blog posts describing how to upload a block blob. Most of them focus on the server side: when you have received a big file, stream or binaries, how to upload them into blob storage in blocks through the .NET SDK. But the problem is, how can we upload these large files from the client side, for example, a browser? This question came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and then the files are stored in blob storage from the Web Role. My goal was to find the best way to transfer the file from the client (the end user's machine) to the server (the Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks at high speed and save them as blocks into Windows Azure Blob Storage.

    Traditional Upload, Works with Limitation

    The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button.

        1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" }))
        2: {
        3: <input type="file" name="file" />
        4: <input type="submit" value="upload" />
        5: }

    And then in the backend controller we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file in blocks, upload them in parallel and commit. The code has been well blogged in the community.
        1: [HttpPost]
        2: public ActionResult About(HttpPostedFileBase file)
        3: {
        4: var container = _client.GetContainerReference("test");
        5: container.CreateIfNotExists();
        6: var blob = container.GetBlockBlobReference(file.FileName);
        7: var blockDataList = new Dictionary<string, byte[]>();
        8: using (var stream = file.InputStream)
        9: {
        10: var blockSizeInKB = 1024;
        11: var offset = 0;
        12: var index = 0;
        13: while (offset < stream.Length)
        14: {
        15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset);
        16: var blockData = new byte[readLength];
        17: offset += stream.Read(blockData, 0, readLength);
        18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData);
        19: 
        20: index++;
        21: }
        22: }
        23: 
        24: Parallel.ForEach(blockDataList, (bi) =>
        25: {
        26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null);
        27: });
        28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray());
        29: 
        30: return RedirectToAction("About");
        31: }

    This works perfectly if we select an image, a piece of music or a small video to upload. But if I select a large file, let's say a 6GB HD movie, after uploading for a few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation on request length, and the maximum request length is defined in the web.config file; it's a number less than about 4GB. So if we want to upload a really big file, we cannot simply implement it in this way. Also, in Windows Azure, the cloud service network load balancer will terminate the connection if it exceeds the timeout period; from my tests the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitations mentioned above, the simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and pause-resume. So we need to find a better way to upload large files from the client to the server.

    Upload in Chunks through HTML5 and JavaScript

    In order to break the limitations mentioned above we will try to upload the large file in chunks. This gives us some benefits:

    - No request size limitation: since we upload in chunks, we can define the request size for each chunk regardless of how big the entire file is.
    - No timeout problem: the size of the chunks is controlled by us, which means we should be able to make sure the request for each chunk upload will not exceed the timeout period of either ASP.NET or the Windows Azure load balancer.

    It was a big challenge to upload a big file in chunks until we had HTML5. There are some new features and improvements introduced in HTML5, and we will use them to implement our solution.

    In HTML5, the File interface has been improved with a new method called "slice". It can be used to read part of a file by specifying the start byte index and the end byte index. For example, if the entire file is 1024 bytes, file.slice(512, 768) will read the part of this file from the 512th byte up to (but not including) the 768th byte, and return a new object of an interface called "Blob", which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob are very useful to implement the chunk upload.
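    To make the slice semantics just described concrete, here is a quick sketch (the element id matches the form shown below; the byte range is arbitrary):

        // grab the first selected file and slice out bytes 512..767 (256 bytes)
        var file = document.getElementById("upload_files").files[0];
        var chunk = file.slice(512, 768);   // returns an immutable Blob
        console.log(chunk.size);            // 256, if the file is at least 768 bytes long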
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
    The test cases were:

    - Chunk size: 512KB, 1MB, 2MB, 4MB.
    - Upload mode: sequence, parallel (unlimited), parallel with limit (4 threads, 8 threads).
    - Chunk format: base64 string, binaries.
    - Target file: 100MB.
    - Each case was tested 3 times.

    Below is the test result chart. Some thoughts, but not guidance or best practice:

    - Parallel gets better performance than series.
    - No significant performance improvement between parallel 4 threads and 8 threads.
    - Transform with binaries provides better performance than base64.
    - In all cases, chunk size in 1MB - 2MB gets better performance.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • How to DNAT to different local IP based on what public IP was accessed with Shorewall?

    - by mikl
    My server has several public IPs, and is running a bunch of virtual machines with private IP addresses. As an example, I want to map ports 80, 443 and 8080 on 232.21.23.23 (public) to 192.168.122.12 (private). I have tried a couple of different NAT mappings, but none of them seem to work:

        # This doesn't work.
        DNAT net loc:192.168.122.12 tcp 80,443,8080 - 232.21.23.23

        # Neither does this.
        DNAT $FW loc:192.168.122.12 tcp 80,443,8080 - 232.21.23.23

        # Nor this.
        DNAT net:232.21.23.23 loc:192.168.122.12 tcp 80,443,8080

        # I have no idea what I'm doing.
        DNAT $FW:232.21.23.23 loc:192.168.122.12 tcp 80,443,8080

    Can anyone point me in the right direction?

    Read the article

  • Issues printing through ssh tunnel and port forwarding

    - by simogasp
    I'm having some problems trying to print through an ssh tunnel. I'd like to print from my laptop to a network printer (a Toshiba es453, for what it matters) which is in a local network. I can reach the local network through a gateway. So far I did the following:

        ssh -N -L19100:<Printer_IP>:9100 <username>@<ssh_gateway>

    Basically I just mapped port 19100 of my laptop directly to the input port of the printer, passing through the gateway. So far, so good. Then I tried to install a new printer on my laptop with the GUI config tool of Ubuntu, so that the new printer is on localhost at port 19100 (as AppSocket/HP Jet Direct), and then I provided the proper driver for the printer. In theory, once the tunnel is open I should be able to print from any program just by selecting this printer. Of course, it does not work. :-) The document hangs in the queue with status Processing, while in the shell where I set up the tunnel I get these errors about failing to open channels:

        debug1: Local forwarding listening on ::1 port 19100.
        debug1: channel 0: new [port listener]
        debug1: Local forwarding listening on 127.0.0.1 port 19100.
        debug1: channel 1: new [port listener]
        debug1: Requesting no-more-sessions@openssh.com
        debug1: Entering interactive session.
        debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested.
        debug1: channel 2: new [direct-tcpip]
        debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested.
        debug1: channel 3: new [direct-tcpip]
        channel 2: open failed: connect failed: Connection timed out
        debug1: channel 2: free: direct-tcpip: listening port 19100 for 195.220.21.227 port 9100, connect from ::1 port 44434, nchannels 4
        debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested.
        debug1: channel 2: new [direct-tcpip]
        channel 3: open failed: connect failed: Connection timed out
        debug1: channel 3: free: direct-tcpip: listening port 19100 for 195.220.21.227 port 9100, connect from ::1 port 44443, nchannels 4
        channel 2: open failed: connect failed: Connection timed out
        debug1: channel 2: free: direct-tcpip: listening port 19100 for 195.220.21.227 port 9100, connect from ::1 port 44493, nchannels 3
        debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested.
        debug1: channel 2: new [direct-tcpip]

    As a further debugging test I tried the following. From a machine inside the local network I did a telnet <IP_printer> 9100, got access, wrote some random text, closed the connection, and correctly got a printout of what I had written. So the port and the IP of the printer should be correct. I tried the same from my laptop with the tunnel open; the telnet succeeded but, again, the printer didn't print anything, and I got the usual "channel x: open failed:" errors. I'm not a great expert on the matter; I just thought that in theory it was possible to do something like this, but maybe there is something I didn't consider or did wrong. Any clue? Thanks! Simone

    [update] As a further debugging test, I tried to replicate the procedure from a machine in the local network. From that machine, I did

        ssh -N -L19100:<IP_printer>:9100 <username>@<ssh_gateway>

    (note that now the machine, the gateway and the printer are all in the same local network). Then I tried the telnet test again with telnet localhost 19100; I got access and everything, but I didn't get the printout, just the usual error: channel 2: open failed: connect failed: Connection timed out. Maybe I am missing some other connection that needs to be forwarded, or maybe this is not allowed by the administrators.
    Of course, if I connect via ssh tunneling to the local machine from my laptop through the gateway, I can successfully print using the lpr command (from the local machine). But this is what I would like to avoid (yes, I'm lazy... :-); I would like to have a more 'elegant' and transparent way to do that.
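    In case it helps to reproduce the setup without the GUI, the same queue can be created on the CUPS command line; this is a sketch, with the queue name and PPD path as placeholders:

        lpadmin -p tunnel-printer -E -v socket://localhost:19100 -P /path/to/toshiba-es453.ppd

    The socket:// backend is CUPS's AppSocket/JetDirect transport, i.e. the same "AppSocket/HP Jet Direct" option chosen in the GUI tool.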

    Read the article

  • Down for everyone or just me?

    - by Click Ok
    When I try to access a website and it is down, I head to http://www.downforeveryoneorjustme.com and test it. But lately, my home network PCs cannot access facebook.com, and when I tried that service the answer was: "It's just you. http://facebook.com is up." Ok, that got me. I tried several browsers and 3 PCs in my LAN and it doesn't work. I don't know how to troubleshoot this. What are some step-by-step ways to troubleshoot this problem? Output from the ping command:

        Pinging facebook.com [69.171.234.21] with 32 bytes of data:
        Reply from 69.171.234.21: bytes=32 time=256ms TTL=245
        Reply from 69.171.234.21: bytes=32 time=255ms TTL=246
        Reply from 69.171.234.21: bytes=32 time=251ms TTL=245
        Reply from 69.171.234.21: bytes=32 time=255ms TTL=246

    PS.: Thank you for the nice help; I suppose the first step of a step-by-step troubleshoot is to ping from the command line?
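    Since ICMP replies come back but the browsers fail, a first diagnostic pass could look like this (a sketch; the commands assume Windows, matching the ping output above):

        nslookup facebook.com                            (does your configured DNS return the same 69.171.x.x address?)
        tracert facebook.com                             (where along the path do packets stop, if anywhere?)
        telnet facebook.com 80                           (does a raw TCP connection to the web port open at all?)
        type %SystemRoot%\System32\drivers\etc\hosts     (any stale facebook.com entry overriding DNS?)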

    Read the article

  • MySQL at the DOAG Conference this week in Nuremberg

    - by Bertrand Matthelié
    Planning to attend the DOAG Conference this week in Nuremberg? There will be several MySQL presentations, including the following three from Oracle team members:

    - "Oracle GoldenGate: Bindeglied zwischen Oracle & MySQL Datenbanken", Ileana Somesan, Wednesday November 21, 12:00
    - "NoSQL and SQL: Blending the Best of Both Worlds", Andrew Morgan, Wednesday November 21, 14:00
    - "MySQL Replikation", Carsten Thalheimer, Wednesday November 21, 16:00

    We look forward to seeing you there!

    Read the article

  • Ubuntu 10.04 and fedora 14 grub conflict

    - by sawren
    I tried to triple-boot Windows XP, Fedora 14 and Ubuntu 10.04. I first installed Windows XP, then Fedora, followed by Ubuntu. The problem is that I don't get an option to boot Ubuntu, while XP boots fine. It seems Ubuntu was unable to replace Fedora's grub with its own in the MBR. Looking at their grub conf files, Fedora and Ubuntu identify the same hard disk as two different devices, and I do have another 80 GB hard disk which doesn't have any OS. Below are the details of my partitions and partial information from the grub files of both OSes.

        Device Boot      Start        End     Blocks      Id  System
        /dev/sda1   *       63   40965749   20482843+     7  HPFS/NTFS
        /dev/sda2    102414436  312576704  105081134+     f  W95 Ext'd (LBA)
        /dev/sda3     40965750  102414374   30724312+    83  Linux - /Home (for fedora)
        /dev/sda5    102414438  204812684   51199123+     7  HPFS/NTFS
        /dev/sda6    204812748  253634219    24410736    83  Linux -- ubuntu
        /dev/sda7    253634283  302455754    24410736    83  Linux -- fedora
        /dev/sda8    302455818  312576704     5060443+   82  Linux swap / Solaris

    grub.cfg from ubuntu:

        ### BEGIN /etc/grub.d/10_linux ###
        menuentry 'Ubuntu, with Linux 2.6.32-21-generic' --class ubuntu --class gnu-linux --class gnu --class os {
            recordfail
            insmod ext2
            set root='(hd1,7)'
            search --no-floppy --fs-uuid --set cd55e078-a2c1-4d8a-9e87-ae838b6f4a05
            linux /boot/vmlinuz-2.6.32-21-generic root=UUID=cd55e078-a2c1-4d8a-9e87-ae838b6f4a05 ro quiet splash
            initrd /boot/initrd.img-2.6.32-21-generic
        }
        menuentry 'Ubuntu, with Linux 2.6.32-21-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {
            recordfail
            insmod ext2
            set root='(hd1,7)'
            search --no-floppy --fs-uuid --set cd55e078-a2c1-4d8a-9e87-ae838b6f4a05
            echo 'Loading Linux 2.6.32-21-generic ...'
            linux /boot/vmlinuz-2.6.32-21-generic root=UUID=cd55e078-a2c1-4d8a-9e87-ae838b6f4a05 ro single
            echo 'Loading initial ramdisk ...'
            initrd /boot/initrd.img-2.6.32-21-generic
        }
        ### END /etc/grub.d/10_linux ###
        ### BEGIN /etc/grub.d/20_memtest86+ ###
        menuentry "Memory test (memtest86+)" {
            insmod ext2
            set root='(hd1,7)'
            search --no-floppy --fs-uuid --set cd55e078-a2c1-4d8a-9e87-ae838b6f4a05
            linux16 /boot/memtest86+.bin
        }
        menuentry "Memory test (memtest86+, serial console 115200)" {
            insmod ext2
            set root='(hd1,7)'
            search --no-floppy --fs-uuid --set cd55e078-a2c1-4d8a-9e87-ae838b6f4a05
            linux16 /boot/memtest86+.bin console=ttyS0,115200n8
        }
        ### END /etc/grub.d/20_memtest86+ ###
        ### BEGIN /etc/grub.d/30_os-prober ###
        menuentry "Microsoft Windows XP Professional (on /dev/sdb1)" {
            insmod ntfs
            set root='(hd1,1)'
            search --no-floppy --fs-uuid --set cad48cc6d48cb5eb
            drivemap -s (hd0) ${root}
            chainloader +1
        }
        menuentry "Fedora (2.6.35.14-96.fc14.i686) (on /dev/sdb6)" {
            insmod ext2
            set root='(hd1,6)'
            search --no-floppy --fs-uuid --set 6aee34cf-f77a-489a-9361-85d07194b84b
            linux /boot/vmlinuz-2.6.35.14-96.fc14.i686 ro root=UUID=6aee34cf-f77a-489a-9361-85d07194b84b rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rhgb quiet
            initrd /boot/initramfs-2.6.35.14-96.fc14.i686.img
        }
        menuentry "Fedora (2.6.35.6-45.fc14.i686) (on /dev/sdb6)" {
            insmod ext2
            set root='(hd1,6)'
            search --no-floppy --fs-uuid --set 6aee34cf-f77a-489a-9361-85d07194b84b
            linux /boot/vmlinuz-2.6.35.6-45.fc14.i686 ro root=UUID=6aee34cf-f77a-489a-9361-85d07194b84b rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rhgb quiet
            initrd /boot/initramfs-2.6.35.6-45.fc14.i686.img
        }
        ### END /etc/grub.d/30_os-prober ###

    grub.conf from fedora:

        default=0
        timeout=5
        splashimage=(hd0,5)/boot/grub/splash.xpm.gz
        hiddenmenu
        title Fedora (2.6.35.14-96.fc14.i686)
            root (hd0,5)
            kernel /boot/vmlinuz-2.6.35.14-96.fc14.i686 ro root=UUID=6aee34cf-f77a-489a-9361-85d07194b84b rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rhgb quiet
            initrd /boot/initramfs-2.6.35.14-96.fc14.i686.img
        title Fedora (2.6.35.6-45.fc14.i686)
            root (hd0,5)
            kernel /boot/vmlinuz-2.6.35.6-45.fc14.i686 ro root=UUID=6aee34cf-f77a-489a-9361-85d07194b84b rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rhgb quiet
            initrd /boot/initramfs-2.6.35.6-45.fc14.i686.img
        title Other
            rootnoverify (hd0,0)
            chainloader +1
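    In case it is useful to others hitting the same thing, the commonly suggested way out is offered here as a sketch rather than a guaranteed fix (it assumes the BIOS boots from the disk the running Ubuntu sees as /dev/sda, so verify the device with sudo fdisk -l before writing to any MBR):

        sudo grub-install /dev/sda   # put Ubuntu's grub2 into the MBR of the boot disk
        sudo update-grub             # re-run os-prober so XP and Fedora entries are added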

    Read the article

  • two samba servers and one ldap backend

    - by user2346281
    I had set up a Samba 3 server as PDC with a passdb LDAP backend.

        Server SID: S-1-5-21-3270...
        Domain: A

    Every user has a SambaSID beginning with this server SID. But now I am trying to set up a second server for some shares. This server should use the same LDAP backend, because I don't want to have two LDAP backends; otherwise I would have to do every modification (e.g. adding users) twice.

        Second server SID: S-1-5-21-3797...
        Domain: B

    But now when a user tries to mount this new share I see this error in the Samba log:

        The primary group domain sid(S-1-5-21-3797....) does not match the domain sid(S-1-5-21-3270...) for xxx(S-1-5-21-3270...).

    I understand the problem, but what can I do to avoid maintaining two LDAP backends? Regards, Simon
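    One direction that comes up a lot for this exact error, offered as a sketch to research rather than a verified recipe: stop the second server from inventing its own SID and make it use domain A's SID instead, either by joining it to the domain as a member server or by copying the SID by hand with Samba's net(8) tool:

        # on the PDC (domain A): print the SID Samba is using
        net getlocalsid

        # on the second server: adopt that SID (substitute the full value)
        net setlocalsid S-1-5-21-3270-...

    Back up secrets.tdb on the second server before experimenting with this.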

    Read the article

  • Restore files from certain increments using Duplicity

    - by luckytaxi
    Given the following backup sets ...

        Found primary backup chain with matching signature chain:
        -------------------------
        Chain start time: Tue Jun 21 11:27:26 2011
        Chain end time: Tue Jun 21 11:27:59 2011
        Number of contained backup sets: 2
        Total number of contained volumes: 2
        Type of backup set:     Time:                       Num volumes:
        Full                    Tue Jun 21 11:27:26 2011    1
        Incremental             Tue Jun 21 11:27:59 2011    1

    If I run the following command, it works (1308655646 was converted from Tue Jun 21 11:27:26 2011):

        duplicity --no-encryption --restore-time 1308655646 --file-to-restore ORIG_FILE \
            file:///storage/test/ restored-file.txt

    However, if I run the following command, it restores from the latest set.

        duplicity --no-encryption --restore-time 2011-06-21T11:27:26 --file-to-restore \
            ORIG_FILE file:///storage/test/ restored-file.txt

    What am I doing wrong with the time? I prefer the second option only because I don't want to have to do the conversion manually.
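    A guess at the culprit, based on duplicity's documented w3/ISO-8601 date handling rather than on testing this exact version: without an explicit UTC offset, the string may be interpreted in a different timezone than the one you used for the epoch conversion. Spelling the offset out should make the two commands equivalent (replace -04:00 with your actual offset):

        duplicity --no-encryption --restore-time 2011-06-21T11:27:26-04:00 \
            --file-to-restore ORIG_FILE file:///storage/test/ restored-file.txt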

    Read the article

  • Log4j: Events appear in the wrong logfile

    - by Markus
    Hi there! To be able to log and trace some events I've added a LoggingHandler class to my Java project. Inside this class I'm using two different log4j logger instances, one for logging an event and one for tracing an event, into different files. The initialization block of the class looks like this:

        public void initialize()
        {
            System.out.print("starting logging server ...");

            // create logger instances
            logLogger = Logger.getLogger("log");
            traceLogger = Logger.getLogger("trace");

            // create pattern layout
            String conversionPattern = "%c{2} %d{ABSOLUTE} %r %p %m%n";
            try
            {
                patternLayout = new PatternLayout();
                patternLayout.setConversionPattern(conversionPattern);
            }
            catch (Exception e)
            {
                System.out.println("error: could not create logger layout pattern");
                System.out.println(e);
                System.exit(1);
            }

            // add pattern to file appender
            try
            {
                logFileAppender = new FileAppender(patternLayout, logFilename, false);
                traceFileAppender = new FileAppender(patternLayout, traceFilename, false);
            }
            catch (IOException e)
            {
                System.out.println("error: could not add logger layout pattern to corresponding appender");
                System.out.println(e);
                System.exit(1);
            }

            // add appenders to loggers
            logLogger.addAppender(logFileAppender);
            traceLogger.addAppender(traceFileAppender);

            // set logger level
            logLogger.setLevel(Level.INFO);
            traceLogger.setLevel(Level.INFO);

            // start logging server
            loggingServer = new LoggingServer(logLogger, traceLogger, serverPort, this);
            loggingServer.start();
            System.out.println(" done");
        }

    To make sure that only one thread is using the functionality of a logger instance at the same time, each logging / tracing method calls the logging method .info() inside a synchronized block. One example looks like this:

        public void logMessage(String message)
        {
            synchronized (logLogger)
            {
                if (logLogger.isInfoEnabled() && logFileAppender != null)
                {
                    logLogger.info(instanceName + ": " + message);
                }
            }
        }

    If I look at the log files, I see that sometimes an event appears in the wrong file. One example:

        trace 10:41:30,773 11080 INFO masterControl(192.168.2.21): string broadcast message was pushed from 1267093 to vehicle 1055293 (slaveControl 1)
        trace 10:41:30,784 11091 INFO masterControl(192.168.2.21): string broadcast message was pushed from 1156513 to vehicle 1105792 (slaveControl 1)
        trace 10:41:30,796 11103 INFO masterControl(192.168.2.21): string broadcast message was pushed from 1104306 to vehicle 1055293 (slaveControl 1)
        trace 10:41:30,808 11115 INFO masterControl(192.168.2.21): vehicle 1327879 was pushed to slave control 1
        10:41:30,808 11115 INFO masterControl(192.168.2.21): string broadcast message was pushed from 1101572 to vehicle 106741 (slaveControl 1)
        trace 10:41:30,820 11127 INFO masterControl(192.168.2.21): string broadcast message was pushed from 1055293 to vehicle 1104306 (slaveControl 1)

    I think that the problem occurs every time two events happen at the same time (here: 10:41:30,808). Does anybody have an idea how to solve my problem? I already tried to add a sleep() after the method call, but that didn't help ...
    BR, Markus

    Edit: Sometimes the corruption even interleaves two records character by character, e.g. in the trace file:

        logtrace 11:16:07,75511:16:07,755 1129711297 INFOINFO masterControl(192.168.2.21): string broadcast message was pushed from 1291400 to vehicle 1138272 (slaveControl 1)masterControl(192.168.2.21): vehicle 1333770 was added to slave control 1

    or in the log file:

        log 11:16:08,562 12104 INFO 11:16:08,562 masterControl(192.168.2.21): string broadcast message was pushed from 117772 to vehicle 1217744 (slaveControl 1) 12104 INFO masterControl(192.168.2.21): vehicle 1169775 was pushed to slave control 1

    Edit 2: It seems like the problem only occurs if logging methods are called from inside an RMI thread (my client / server exchange information using RMI connections). ...

    Edit 3: I solved the problem by myself: it seems like log4j is NOT completely thread-safe. After synchronizing all log / trace methods using a separate object, everything is working fine. Maybe the lib is writing the messages to a thread-unsafe buffer before writing them to file?
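    For readers who land here with the same symptom, a minimal sketch of the fix described in Edit 3 (the lock field name is mine; logLogger, traceLogger, the appender fields and instanceName are the ones from the original class):

        public class LoggingHandler {
            // One lock shared by BOTH loggers, instead of synchronizing
            // on logLogger and traceLogger separately.
            private static final Object LOG_LOCK = new Object();

            public void logMessage(String message) {
                synchronized (LOG_LOCK) {
                    if (logLogger.isInfoEnabled() && logFileAppender != null) {
                        logLogger.info(instanceName + ": " + message);
                    }
                }
            }

            public void traceMessage(String message) {
                synchronized (LOG_LOCK) {
                    if (traceLogger.isInfoEnabled() && traceFileAppender != null) {
                        traceLogger.info(instanceName + ": " + message);
                    }
                }
            }
        }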

    Read the article

  • Rails, if instance is in a scope?

    - by Joseph Silvashy
    I'm using Rails 3 and I can't seem to check if a given instance is in a scope, see here:

        p = Post.find 6

        +----+----------+-------------------------+-------------------------+-------------------------+-----------+
        | id | title    | publish_date            | created_at              | updated_at              | published |
        +----+----------+-------------------------+-------------------------+-------------------------+-----------+
        | 6  | asfdfdsa | 2010-03-28 22:33:00 UTC | 2010-03-28 22:33:46 UTC | 2010-03-28 22:33:46 UTC | true      |
        +----+----------+-------------------------+-------------------------+-------------------------+-----------+

    I have a menu scope which looks like:

        scope :menu, where("published != ?", false).limit(4)

    When I run it I get:

        Post.menu.all

        +----+------------------+------------------+------------------+-------------------+-----------+
        | id | title            | publish_date     | created_at       | updated_at        | published |
        +----+------------------+------------------+------------------+-------------------+-----------+
        | 1  | Lorem ipsum      | 2010-03-23 07... | 2010-03-23 07... | 2010-03-28 21:... | true      |
        | 2  | fdasf            | 2010-03-28 21... | 2010-03-28 21... | 2010-03-28 21:... | true      |
        | 3  | Ruby’s Imple...  | 2010-03-28 21... | 2010-03-28 21... | 2010-03-28 21:... | true      |
        | 4  | dsaD             | 2010-03-28 22... | 2010-03-28 22... | 2010-03-28 22:... | true      |
        +----+------------------+------------------+------------------+-------------------+-----------+

    Which is correct, but if I try to check if p is in the menu scope using:

        Post.menu.exists?(p)

    I get true when it should be false. What is the proper way to find out if a given instance of something is in a scope?
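    For anyone comparing approaches, a hedged sketch of two idioms. The observation about exists? is based on how Rails 3's ActiveRecord builds that query (it swaps in LIMIT 1), so the scope's limit(4) never applies, which would explain the surprising true above:

        # checks only the WHERE clause of the scope; the limit(4) is replaced by LIMIT 1
        Post.menu.exists?(p.id)

        # respects the full scope, including limit(4): loads the records, then compares
        Post.menu.include?(p)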

    Read the article

  • JavaApplicationStub with SWT causing problems

    - by mystro
    I created an application in Eclipse that uses SWT for the GUI. I've attempted to deploy the application using the Eclipse deploy, but it seems that when I do that, LSUIElement is not respected, and I can't force the application to disappear from the dock. Notwithstanding that issue, the application actually deploys OK and is runnable. I attempted to deploy the application using Jar Bundler, but when I try to run the application, I get the following errors:

        2010-06-09 21:44:02.564 JavaApplicationStub[89045:2003] *** __NSAutoreleaseNoPool(): Object 0x10021f260 of class NSCFString autoreleased with no pool in place - just leaking
        2010-06-09 21:44:02.568 JavaApplicationStub[89045:2003] *** __NSAutoreleaseNoPool(): Object 0x10010a0a0 of class NSCFNumber autoreleased with no pool in place - just leaking
        2010-06-09 21:44:02.569 JavaApplicationStub[89045:2003] *** __NSAutoreleaseNoPool(): Object 0x1001127a0 of class NSCFString autoreleased with no pool in place - just leaking
        2010-06-09 21:44:02.582 JavaApplicationStub[89045:2003] *** __NSAutoreleaseNoPool(): Object 0x7fff70b7af70 of class NSCFString autoreleased with no pool in place - just leaking
        2010-06-09 21:44:02.583 JavaApplicationStub[89045:2003] *** __NSAutoreleaseNoPool(): Object 0x100123ea0 of class NSCFData autoreleased with no pool in place - just leaking
        2010-06-09 21:44:02.587 JavaApplicationStub[89045:2003] *** __NSAutoreleaseNoPool(): Object 0x100225b90 of class NSCFDictionary autoreleased with no pool in place - just leaking
        2010-06-09 21:44:02.588 JavaApplicationStub[89045:2003] *** __NSAutoreleaseNoPool(): Object 0x100225ee0 of class __NSFastEnumerationEnumerator autoreleased with no pool in place - just leaking

    in a very, very, very long list. The application launches and appears to hang, with the icon constantly bouncing in the dock and the first GUI window only partially loaded (it looks like one of the text boxes is semi-visible, and the overall rectangle is the right size, but the GUI is not showing properly; it is essentially hung). I'm hoping someone has had experience with this problem and may be able to help! Thanks!
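    One likely avenue, offered as a hedged suggestion rather than a confirmed fix: SWT on Mac OS X must run its event loop on the process's first (main) thread, and a stock Jar Bundler launch does not guarantee that; a bouncing dock icon, a half-drawn window and NSAutoreleaseNoPool spew are the classic symptoms. Two switches to try in the bundle's Info.plist Java dictionary (key names are per Apple's legacy Java launcher, so double-check against your OS X version; -d32 only applies if your SWT build is 32-bit):

        <key>Java</key>
        <dict>
            <key>VMOptions</key>
            <string>-XstartOnFirstThread -d32</string>
            <!-- some Apple VMs honor this boolean instead of the VM option -->
            <key>StartOnMainThread</key>
            <true/>
        </dict>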

    Read the article
