Search Results

Search found 25253 results on 1011 pages for 'general log'.


  • How to Enable IPtables TRACE Target on Debian Squeeze (6)

    - by bernie
    I am trying to use the TRACE target of IPtables but I can't seem to get any trace information logged. I want to use what is described here: Debugger for Iptables. From the iptables man for TRACE: This target marks packets so that the kernel will log every rule which matches the packets as those traverse the tables, chains, rules. (The ipt_LOG or ip6t_LOG module is required for the logging.) The packets are logged with the string prefix: "TRACE: tablename:chainname:type:rulenum " where type can be "rule" for plain rule, "return" for implicit rule at the end of a user-defined chain and "policy" for the policy of the built-in chains. It can only be used in the raw table. I use the following rule: iptables -A PREROUTING -t raw -p tcp -j TRACE but nothing is appended to either /var/log/syslog or /var/log/kern.log! Is there another step missing? Am I looking in the wrong place? Edit: Even though I can't find log entries, the TRACE target seems to be set up correctly since the packet counters get incremented: # iptables -L -v -t raw Chain PREROUTING (policy ACCEPT 193 packets, 63701 bytes) pkts bytes target prot opt in out source destination 193 63701 TRACE tcp -- any any anywhere anywhere Chain OUTPUT (policy ACCEPT 178 packets, 65277 bytes) pkts bytes target prot opt in out source destination Edit 2: The rule iptables -A PREROUTING -t raw -p tcp -j LOG does print packet information to /var/log/syslog... Why doesn't TRACE work?
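
    One thing worth checking, offered as an assumption rather than a confirmed fix: TRACE only produces output once the kernel logging module behind it is loaded (ipt_LOG on a Squeeze-era 2.6.32 kernel), and the messages then land in the kernel log. A minimal sketch, with the --dport 22 restriction added only to keep the kernel log from flooding:

        modprobe ipt_LOG                 # xt_LOG on newer kernels
        lsmod | grep -i _log             # confirm the logging module is present
        iptables -t raw -A PREROUTING -p tcp --dport 22 -j TRACE
        tail -f /var/log/kern.log | grep "TRACE:"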

    Read the article

  • General purpose ticketing/tech support system [closed]

    - by crazybyte
    Possible Duplicate: What’s your favorite ticketing system? I was wondering if somebody could recommend a very user-friendly or simple general-purpose ticketing/tech support system. I need something that is web based, preferably open-source/free software implemented using PHP, Ruby, Ruby on Rails or Java (as the back end) with MySQL or PostgreSQL as the database engine. I need something that is not development-management or project-management oriented like Eventum or similar (random example); something to which the user can connect, open a tech support request and follow it until it is solved or dropped. I need it to be open source so that I can modify or extend it if needed. I tried a number of such systems and found that osTicket and eTicket are close to what I need, but the code is somewhat flaky and some of the features work badly or behave strangely. Any thoughts/advice on where to find something similar? Thanks!

    Read the article

  • Concatenating gzipped Apache logs

    - by markdrayton
    We rotate and compress our Apache logs each day but it's become apparent that this isn't frequently enough. An uncompressed log is about 6G, which is getting close to filling our log partition (yep, we'll make it bigger in the future!) as well as taking a lot of time and CPU to compress each day. We have to produce a gzipped log for each day for our stats processing. Obviously we could move our logs to a partition with more space but I also want to spread the compression overhead throughout the day. Using Apache's rotatelogs we can rotate and compress the log more often -- hourly, say -- but how can I concatenate all the hourly compressed logs into a running compressed log for the day, without decompressing the previous logs? I don't want to uncompress 24 hours' worth of data and recompress it because that has all the disadvantages of our current solution. Gzip doesn't seem to offer any append or concatenate option but perhaps I've missed something obvious. This question suggests straight shell concatenation "works", in that the archive can be decompressed, but the fact that gzip -l doesn't work seems a bit dodgy. Alternatively, perhaps this is still a bad way to do things. Other suggestions are welcome -- our only constraints are our relatively small log partitions and the need to provide a daily compressed log.
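
    For what it's worth, multi-member gzip files are part of the format: gunzip and zcat read a straight cat of .gz files correctly, and it is only gzip -l that misleads because it reports just the last member. A rough sketch of the hourly approach, with all paths and dates purely illustrative:

        # rotate hourly; rotatelogs closes one file per hour
        CustomLog "|/usr/sbin/rotatelogs /var/log/apache2/access.%Y%m%d%H 3600" combined

        # after each hour, compress the closed piece and append it to the day's archive
        gzip -9 /var/log/apache2/access.2010042113
        cat /var/log/apache2/access.2010042113.gz >> /var/log/apache2/access-20100421.gz

        # verify with zcat rather than gzip -l
        zcat /var/log/apache2/access-20100421.gz | wc -l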

    Read the article

  • When to log exception?

    - by Rune
    try { // Code } catch (Exception ex) { Logger.Log("Message", ex); throw; } In the case of a library, should I even log the exception? Should I just throw it and allow the application to log it? My concern is that if I log the exception in the library, there will be many duplicates (because the library layer will log it, the application layer will log it, and anything in between), but if I don't log it in the library, it'll be hard to track down bugs. Is there a best practice for this?
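
    One common convention, sketched here as an option rather than the answer: the library adds context and rethrows without logging, and only the outermost application layer logs, so each failure appears exactly once. ImportException and the importer are invented names for illustration.

        // in the library: wrap with context, do not log
        public void ImportFile(string path)
        {
            try
            {
                // ... work ...
            }
            catch (Exception ex)
            {
                throw new ImportException("Failed to import '" + path + "'.", ex);
            }
        }

        // at the application boundary: log once, then rethrow or handle
        try
        {
            importer.ImportFile(path);
        }
        catch (Exception ex)
        {
            Logger.Log("Import failed", ex);   // full chain available via InnerException
            throw;
        }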

    Read the article

  • SH/BASH - Scan a log file until some text occurs, then exit. How??

    - by James
    Current working environment is OSX 10.4.11. My current script: #!/bin/sh tail -f log.txt | while read line do if echo $line | grep -q 'LOL CANDY'; then echo 'LOL MATCH FOUND' exit 0 fi done It works properly the first time, but on the second run and beyond it takes 2 occurrences of 'LOL CANDY' to appear before the script will exit, for whatever reason. And although I'm not sure it is specifically related, there is the problem of the "tail -f" staying open forever. Can someone please give me an example that will work without using tail -f? If you want you can give me a bash script, as OSX can handle sh, bash, and some other shells I think.
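
    A sketch of one way to do it without tail -f at all, which avoids both the lingering tail and the stale match on a second run by remembering how many lines already existed when the script started. The one-second poll interval is an arbitrary choice:

        #!/bin/sh
        LOG=log.txt
        SEEN=`wc -l < "$LOG"`
        while true; do
            TOTAL=`wc -l < "$LOG"`
            if [ "$TOTAL" -gt "$SEEN" ]; then
                NEW=`expr $TOTAL - $SEEN`
                # only inspect the lines appended since the last pass
                if tail -n $NEW "$LOG" | grep -q 'LOL CANDY'; then
                    echo 'LOL MATCH FOUND'
                    exit 0
                fi
                SEEN=$TOTAL
            fi
            sleep 1
        done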

    Read the article

  • Log4j - Logging to multiple log files based on the project modules

    - by Veera
    Consider this scenario: I have a project with two modules and one common module as below (the package structure): com.mysite.moduleone com.mysite.moduletwo com.mysite.commonmodule In the above, the commonmodule classes can be used by the other two modules. The question: I need to configure Log4J in such a way that the log messages from moduleone and moduletwo go to different log files. I can always do this using categories. But the real problem is when I want to log the messages from the commonmodule as well. So, when the commonmodule classes are called from moduleone, the commonmodule log messages should go to the moduleone log file. If the commonmodule is accessed from moduletwo, the commonmodule log messages should go to the moduletwo log file. Is it possible to configure Log4J in this fashion? Any comments? PS: I think I made my question clear. If there is any confusion, leave a comment and I will try to clear it up. :)
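
    The per-module half of this is ordinary hierarchy configuration, sketched below with file names that are purely illustrative. The caller-sensitive part is not something plain Log4j configuration can express: one workaround is to have the commonmodule classes log against a Logger that the calling module passes in (or that is named after the calling module), so those messages inherit that module's appender.

        log4j.rootLogger=INFO, console
        log4j.appender.console=org.apache.log4j.ConsoleAppender
        log4j.appender.console.layout=org.apache.log4j.PatternLayout
        log4j.appender.console.layout.ConversionPattern=%d %-5p %c - %m%n

        log4j.appender.moduleOneFile=org.apache.log4j.RollingFileAppender
        log4j.appender.moduleOneFile.File=logs/moduleone.log
        log4j.appender.moduleOneFile.layout=org.apache.log4j.PatternLayout
        log4j.appender.moduleOneFile.layout.ConversionPattern=%d %-5p %c - %m%n

        log4j.appender.moduleTwoFile=org.apache.log4j.RollingFileAppender
        log4j.appender.moduleTwoFile.File=logs/moduletwo.log
        log4j.appender.moduleTwoFile.layout=org.apache.log4j.PatternLayout
        log4j.appender.moduleTwoFile.layout.ConversionPattern=%d %-5p %c - %m%n

        # everything under each package goes to that module's file
        log4j.logger.com.mysite.moduleone=DEBUG, moduleOneFile
        log4j.logger.com.mysite.moduletwo=DEBUG, moduleTwoFile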

    Read the article

  • How can I "git log" only code published to trunk?

    - by Russell Silva
    At my workplace we have a "master" trunk branch that represents published code. To make a change, I check out a working copy, create a topic branch, commit to the topic branch, merge the topic branch into master, and push. For small changes, I might commit directly to master, then push. My problem is that when I use "git log", I don't care about my topic branches in my local working copy. I only want to see the changes to the master branch on the remote, shared git server. What's more, if I use --stat or -p or one of their friends, I want to see the files and changes associated with the merge commit to master, not associated to their original branch commits (which, like I said, I don't want to see at all). How do I go about doing this?
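
    A sketch of what usually covers this, assuming the shared server is the remote named origin: log only what is reachable from the remote master and follow just the first-parent chain, so the topic-branch commits disappear and each merge shows as a single step. With --first-parent in effect, -m makes --stat and -p diff the merge against that first parent.

        git fetch origin

        # history as master saw it, hiding individual topic-branch commits
        git log --first-parent origin/master

        # the files/changes each merge (or direct commit) brought into master
        git log --first-parent -m --stat origin/master
        git log --first-parent -m -p origin/master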

    Read the article

  • storing crontab php outputs in a log file

    - by vick
    * * * * * php /home/admin/public_html/domain.com/public/cron/route.php &>> /home/admin/public_html/domain.com/log/cron.log I have that cron job running every minute. I want to store the errors that occur in route.php in cron.log. This works wonderfully when I run php /home/admin/public_html/domain.com/public/cron/route.php &>> /home/admin/public_html/domain.com/log/cron.log through the command line manually. But when crontab runs it, no errors get stored in cron.log. The cron.log is owned by admin:admin and the permissions are set to 777 just to be sure. Anyone?
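
    A hedged guess at the cause, given that the same line works interactively: cron runs commands with /bin/sh, and &>> is a bash-only redirection, so under sh the error stream never reaches the file. Two ways to write the entry:

        # append stdout and stderr explicitly (works in plain sh)
        * * * * * php /home/admin/public_html/domain.com/public/cron/route.php >> /home/admin/public_html/domain.com/log/cron.log 2>&1

        # or have cron use bash for the whole crontab, if bash is available
        SHELL=/bin/bash
        * * * * * php /home/admin/public_html/domain.com/public/cron/route.php &>> /home/admin/public_html/domain.com/log/cron.log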

    Read the article

  • How can I change ruby log level in unit tests based on context

    - by Stuart
    I'm new to ruby so forgive me if this is simple or I get some terminology wrong. I've got a bunch of unit tests (actually they're integration tests for another project, but they use ruby test/unit) and they all include from a module that sets up an instance variable for the log object. When I run the individual tests I'd like log.level to be debug, but when I run a suite I'd like log.level to be error. Is it possible to do this with the approach I'm taking, or does the code need to be restructured? Here's a small example of what I have so far. The logging module: #!/usr/bin/env ruby require 'logger' module MyLog def setup @log = Logger.new(STDOUT) @log.level = Logger::DEBUG end end A test: #!/usr/bin/env ruby require 'test/unit' require 'mylog' class Test1 < Test::Unit::TestCase include MyLog def test_something @log.info("About to test something") # Test goes here @log.info("Done testing something") end end A test suite made up of all the tests in its directory: #!/usr/bin/env ruby Dir.foreach(".") do |path| if /it-.*\.rb/.match(File.basename(path)) require path end end
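
    One way to keep the shared module and still vary the level, sketched on the assumption that the suite runner can set an environment variable before requiring the test files (MYLOG_LEVEL is an invented name):

        # mylog.rb
        require 'logger'

        module MyLog
          def setup
            @log = Logger.new(STDOUT)
            # individual runs default to DEBUG; the suite exports MYLOG_LEVEL=ERROR
            @log.level = Logger.const_get((ENV['MYLOG_LEVEL'] || 'DEBUG').upcase)
          end
        end

        # suite.rb
        ENV['MYLOG_LEVEL'] = 'ERROR'
        Dir.foreach(".") do |path|
          require path if /it-.*\.rb/.match(File.basename(path))
        end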

    Read the article

  • Append all logs to /var/log

    - by iCy
    Application scenario: I have the (normal/permanent) /var/log mounted on an encrypted partition (/dev/LVG/log). /dev/LVG/log is not accessible at boot time, it needs to be manually activated later by su from ssh. A RAM drive (using tmpfs) is mounted to /var/log at init time (in rc.local). Once /dev/LVG/log is activated, I need a good way of appending everything in the tmpfs to /dev/LVG/log, before mounting it as /var/log. Any recommendations on what would be a good way of doing so? Thanks in advance!
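
    A rough sketch of the hand-over step, with the temporary mount point purely illustrative. Note that daemons holding open file handles keep writing to the tmpfs until they are restarted or reloaded (rsyslog in particular), so that has to follow the remount:

        #!/bin/sh
        # mount the decrypted volume somewhere temporary
        mount /dev/LVG/log /mnt/permlog

        # append every file from the RAM /var/log onto its permanent counterpart
        cd /var/log
        find . -type f | while read f; do
            mkdir -p "/mnt/permlog/$(dirname "$f")"
            cat "$f" >> "/mnt/permlog/$f"
        done

        # stack the real volume over the tmpfs and let the daemons reopen their files
        umount /mnt/permlog
        mount /dev/LVG/log /var/log
        /etc/init.d/rsyslog restart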

    Read the article

  • How do I get points on a curve in PHP with log()?

    - by Erick
    I have a graph I am trying to replicate: I have the following PHP code: $sale_price = 25000; $future_val = 5000; $term = 60; $x = $sale_price / $future_val; $pts = array(); $pts[] = array($x,0); for ($i=1; $i<=$term; $i++) { $y = log($x+0.4)+2.5; $pts[] = array($i,$y); echo $y . " <br>\n"; } How do I make the code work to give me the points along the lower line (between the yellow and blue areas)? It doesn't need to be exact, just somewhat close. The formula is: -ln(x+.4)+2.5 I got that by using the Online Function Grapher at http://www.livephysics.com/ Thanks in advance!!
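
    A sketch of a corrected loop: the posted code drops the minus sign and never varies x inside the loop, so y comes out constant. How the 60 months map onto x is an assumption here (x spread evenly from 0 up to sale_price/future_val); PHP's log() is already the natural log the formula needs.

        <?php
        $sale_price = 25000;
        $future_val = 5000;
        $term       = 60;

        $x_max = $sale_price / $future_val;   // 5.0 in this example
        $pts   = array();

        for ($i = 0; $i <= $term; $i++) {
            $x = $x_max * $i / $term;          // walk x across the term
            $y = -log($x + 0.4) + 2.5;         // y = -ln(x + 0.4) + 2.5
            $pts[] = array($i, $y);
            echo $y . "<br>\n";
        }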

    Read the article

  • Log call information whenever there is a phone call.

    - by linuxdoniv
    Hi, I have written the android application and I want the application to send the call information whenever there is an incoming call and it ends. This way I would be sending all calls to the server irrespective of size of the call log. Here is the code public class PhoneInfo extends BroadcastReceiver { private int incoming_call = 0; private Cursor c; Context context; public void onReceive(Context con, Intent intent) { c = con.getContentResolver().query( android.provider.CallLog.Calls.CONTENT_URI, null, null, null, android.provider.CallLog.Calls.DATE+ " DESC"); context = con; IncomingCallListener phoneListener=new IncomingCallListener(); TelephonyManager telephony = (TelephonyManager) con.getSystemService(Context.TELEPHONY_SERVICE); telephony.listen(phoneListener,PhoneStateListener.LISTEN_CALL_STATE); } public class IncomingCallListener extends PhoneStateListener { public void onCallStateChanged(int state,String incomingNumber){ switch(state){ case TelephonyManager.CALL_STATE_IDLE: if(incoming_call == 1){ CollectSendCallInfo(); incoming_call = 0; } break; case TelephonyManager.CALL_STATE_OFFHOOK: break; case TelephonyManager.CALL_STATE_RINGING: incoming_call = 1; break; } } } private void CollectSendCallInfo() { int numberColumn = c.getColumnIndex( android.provider.CallLog.Calls.NUMBER); int dateColumn = c.getColumnIndex( android.provider.CallLog.Calls.DATE); int typeColumn = c.getColumnIndex( android.provider.CallLog.Calls.TYPE); int durationColumn=c.getColumnIndex( android.provider.CallLog.Calls.DURATION); ArrayList<String> callList = new ArrayList<String>(); try{ boolean moveToFirst=c.moveToFirst(); } catch(Exception e) { ; // could not move to the first row. return; } int row_count = c.getCount(); int loop_index = 0; int is_latest_call_read = 0; String callerPhonenumber = c.getString(numberColumn); int callDate = c.getInt(dateColumn); int callType = c.getInt(typeColumn); int duration=c.getInt(durationColumn); while((loop_index <row_count) && (is_latest_call_read != 1)){ switch(callType){ case android.provider.CallLog.Calls.INCOMING_TYPE: is_latest_call_read = 1; break; case android.provider.CallLog.Calls.MISSED_TYPE: break; case android.provider.CallLog.Calls.OUTGOING_TYPE: break; } loop_index++; c.moveToNext(); } SendCallInfo(callerPhonenumber, Integer.toString(duration), Integer.toString(callDate)); } private void SendCallInfo(String callerPhonenumber, String callDuration, String callDate) { JSONObject j = new JSONObject(); try { j.put("Caller", callerPhonenumber); j.put("Duration", callDuration); j.put("CallDate", callDate); } catch (JSONException e) { Toast.makeText(context, "Json object failure!", Toast.LENGTH_LONG).show(); } String url = "http://xxxxxx.xxx.xx/xxxx/xxx.php"; Map<String, String> kvPairs = new HashMap<String, String>(); kvPairs.put("phonecall", j.toString()); HttpResponse re; try { re = doPost(url, kvPairs); String temp; try { temp = EntityUtils.toString(re.getEntity()); if (temp.compareTo("SUCCESS") == 0) { ; } else ; } catch (ParseException e1) { Toast.makeText(context, "Parse Exception in response!", Toast.LENGTH_LONG) .show(); e1.printStackTrace(); } catch (IOException e1) { Toast.makeText(context, "Io exception in response!", Toast.LENGTH_LONG).show(); e1.printStackTrace(); } } catch (ClientProtocolException e1) { Toast.makeText(context, "Client Protocol Exception!", Toast.LENGTH_LONG).show(); e1.printStackTrace(); } catch (IOException e1) { Toast.makeText(context, "Client Protocol Io exception!", Toast.LENGTH_LONG).show(); e1.printStackTrace(); } } and 
here is the manifest file <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION"></uses-permission> <uses-permission android:name="android.permission.INTERNET"></uses-permission> <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"></uses-permission> <uses-permission android:name="android.permission.ACCESS_LOCATION_EXTRA_COMMANDS"></uses-permission> <uses-permission android:name="android.permission.INSTALL_LOCATION_PROVIDER"></uses-permission> <uses-permission android:name="android.permission.SET_DEBUG_APP"></uses-permission> <uses-permission android:name="android.permission.RECEIVE_SMS"></uses-permission> <uses-permission android:name="android.permission.READ_PHONE_STATE"></uses-permission> <uses-permission android:name="android.permission.READ_SMS"></uses-permission> <application android:icon="@drawable/icon" android:label="@string/app_name"> <activity android:name=".Friend" android:label="@string/app_name"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <activity android:name=".LoginInfo" android:label="@string/app_name"> <intent-filter> <action android:name="android.intent.action.DEFAULT" /> </intent-filter> </activity> <service android:exported="true" android:enabled="true" android:name=".GeoUpdateService" > </service> <receiver android:name=".SmsInfo" > <intent-filter> <action android:name= "android.provider.Telephony.SMS_RECEIVED" /> </intent-filter> </receiver> <receiver android:name=".PhoneInfo" > <intent-filter> <action android:name="android.intent.action.PHONE_STATE"></action> </intent-filter> </receiver> </application> The application just crashes when there is an incoming call.. i have been able to log the information about incoming SMS, but this call info logging is failing. Thanks for any help.
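
    One likely cause of the crash, offered as a guess from the manifest shown rather than a confirmed diagnosis: the receiver queries CallLog.Calls.CONTENT_URI, but no permission covering the call log is declared, so the query throws a SecurityException. On the platform versions of that era the call log was gated by READ_CONTACTS; later releases (API 16+) use READ_CALL_LOG. The stack trace in adb logcat would confirm it.

        <uses-permission android:name="android.permission.READ_CONTACTS" />
        <!-- on API level 16 and newer -->
        <uses-permission android:name="android.permission.READ_CALL_LOG" />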

    Read the article

  • Leveraging ERP Investments with EPM and BI Solutions

    - by john.orourke(at)oracle.com
    Now that many organizations have implemented ERP systems to automate and integrate their operational processes, IT investments are beginning to shift to the management systems i.e. EPM and BI tools and applications that integrate data from multiple transactional systems.  These solutions automate and integrate the management processes and enable organizations to achieve "management excellence" becoming smarter, more agile and more aligned than their competitors.  In fact the results of a recent IDC survey indicate that "Organizations that have implemented performance management more broadly are nearly four times more likely to be among the most competitive organizations in their industry."  One example of an organization that is leveraging their ERP investments with Oracle EPM and BI solutions is General Dynamics.  The Business Intelligence Collaborative (BIC) group within General Dynamics' IT organization assists various business units with the implementation, application support, and application hosting for their Business Intelligence and Enterprise Performance Management Applications.  Attend the Oracle Virtual Trade Show "Spotlight on Customer Success" on February 3rd to hear the details of how General Dynamics is using Oracle Essbase, Hyperion Planning, and Oracle BI to improve their planning, reporting and analysis processes and leverage their investments in Oracle E-Business Suite and other operational systems.   During the event, you can also hear about the latest developments and plans for Oracle Applications products, as well as what's coming with Oracle Fusion Applications. Here's a link to the Virtual Trade Show event overview and registration page.  The event runs from 8AM - 1PM PST/11AM - 4PM EST, and the EPM session is 10:30 - 11AM PST/1:30 - 2PM EST.    http://event.on24.com/event/26/79/15/rt/opFb.html?partnerref=internal I hope you'll join us on February 3rd!  

    Read the article

  • Ruby/Rails display general screen when modifications being performed on server

    - by john chan
    I have a ruby on rails app running on a server and sometimes it needs to be taken down for updates/etc. As of now, one way I see to have a general display screen during update periods (when the app is down) is to substitute the files within the /srv/www/ directory so that it displays a general screen everywhere the user could possibly navigate to. I also thought of having a central controller file that connects all the others (essentially a main), but this seems counterintuitive for Rails. There are many external links to these different components of the site that the user could navigate to from outside, and I need to make sure that they always receive this general update screen while the app is taken down for a little while. I was wondering if anyone had any other ideas, maybe a library or something like that; I can't seem to find anything online. Any suggestions would be appreciated. Thanks
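
    One common pattern, sketched along the lines of what Capistrano's deploy:web:disable task generates: let the front-end web server serve a static maintenance page whenever a flag file exists, so every route is covered without touching the Rails app. The rules below assume Apache with mod_rewrite in front of the app; deploys copy the file into place before stopping the app and remove it afterwards.

        RewriteEngine On
        # if public/system/maintenance.html exists, serve it for every request
        RewriteCond %{DOCUMENT_ROOT}/system/maintenance.html -f
        RewriteCond %{SCRIPT_FILENAME} !maintenance.html
        RewriteRule ^.*$ /system/maintenance.html [L]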

    Read the article

  • Efficient algorithm for finding largest eigenpair of small general complex matrix

    - by mklassen
    I am looking for an efficient algorithm to find the largest eigenpair of a small, general (non-square, non-sparse, non-symmetric), complex matrix, A, of size m x n. By small I mean m and n are typically between 4 and 64 and usually around 16, but with m not equal to n. This problem is straightforward to solve with the general LAPACK SVD algorithms, i.e. gesvd or gesdd. However, as I am solving millions of these problems and only require the largest eigenpair, I am looking for a more efficient algorithm. Additionally, in my application the eigenvectors will generally be similar for all cases. This led me to investigate Arnoldi iteration based methods, but I have neither found a good library nor an algorithm that applies to my small general complex matrix. Is there an appropriate algorithm and/or library?
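
    Since the matrix is non-square, the dominant "eigenpair" here is really the largest singular triple, and the observation that the vectors barely change between problems maps naturally onto a warm-started alternating power iteration. A sketch of the idea (NumPy used purely for illustration, not a drop-in replacement for gesvd):

        import numpy as np

        def dominant_singular_triple(A, v0=None, tol=1e-10, max_iter=200):
            """Largest singular value/vectors of a small complex m x n matrix.
            Passing v0 from the previous, similar problem makes convergence fast."""
            m, n = A.shape
            if v0 is None:
                v = np.random.randn(n) + 1j * np.random.randn(n)
            else:
                v = v0.astype(complex)
            v /= np.linalg.norm(v)

            sigma = 0.0
            for _ in range(max_iter):
                u = A @ v                      # alternate A and A^H like the power method
                sigma_new = np.linalg.norm(u)
                u /= sigma_new
                v = A.conj().T @ u
                v /= np.linalg.norm(v)
                if abs(sigma_new - sigma) <= tol * sigma_new:
                    break
                sigma = sigma_new
            return sigma_new, u, v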

    Read the article

  • PHPMyAdmin: "General relation features: Disabled"

    - by Simón
    I've been looking around for something like this for a while, and I've found some tips on similar issues, but not exactly the same. I really don't know what to do. I downloaded and installed WAMP, and I have a MySQL and PHPMyAdmin setup according to common indications that can be found everywhere (securing MySQL root account, etc.). When I log into PHPMyAdmin (either as root or as pma), I see the following message at the bottom of the page: The additional features for working with linked tables have been deactivated. To find out why click here. And when following the link, got a page with the following: Server: localhost $cfg['Servers'][$i]['pmadb'] ... OK $cfg['Servers'][$i]['relation'] ... OK General relation features: Disabled $cfg['Servers'][$i]['table_info'] ... OK Display Features: Disabled $cfg['Servers'][$i]['table_coords'] ... OK $cfg['Servers'][$i]['pdf_pages'] ... OK Creation of PDFs: Disabled $cfg['Servers'][$i]['column_info'] ... OK Displaying Column Comments: Disabled Bookmarked SQL query: Disabled Browser transformation: Disabled $cfg['Servers'][$i]['history'] ... OK SQL history: Disabled $cfg['Servers'][$i]['designer_coords'] ... OK Designer: Disabled Somebody please explain to me, why the heck if all settings are "OK" the features remain "Disabled"? Note: at first all the settings were "not OK" and I managed to add the settings to config.inc.php, and then created the tables using scripts/create_tables.php. Of course I have already tried restarting the server or clearing the browser cache (several times, so I am sure the problem comes elsewhere).
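
    Two usual suspects for the OK-but-Disabled state, offered as likely causes rather than a certain diagnosis: the feature check is cached in the phpMyAdmin session, so the status only changes after logging out of phpMyAdmin completely (all windows) and back in; and the controluser/controlpass entries must point at an account with rights on the pma tables. For reference, a config.inc.php block matching the default table names from scripts/create_tables.php looks roughly like this:

        $cfg['Servers'][$i]['controluser']     = 'pma';
        $cfg['Servers'][$i]['controlpass']     = 'pmapass';
        $cfg['Servers'][$i]['pmadb']           = 'phpmyadmin';
        $cfg['Servers'][$i]['relation']        = 'pma_relation';
        $cfg['Servers'][$i]['table_info']      = 'pma_table_info';
        $cfg['Servers'][$i]['table_coords']    = 'pma_table_coords';
        $cfg['Servers'][$i]['pdf_pages']       = 'pma_pdf_pages';
        $cfg['Servers'][$i]['column_info']     = 'pma_column_info';
        $cfg['Servers'][$i]['bookmarktable']   = 'pma_bookmark';
        $cfg['Servers'][$i]['history']         = 'pma_history';
        $cfg['Servers'][$i]['designer_coords'] = 'pma_designer_coords';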

    Read the article

  • Advice on Computer Specs for overall development/general use machine

    - by Ender
    At the moment I am restricted to a laptop with 512MB of RAM, a 120GB HDD and a 1.5GHz Intel processor for all my development and general browsing needs, and as you can probably tell using it for anything modern is a painful experience. As a result I've decided to buy myself a new desktop computer, one that will stand the test of time and one that can be upgraded easily. Rather than build the machine myself I've decided to go through Dell as I've had good experiences with them when purchasing computers for my family. I've had my eye on this as it's got a good amount of RAM, has a decent-rated processor and isn't priced too badly. http://www1.euro.dell.com/uk/en/home/Desktops/inspiron-580/pd.aspx?refid=inspiron-580&s=dhs&cs=ukepp1&~oid=uk~en~20211~inspiron-580_d005827~~ Intel® Core™ i5 Processor 750 (2.66GHz, 8MB) Genuine Windows® 7 Home Premium 64bit - English Display Not Included ATI Radeon™ HD 5450 1GB DDR3 graphics 6144MB Dual Channel DDR3 [3x2048] Memory 1TB (7200rpm) SATA Hard Drive DVD +/- RW Drive (read/write CD & DVD) with DVD Burn software 1 year of coverage included with your PC McAfee® Security Centre - 15 Month Protection - English After the pain of using a slow laptop for all this time the main thing I want is speed. I may look to play a couple of basic games on it, nothing too powerful. Obviously I'll be doing some development on it too so it'll have to be able to handle the latest IDE's and Database tools like SQL Server pretty quickly. Finally, should I ever need to improve it I'd like to be able to add more RAM and change some of the parts. I wouldn't have thought this would be a problem but a few people I've spoken to have said that the amount of RAM the motherboard can handle isn't that great. Is this true? How long can I expect to be using this computer before it's too slow? Thanks in advance for the help.

    Read the article

  • Logging Redirect Location in Apache

    - by matthew
    Using the standard "Combined" log format, when Apache returns a 301 response it logs it in the access log, which is good, but it only logs the response code. It doesn't log the location to which the client is being directed. What argument do I need to add to the CustomLog format in order to get it to log this information? It wasn't obvious to me from the documentation.

    Read the article

  • Logging Output of Azure Startup Tasks to the Event Log

    - by Your DisplayName here!
    This can come in handy when troubleshooting: using System; using System.Diagnostics; using System.Text;   namespace Thinktecture.Azure {     class Program     {         static EventLog _eventLog = new EventLog("Application", ".", "StartupTaskShell");         static StringBuilder _out = new StringBuilder(64);         static StringBuilder _err = new StringBuilder(64);           static int Main(string[] args)         {             if (args.Length != 1)             {                 Console.WriteLine("Invalid arguments: " + String.Join(", ", args));                 _eventLog.WriteEntry("Invalid arguments: " + String.Join(", ", args));                                 return -1;             }               var task = args[0];               ProcessStartInfo info = new ProcessStartInfo()             {                 FileName = task,                 WorkingDirectory = Environment.CurrentDirectory,                 UseShellExecute = false,                 ErrorDialog = false,                 CreateNoWindow = true,                 RedirectStandardOutput = true,                 RedirectStandardError = true             };               var process = new Process();             process.StartInfo = info;               process.OutputDataReceived += (s, e) =>                 {                     if (e.Data != null)                     {                         _out.AppendLine(e.Data);                     }                 };             process.ErrorDataReceived += (s, e) =>                 {                     if (e.Data != null)                     {                         _err.AppendLine(e.Data);                     }                 };               process.Start();             process.BeginOutputReadLine();             process.BeginErrorReadLine();             process.WaitForExit();               var outString = _out.ToString();             var errString = _err.ToString();               if (!string.IsNullOrWhiteSpace(outString))             {                 outString = String.Format("Standard Out for {0}\n\n{1}", task, outString);                 _eventLog.WriteEntry(outString, EventLogEntryType.Information);             }               if (!string.IsNullOrWhiteSpace(errString))             {                 errString = String.Format("Standard Err for {0}\n\n{1}", task, errString);                 _eventLog.WriteEntry(errString, EventLogEntryType.Error);             }               return 0;         }     } } You then wrap your startup tasks with the StartupTaskShell and you’ll be able to see stdout and stderr in the application event log.
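
    For context, a sketch of how such a wrapper might be wired up in ServiceDefinition.csdef; the executable and script names are illustrative and not part of the original post:

        <Startup>
          <Task commandLine="StartupTaskShell.exe install-something.cmd"
                executionContext="elevated"
                taskType="simple" />
        </Startup>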

    Read the article

  • Node.js Adventure - When Node Flying in Wind

    - by Shaun
    In the first post of this series I mentioned some popular modules in the community, such as underscore, async, etc.. I also listed a module named “Wind (zh-CN)”, which is created by one of my friend, Jeff Zhao (zh-CN). Now I would like to use a separated post to introduce this module since I feel it brings a new async programming style in not only Node.js but JavaScript world. If you know or heard about the new feature in C# 5.0 called “async and await”, or you learnt F#, you will find the “Wind” brings the similar async programming experience in JavaScript. By using “Wind”, we can write async code that looks like the sync code. The callbacks, async stats and exceptions will be handled by “Wind” automatically and transparently.   What’s the Problem: Dense “Callback” Phobia Let’s firstly back to my second post in this series. As I mentioned in that post, when we wanted to read some records from SQL Server we need to open the database connection, and then execute the query. In Node.js all IO operation are designed as async callback pattern which means when the operation was done, it will invoke a function which was taken from the last parameter. For example the database connection opening code would be like this. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: } 8: }); And then if we need to query the database the code would be like this. It nested in the previous function. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: conn.queryRaw(command, function(error, results) { 8: if(error) { 9: // failed to execute this command 10: } 11: else { 12: // records retrieved successfully 13: } 14: }; 15: } 16: }); Assuming if we need to copy some data from this database to another then we need to open another connection and execute the command within the function under the query function. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: conn.queryRaw(command, function(error, results) { 8: if(error) { 9: // failed to execute this command 10: } 11: else { 12: // records retrieved successfully 13: target.open(targetConnectionString, function(error, t_conn) { 14: if(error) { 15: // connect failed 16: } 17: else { 18: t_conn.queryRaw(copy_command, function(error, results) { 19: if(error) { 20: // copy failed 21: } 22: else { 23: // and then, what do you want to do now... 24: } 25: }; 26: } 27: }; 28: } 29: }; 30: } 31: }); This is just an example. In the real project the logic would be more complicated. This means our application might be messed up and the business process will be fragged by many callback functions. I would like call this “Dense Callback Phobia”. This might be a challenge how to make code straightforward and easy to read, something like below. 
1: try 2: { 3: // open source connection 4: var s_conn = sqlConnect(s_connectionString); 5: // retrieve data 6: var results = sqlExecuteCommand(s_conn, s_command); 7: 8: // open target connection 9: var t_conn = sqlConnect(t_connectionString); 10: // prepare the copy command 11: var t_command = getCopyCommand(results); 12: // execute the copy command 13: sqlExecuteCommand(s_conn, t_command); 14: } 15: catch (ex) 16: { 17: // error handling 18: }   What’s the Problem: Sync-styled Async Programming Similar as the previous problem, the callback-styled async programming model makes the upcoming operation as a part of the current operation, and mixed with the error handling code. So it’s very hard to understand what on earth this code will do. And since Node.js utilizes non-blocking IO mode, we cannot invoke those operations one by one, as they will be executed concurrently. For example, in this post when I tried to copy the records from Windows Azure SQL Database (a.k.a. WASD) to Windows Azure Table Storage, if I just insert the data into table storage one by one and then print the “Finished” message, I will see the message shown before the data had been copied. This is because all operations were executed at the same time. In order to make the copy operation and print operation executed synchronously I introduced a module named “async” and the code was changed as below. 1: async.forEach(results.rows, 2: function (row, callback) { 3: var resource = { 4: "PartitionKey": row[1], 5: "RowKey": row[0], 6: "Value": row[2] 7: }; 8: client.insertEntity(tableName, resource, function (error) { 9: if (error) { 10: callback(error); 11: } 12: else { 13: console.log("entity inserted."); 14: callback(null); 15: } 16: }); 17: }, 18: function (error) { 19: if (error) { 20: error["target"] = "insertEntity"; 21: res.send(500, error); 22: } 23: else { 24: console.log("all done."); 25: res.send(200, "Done!"); 26: } 27: }); It ensured that the “Finished” message will be printed when all table entities had been inserted. But it cannot promise that the records will be inserted in sequence. It might be another challenge to make the code looks like in sync-style? 1: try 2: { 3: forEach(row in rows) { 4: var entity = { /* ... */ }; 5: tableClient.insert(tableName, entity); 6: } 7:  8: console.log("Finished"); 9: } 10: catch (ex) { 11: console.log(ex); 12: }   How “Wind” Helps “Wind” is a JavaScript library which provides the control flow with plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps. It’s available in NPM so that we can install it through “npm install wind”. Now let’s create a very simple Node.js application as the example. This application will take some website URLs from the command arguments and tried to retrieve the body length and print them in console. Then at the end print “Finish”. I’m going to use “request” module to make the HTTP call simple so I also need to install by the command “npm install request”. The code would be like this. 
1: var request = require("request"); 2:  3: // get the urls from arguments, the first two arguments are `node.exe` and `fetch.js` 4: var args = process.argv.splice(2); 5:  6: // main function 7: var main = function() { 8: for(var i = 0; i < args.length; i++) { 9: // get the url 10: var url = args[i]; 11: // send the http request and try to get the response and body 12: request(url, function(error, response, body) { 13: if(!error && response.statusCode == 200) { 14: // log the url and the body length 15: console.log( 16: "%s: %d.", 17: response.request.uri.href, 18: body.length); 19: } 20: else { 21: // log error 22: console.log(error); 23: } 24: }); 25: } 26: 27: // finished 28: console.log("Finished"); 29: }; 30:  31: // execute the main function 32: main(); Let’s execute this application. (I made them in multi-lines for better reading.) 1: node fetch.js 2: "http://www.igt.com/us-en.aspx" 3: "http://www.igt.com/us-en/games.aspx" 4: "http://www.igt.com/us-en/cabinets.aspx" 5: "http://www.igt.com/us-en/systems.aspx" 6: "http://www.igt.com/us-en/interactive.aspx" 7: "http://www.igt.com/us-en/social-gaming.aspx" 8: "http://www.igt.com/support.aspx" Below is the output. As you can see the finish message was printed at the beginning, and the pages’ length retrieved in a different order than we specified. This is because in this code the request command, console logging command are executed asynchronously and concurrently. Now let’s introduce “Wind” to make them executed in order, which means it will request the websites one by one, and print the message at the end.   First of all we need to import the “Wind” package and make sure the there’s only one global variant named “Wind”, and ensure it’s “Wind” instead of “wind”. 1: var Wind = require("wind");   Next, we need to tell “Wind” which code will be executed asynchronously so that “Wind” can control the execution process. In this case the “request” operation executed asynchronously so we will create a “Task” by using a build-in helps function in “Wind” named Wind.Async.Task.create. 1: var requestBodyLengthAsync = function(url) { 2: return Wind.Async.Task.create(function(t) { 3: request(url, function(error, response, body) { 4: if(error || response.statusCode != 200) { 5: t.complete("failure", error); 6: } 7: else { 8: var data = 9: { 10: uri: response.request.uri.href, 11: length: body.length 12: }; 13: t.complete("success", data); 14: } 15: }); 16: }); 17: }; The code above created a “Task” from the original request calling code. In “Wind” a “Task” means an operation will be finished in some time in the future. A “Task” can be started by invoke its start() method, but no one knows when it actually will be finished. The Wind.Async.Task.create helped us to create a task. The only parameter is a function where we can put the actual operation in, and then notify the task object it’s finished successfully or failed by using the complete() method. In the code above I invoked the request method. If it retrieved the response successfully I set the status of this task as “success” with the URL and body length. If it failed I set this task as “failure” and pass the error out.   Next, we will change the main() function. In “Wind” if we want a function can be controlled by Wind we need to mark it as “async”. This should be done by using the code below. 
1: var main = eval(Wind.compile("async", function() { 2: })); When the application is running, Wind will detect “eval(Wind.compile(“async”, function” and generate an anonymous code from the body of this original function. Then the application will run the anonymous code instead of the original one. In our example the main function will be like this. 1: var main = eval(Wind.compile("async", function() { 2: for(var i = 0; i < args.length; i++) { 3: try 4: { 5: var result = $await(requestBodyLengthAsync(args[i])); 6: console.log( 7: "%s: %d.", 8: result.uri, 9: result.length); 10: } 11: catch (ex) { 12: console.log(ex); 13: } 14: } 15: 16: console.log("Finished"); 17: })); As you can see, when I tried to request the URL I use a new command named “$await”. It tells Wind, the operation next to $await will be executed asynchronously, and the main thread should be paused until it finished (or failed). So in this case, my application will be pause when the first response was received, and then print its body length, then try the next one. At the end, print the finish message.   Finally, execute the main function. The full code would be like this. 1: var request = require("request"); 2: var Wind = require("wind"); 3:  4: var args = process.argv.splice(2); 5:  6: var requestBodyLengthAsync = function(url) { 7: return Wind.Async.Task.create(function(t) { 8: request(url, function(error, response, body) { 9: if(error || response.statusCode != 200) { 10: t.complete("failure", error); 11: } 12: else { 13: var data = 14: { 15: uri: response.request.uri.href, 16: length: body.length 17: }; 18: t.complete("success", data); 19: } 20: }); 21: }); 22: }; 23:  24: var main = eval(Wind.compile("async", function() { 25: for(var i = 0; i < args.length; i++) { 26: try 27: { 28: var result = $await(requestBodyLengthAsync(args[i])); 29: console.log( 30: "%s: %d.", 31: result.uri, 32: result.length); 33: } 34: catch (ex) { 35: console.log(ex); 36: } 37: } 38: 39: console.log("Finished"); 40: })); 41:  42: main().start();   Run our new application. At the beginning we will see the compiled and generated code by Wind. Then we can see the pages were requested one by one, and at the end the finish message was printed. Below is the code Wind generated for us. As you can see the original code, the output code were shown. 
1: // Original: 2: function () { 3: for(var i = 0; i < args.length; i++) { 4: try 5: { 6: var result = $await(requestBodyLengthAsync(args[i])); 7: console.log( 8: "%s: %d.", 9: result.uri, 10: result.length); 11: } 12: catch (ex) { 13: console.log(ex); 14: } 15: } 16: 17: console.log("Finished"); 18: } 19:  20: // Compiled: 21: /* async << function () { */ (function () { 22: var _builder_$0 = Wind.builders["async"]; 23: return _builder_$0.Start(this, 24: _builder_$0.Combine( 25: _builder_$0.Delay(function () { 26: /* var i = 0; */ var i = 0; 27: /* for ( */ return _builder_$0.For(function () { 28: /* ; i < args.length */ return i < args.length; 29: }, function () { 30: /* ; i ++) { */ i ++; 31: }, 32: /* try { */ _builder_$0.Try( 33: _builder_$0.Delay(function () { 34: /* var result = $await(requestBodyLengthAsync(args[i])); */ return _builder_$0.Bind(requestBodyLengthAsync(args[i]), function (result) { 35: /* console.log("%s: %d.", result.uri, result.length); */ console.log("%s: %d.", result.uri, result.length); 36: return _builder_$0.Normal(); 37: }); 38: }), 39: /* } catch (ex) { */ function (ex) { 40: /* console.log(ex); */ console.log(ex); 41: return _builder_$0.Normal(); 42: /* } */ }, 43: null 44: ) 45: /* } */ ); 46: }), 47: _builder_$0.Delay(function () { 48: /* console.log("Finished"); */ console.log("Finished"); 49: return _builder_$0.Normal(); 50: }) 51: ) 52: ); 53: /* } */ })   How Wind Works Someone may raise a big concern when you find I utilized “eval” in my code. Someone may assume that Wind utilizes “eval” to execute some code dynamically while “eval” is very low performance. But I would say, Wind does NOT use “eval” to run the code. It only use “eval” as a flag to know which code should be compiled at runtime. When the code was firstly been executed, Wind will check and find “eval(Wind.compile(“async”, function”. So that it knows this function should be compiled. Then it utilized parse-js to analyze the inner JavaScript and generated the anonymous code in memory. Then it rewrite the original code so that when the application was running it will use the anonymous one instead of the original one. Since the code generation was done at the beginning of the application was started, in the future no matter how long our application runs and how many times the async function was invoked, it will use the generated code, no need to generate again. So there’s no significant performance hurt when using Wind.   Wind in My Previous Demo Let’s adopt Wind into one of my previous demonstration and to see how it helps us to make our code simple, straightforward and easy to read and understand. In this post when I implemented the functionality that copied the records from my WASD to table storage, the logic would be like this. 1, Open database connection. 2, Execute a query to select all records from the table. 3, Recreate the table in Windows Azure table storage. 4, Create entities from each of the records retrieved previously, and then insert them into table storage. 5, Finally, show message as the HTTP response. But as the image below, since there are so many callbacks and async operations, it’s very hard to understand my logic from the code. Now let’s use Wind to rewrite our code. First of all, of course, we need the Wind package. Then we need to include the package files into project and mark them as “Copy always”. Add the Wind package into the source code. Pay attention to the variant name, you must use “Wind” instead of “wind”. 
1: var express = require("express"); 2: var async = require("async"); 3: var sql = require("node-sqlserver"); 4: var azure = require("azure"); 5: var Wind = require("wind"); Now we need to create some async functions by using Wind. All async functions should be wrapped so that it can be controlled by Wind which are open database, retrieve records, recreate table (delete and create) and insert entity in table. Below are these new functions. All of them are created by using Wind.Async.Task.create. 1: sql.openAsync = function (connectionString) { 2: return Wind.Async.Task.create(function (t) { 3: sql.open(connectionString, function (error, conn) { 4: if (error) { 5: t.complete("failure", error); 6: } 7: else { 8: t.complete("success", conn); 9: } 10: }); 11: }); 12: }; 13:  14: sql.queryAsync = function (conn, query) { 15: return Wind.Async.Task.create(function (t) { 16: conn.queryRaw(query, function (error, results) { 17: if (error) { 18: t.complete("failure", error); 19: } 20: else { 21: t.complete("success", results); 22: } 23: }); 24: }); 25: }; 26:  27: azure.recreateTableAsync = function (tableName) { 28: return Wind.Async.Task.create(function (t) { 29: client.deleteTable(tableName, function (error, successful, response) { 30: console.log("delete table finished"); 31: client.createTableIfNotExists(tableName, function (error, successful, response) { 32: console.log("create table finished"); 33: if (error) { 34: t.complete("failure", error); 35: } 36: else { 37: t.complete("success", null); 38: } 39: }); 40: }); 41: }); 42: }; 43:  44: azure.insertEntityAsync = function (tableName, entity) { 45: return Wind.Async.Task.create(function (t) { 46: client.insertEntity(tableName, entity, function (error, entity, response) { 47: if (error) { 48: t.complete("failure", error); 49: } 50: else { 51: t.complete("success", null); 52: } 53: }); 54: }); 55: }; Then in order to use these functions we will create a new function which contains all steps for data copying. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: } 4: catch (ex) { 5: console.log(ex); 6: res.send(500, "Internal error."); 7: } 8: })); Let’s execute steps one by one with the “$await” keyword introduced by Wind so that it will be invoked in sequence. First is to open the database connection. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: } 7: catch (ex) { 8: console.log(ex); 9: res.send(500, "Internal error."); 10: } 11: })); Then retrieve all records from the database connection. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: } 10: catch (ex) { 11: console.log(ex); 12: res.send(500, "Internal error."); 13: } 14: })); After recreated the table, we need to create the entities and insert them into table storage. 
1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage one by one 14: for (var i = 0; i < results.rows.length; i++) { 15: var entity = { 16: "PartitionKey": results.rows[i][1], 17: "RowKey": results.rows[i][0], 18: "Value": results.rows[i][2] 19: }; 20: $await(azure.insertEntityAsync(tableName, entity)); 21: console.log("entity inserted"); 22: } 23: } 24: } 25: catch (ex) { 26: console.log(ex); 27: res.send(500, "Internal error."); 28: } 29: })); Finally, send response back to the browser. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage one by one 14: for (var i = 0; i < results.rows.length; i++) { 15: var entity = { 16: "PartitionKey": results.rows[i][1], 17: "RowKey": results.rows[i][0], 18: "Value": results.rows[i][2] 19: }; 20: $await(azure.insertEntityAsync(tableName, entity)); 21: console.log("entity inserted"); 22: } 23: // send response 24: console.log("all done"); 25: res.send(200, "All done!"); 26: } 27: } 28: catch (ex) { 29: console.log(ex); 30: res.send(500, "Internal error."); 31: } 32: })); If we compared with the previous code we will find now it became more readable and much easy to understand. It’s very easy to know what this function does even though without any comments. When user go to URL “/was/copyRecords” we will execute the function above. The code would be like this. 1: app.get("/was/copyRecords", function (req, res) { 2: copyRecords(req, res).start(); 3: }); And below is the logs printed in local compute emulator console. As we can see the functions executed one by one and then finally the response back to me browser.   Scaffold Functions in Wind Wind provides not only the async flow control and compile functions, but many scaffold methods as well. We can build our async code more easily by using them. I’m going to introduce some basic scaffold functions here. In the code above I created some functions which wrapped from the original async function such as open database, create table, etc.. All of them are very similar, created a task by using Wind.Async.Task.create, return error or result object through Task.complete function. In fact, Wind provides some functions for us to create task object from the original async functions. If the original async function only has a callback parameter, we can use Wind.Async.Binding.fromCallback method to get the task object directly. For example the code below returned the task object which wrapped the file exist check function. 
1: var Wind = require("wind"); 2: var fs = require("fs"); 3:  4: fs.existsAsync = Wind.Async.Binding.fromCallback(fs.exists); In Node.js a very popular async function pattern is that, the first parameter in the callback function represent the error object, and the other parameters is the return values. In this case we can use another build-in function in Wind named Wind.Async.Binding.fromStandard. For example, the open database function can be created from the code below. 1: sql.openAsync = Wind.Async.Binding.fromStandard(sql.open); 2:  3: /* 4: sql.openAsync = function (connectionString) { 5: return Wind.Async.Task.create(function (t) { 6: sql.open(connectionString, function (error, conn) { 7: if (error) { 8: t.complete("failure", error); 9: } 10: else { 11: t.complete("success", conn); 12: } 13: }); 14: }); 15: }; 16: */ When I was testing the scaffold functions under Wind.Async.Binding I found for some functions, such as the Azure SDK insert entity function, cannot be processed correctly. So I personally suggest writing the wrapped method manually.   Another scaffold method in Wind is the parallel tasks coordination. In this example, the steps of open database, retrieve records and recreated table should be invoked one by one, but it can be executed in parallel when copying data from database to table storage. In Wind there’s a scaffold function named Task.whenAll which can be used here. Task.whenAll accepts a list of tasks and creates a new task. It will be returned only when all tasks had been completed, or any errors occurred. For example in the code below I used the Task.whenAll to make all copy operation executed at the same time. 1: var copyRecordsInParallel = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage in parallal 14: var tasks = new Array(results.rows.length); 15: for (var i = 0; i < results.rows.length; i++) { 16: var entity = { 17: "PartitionKey": results.rows[i][1], 18: "RowKey": results.rows[i][0], 19: "Value": results.rows[i][2] 20: }; 21: tasks[i] = azure.insertEntityAsync(tableName, entity); 22: } 23: $await(Wind.Async.Task.whenAll(tasks)); 24: // send response 25: console.log("all done"); 26: res.send(200, "All done!"); 27: } 28: } 29: catch (ex) { 30: console.log(ex); 31: res.send(500, "Internal error."); 32: } 33: })); 34:  35: app.get("/was/copyRecordsInParallel", function (req, res) { 36: copyRecordsInParallel(req, res).start(); 37: });   Besides the task creation and coordination, Wind supports the cancellation solution so that we can send the cancellation signal to the tasks. It also includes exception solution which means any exceptions will be reported to the caller function.   Summary In this post I introduced a Node.js module named Wind, which created by my friend Jeff Zhao. As you can see, different from other async library and framework, adopted the idea from F# and C#, Wind utilizes runtime code generation technology to make it more easily to write async, callback-based functions in a sync-style way. 
By using Wind there will be almost no callbacks, and the code will be very easy to understand. Currently Wind is still under development and improvement. There might be some problems, but the author, Jeff, will be very happy to hear your problems, feedback, suggestions and comments. You can contact Jeff by - Email: [email protected] - Group: https://groups.google.com/d/forum/windjs - GitHub: https://github.com/JeffreyZhao/wind/issues   Source code can be downloaded here.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Setting log level of message at runtime in slf4j

    - by scompt.com
    When using log4j, the Logger.log(Priority p, Object message) method is available and can be used to log a message at a log level determined at runtime. We're using this fact and this tip to redirect stderr to a logger at a specific log level. slf4j doesn't have a generic log() method that I can find. Does that mean there's no way to implement the above?
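
    Correct: slf4j's Logger (as of the versions current here) has no generic log(Level, message). The usual workaround is a small shim; the sketch below defines its own Level enum and fans out to the fixed-level methods, which is enough to back a stderr-to-logger redirect:

        import org.slf4j.Logger;

        public final class LevelledLogger {
            public enum Level { TRACE, DEBUG, INFO, WARN, ERROR }

            private LevelledLogger() {}

            public static void log(Logger logger, Level level, String message) {
                switch (level) {
                    case TRACE: logger.trace(message); break;
                    case DEBUG: logger.debug(message); break;
                    case INFO:  logger.info(message);  break;
                    case WARN:  logger.warn(message);  break;
                    case ERROR: logger.error(message); break;
                }
            }
        }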

    Read the article

  • Add stacktrace to every log in log4net

    - by tiagodias
    Hi all... I'm using Log4Net to log a multilayered enterprise application. I know that when I log with an exception, Log4Net automatically exposes the exception's StackTrace, but I want to log the stack trace for every log entry, even when no exception is thrown. Why do I need that? Simply, I want to know the call origin of each log entry (to drill down through the layers...). Thanks all... Tiago Dias
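
    For reference, log4net's PatternLayout has stacktrace and stacktracedetail conversion patterns that add the call stack to every event (they are costly, since the stack is walked on each log call). A sketch of an appender layout using one of them, with the depth of 5 chosen arbitrarily:

        <layout type="log4net.Layout.PatternLayout">
          <conversionPattern value="%date %-5level %logger - %message  %stacktrace{5}%newline" />
        </layout>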

    Read the article

  • How to write log files to two different files

    - by Sun
    my application run on customized client framework,the client framework used log4net to log their own log files. we are(our application) has to use the same log4net to log our log files in our own path(say our customized path). currently the our log files are created but log are not writing in that file.it is writting in the client framework log file. searched lot of sites the link http://stackoverflow.com/questions/308436/log4net-programmatcially-specify-multiple-loggers-with-multiple-file-appenders helped me to configure the log4net config programatically, still im log statemets are not written in my log file.the code used as below public class TraceLog { private string message = string.Empty; private static ILog ILogger = null; private static TraceLog instance = new TraceLog(); private TraceLog() { SetLevel("Log4net.MainForm", "ALL"); AddAppender("Log4net.MainForm", CreateFileAppender("FileAppender", "C:\\mylog.log")); } public static TraceLog Instance { get { return instance; } } public void Debug(string logMessage) { message = PrepareLog(logMessage); ILogger.Debug(message); } protected string PrepareLog(string logMessage) { string message = GetFileMethodLineNumberInfo(); message += logMessage; return message; } protected string GetFileMethodLineNumberInfo() { StackTrace stackTrace = new StackTrace(true); // The position 3 is relative to the index of the specified method StackFrame stackFrame = stackTrace.GetFrame(3); return (stackFrame.GetMethod().DeclaringType.Name + "/" + stackFrame.GetMethod().Name + "/" + stackFrame.GetFileLineNumber() + ":"); } private static void SetLevel(string loggerName, string levelName) { ILogger = LogManager.GetLogger(loggerName); log4net.Repository.Hierarchy.Logger l = (log4net.Repository.Hierarchy.Logger)ILogger.Logger; l.Level = l.Hierarchy.LevelMap[levelName]; } private static void AddAppender(string loggerName, IAppender appender) { ILogger = LogManager.GetLogger(loggerName); log4net.Repository.Hierarchy.Logger l = (log4net.Repository.Hierarchy.Logger)ILogger.Logger; l.AddAppender(appender); } private static IAppender CreateFileAppender(string name, string fileName) { FileAppender appender = new FileAppender(); appender.Name = name; appender.File = fileName; appender.AppendToFile = true; //PatternLayout layout = new PatternLayout(); //layout.ConversionPattern = "%d [%t] %-5p %c [%x] - %m%n"; //layout.ActivateOptions(); //appender.Layout = layout; appender.ActivateOptions(); return appender; } } } anyone pls help how to solve this
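
    Two hedged observations about why the messages end up only in the client framework's file: log4net loggers are additive by default, so events logged to "Log4net.MainForm" also bubble up to the root logger and its (framework) appenders; and the FileAppender in CreateFileAppender has its PatternLayout commented out, and an appender without a layout cannot render anything into its own file. A sketch of an adjusted AddAppender:

        private static void AddAppender(string loggerName, IAppender appender)
        {
            ILogger = LogManager.GetLogger(loggerName);
            var l = (log4net.Repository.Hierarchy.Logger)ILogger.Logger;

            l.Additivity = false;   // keep these events out of the framework's appenders
            l.AddAppender(appender);

            // needed when log4net is configured purely in code
            var hierarchy = (log4net.Repository.Hierarchy.Hierarchy)LogManager.GetRepository();
            hierarchy.Configured = true;
        }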

    Read the article
