Search Results

Search found 18702 results on 749 pages for 'digital input'.


  • mysql_query arguments in PHP

    - by Chris Wilson
    I'm currently building my first database in MySQL with an interface written in PHP and am using the 'learn-by-doing' approach. The figure below illustrates my database. Table names are at the top, and the attribute names are as they appear in the real database. I am attempting to query the values of each of these attributes using the code seen below the table. I think there is something wrong with my mysql_query() function since I am able to observe the expected behaviour when my form is successfully submitted, but no search results are returned. Can anyone see where I'm going wrong here? Update 1: I've updated the question with my enter script, minus the database login credentials. <html> <head> <title>Search</title> </head> <body> <h1>Search</h1> <!--Search form - get user input from this--> <form name = "search" action = "<?=$PHP_SELF?>" method = "get"> Search for <input type = "text" name = "find" /> in <select name = "field"> <option value = "Title">Title</option> <option value = "Description">Description</option> <option value = "City">Location</option> <option value = "Company_name">Employer</option> </select> <input type = "submit" name = "search" value = "Search" /> </form> <form name = "clearsearch" action = "Search.php"> <input type = "submit" value = "Reset search" /> </form> <?php if (isset($_GET["search"])) // Check if form has been submitted correctly { // Check for a search query if($_GET["find"] == "") { echo "<p>You did not enter a search query. Please press the 'Reset search' button and try again"; exit; } echo "<h2>Search results</h2>"; ?> <table align = "left" border = "1" cellspacing = "2" cellpadding = "2"> <tr> <th><font face="Arial, Helvetica, sans-serif">No.</font></th> <th><font face="Arial, Helvetica, sans-serif">Title</font></th> <th><font face="Arial, Helvetica, sans-serif">Employer</font></th> <th><font face="Arial, Helvetica, sans-serif">Description</font></th> <th><font face="Arial, Helvetica, sans-serif">Location</font></th> <th><font face="Arial, Helvetica, sans-serif">Date Posted</font></th> <th><font face="Arial, Helvetica, sans-serif">Application Deadline</font></th> </tr> <? // Connect to the database $username=REDACTED; $password=REDACTED; $host=REDACTED; $database=REDACTED; mysql_connect($host, $username, $password); @mysql_select_db($database) or die (mysql_error()); // Perform the search $find = mysql_real_escape_string($find); $query = "SELECT job.Title, job.Description, employer.Company_name, address.City, job.Date_posted, job.Application_deadline WHERE ( Title = '{$_GET['find']}' OR Company_name = '{$_GET['find']}' OR Date_posted = '{$_GET['find']}' OR Application_deadline = '{$_GET['find']}' ) AND job.employer_id_job = employer.employer_id AND job.address_id_job = address.address_id"; if (!$query) { die ('Invalid query:' .mysql_error()); } $result = mysql_query($query); $num = mysql_numrows($result); $count = 0; while ($count < $num) { $title = mysql_result ($result, $count, "Title"); $date_posted = mysql_result ($result, $count, "Date_posted"); $application_deadline = mysql_result ($result, $count, "Application_deadline"); $description = mysql_result ($result, $count, "Description"); $company = mysql_result ($result, $count, "Company_name"); $city = mysql_result ($result, $count, "City"); ?> <tr> <td><font face = "Arial, Helvetica, sans-serif"><? echo $count + 1; ?></font></td> <td><font face = "Arial, Helvetica, sans-serif"><? echo $title; ?></font></td> <td><font face = "Arial, Helvetica, sans-serif"><? 
echo $company; ?></font></td> <td><font face = "Arial, Helvetica, sans-serif"><? echo $description; ?></font></td> <td><font face = "Arial, Helvetica, sans-serif"><? echo $date_posted; ?></font></td> <td><font face = "Arial, Helvetica, sans-serif"><? echo $application_deadline; ?></font></td> <td><font face = "Arial, Helvetica, sans-serif"><? echo $education_level; ?></font></td> <td><font face = "Arial, Helvetica, sans-serif"><? echo $years_of_experience; ?></font></td> <? $count ++; } } ?> </body> </html>
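
    One likely culprit, offered as a hedged sketch rather than a verified fix: the SELECT statement has no FROM clause, so MySQL rejects it before any row can match, and mysql_real_escape_string($find) runs on a variable that has not yet been assigned from $_GET and before a connection exists. A corrected query using the tables and join columns already named in the question, plus the column picked in the drop-down, might look like the following; the $allowed whitelist is an addition of mine:

        // Whitelist the searchable column so the drop-down value is safe to interpolate.
        $allowed = array('Title', 'Description', 'City', 'Company_name');
        $field   = in_array($_GET['field'], $allowed) ? $_GET['field'] : 'Title';

        // Escape only after mysql_connect() has been called.
        $find = mysql_real_escape_string($_GET['find']);

        $query = "SELECT job.Title, job.Description, employer.Company_name,
                         address.City, job.Date_posted, job.Application_deadline
                  FROM job, employer, address
                  WHERE $field = '$find'
                    AND job.employer_id_job = employer.employer_id
                    AND job.address_id_job  = address.address_id";

        $result = mysql_query($query);
        if (!$result) {
            die('Invalid query: ' . mysql_error());
        }

    Note also that the original code tests if (!$query) before calling mysql_query(), which only checks that the query string is non-empty; checking $result, as above, is what actually surfaces SQL errors.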


  • Numpy zero rank array indexing/broadcasting

    - by Lemming
    I'm trying to write a function that supports broadcasting and is fast at the same time. However, numpy's zero-rank arrays are causing trouble as usual. I couldn't find anything useful on google, or by searching here. So, I'm asking you. How should I implement broadcasting efficiently and handle zero-rank arrays at the same time? This whole post became larger than anticipated, sorry. Details: To clarify what I'm talking about I'll give a simple example: Say I want to implement a Heaviside step-function. I.e. a function that acts on the real axis, which is 0 on the negative side, 1 on the positive side, and from case to case either 0, 0.5, or 1 at the point 0. Implementation Masking The most efficient way I found so far is the following. It uses boolean arrays as masks to assign the correct values to the corresponding slots in the output vector. from numpy import * def step_mask(x, limit=+1): """Heaviside step-function. y = 0 if x < 0 y = 1 if x > 0 See below for x == 0. Arguments: x Evaluate the function at these points. limit Which limit at x == 0? limit > 0: y = 1 limit == 0: y = 0.5 limit < 0: y = 0 Return: The values corresponding to x. """ b = broadcast(x, limit) out = zeros(b.shape) out[x>0] = 1 mask = (limit > 0) & (x == 0) out[mask] = 1 mask = (limit == 0) & (x == 0) out[mask] = 0.5 mask = (limit < 0) & (x == 0) out[mask] = 0 return out List Comprehension The following-the-numpy-docs way is to use a list comprehension on the flat iterator of the broadcast object. However, list comprehensions become absolutely unreadable for such complicated functions. def step_comprehension(x, limit=+1): b = broadcast(x, limit) out = empty(b.shape) out.flat = [ ( 1 if x_ > 0 else ( 0 if x_ < 0 else ( 1 if l_ > 0 else ( 0.5 if l_ ==0 else ( 0 ))))) for x_, l_ in b ] return out For Loop And finally, the most naive way is a for loop. It's probably the most readable option. However, Python for-loops are anything but fast. And hence, a really bad idea in numerics. def step_for(x, limit=+1): b = broadcast(x, limit) out = empty(b.shape) for i, (x_, l_) in enumerate(b): if x_ > 0: out[i] = 1 elif x_ < 0: out[i] = 0 elif l_ > 0: out[i] = 1 elif l_ < 0: out[i] = 0 else: out[i] = 0.5 return out Test First of all a brief test to see if the output is correct. >>> x = array([-1, -0.1, 0, 0.1, 1]) >>> step_mask(x, +1) array([ 0., 0., 1., 1., 1.]) >>> step_mask(x, 0) array([ 0. , 0. , 0.5, 1. , 1. ]) >>> step_mask(x, -1) array([ 0., 0., 0., 1., 1.]) It is correct, and the other two functions give the same output. Performance How about efficiency? These are the timings: In [45]: xl = linspace(-2, 2, 500001) In [46]: %timeit step_mask(xl) 10 loops, best of 3: 19.5 ms per loop In [47]: %timeit step_comprehension(xl) 1 loops, best of 3: 1.17 s per loop In [48]: %timeit step_for(xl) 1 loops, best of 3: 1.15 s per loop The masked version performs best as expected. However, I'm surprised that the comprehension is on the same level as the for loop. Zero Rank Arrays But, 0-rank arrays pose a problem. Sometimes you want to use a function scalar input. And preferably not have to worry about wrapping all scalars in at least 1-D arrays. >>> step_mask(1) Traceback (most recent call last): File "<ipython-input-50-91c06aa4487b>", line 1, in <module> step_mask(1) File "script.py", line 22, in step_mask out[x>0] = 1 IndexError: 0-d arrays can't be indexed. 
>>> step_for(1) Traceback (most recent call last): File "<ipython-input-51-4e0de4fcb197>", line 1, in <module> step_for(1) File "script.py", line 55, in step_for out[i] = 1 IndexError: 0-d arrays can't be indexed. >>> step_comprehension(1) array(1.0) Only the list comprehension can handle 0-rank arrays; the other two versions would need special-case handling for them. NumPy gets a bit messy when you want the same code to work for both arrays and scalars, but I really like functions that accept as arbitrary an input as possible - who knows which parameters I'll want to iterate over at some point. Question: What is the best way to implement a function like the one above? Is there a way to avoid special-casing scalars ("if scalar then ...")? I'm not looking for a built-in Heaviside; it's just a simplified example. In my code the above pattern appears in many places to keep parameter iteration as simple as possible without littering the client code with for loops or comprehensions. I'm also aware of Cython, weave & Co., or implementing it directly in C, but the performance of the masked version above is sufficient for the moment, and for now I would like to keep things as simple as possible.
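
    One workaround, shown here as a sketch rather than a definitive answer: numpy.where broadcasts its arguments itself and returns a 0-d array for scalar input, so the whole function can be written without boolean indexing and without any scalar special cases:

        import numpy as np

        def step_where(x, limit=+1):
            """Heaviside step function built from nested np.where calls.

            Handles scalars, 0-d arrays and broadcasting without special cases.
            """
            x = np.asarray(x, dtype=float)
            limit = np.asarray(limit, dtype=float)
            # Value to use exactly at x == 0, depending on the requested limit.
            at_zero = np.where(limit > 0, 1.0, np.where(limit < 0, 0.0, 0.5))
            return np.where(x > 0, 1.0, np.where(x < 0, 0.0, at_zero))

    step_where(1) returns array(1.0) like the comprehension version, and step_where(x, 0) reproduces the array([0., 0., 0.5, 1., 1.]) result above. It stays fully vectorized, although its speed relative to the masked version would still need to be measured.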


  • How to save/retrieve words to/from SQLite database?

    - by user998032
    Sorry if I repeat my question but I have still had no clues of what to do and how to deal with the question. My app is a dictionary. I assume that users will need to add words that they want to memorise to a Favourite list. Thus, I created a Favorite button that works on two phases: short-click to save the currently-view word into the Favourite list; and long-click to view the Favourite list so that users can click on any words to look them up again. I go for using a SQlite database to store the favourite words but I wonder how I can do this task. Specifically, my questions are: Should I use the current dictionary SQLite database or create a new SQLite database to favorite words? In each case, what codes do I have to write to cope with the mentioned task? Could anyone there kindly help? Here is the dictionary code: package mydict.app; import java.util.ArrayList; import android.database.Cursor; import android.database.sqlite.SQLiteDatabase; import android.database.sqlite.SQLiteException; import android.util.Log; public class DictionaryEngine { static final private String SQL_TAG = "[MyAppName - DictionaryEngine]"; private SQLiteDatabase mDB = null; private String mDBName; private String mDBPath; //private String mDBExtension; public ArrayList<String> lstCurrentWord = null; public ArrayList<String> lstCurrentContent = null; //public ArrayAdapter<String> adapter = null; public DictionaryEngine() { lstCurrentContent = new ArrayList<String>(); lstCurrentWord = new ArrayList<String>(); } public DictionaryEngine(String basePath, String dbName, String dbExtension) { //mDBExtension = getResources().getString(R.string.dbExtension); //mDBExtension = dbExtension; lstCurrentContent = new ArrayList<String>(); lstCurrentWord = new ArrayList<String>(); this.setDatabaseFile(basePath, dbName, dbExtension); } public boolean setDatabaseFile(String basePath, String dbName, String dbExtension) { if (mDB != null) { if (mDB.isOpen() == true) // Database is already opened { if (basePath.equals(mDBPath) && dbName.equals(mDBName)) // the opened database has the same name and path -> do nothing { Log.i(SQL_TAG, "Database is already opened!"); return true; } else { mDB.close(); } } } String fullDbPath=""; try { fullDbPath = basePath + dbName + "/" + dbName + dbExtension; mDB = SQLiteDatabase.openDatabase(fullDbPath, null, SQLiteDatabase.OPEN_READWRITE|SQLiteDatabase.NO_LOCALIZED_COLLATORS); } catch (SQLiteException ex) { ex.printStackTrace(); Log.i(SQL_TAG, "There is no valid dictionary database " + dbName +" at path " + basePath); return false; } if (mDB == null) { return false; } this.mDBName = dbName; this.mDBPath = basePath; Log.i(SQL_TAG,"Database " + dbName + " is opened!"); return true; } public void getWordList(String word) { String query; // encode input String wordEncode = Utility.encodeContent(word); if (word.equals("") || word == null) { query = "SELECT id,word FROM " + mDBName + " LIMIT 0,15" ; } else { query = "SELECT id,word FROM " + mDBName + " WHERE word >= '"+wordEncode+"' LIMIT 0,15"; } //Log.i(SQL_TAG, "query = " + query); Cursor result = mDB.rawQuery(query,null); int indexWordColumn = result.getColumnIndex("Word"); int indexContentColumn = result.getColumnIndex("Content"); if (result != null) { int countRow=result.getCount(); Log.i(SQL_TAG, "countRow = " + countRow); lstCurrentWord.clear(); lstCurrentContent.clear(); if (countRow >= 1) { result.moveToFirst(); String strWord = Utility.decodeContent(result.getString(indexWordColumn)); String strContent = 
Utility.decodeContent(result.getString(indexContentColumn)); lstCurrentWord.add(0,strWord); lstCurrentContent.add(0,strContent); int i = 0; while (result.moveToNext()) { strWord = Utility.decodeContent(result.getString(indexWordColumn)); strContent = Utility.decodeContent(result.getString(indexContentColumn)); lstCurrentWord.add(i,strWord); lstCurrentContent.add(i,strContent); i++; } } result.close(); } } public Cursor getCursorWordList(String word) { String query; // encode input String wordEncode = Utility.encodeContent(word); if (word.equals("") || word == null) { query = "SELECT id,word FROM " + mDBName + " LIMIT 0,15" ; } else { query = "SELECT id,content,word FROM " + mDBName + " WHERE word >= '"+wordEncode+"' LIMIT 0,15"; } //Log.i(SQL_TAG, "query = " + query); Cursor result = mDB.rawQuery(query,null); return result; } public Cursor getCursorContentFromId(int wordId) { String query; // encode input if (wordId <= 0) { return null; } else { query = "SELECT id,content,word FROM " + mDBName + " WHERE Id = " + wordId ; } //Log.i(SQL_TAG, "query = " + query); Cursor result = mDB.rawQuery(query,null); return result; } public Cursor getCursorContentFromWord(String word) { String query; // encode input if (word == null || word.equals("")) { return null; } else { query = "SELECT id,content,word FROM " + mDBName + " WHERE word = '" + word + "' LIMIT 0,1"; } //Log.i(SQL_TAG, "query = " + query); Cursor result = mDB.rawQuery(query,null); return result; } public void closeDatabase() { mDB.close(); } public boolean isOpen() { return mDB.isOpen(); } public boolean isReadOnly() { return mDB.isReadOnly(); } } And here is the code below the Favourite button to save to and load the Favourite list: btnAddFavourite = (ImageButton) findViewById(R.id.btnAddFavourite); btnAddFavourite.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { // Add code here to save the favourite, e.g. in the db. Toast toast = Toast.makeText(ContentView.this, R.string.messageWordAddedToFarvourite, Toast.LENGTH_SHORT); toast.show(); } }); btnAddFavourite.setOnLongClickListener(new View.OnLongClickListener() { @Override public boolean onLongClick(View v) { // Open the favourite Activity, which in turn will fetch the saved favourites, to show them. Intent intent = new Intent(getApplicationContext(), FavViewFavourite.class); intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK); getApplicationContext().startActivity(intent); return false; } }); }
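
    One hedged option, given only as a sketch: keep the favourites in the same database file that DictionaryEngine already opens read-write, but in a separate table, so the dictionary table itself stays untouched. The methods below could be added to DictionaryEngine (they need an extra import of android.content.ContentValues); the favourite table and its column names are illustrative, not taken from the app:

        // Illustrative sketch: a separate "favourite" table inside the same
        // dictionary database already opened as mDB by DictionaryEngine.
        public void createFavouriteTable() {
            mDB.execSQL("CREATE TABLE IF NOT EXISTS favourite ("
                    + "id INTEGER PRIMARY KEY AUTOINCREMENT, "
                    + "word TEXT UNIQUE, "
                    + "content TEXT)");
        }

        // Called from the short-click handler to save the currently viewed word.
        public void addFavourite(String word, String content) {
            ContentValues values = new ContentValues();
            values.put("word", word);
            values.put("content", content);
            // CONFLICT_IGNORE keeps the list free of duplicates if a word is saved twice.
            mDB.insertWithOnConflict("favourite", null, values,
                    SQLiteDatabase.CONFLICT_IGNORE);
        }

        // Called from the long-click activity (FavViewFavourite) to show the saved list.
        public Cursor getFavourites() {
            return mDB.rawQuery("SELECT id, word, content FROM favourite ORDER BY word", null);
        }

    The short-click handler would then call addFavourite(...) before showing the Toast, and FavViewFavourite can list whatever getFavourites() returns.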


  • PHP mail with multiple attachments and message in HTML format [closed]

    - by Jason
    I am new to PHP, so please don't mind if my question is silly. I would to like to make a PHP to send email with numerous attachments and the message of the email will be in HTML format. <html> <body> <form action="mail.php" method="post"> <table> <tr> <td><label>Name:</label></td> <td><input type="text" name="name" /></td> </tr> <tr> <td><label>Your Email:</label></td> <td><input type="text" name="email" /></td> </tr> <tr> <td><label>Attachment:</label></td> <td><input type="file" name="Attach" /></td> </tr> <tr> <td><input type="submit" value="Submit" /></td> </tr> </table> </form> <script language="javascript"> $(document).ready(function() { $("form").submit(function(){ $.ajax({ type: "POST", url: 'mail.php', dataType: 'json', data: { name: $('#name').val(), email: $('#email').val(), Attach: $('#Attach').val(), }, success: function(json){ $(".error, .success").remove(); if (json['error']){ $("form").after(json['error']); } if (json['success']){ $("form").remove(); $(".leftColWrap").append(json['success']); } } }); return false; }); }); </script> </body> </html> This is my HTML for filing in the information. And below is the mail.php <?php session_cache_limiter('nocache'); header('Expires: ' . gmdate('r', 0)); header('Content-type: application/json'); $timeout = time()+60*60*24*30; setcookie(Form, date("F jS - g:i a"), $timeout); $name=$_POST['name']; $email=$_POST['email']; $to="[email protected]"; //*** Uniqid Session ***// $Sid = md5(uniqid(time())); $headers = ""; $headers .= "From: $email \n"; $headers .= "MIME-Version: 1.0\n"; $headers .= "Content-Type: multipart/mixed; boundary=\"".$Sid."\"\n\n"; $headers .= "This is a multi-part message in MIME format.\n"; $headers .= "--".$Sid."\n"; $headers .= "Content-type: text/html; charset=utf-8\n"; $headers .= "Content-Transfer-Encoding: 7bit\n\n"; //*** Attachment ***// if($_FILES["fileAttach"]["name"] != "") { $FilesName = $_FILES["fileAttach"]["name"]; $Content = chunk_split(base64_encode(file_get_contents($_FILES["fileAttach"]["tmp_name"]))); $headers .= "--".$Sid."\n"; $headers .= "Content-Type: application/octet-eam; name=\"".$FilesName."\"\n"; $headers .= "Content-Transfer-Encoding: base64\n"; $headers .= "Content-Disposition: attachment; filename=\"".$FilesName."\"\n\n"; $headers .= $Content."\n\n"; } $message_to=" <html><body> <table class='page-head' align='center' width='100%'> <tr> <td class='left'> <h1>ABC</h1></td> <td class='right' width='63'> <img src='http://xxx/images/logo.png' /></td> </tr> </table><br /><br /> $name ($email) has just sent you an e-mail. </body></html>"; $message_from="<html><body> <table class='page-head' align='center' width='100%'> <tr> <td class='left'> <h1>ABC</h1></td> <td class='right' width='63'> <img src='http://xxx/images/logo.png' /></td> </tr> </table><br /><br /> Thanks for sending the email. </body></html>"; if ($name == "" || $email == "") { $error = "<font color=\"red\">Please fill in all the required fields.</font>"; } elseif (isset($_COOKIE['Form'])) { $error = "You have already sent the email. Please try again later."; } else { mail($to,"A new email from: $name",$message_to,$headers); mail($email,"Thank you for send the email",$message_from,$headers); $success = "Emai sent successfully!"; } $json = array('error' => $error, 'success' => $success); print(json_encode($json)); ?> May someone give some advises on the code? Thanks a lot.
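
    A few things stand out, offered as hedged suggestions rather than a definitive rewrite: the attachment and the HTML part are appended to $headers instead of the message body, the PHP reads $_FILES["fileAttach"] while the form field is named Attach, and the form needs enctype="multipart/form-data" (posting $('#Attach').val() via jQuery will not upload the file). A sketch of building the MIME body separately from the headers, assuming the file arrives as $_FILES['Attach'] and reusing $to, $email, $name and $message_to from the script above:

        $boundary = md5(uniqid(time()));

        $headers  = "From: $email\r\n";
        $headers .= "MIME-Version: 1.0\r\n";
        $headers .= "Content-Type: multipart/mixed; boundary=\"$boundary\"\r\n";

        // The HTML part goes into the body, not into the headers.
        $body  = "--$boundary\r\n";
        $body .= "Content-Type: text/html; charset=utf-8\r\n";
        $body .= "Content-Transfer-Encoding: 7bit\r\n\r\n";
        $body .= $message_to . "\r\n";

        // Optional attachment part.
        if (!empty($_FILES['Attach']['name'])) {
            $fileName = $_FILES['Attach']['name'];
            $fileData = chunk_split(base64_encode(
                file_get_contents($_FILES['Attach']['tmp_name'])));
            $body .= "--$boundary\r\n";
            $body .= "Content-Type: application/octet-stream; name=\"$fileName\"\r\n";
            $body .= "Content-Transfer-Encoding: base64\r\n";
            $body .= "Content-Disposition: attachment; filename=\"$fileName\"\r\n\r\n";
            $body .= $fileData . "\r\n";
        }
        $body .= "--$boundary--";

        mail($to, "A new email from: $name", $body, $headers);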


  • Java Logger API

    - by Koppar
    This is a more like a tip rather than technical write up and serves as a quick intro for newbies. The logger API helps to diagnose application level or JDK level issues at runtime. There are 7 levels which decide the detailing in logging (SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST). Its best to start with highest level and as we narrow down, use more detailed logging for a specific area. SEVERE is the highest and FINEST is the lowest. This may not make sense until we understand some jargon. The Logger class provides the ability to stream messages to an output stream in a format that can be controlled by the user. What this translates to is, I can create a logger with this simple invocation and use it add debug messages in my class: import java.util.logging.*; private static final Logger focusLog = Logger.getLogger("java.awt.focus.KeyboardFocusManager"); if (focusLog.isLoggable(Level.FINEST)) { focusLog.log(Level.FINEST, "Calling peer setCurrentFocusOwner}); LogManager acts like a book keeper and all the getLogger calls are forwarded to LogManager. The LogManager itself is a singleton class object which gets statically initialized on JVM start up. More on this later. If there is no existing logger with the given name, a new one is created. If there is one (and not yet GC’ed), then the existing Logger object is returned. By default, a root logger is created on JVM start up. All anonymous loggers are made as the children of the root logger. Named loggers have the hierarchy as per their name resolutions. Eg: java.awt.focus is the parent logger for java.awt.focus.KeyboardFocusManager etc. Before logging any message, the logger checks for the log level specified. If null is specified, the log level of the parent logger will be set. However, if the log level is off, no log messages would be written, irrespective of the parent’s log level. All the messages that are posted to the Logger are handled as a LogRecord object.i.e. FocusLog.log would create a new LogRecord object with the log level and message as its data members). The level of logging and thread number are also tracked. LogRecord is passed on to all the registered Handlers. Handler is basically a means to output the messages. The output may be redirected to either a log file or console or a network logging service. The Handler classes use the LogManager properties to set filters and formatters. During initialization or JVM start up, LogManager looks for logging.properties file in jre/lib and sets the properties if the file is provided. An alternate location for properties file can also be specified by setting java.util.logging.config.file system property. This can be set in Java Control Panel ? Java ? Runtime parameters as -Djava.util.logging.config.file = <mylogfile> or passed as a command line parameter java -Djava.util.logging.config.file = C:/Sunita/myLog The redirection of logging depends on what is specified rather registered as a handler with JVM in the properties file. java.util.logging.ConsoleHandler sends the output to system.err and java.util.logging.FileHandler sends the output to file. File name of the log file can also be specified. 
If you prefer XML format output, in the configuration file, set java.util.logging.FileHandler.formatter = java.util.logging.XMLFormatter and if you prefer simple text, set set java.util.logging.FileHandler.formatter =java.util.logging.SimpleFormatter Below is the default logging Configuration file: ############################################################ # Default Logging Configuration File # You can use a different file by specifying a filename # with the java.util.logging.config.file system property. # For example java -Djava.util.logging.config.file=myfile ############################################################ ############################################################ # Global properties ############################################################ # "handlers" specifies a comma separated list of log Handler # classes. These handlers will be installed during VM startup. # Note that these classes must be on the system classpath. # By default we only configure a ConsoleHandler, which will only # show messages at the INFO and above levels. handlers= java.util.logging.ConsoleHandler # To also add the FileHandler, use the following line instead. #handlers= java.util.logging.FileHandler, java.util.logging.ConsoleHandler # Default global logging level. # This specifies which kinds of events are logged across # all loggers. For any given facility this global level # can be overriden by a facility specific level # Note that the ConsoleHandler also has a separate level # setting to limit messages printed to the console. .level= INFO ############################################################ # Handler specific properties. # Describes specific configuration info for Handlers. ############################################################ # default file output is in user's home directory. java.util.logging.FileHandler.pattern = %h/java%u.log java.util.logging.FileHandler.limit = 50000 java.util.logging.FileHandler.count = 1 java.util.logging.FileHandler.formatter = java.util.logging.XMLFormatter # Limit the message that are printed on the console to INFO and above. java.util.logging.ConsoleHandler.level = INFO java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter ############################################################ # Facility specific properties. # Provides extra control for each logger. ############################################################ # For example, set the com.xyz.foo logger to only log SEVERE # messages: com.xyz.foo.level = SEVERE Since I primarily use this method to track focus issues, here is how I get detailed awt focus related logging. Just set the logger name to java.awt.focus.level=FINEST and change the default log level to FINEST. Below is a basic sample program. The sample programs are from http://www2.cs.uic.edu/~sloan/CLASSES/java/ and have been modified to illustrate the logging API. By changing the .level property in the logging.properties file, one can control the output written to the logs. To play around with the example, try changing the levels in the logging.properties file and notice the difference in messages going to the log file. 
Example --------KeyboardReader.java------------------------------------------------------------------------------------- import java.io.*; import java.util.*; import java.util.logging.*; public class KeyboardReader { private static final Logger mylog = Logger.getLogger("samples.input"); public static void main (String[] args) throws java.io.IOException { String s1; String s2; double num1, num2, product; // set up the buffered reader to read from the keyboard BufferedReader br = new BufferedReader (new InputStreamReader (System.in)); System.out.println ("Enter a line of input"); s1 = br.readLine(); if (mylog.isLoggable(Level.SEVERE)) { mylog.log (Level.SEVERE,"The line entered is " + s1); } if (mylog.isLoggable(Level.INFO)) { mylog.log (Level.INFO,"The line has " + s1.length() + " characters"); } if (mylog.isLoggable(Level.FINE)) { mylog.log (Level.FINE,"Breaking the line into tokens we get:"); } int numTokens = 0; StringTokenizer st = new StringTokenizer (s1); while (st.hasMoreTokens()) { s2 = st.nextToken(); numTokens++; if (mylog.isLoggable(Level.FINEST)) { mylog.log (Level.FINEST, " Token " + numTokens + " is: " + s2); } } } } ----------MyFileReader.java---------------------------------------------------------------------------------------- import java.io.*; import java.util.*; import java.util.logging.*; public class MyFileReader extends KeyboardReader { private static final Logger mylog = Logger.getLogger("samples.input.file"); public static void main (String[] args) throws java.io.IOException { String s1; String s2; // set up the buffered reader to read from the keyboard BufferedReader br = new BufferedReader (new FileReader ("MyFileReader.txt")); s1 = br.readLine(); if (mylog.isLoggable(Level.SEVERE)) { mylog.log (Level.SEVERE,"ATTN The line is " + s1); } if (mylog.isLoggable(Level.INFO)) { mylog.log (Level.INFO, "The line has " + s1.length() + " characters"); } if (mylog.isLoggable(Level.FINE)) { mylog.log (Level.FINE,"Breaking the line into tokens we get:"); } int numTokens = 0; StringTokenizer st = new StringTokenizer (s1); while (st.hasMoreTokens()) { s2 = st.nextToken(); numTokens++; if (mylog.isLoggable(Level.FINEST)) { mylog.log (Level.FINEST,"Breaking the line into tokens we get:"); mylog.log (Level.FINEST," Token " + numTokens + " is: " + s2); } } //end of while } // end of main } // end of class ----------MyFileReader.txt------------------------------------------------------------------------------------------ My first logging example -------logging.properties------------------------------------------------------------------------------------------- handlers= java.util.logging.ConsoleHandler, java.util.logging.FileHandler .level= FINEST java.util.logging.FileHandler.pattern = java%u.log java.util.logging.FileHandler.limit = 50000 java.util.logging.FileHandler.count = 1 java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter java.util.logging.ConsoleHandler.level = FINEST java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter java.awt.focus.level=ALL ------Output log------------------------------------------------------------------------------------------- May 21, 2012 11:44:55 AM MyFileReader main SEVERE: ATTN The line is My first logging example May 21, 2012 11:44:55 AM MyFileReader main INFO: The line has 24 characters May 21, 2012 11:44:55 AM MyFileReader main FINE: Breaking the line into tokens we get: May 21, 2012 11:44:55 AM MyFileReader main FINEST: Breaking the line into tokens we get: May 21, 2012 11:44:55 AM 
MyFileReader main FINEST: Token 1 is: My May 21, 2012 11:44:55 AM MyFileReader main FINEST: Breaking the line into tokens we get: May 21, 2012 11:44:55 AM MyFileReader main FINEST: Token 2 is: first May 21, 2012 11:44:55 AM MyFileReader main FINEST: Breaking the line into tokens we get: May 21, 2012 11:44:55 AM MyFileReader main FINEST: Token 3 is: logging May 21, 2012 11:44:55 AM MyFileReader main FINEST: Breaking the line into tokens we get: May 21, 2012 11:44:55 AM MyFileReader main FINEST: Token 4 is: example Invocation command: "C:\Program Files (x86)\Java\jdk1.6.0_29\bin\java.exe" -Djava.util.logging.config.file=logging.properties MyFileReader References Further technical details are available here: http://docs.oracle.com/javase/1.4.2/docs/guide/util/logging/overview.html#1.0 http://docs.oracle.com/javase/1.4.2/docs/api/java/util/logging/package-summary.html http://www2.cs.uic.edu/~sloan/CLASSES/java/
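
    As a closing note, the same setup can also be done entirely in code, without a logging.properties file. A minimal sketch (logger name taken from the sample above, file name illustrative) that attaches a FileHandler at FINEST while the default ConsoleHandler stays at INFO:

        import java.util.logging.*;

        public class ProgrammaticLoggingDemo {
            public static void main(String[] args) throws Exception {
                Logger log = Logger.getLogger("samples.input");
                log.setLevel(Level.FINEST);

                // Write everything down to FINEST into a log file.
                FileHandler fh = new FileHandler("demo%u.log", 50000, 1);
                fh.setFormatter(new SimpleFormatter());
                fh.setLevel(Level.FINEST);
                log.addHandler(fh);

                log.info("visible on the console and in the file");
                log.finest("visible only in the file, because the console handler stays at INFO");
            }
        }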


  • Building and Deploying Windows Azure Web Sites using Git and GitHub for Windows

    - by shiju
    Microsoft Windows Azure team has released a new version of Windows Azure which is providing many excellent features. The new Windows Azure provides Web Sites which allows you to deploy up to 10 web sites  for free in a multitenant shared environment and you can easily upgrade this web site to a private, dedicated virtual server when the traffic is grows. The Meet Windows Azure Fact Sheet provides the following information about a Windows Azure Web Site: Windows Azure Web Sites enable developers to easily build and deploy websites with support for multiple frameworks and popular open source applications, including ASP.NET, PHP and Node.js. With just a few clicks, developers can take advantage of Windows Azure’s global scale without having to worry about operations, servers or infrastructure. It is easy to deploy existing sites, if they run on Internet Information Services (IIS) 7, or to build new sites, with a free offer of 10 websites upon signup, with the ability to scale up as needed with reserved instances. Windows Azure Web Sites includes support for the following: Multiple frameworks including ASP.NET, PHP and Node.js Popular open source software apps including WordPress, Joomla!, Drupal, Umbraco and DotNetNuke Windows Azure SQL Database and MySQL databases Multiple types of developer tools and protocols including Visual Studio, Git, FTP, Visual Studio Team Foundation Services and Microsoft WebMatrix Signup to Windows and Enable Azure Web Sites You can signup for a 90 days free trial account in Windows Azure from here. After creating an account in Windows Azure, go to https://account.windowsazure.com/ , and select to preview features to view the available previews. In the Web Sites section of the preview features, click “try it now” which will enables the web sites feature Create Web Site in Windows Azure To create a web sites, login to the Windows Azure portal, and select Web Sites from and click New icon from the left corner  Click WEB SITE, QUICK CREATE and put values for URL and REGION dropdown. You can see the all web sites from the dashboard of the Windows Azure portal Set up Git Publishing Select your web site from the dashboard, and select Set up Git publishing To enable Git publishing , you must give user name and password which will initialize a Git repository Clone Git Repository We can use GitHub for Windows to publish apps to non-GitHub repositories which is well explained by Phil Haack on his blog post. Here we are going to deploy the web site using GitHub for Windows. Let’s clone a Git repository using the Git Url which will be getting from the Windows Azure portal. Let’s copy the Git url and execute the “git clone” with the git url. You can use the Git Shell provided by GitHub for Windows. To get it, right on the GitHub for Windows, and select open shell here as shown in the below picture. When executing the Git Clone command, it will ask for a password where you have to give password which specified in the Windows Azure portal. After cloning the GIT repository, you can drag and drop the local Git repository folder to GitHub for Windows GUI. This will automatically add the Windows Azure Web Site repository onto GitHub for Windows where you can commit your changes and publish your web sites to Windows Azure. Publish the Web Site using GitHub for Windows We can add multiple framework level files including ASP.NET, PHP and Node.js, to the local repository folder can easily publish to Windows Azure from GitHub for Windows GUI. 
For this demo, let me just add a simple Node.js file named Server.js which handles few request handlers. 1: var http = require('http'); 2: var port=process.env.PORT; 3: var querystring = require('querystring'); 4: var utils = require('util'); 5: var url = require("url"); 6:   7: var server = http.createServer(function(req, res) { 8: switch (req.url) { //checking the request url 9: case '/': 10: homePageHandler (req, res); //handler for home page 11: break; 12: case '/register': 13: registerFormHandler (req, res);//hamdler for register 14: break; 15: default: 16: nofoundHandler (req, res);// handler for 404 not found 17: break; 18: } 19: }); 20: server.listen(port); 21: //function to display the html form 22: function homePageHandler (req, res) { 23: console.log('Request handler home was called.'); 24: res.writeHead(200, {'Content-Type': 'text/html'}); 25: var body = '<html>'+ 26: '<head>'+ 27: '<meta http-equiv="Content-Type" content="text/html; '+ 28: 'charset=UTF-8" />'+ 29: '</head>'+ 30: '<body>'+ 31: '<form action="/register" method="post">'+ 32: 'Name:<input type=text value="" name="name" size=15></br>'+ 33: 'Email:<input type=text value="" name="email" size=15></br>'+ 34: '<input type="submit" value="Submit" />'+ 35: '</form>'+ 36: '</body>'+ 37: '</html>'; 38: //response content 39: res.end(body); 40: } 41: //handler for Post request 42: function registerFormHandler (req, res) { 43: console.log('Request handler register was called.'); 44: var pathname = url.parse(req.url).pathname; 45: console.log("Request for " + pathname + " received."); 46: var postData = ""; 47: req.on('data', function(chunk) { 48: // append the current chunk of data to the postData variable 49: postData += chunk.toString(); 50: }); 51: req.on('end', function() { 52: // doing something with the posted data 53: res.writeHead(200, "OK", {'Content-Type': 'text/html'}); 54: // parse the posted data 55: var decodedBody = querystring.parse(postData); 56: // output the decoded data to the HTTP response 57: res.write('<html><head><title>Post data</title></head><body><pre>'); 58: res.write(utils.inspect(decodedBody)); 59: res.write('</pre></body></html>'); 60: res.end(); 61: }); 62: } 63: //Error handler for 404 no found 64: function nofoundHandler(req, res) { 65: console.log('Request handler nofound was called.'); 66: res.writeHead(404, {'Content-Type': 'text/plain'}); 67: res.end('404 Error - Request handler not found'); 68: } .csharpcode, .csharpcode pre { font-size: small; color: black; font-family: consolas, "Courier New", courier, monospace; background-color: #ffffff; /*white-space: pre;*/ } .csharpcode pre { margin: 0em; } .csharpcode .rem { color: #008000; } .csharpcode .kwrd { color: #0000ff; } .csharpcode .str { color: #006080; } .csharpcode .op { color: #0000c0; } .csharpcode .preproc { color: #cc6633; } .csharpcode .asp { background-color: #ffff00; } .csharpcode .html { color: #800000; } .csharpcode .attr { color: #ff0000; } .csharpcode .alt { background-color: #f4f4f4; width: 100%; margin: 0em; } .csharpcode .lnum { color: #606060; } If there is any change in the local repository folder, GitHub for Windows will automatically detect the changes. In the above step, we have just added a Server.js file so that GitHub for Windows will detect the changes. Let’s commit the changes to the local repository before publishing the web site to Windows Azure. After committed the all changes, you can click publish button which will publish the all changes to Windows Azure repository. 
The following screen shot shows the deployment history in the Windows Azure portal. GitHub for Windows also provides a Sync button that synchronizes the local repository with the Windows Azure repository after you commit further changes locally. Our web site is up and running after the Git deployment. Summary Windows Azure Web Sites lets developers easily build and deploy websites with support for multiple frameworks, including ASP.NET, PHP and Node.js, and the sites can be deployed using Visual Studio, Git, FTP, Visual Studio Team Foundation Services and Microsoft WebMatrix. In this demo, we deployed a Node.js web site to Windows Azure using Git, and we used GitHub for Windows, which can also publish apps to non-GitHub repositories, to push the site to Windows Azure.
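
    For readers who prefer the command line over the GitHub for Windows GUI, the same deployment can be driven by any Git client. The sketch below uses a placeholder for the Git URL shown in the Windows Azure portal:

        # Clone the (initially empty) Windows Azure repository; you will be
        # prompted for the deployment password set in the portal.
        git clone <git-url-from-the-portal> mysite
        cd mysite

        # Add the site content (Server.js in this demo), then commit and push;
        # the push is what triggers the deployment.
        git add Server.js
        git commit -m "Add Node.js request handlers"
        git push origin master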


  • How to deal with transport level security policy with OSB

    - by Jian Liang
    Recently, we received a use case for Oracle Service Bus (OSB) 11gPS4 to consume a Web Service which is secured by HTTP transport level security policy. The WSDL of the remote web service looks like following where the part marked in red shows the security policy: <?xml version='1.0' encoding='UTF-8'?> <definitions xmlns:wssutil="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:tns="https://httpsbasicauth" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://schemas.xmlsoap.org/wsdl/" targetNamespace="https://httpsbasicauth" name="HttpsBasicAuthService"> <wsp:UsingPolicy wssutil:Required="true"/> <wsp:Policy wssutil:Id="WSHttpBinding_IPartyServicePortType_policy"> <wsp:ExactlyOne> <wsp:All> <ns1:TransportBinding xmlns:ns1="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy"> <wsp:Policy> <ns1:TransportToken> <wsp:Policy> <ns1:HttpsToken RequireClientCertificate="false"/> </wsp:Policy> </ns1:TransportToken> <ns1:AlgorithmSuite> <wsp:Policy> <ns1:Basic256/> </wsp:Policy> </ns1:AlgorithmSuite> <ns1:Layout> <wsp:Policy> <ns1:Strict/> </wsp:Policy> </ns1:Layout> </wsp:Policy> </ns1:TransportBinding> <ns2:UsingAddressing xmlns:ns2="http://www.w3.org/2006/05/addressing/wsdl"/> </wsp:All> </wsp:ExactlyOne> </wsp:Policy> <types> <xsd:schema> <xsd:import namespace="https://proxyhttpsbasicauth" schemaLocation="http://localhost:7001/WS/HttpsBasicAuthService?xsd=1"/> </xsd:schema> <xsd:schema> <xsd:import namespace="https://httpsbasicauth" schemaLocation="http://localhost:7001/WS/HttpsBasicAuthService?xsd=2"/> </xsd:schema> </types> <message name="echoString"> <part name="parameters" element="tns:echoString"/> </message> <message name="echoStringResponse"> <part name="parameters" element="tns:echoStringResponse"/> </message> <portType name="HttpsBasicAuth"> <operation name="echoString"> <input message="tns:echoString"/> <output message="tns:echoStringResponse"/> </operation> </portType> <binding name="HttpsBasicAuthSoapPortBinding" type="tns:HttpsBasicAuth"> <wsp:PolicyReference URI="#WSHttpBinding_IPartyServicePortType_policy"/> <soap:binding transport="http://schemas.xmlsoap.org/soap/http" style="document"/> <operation name="echoString"> <soap:operation soapAction=""/> <input> <soap:body use="literal"/> </input> <output> <soap:body use="literal"/> </output> </operation> </binding> <service name="HttpsBasicAuthService"> <port name="HttpsBasicAuthSoapPort" binding="tns:HttpsBasicAuthSoapPortBinding"> <soap:address location="https://localhost:7002/WS/HttpsBasicAuthService"/> </port> </service> </definitions> The security assertion in the WSDL (marked in red) indicates that this is the HTTP transport level security policy which requires one way SSL with default authentication (aka. basic authenticate with username/password). Normally, there are two ways to handle web service security policy with OSB 11g: Use WebLogic 9.x policy Use OWSM Since OSB doesn’t support WebLogic 9.x WSSP transport level assertion (except for WS transport), when we tried to create the business service based on the imported WSDL, OSB complained with the following message: [OSB Kernel:398133]The service is based on WSDL with Web Services Security Policies that are not natively supported by Oracle Service Bus. Please select OWSM Policies - From OWSM Policy Store option and attach equivalent OWSM security policy. 
For the Business Service, either you can add the necessary client policies manually by clicking Add button or you can let Oracle Service Bus automatically pick and add compatible client policies by clicking Add Compatible button. Unfortunately, when tried with OWSM, we couldn’t find http_token_policy from OWSM since OSB PS4 doesn’t support OWSM http_token_policy. It seems that we ran into an unsupported situation that no appropriate policy can be used from both WebLogic and OWSM. As this security policy requires one way SSL with basic authentication at the transport level, a possible workaround is to meet the remote service's requirement at transport level without using web service policy. We can simply use OSB to establish SSL connection and provide username/password for authentication at the transport level to the remote web service. In this case, the business service within OSB will be transparent to the web service policy. However, we still need to deal with OSB console’s complaint related to unsupported security policy because the failure of WSDL validation prohibits OSB console to move forward. With the help from OSB Product Management team, we finally came up with the following solutions: Solution 1: OSB PS5 The good news is that the http_token_policy is made available in OSB PS5. With OSB PS5, you can simply add OWSM oracle/wss_http_token_over_ssl_client_policy to the business service. The simplest solution is to upgrade to OSB PS5 where the OWSM solution is provided out of the box. But if you are not in a position where upgrading is an immediate option, you might want to consider other two workaround solutions described below. Solution 2: Modifying WSDL This solution addresses OSB console’s complaint by removing the security policy from the imported WSDL within OSB. Without the security policy, OSB console allows the business service to be created based on modified WSDL.  Please bear in mind, modifying WSDL is done only for the OSB side via OSB console, no change is required on the remote Web Service. The main steps of this solution: Connect to OSB console import the remote WSDL into OSB remove security assertion (the red marked part) from the imported WSDL create a service account. In our sample, we simply take the user weblogic create the business service and check "Basic" for Authentication and select the created service account make sure that OSB consumes the web service via https. This solution requires modifying WSDL. It is suitable for any OSB version (10g or OSB 11g version) prior to PS5 without OWSM. However, modifying WSDL by hand is troublesome as it requires the user to remember that the original WSDL was edited.  It forces you to make the same edit each time you want to re-import the service WSDL when changes occur at the service level. This also prevents you from using UDDI to import WSDL.  Solution 3: Using original WSDL This solution keeps the WSDL intact and ignores the embedded policy by using OWSM. By design, OWSM doesn’t like WSDL with embedded security assertion. Since OWSM doesn’t provide the feature to explicitly ignore the embedded policy from a remote WSDL, in this solution, we use OWSM in a tricky way to ignore the embedded policy. 
Connect to OSB console import the remote WSDL into OSB create a service account create the business service in which check "Basic" for Authentication and select the created service account as the imported WSDL is intact, the OSB Kernel:398133 error is expected ignore this error message for the moment and navigate to the Policies Page of business service Select “From OWSM Policy Store” and click “Add” button, the list of policies will pop-up Here is the tricky part: select an arbitrary policy, and click “Cancel” Update and save By clicking “Cancel’ button, we didn’t add any OWSM policy to business service, but the embedded policy is ignored. Yes, this is tricky. According to Oracle OSB Product Manager, the future release of OWSM will add a button “None” which allows to ignore the embedded policy explicitly. This solution keeps the imported WSDL intact which is the big advantage over the solution 2. It is suitable for OSB 11g (version prior to PS5) domain with OWSM configured. This blog addressed the unsupported transport level web service security policy with OSB PS4. To summarize, if you are using OSB PS5 or in a position to upgrade to PS5, the recommendation is to use OWSM OOTB transport level security policy directly. With the release prior to 11g PS5, you can consider the solution 2 or 3 depending on if OWSM is configured.
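
    To make the WSDL edit of Solution 2 concrete: besides deleting the wsp:UsingPolicy element and the top-level wsp:Policy block marked in red above, the wsp:PolicyReference inside the binding has to be removed as well, leaving roughly the following (a sketch derived from the WSDL shown earlier; everything else stays unchanged):

        <binding name="HttpsBasicAuthSoapPortBinding" type="tns:HttpsBasicAuth">
          <soap:binding transport="http://schemas.xmlsoap.org/soap/http" style="document"/>
          <operation name="echoString">
            <soap:operation soapAction=""/>
            <input>
              <soap:body use="literal"/>
            </input>
            <output>
              <soap:body use="literal"/>
            </output>
          </operation>
        </binding>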


  • What's New in ASP.NET 4.0 Part Two: WebForms and Visual Studio Enhancements

    - by Rick Strahl
    In the last installment I talked about the core changes in the ASP.NET runtime that I’ve been taking advantage of. In this column, I’ll cover the changes to the Web Forms engine and some of the cool improvements in Visual Studio that make Web and general development easier. WebForms The WebForms engine is the area that has received most significant changes in ASP.NET 4.0. Probably the most widely anticipated features are related to managing page client ids and of ViewState on WebForm pages. Take Control of Your ClientIDs Unique ClientID generation in ASP.NET has been one of the most complained about “features” in ASP.NET. Although there’s a very good technical reason for these unique generated ids - they guarantee unique ids for each and every server control on a page - these unique and generated ids often get in the way of client-side JavaScript development and CSS styling as it’s often inconvenient and fragile to work with the long, generated ClientIDs. In ASP.NET 4.0 you can now specify an explicit client id mode on each control or each naming container parent control to control how client ids are generated. By default, ASP.NET generates mangled client ids for any control contained in a naming container (like a Master Page, or a User Control for example). The key to ClientID management in ASP.NET 4.0 are the new ClientIDMode and ClientIDRowSuffix properties. ClientIDMode supports four different ClientID generation settings shown below. For the following examples, imagine that you have a Textbox control named txtName inside of a master page control container on a WebForms page. <%@Page Language="C#"      MasterPageFile="~/Site.Master"     CodeBehind="WebForm2.aspx.cs"     Inherits="WebApplication1.WebForm2"  %> <asp:Content ID="content"  ContentPlaceHolderID="content"               runat="server"               ClientIDMode="Static" >       <asp:TextBox runat="server" ID="txtName" /> </asp:Content> The four available ClientIDMode values are: AutoID This is the existing behavior in ASP.NET 1.x-3.x where full naming container munging takes place. <input name="ctl00$content$txtName" type="text"        id="ctl00_content_txtName" /> This should be familiar to any ASP.NET developer and results in fairly unpredictable client ids that can easily change if the containership hierarchy changes. For example, removing the master page changes the name in this case, so if you were to move a block of script code that works against the control to a non-Master page, the script code immediately breaks. Static This option is the most deterministic setting that forces the control’s ClientID to use its ID value directly. No naming container naming at all is applied and you end up with clean client ids: <input name="ctl00$content$txtName"         type="text" id="txtName" /> Note that the name property which is used for postback variables to the server still is munged, but the ClientID property is displayed simply as the ID value that you have assigned to the control. This option is what most of us want to use, but you have to be clear on that because it can potentially cause conflicts with other controls on the page. If there are several instances of the same naming container (several instances of the same user control for example) there can easily be a client id naming conflict. 
Note that if you assign Static to a data-bound control, like a list child control in templates, you do not get unique ids either, so for list controls where you rely on unique id for child controls, you’ll probably want to use Predictable rather than Static. I’ll write more on this a little later when I discuss ClientIDRowSuffix. Predictable The previous two values are pretty self-explanatory. Predictable however, requires some explanation. To me at least it’s not in the least bit predictable. MSDN defines this value as follows: This algorithm is used for controls that are in data-bound controls. The ClientID value is generated by concatenating the ClientID value of the parent naming container with the ID value of the control. If the control is a data-bound control that generates multiple rows, the value of the data field specified in the ClientIDRowSuffix property is added at the end. For the GridView control, multiple data fields can be specified. If the ClientIDRowSuffix property is blank, a sequential number is added at the end instead of a data-field value. Each segment is separated by an underscore character (_). The key that makes this value a bit confusing is that it relies on the parent NamingContainer’s ClientID to build its own ClientID value. This effectively means that the value is not predictable at all but rather very tightly coupled to the parent naming container’s ClientIDMode setting. For my simple textbox example, if the ClientIDMode property of the parent naming container (Page in this case) is set to “Predictable” you’ll get this: <input name="ctl00$content$txtName" type="text"         id="content_txtName" /> which gives an id that based on walking up to the currently active naming container (the MasterPage content container) and starting the id formatting from there downward. Think of this as a semi unique name that’s guaranteed unique only for the naming container. If, on the other hand, the Page is set to “AutoID” you get the following with Predictable on txtName: <input name="ctl00$content$txtName" type="text"         id="ctl00_content_txtName" /> The latter is effectively the same as if you specified AutoID because it inherits the AutoID naming from the Page and Content Master Page control of the page. But again - predictable behavior always depends on the parent naming container and how it generates its id, so the id may not always be exactly the same as the AutoID generated value because somewhere in the NamingContainer chain the ClientIDMode setting may be set to a different value. For example, if you had another naming container in the middle that was set to Static you’d end up effectively with an id that starts with the NamingContainers id rather than the whole ctl000_content munging. The most common use for Predictable is likely to be for data-bound controls, which results in each data bound item getting a unique ClientID. Unfortunately, even here the behavior can be very unpredictable depending on which data-bound control you use - I found significant differences in how template controls in a GridView behave from those that are used in a ListView control. For example, GridView creates clean child ClientIDs, while ListView still has a naming container in the ClientID, presumably because of the template container on which you can’t set ClientIDMode. Predictable is useful, but only if all naming containers down the chain use this setting. Otherwise you’re right back to the munged ids that are pretty unpredictable. 
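
    As the ClientID summary further below notes, the Static setting can also be applied application-wide rather than per page. A minimal web.config sketch for an ASP.NET 4.0 application, with the viewStateMode attribute discussed later included for completeness; individual pages and controls can still override both:

        <!-- Applies to every page; pages and controls can still override
             with their own ClientIDMode / ViewStateMode settings. -->
        <configuration>
          <system.web>
            <pages clientIDMode="Static" viewStateMode="Disabled" enableViewState="true" />
          </system.web>
        </configuration>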
Another property, ClientIDRowSuffix, can be used in combination with ClientIDMode of Predictable to force a suffix onto list client controls. For example: <asp:GridView runat="server" ID="gvItems"              AutoGenerateColumns="false"             ClientIDMode="Static"              ClientIDRowSuffix="Id">     <Columns>     <asp:TemplateField>         <ItemTemplate>             <asp:Label runat="server" id="txtName"                        Text='<%# Eval("Name") %>'                   ClientIDMode="Predictable"/>         </ItemTemplate>     </asp:TemplateField>     <asp:TemplateField>         <ItemTemplate>         <asp:Label runat="server" id="txtId"                     Text='<%# Eval("Id") %>'                     ClientIDMode="Predictable" />         </ItemTemplate>     </asp:TemplateField>     </Columns>  </asp:GridView> generates client Ids inside of a column in the master page described earlier: <td>     <span id="txtName_0">Rick</span> </td> where the value after the underscore is the ClientIDRowSuffix field - in this case “Id” of the item data bound to the control. Note that all of the child controls require ClientIDMode=”Predictable” in order for the ClientIDRowSuffix to be applied, and the parent GridView controls need to be set to Static either explicitly or via Naming Container inheritance to give these simple names. It’s a bummer that ClientIDRowSuffix doesn’t work with Static to produce this automatically. Another real problem is that other controls process the ClientIDMode differently. For example, a ListView control processes the Predictable ClientIDMode differently and produces the following with the Static ListView and Predictable child controls: <span id="ctrl0_txtName_0">Rick</span> I couldn’t even figure out a way using ClientIDMode to get a simple ID that also uses a suffix short of falling back to manually generated ids using <%= %> expressions instead. Given the inconsistencies inside of list controls using <%= %>, ids for the ListView might not be a bad idea anyway. Inherit The final setting is Inherit, which is the default for all controls except Page. This means that controls by default inherit the parent naming container’s ClientIDMode setting. For more detailed information on ClientID behavior and different scenarios you can check out a blog post of mine on this subject: http://www.west-wind.com/weblog/posts/54760.aspx. ClientID Enhancements Summary The ClientIDMode property is a welcome addition to ASP.NET 4.0. To me this is probably the most useful WebForms feature as it allows me to generate clean IDs simply by setting ClientIDMode="Static" on either the page or inside of Web.config (in the Pages section) which applies the setting down to the entire page which is my 95% scenario. For the few cases when it matters - for list controls and inside of multi-use user controls or custom server controls) - I can use Predictable or even AutoID to force controls to unique names. For application-level page development, this is easy to accomplish and provides maximum usability for working with client script code against page controls. ViewStateMode Another area of large criticism for WebForms is ViewState. ViewState is used internally by ASP.NET to persist page-level changes to non-postback properties on controls as pages post back to the server. 
It’s a useful mechanism that works great for the overall mechanics of WebForms, but it can also cause all sorts of overhead for page operation as ViewState can very quickly get out of control and consume huge amounts of bandwidth in your page content. ViewState can also wreak havoc with client-side scripting applications that modify control properties that are tracked by ViewState, which can produce very unpredictable results on a Postback after client-side updates. Over the years in my own development, I’ve often turned off ViewState on pages to reduce overhead. Yes, you lose some functionality, but you can easily implement most of the common functionality in non-ViewState workarounds. Relying less on heavy ViewState controls and sticking with simpler controls or raw HTML constructs avoids getting around ViewState problems. In ASP.NET 3.x and prior, it wasn’t easy to control ViewState - you could turn it on or off and if you turned it off at the page or web.config level, you couldn’t turn it back on for specific controls. In short, it was an all or nothing approach. With ASP.NET 4.0, the new ViewStateMode property gives you more control. It allows you to disable ViewState globally either on the page or web.config level and then turn it back on for specific controls that might need it. ViewStateMode only works when EnableViewState="true" on the page or web.config level (which is the default). You can then use ViewStateMode of Disabled, Enabled or Inherit to control the ViewState settings on the page. If you’re shooting for minimal ViewState usage, the ideal situation is to set ViewStateMode to disabled on the Page or web.config level and only turn it back on particular controls: <%@Page Language="C#"      CodeBehind="WebForm2.aspx.cs"     Inherits="Westwind.WebStore.WebForm2"        ClientIDMode="Static"                ViewStateMode="Disabled"     EnableViewState="true"  %> <!-- this control has viewstate  --> <asp:TextBox runat="server" ID="txtName"  ViewStateMode="Enabled" />       <!-- this control has no viewstate - it inherits  from parent container --> <asp:TextBox runat="server" ID="txtAddress" /> Note that the EnableViewState="true" at the Page level isn’t required since it’s the default, but it’s important that the value is true. ViewStateMode has no effect if EnableViewState="false" at the page level. The main benefit of ViewStateMode is that it allows you to more easily turn off ViewState for most of the page and enable only a few key controls that might need it. For me personally, this is a perfect combination as most of my WebForm apps can get away without any ViewState at all. But some controls - especially third party controls - often don’t work well without ViewState enabled, and now it’s much easier to selectively enable controls rather than the old way, which required you to pretty much turn off ViewState for all controls that you didn’t want ViewState on. Inline HTML Encoding HTML encoding is an important feature to prevent cross-site scripting attacks in data entered by users on your site. In order to make it easier to create HTML encoded content, ASP.NET 4.0 introduces a new Expression syntax using <%: %> to encode string values. 
The encoding expression syntax looks like this: <%: "<script type='text/javascript'>" +     "alert('Really?');</script>" %> which produces properly encoded HTML: &lt;script type=&#39;text/javascript&#39; &gt;alert(&#39;Really?&#39;);&lt;/script&gt; Effectively this is a shortcut to: <%= HttpUtility.HtmlEncode( "<script type='text/javascript'>" + "alert('Really?');</script>") %> Of course the <%: %> syntax can also evaluate expressions just like <%= %> so the more common scenario applies this expression syntax against data your application is displaying. Here’s an example displaying some data model values: <%: Model.Address.Street %> This snippet shows displaying data from your application’s data store or more importantly, from data entered by users. Anything that makes it easier and less verbose to HtmlEncode text is a welcome addition to avoid potential cross-site scripting attacks. Although I listed Inline HTML Encoding here under WebForms, anything that uses the WebForms rendering engine including ASP.NET MVC, benefits from this feature. ScriptManager Enhancements The ASP.NET ScriptManager control in the past has introduced some nice ways to take programmatic and markup control over script loading, but there were a number of shortcomings in this control. The ASP.NET 4.0 ScriptManager has a number of improvements that make it easier to control script loading and addresses a few of the shortcomings that have often kept me from using the control in favor of manual script loading. The first is the AjaxFrameworkMode property which finally lets you suppress loading the ASP.NET AJAX runtime. Disabled doesn’t load any ASP.NET AJAX libraries, but there’s also an Explicit mode that lets you pick and choose the library pieces individually and reduce the footprint of ASP.NET AJAX script included if you are using the library. There’s also a new EnableCdn property that forces any script that has a new WebResource attribute CdnPath property set to a CDN supplied URL. If the script has this Attribute property set to a non-null/empty value and EnableCdn is enabled on the ScriptManager, that script will be served from the specified CdnPath. [assembly: WebResource(    "Westwind.Web.Resources.ww.jquery.js",    "application/x-javascript",    CdnPath =  "http://mysite.com/scripts/ww.jquery.min.js")] Cool, but a little too static for my taste since this value can’t be changed at runtime to point at a debug script as needed, for example. Assembly names for loading scripts from resources can now be simple names rather than fully qualified assembly names, which make it less verbose to reference scripts from assemblies loaded from your bin folder or the assembly reference area in web.config: <asp:ScriptManager runat="server" id="Id"          EnableCdn="true"         AjaxFrameworkMode="disabled">     <Scripts>         <asp:ScriptReference          Name="Westwind.Web.Resources.ww.jquery.js"         Assembly="Westwind.Web" />     </Scripts>        </asp:ScriptManager> The ScriptManager in 4.0 also supports script combining via the CompositeScript tag, which allows you to very easily combine scripts into a single script resource served via ASP.NET. Even nicer: You can specify the URL that the combined script is served with. 
Check out the following script manager markup that combines several static file scripts and a script resource into a single ASP.NET served resource from a static URL (allscripts.js): <asp:ScriptManager runat="server" id="Id"          EnableCdn="true"         AjaxFrameworkMode="disabled">     <CompositeScript          Path="~/scripts/allscripts.js">         <Scripts>             <asp:ScriptReference                    Path="~/scripts/jquery.js" />             <asp:ScriptReference                    Path="~/scripts/ww.jquery.js" />             <asp:ScriptReference            Name="Westwind.Web.Resources.editors.js"                 Assembly="Westwind.Web" />         </Scripts>     </CompositeScript> </asp:ScriptManager> When you render this into HTML, you’ll see a single script reference in the page: <script src="scripts/allscripts.debug.js"          type="text/javascript"></script> All you need to do to make this work is ensure that allscripts.js and allscripts.debug.js exist in the scripts folder of your application - they can be empty but the file has to be there. This is pretty cool, but you want to be real careful that you use unique URLs for each combination of scripts you combine or else browser and server caching will easily screw you up royally. The script manager also allows you to override native ASP.NET AJAX scripts now as any script references defined in the Scripts section of the ScriptManager trump internal references. So if you want custom behavior or you want to fix a possible bug in the core libraries that normally are loaded from resources, you can now do this simply by referencing the script resource name in the Name property and pointing at System.Web for the assembly. Not a common scenario, but when you need it, it can come in real handy. Still, there are a number of shortcomings in this control. For one, the ScriptManager and ClientScript APIs still have no common entry point so control developers are still faced with having to check and support both APIs to load scripts so that controls can work on pages that do or don’t have a ScriptManager on the page. The CdnUrl is static and compiled in, which is very restrictive. And finally, there’s still no control over where scripts get loaded on the page - ScriptManager still injects scripts into the middle of the HTML markup rather than in the header or optionally the footer. This, in turn, means there is little control over script loading order, which can be problematic for control developers. MetaDescription, MetaKeywords Page Properties There are also a number of additional Page properties that correspond to some of the other features discussed in this column: ClientIDMode, ClientTarget and ViewStateMode. Another minor but useful feature is that you can now directly access the MetaDescription and MetaKeywords properties on the Page object to set the corresponding meta tags programmatically. Updating these values programmatically previously required either <%= %> expressions in the page markup or dynamic insertion of literal controls into the page. 
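For a sense of what that older approach looked like, here's a minimal sketch of the kind of code-behind that was typically required before 4.0. This is illustrative only (not code from the article); it assumes the page has a <head runat="server"> so that Page.Header is available, and it uses the standard HtmlMeta control rather than a plain Literal:

using System;
using System.Web.UI;
using System.Web.UI.HtmlControls;

public partial class LegacyMetaPage : Page
{
    // Pre-ASP.NET 4.0 workaround: build the meta tags by hand and add them to the header.
    protected void Page_Load(object sender, EventArgs e)
    {
        Header.Controls.Add(new HtmlMeta { Name = "keywords", Content = "ASP.NET,4.0,New Features" });
        Header.Controls.Add(new HtmlMeta { Name = "description", Content = "What's new in ASP.NET 4.0" });
    }
}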
You can now just set these properties programmatically on the Page object in any Control derived class on the page or the Page itself: Page.MetaKeywords = "ASP.NET,4.0,New Features"; Page.MetaDescription = "This article discusses the new features in ASP.NET 4.0"; Note, that there’s no corresponding ASP.NET tag for the HTML Meta element, so the only way to specify these values in markup and access them is via the @Page tag: <%@Page Language="C#"      CodeBehind="WebForm2.aspx.cs"     Inherits="Westwind.WebStore.WebForm2"      ClientIDMode="Static"                MetaDescription="Article that discusses what's                      new in ASP.NET 4.0"     MetaKeywords="ASP.NET,4.0,New Features" %> Nothing earth shattering but quite convenient. Visual Studio 2010 Enhancements for Web Development For Web development there are also a host of editor enhancements in Visual Studio 2010. Some of these are not Web specific but they are useful for Web developers in general. Text Editors Throughout Visual Studio 2010, the text editors have all been updated to a new core engine based on WPF which provides some interesting new features for various code editors including the nice ability to zoom in and out with Ctrl-MouseWheel to quickly change the size of text. There are many more API options to control the editor and although Visual Studio 2010 doesn’t yet use many of these features, we can look forward to enhancements in add-ins and future editor updates from the various language teams that take advantage of the visual richness that WPF provides to editing. On the negative side, I’ve noticed that occasionally the code editor and especially the HTML and JavaScript editors will lose the ability to use various navigation keys like arrows, back and delete keys, which requires closing and reopening the documents at times. This issue seems to be well documented so I suspect this will be addressed soon with a hotfix or within the first service pack. Overall though, the code editors work very well, especially given that they were re-written completely using WPF, which was one of my big worries when I first heard about the complete redesign of the editors. Multi-Targeting Visual Studio now targets all versions of the .NET framework from 2.0 forward. You can use Visual Studio 2010 to work on your ASP.NET 2, 3.0 and 3.5 applications which is a nice way to get your feet wet with the new development environment without having to make changes to existing applications. It’s nice to have one tool to work in for all the different versions. Multi-Monitor Support One cool feature of Visual Studio 2010 is the ability to drag windows out of the Visual Studio environment and out onto the desktop including onto another monitor easily. Since Web development often involves working with a host of designers at the same time - visual designer, HTML markup window, code behind and JavaScript editor - it’s really nice to be able to have a little more screen real estate to work on each of these editors. Microsoft made a welcome change in the environment. IntelliSense Snippets for HTML and JavaScript Editors The HTML and JavaScript editors now finally support IntelliSense scripts to create macro-based template expansions that have been in the core C# and Visual Basic code editors since Visual Studio 2005. Snippets allow you to create short XML-based template definitions that can act as static macros or real templates that can have replaceable values that can be embedded into the expanded text. 
The XML syntax for these snippets is straightforward and it's pretty easy to create custom snippets manually. You can easily create snippets using XML and store them in your custom snippets folder (C:\Users\rstrahl\Documents\Visual Studio 2010\Code Snippets\Visual Web Developer\My HTML Snippets and My JScript Snippets), but it helps to use one of the third-party tools that exist to simplify the process for you. I use SnippetEditor, by Bill McCarthy, which makes short work of creating snippets interactively (http://snippeteditor.codeplex.com/). Note: You may have to manually add the Visual Studio 2010 user-specific snippet folders to this tool to see existing ones you've created. Code snippets are some of the biggest time savers, and HTML editing more than anything deals with lots of repetitive tasks that lend themselves to text expansion. Visual Studio 2010 includes a slew of built-in snippets (that you can also customize!) and you can create your own very easily. If you haven't done so already, I encourage you to spend a little time examining your coding patterns, find the repetitive code that you write, and convert it into snippets. I've been using CodeRush for this for years, but now you can do much of the basic expansion natively for HTML and JavaScript snippets. jQuery Integration Is Now Native jQuery is a popular JavaScript library, and Microsoft has recently stated that it will become the primary client-side scripting technology to drive higher level script functionality in various ASP.NET Web projects that Microsoft provides. In Visual Studio 2010, the default full project template includes jQuery as part of a new project, including the support files that provide IntelliSense (-vsdoc files). IntelliSense support for jQuery is now also baked into Visual Studio 2010, so unlike Visual Studio 2008, which required a separate download, no further installs are required for a rich IntelliSense experience with jQuery. Summary ASP.NET 4.0 brings many useful improvements to the platform, but thankfully most of the changes are incremental changes that don't compromise backwards compatibility and they allow developers to ease into the new features one feature at a time. None of the changes in ASP.NET 4.0 or Visual Studio 2010 are monumental or game changers. The bigger features are language and .NET Framework changes that are also optional. This ASP.NET and tools release feels more like fine tuning and getting some long-standing kinks worked out of the platform. It shows that the ASP.NET team is dedicated to paying attention to community feedback and responding with changes to the platform and development environment based on this feedback. If you haven't gotten your feet wet with ASP.NET 4.0 and Visual Studio 2010, there's no reason not to give it a shot now - the ASP.NET 4.0 platform is solid and Visual Studio 2010 works very well for a brand new release. Check it out. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET

    Read the article

  • Issuing Current Time Increments in StreamInsight (A Practical Example)

The issuing of a Current Time Increment, Cti, in StreamInsight is very definitely one of the most important concepts to learn if you want your Streams to be responsive. A full discussion of how to issue Ctis is beyond the scope of this article, but a very good explanation in addition to Books Online can be found in these three articles by a member of the StreamInsight team at Microsoft, Ciprian Gerea. Time in StreamInsight Series http://blogs.msdn.com/b/streaminsight/archive/2010/07/23/time-in-streaminsight-i.aspx http://blogs.msdn.com/b/streaminsight/archive/2010/07/30/time-in-streaminsight-ii.aspx http://blogs.msdn.com/b/streaminsight/archive/2010/08/03/time-in-streaminsight-iii.aspx A lot of the problems I see with unresponsive or stuck streams on the MSDN Forums are to do with how Ctis are enqueued, or in a lot of cases not enqueued. If you enqueue events and never enqueue a Cti then StreamInsight will be perfectly happy. You, on the other hand, will never see data on the output as you have not told StreamInsight to flush the stream. This article deals with a specific implementation problem I had recently whilst working on a StreamInsight project. I look at some possible options and discuss why they would not work before showing the way I solved the problem. The stream of data I was dealing with on this project was very bursty; that is to say, when events were flowing they came through very quickly and in large numbers (1000 events/sec), but when the stream calmed down it could be a few seconds between each event. When enqueuing events into the StreamInsight engine it is best practice to do so with a StartTime that is given to you by the system producing the event. StreamInsight processes events and it doesn't matter whether those events are being pushed into the engine by a source system or the events are being read from something like a flat file in a directory somewhere. You can apply the same logic and temporal algebra to both situations. Reading from a file is an excellent example of where the time of the event on the source itself is very important. We could be reading that file a long time after it was written. Being able to read the StartTime from the events allows us to define windows that will hold the correct sets of events. I was able to do this with my stream, but this is where my problems started. Below is a very simple script to create a SQL Server table and populate it with sample data that will show exactly the problem I had. CREATE TABLE [dbo].[t] ( [c1] [int] PRIMARY KEY, [c2] [datetime] NULL ) INSERT t VALUES (1,'20100810'),(2,'20100810'),(3,'20100810') Column c2 defines the StartTime of the event on the source and, as you can see, the values in all 3 rows of data are the same. If we read Ciprian's articles we know that we can define how Ctis get injected into the stream in 3 different places: the Stream Definition, the Input Factory, and the Input Adapter. I personally have always been a fan of enqueuing Ctis through the factory.
Below is code typical of what I would use to do this On the class itself I do some inheriting public class SimpleInputFactory : ITypedInputAdapterFactory<SimpleInputConfig>, ITypedDeclareAdvanceTimeProperties<SimpleInputConfig> And then I implement the following function public AdapterAdvanceTimeSettings DeclareAdvanceTimeProperties<TPayload>(SimpleInputConfig configInfo, EventShape eventShape) { return new AdapterAdvanceTimeSettings( new AdvanceTimeGenerationSettings(configInfo.CtiFrequency, TimeSpan.FromTicks(-1)), AdvanceTimePolicy.Adjust); } The configInfo .CtiFrequency property is a value I pass through to define after how many events I want a Cti to be injected and this in turn will flush through the stream of data. I usually pass a value of 1 for this setting. The second parameter determines the CTI timestamp in terms of a delay relative to the events. -1 ticks in the past results in 1 tick in the future, i.e., ahead of the event. The problem with this method though is that if consecutive events have the same StartTime then only one of those events will be enqueued. In this example I use the following to define how I assign the StartTime of my events currEvent.StartTime = (DateTimeOffset)dt.c2; If I go ahead and run my StreamInsight process with this configuration i can see on the output adapter that two events have been removed To see this in a little more depth I can use the StreamInsight Debugger and see what happens internally. What is happening here is that the first event arrives and a Cti is injected with a time of 1 tick after the StartTime of that event (Also the EndTime of the event). The second event arrives and it has a StartTime of before the Cti and even though we specified AdvanceTimePolicy.Adjust on the factory we know that a point event can never be adjusted like this and the event is dropped. The same happens for the third event as well (The second and third events get trumped by the Cti). For a more detailed discussion of why this happens look here http://www.sqlis.com/sqlis/post/AdvanceTimePolicy-and-Point-Event-Streams-In-StreamInsight.aspx We end up with a single event being pushed into the output adapter and our result now makes sense. The next way I tried to solve this problem by changing the value of the second parameter to TimeSpan.Zero Here is how my factory code now looks public AdapterAdvanceTimeSettings DeclareAdvanceTimeProperties<TPayload>(SimpleInputConfig configInfo, EventShape eventShape) { return new AdapterAdvanceTimeSettings( new AdvanceTimeGenerationSettings(configInfo.CtiFrequency, TimeSpan.Zero), AdvanceTimePolicy.Adjust); } What I am doing here is declaring a policy that says inject a Cti together with every event and stamp it with a StartTime that is equal to the start time of the event itself (TimeSpan.Zero). This method has plus points as well as a downside. The upside is that no events will be lost by having the same StartTime as previous events. The Downside is that because the Cti is declared with the StartTime of the event itself then it does not actually flush that particular event because in the StreamInsight algebra, a Cti commits only those events that occurred strictly before them. To flush the events we need a Cti to be enqueued with a greater StartTime than the events themselves. Here is what happened when I ran this configuration As you can see all we got through was the Cti and none of the events. The debugger output shows the stamps on the Cti and the events themselves. 
Because the Cti issued has the same timestamp (StartTime) as the events then none of the events get flushed. I was nearly there but not quite. Because my stream was bursty it was possible that the next event would not come along for a few seconds and this was far too long for an event to be enqueued and not be flushed to the output adapter. I needed another solution. Two possible solutions crossed my mind although only one of them made sense when I explored it some more. Where multiple events have the same StartTime I could add 1 tick to the first event, two to the second, three to third etc thereby giving them unique StartTime values. Add a timer to manually inject Ctis The problem with the first implementation is that I would be giving the events a new StartTime. This would cause me the following problems If I want to define windows over the stream then some events may not be captured in the right windows and therefore any calculations on those windows I did would be wrong What would happen if we had 10,000 events with the same StartTime? I would enqueue them with StartTime + n ticks. Along comes a genuine event with a StartTime of the very first event + 1 tick. It is now too far in the past as far as my stream is concerned and it would be dropped. Not what I would want to do at all. I decided then to look at the Timer based solution I created a timer on my input adapter that elapsed every 200ms. private Timer tmr; public SimpleInputAdapter(SimpleInputConfig configInfo) { ctx = new SimpleTimeExtractDataContext(configInfo.ConnectionString); this.configInfo = configInfo; tmr = new Timer(200); tmr.Elapsed += new ElapsedEventHandler(t_Elapsed); tmr.Enabled = true; } void t_Elapsed(object sender, ElapsedEventArgs e) { ts = DateTime.Now - dtCtiIssued; if (ts.TotalMilliseconds >= 200 && TimerIssuedCti == false) { EnqueueCtiEvent(System.DateTime.Now.AddTicks(-100)); TimerIssuedCti = true; } }   In the t_Elapsed event handler I find out the difference in time between now and when the last event was processed (dtCtiIssued). I then check to see if that is greater than or equal to 200ms and if the last issuing of a Cti was done by the timer or by a genuine event (TimerIssuedCti). If I didn’t do this check then I would enqueue a Cti every time the timer elapsed which is not something I wanted. If the difference between the two times is greater than or equal to 500ms and the last event enqueued was by a real event then I issue a Cti through the timer to flush the event Queue, otherwise I do nothing. When I enqueue the Ctis into my stream in my ProduceEvents method I also set the values of dtCtiIssued and TimerIssuedCti   currEvent = CreateInsertEvent(); currEvent.StartTime = (DateTimeOffset)dt.c2; TimerIssuedCti = false; dtCtiIssued = currEvent.StartTime; If I go ahead and run this configuration I see the following in my output. As we can see the first Cti gets enqueued as before but then another is enqueued by the timer and because this has a later timestamp it flushes the enqueued events through the engine. Conclusion Hopefully this has shown how the enqueuing of Ctis can have a dramatic effect on the responsiveness of your output in StreamInsight. Understanding the temporal nature of the product is for me one of the most important things you can learn. I have attached my solution for the demos. It is all in one project and testing each variation is a simple matter of commenting and un-commenting the parts in the code we have been dealing with here.

    Read the article

  • Installed OpenStack using the devstack install shell script but getting a 500 error when I try opening the dashboard

    - by Arvind
    I followed the instructions at http://devstack.org/guides/single-machine.html to install OpenStack. I first installed Ubuntu on my Windows 7 PC using the officially supported Windows installer for Ubuntu 12.04 LTS. And after that I followed the instructions at above page to install OpenStack. As per instructions, I should be able to access the dashboard aka Horizon, at http://192.168.1.4/ (thats the IP of the PC on which I installed Ubuntu-OpenStack). However I am getting a 500 error web page when I open that. How do I resolve this error? I want to set up a dev environment for OpenStack. For your ref, the whole error message is given now-- FilterError at / /usr/bin/env: node: No such file or directory Request Method: GET Request URL: http://192.168.1.4/ Django Version: 1.4.2 Exception Type: FilterError Exception Value: /usr/bin/env: node: No such file or directory Exception Location: /usr/local/lib/python2.7/dist-packages/compressor/filters/base.py in input, line 133 Python Executable: /usr/bin/python Python Version: 2.7.3 Python Path: ['/opt/stack/horizon/openstack_dashboard/wsgi/../..', '/opt/stack/python-keystoneclient', '/opt/stack/python-novaclient', '/opt/stack/python-openstackclient', '/opt/stack/keystone', '/opt/stack/glance', '/opt/stack/python-glanceclient/setuptools_git-0.4.2-py2.7.egg', '/opt/stack/python-glanceclient', '/opt/stack/nova', '/opt/stack/horizon', '/opt/stack/cinder', '/opt/stack/python-cinderclient', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-linux2', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages/PIL', '/usr/lib/python2.7/dist-packages/gst-0.10', '/usr/lib/python2.7/dist-packages/gtk-2.0', '/usr/lib/pymodules/python2.7', '/usr/lib/python2.7/dist-packages/ubuntu-sso-client', '/usr/lib/python2.7/dist-packages/ubuntuone-client', '/usr/lib/python2.7/dist-packages/ubuntuone-control-panel', '/usr/lib/python2.7/dist-packages/ubuntuone-couch', '/usr/lib/python2.7/dist-packages/ubuntuone-storage-protocol', '/opt/stack/horizon/openstack_dashboard'] Server time: Sat, 27 Oct 2012 08:43:29 +0000 Error during template rendering In template /opt/stack/horizon/openstack_dashboard/templates/_stylesheets.html, error at line 3 /usr/bin/env: node: No such file or directory 1 {% load compress %} 2 3 {% compress css %} 4 <link href='{{ STATIC_URL }}dashboard/less/horizon.less' type='text/less' media='screen' rel='stylesheet' /> 5 {% endcompress %} 6 7 <link rel="shortcut icon" href="{{ STATIC_URL }}dashboard/img/favicon.ico"/> 8 Also, the traceback is now given below-- Environment: Request Method: GET Request URL: http://192.168.1.4/ Django Version: 1.4.2 Python Version: 2.7.3 Installed Applications: ('openstack_dashboard', 'django.contrib.contenttypes', 'django.contrib.auth', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'django.contrib.humanize', 'compressor', 'horizon', 'openstack_dashboard.dashboards.project', 'openstack_dashboard.dashboards.admin', 'openstack_dashboard.dashboards.settings', 'openstack_auth') Installed Middleware: ('django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'horizon.middleware.HorizonMiddleware', 'django.middleware.doc.XViewMiddleware', 
'django.middleware.locale.LocaleMiddleware') Template error: In template /opt/stack/horizon/openstack_dashboard/templates/_stylesheets.html, error at line 3 /usr/bin/env: node: No such file or directory 1 : {% load compress %} 2 : 3 : {% compress css %} 4 : <link href='{{ STATIC_URL }}dashboard/less/horizon.less' type='text/less' media='screen' rel='stylesheet' /> 5 : {% endcompress %} 6 : 7 : <link rel="shortcut icon" href="{{ STATIC_URL }}dashboard/img/favicon.ico"/> 8 : Traceback: File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py" in get_response 111. response = callback(request, *callback_args, **callback_kwargs) File "/usr/local/lib/python2.7/dist-packages/django/views/decorators/vary.py" in inner_func 36. response = func(*args, **kwargs) File "/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/views.py" in splash 38. return shortcuts.render(request, 'splash.html', {'form': form}) File "/usr/local/lib/python2.7/dist-packages/django/shortcuts/__init__.py" in render 44. return HttpResponse(loader.render_to_string(*args, **kwargs), File "/usr/local/lib/python2.7/dist-packages/django/template/loader.py" in render_to_string 176. return t.render(context_instance) File "/usr/local/lib/python2.7/dist-packages/django/template/base.py" in render 140. return self._render(context) File "/usr/local/lib/python2.7/dist-packages/django/template/base.py" in _render 134. return self.nodelist.render(context) File "/usr/local/lib/python2.7/dist-packages/django/template/base.py" in render 823. bit = self.render_node(node, context) File "/usr/local/lib/python2.7/dist-packages/django/template/debug.py" in render_node 74. return node.render(context) File "/usr/local/lib/python2.7/dist-packages/django/template/loader_tags.py" in render 155. return self.render_template(self.template, context) File "/usr/local/lib/python2.7/dist-packages/django/template/loader_tags.py" in render_template 137. output = template.render(context) File "/usr/local/lib/python2.7/dist-packages/django/template/base.py" in render 140. return self._render(context) File "/usr/local/lib/python2.7/dist-packages/django/template/base.py" in _render 134. return self.nodelist.render(context) File "/usr/local/lib/python2.7/dist-packages/django/template/base.py" in render 823. bit = self.render_node(node, context) File "/usr/local/lib/python2.7/dist-packages/django/template/debug.py" in render_node 74. return node.render(context) File "/usr/local/lib/python2.7/dist-packages/compressor/templatetags/compress.py" in render 147. return self.render_compressed(context, self.kind, self.mode, forced=forced) File "/usr/local/lib/python2.7/dist-packages/compressor/templatetags/compress.py" in render_compressed 107. rendered_output = self.render_output(compressor, mode, forced=forced) File "/usr/local/lib/python2.7/dist-packages/compressor/templatetags/compress.py" in render_output 119. return compressor.output(mode, forced=forced) File "/usr/local/lib/python2.7/dist-packages/compressor/css.py" in output 51. ret.append(subnode.output(*args, **kwargs)) File "/usr/local/lib/python2.7/dist-packages/compressor/css.py" in output 53. return super(CssCompressor, self).output(*args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/compressor/base.py" in output 230. content = self.filter_input(forced) File "/usr/local/lib/python2.7/dist-packages/compressor/base.py" in filter_input 192. for hunk in self.hunks(forced): File "/usr/local/lib/python2.7/dist-packages/compressor/base.py" in hunks 167. 
precompiled, value = self.precompile(value, **options) File "/usr/local/lib/python2.7/dist-packages/compressor/base.py" in precompile 210. command=command, filename=filename).input(**kwargs) File "/usr/local/lib/python2.7/dist-packages/compressor/filters/base.py" in input 133. raise FilterError(err) Exception Type: FilterError at / Exception Value: /usr/bin/env: node: No such file or directory

    Read the article

  • Can I delete libc-bin?

    - by Balazs Szikszay
    Question is simple, I need to know because I cant upgrade/install anything, because it always says I have to uninstall/delete it to continue. It also says dont do it, if I dont know what I am doing. EDIT: szikszay@szikszay-Latitude-E5530-non-vPro:~$ sudo apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done You might want to run ‘apt-get -f install’ to correct these. The following packages have unmet dependencies. ia32-libs-multiarch:i386 : Depends: libqtcore4:i386 but it is not installed Depends: libqtgui4:i386 but it is not installed Depends: libqt4-dbus:i386 but it is not installed Depends: libqt4-network:i386 but it is not installed Depends: libqt4-opengl:i386 but it is not installed Depends: libqt4-qt3support:i386 but it is not installed Depends: libqt4-script:i386 but it is not installed Depends: libqt4-scripttools:i386 but it is not installed Depends: libqt4-sql:i386 but it is not installed Depends: libqt4-svg:i386 but it is not installed Depends: libqt4-test:i386 but it is not installed Depends: libqt4-xml:i386 but it is not installed Depends: libqt4-xmlpatterns:i386 but it is not installed Depends: libcups2:i386 but it is not installed Depends: libcupsimage2:i386 but it is not installed Depends: libcurl3:i386 but it is not installed Depends: libnss3:i386 but it is not installed Depends: libnspr4:i386 but it is not installed Depends: libssl1.0.0:i386 but it is not installed Recommends: libgl1-mesa-glx:i386 but it is not installed Recommends: libgl1-mesa-dri:i386 but it is not installed lib32ffi6 : Depends: libc6-i386 (= 2.4) but it is not installed lib32gcc1 : Depends: libc6-i386 (= 2.5) but it is not installed lib32nss-mdns : Depends: libc6-i386 (= 2.4) but it is not installed lib32stdc++6 : Depends: libc6-i386 (= 2.4) but it is not installed lib32z1 : Depends: libc6-i386 (= 2.4) but it is not installed libacl1:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libattr1:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libaudio2:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libavahi-client3:i386 : Depends: libc6:i386 (= 2.4) but it is not installed Depends: libdbus-1-3:i386 (= 1.1.1) but it is not installed libavahi-common3:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libcomerr2:i386 : Depends: libc6:i386 (= 2.12) but it is not installed libdb5.1:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libdrm-intel1:i386 : Depends: libc6:i386 (= 2.3.4) but it is not installed libdrm-nouveau1a:i386 : Depends: libc6:i386 (= 2.1.3) but it is not installed libdrm-radeon1:i386 : Depends: libc6:i386 (= 2.3.4) but it is not installed libdrm2:i386 : Depends: libc6:i386 (= 2.7) but it is not installed libffi6:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libfontconfig1:i386 : Depends: libc6:i386 (= 2.7) but it is not installed Depends: libexpat1:i386 (= 1.95.8) but it is not installed Depends: libfreetype6:i386 (= 2.2.1) but it is not installed libgcc1:i386 : Depends: libc6:i386 (= 2.2.4) but it is not installed libgcrypt11:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libgdbm3:i386 : Depends: libc6:i386 (= 2.1.3) but it is not installed libglib2.0-0:i386 : Depends: libc6:i386 (= 2.9) but it is not installed libgpg-error0:i386 : Depends: libc6:i386 (= 2.1.3) but it is not installed libice6:i386 : Depends: libc6:i386 (= 2.11) but it is not installed libidn11:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libjpeg62:i386 : Depends: libc6:i386 (= 2.7) but 
it is not installed libkeyutils1:i386 : Depends: libc6:i386 (= 2.1.3) but it is not installed liblcms1:i386 : Depends: libc6:i386 (= 2.7) but it is not installed libllvm2.9:i386 : Depends: libc6:i386 (= 2.11) but it is not installed libmng1:i386 : Depends: libc6:i386 (= 2.11) but it is not installed libpciaccess0:i386 : Depends: libc6:i386 (= 2.7) but it is not installed libpcre3:i386 : Depends: libc6:i386 (= 2.4) but it is not installed librtmp0:i386 : Depends: libc6:i386 (= 2.7) but it is not installed Depends: libgnutls26:i386 (= 2.9.11-0) but it is not installed libsasl2-2:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libsasl2-modules:i386 : Depends: libc6:i386 (= 2.4) but it is not installed Depends: libssl1.0.0:i386 (= 1.0.0) but it is not installed libselinux1:i386 : Depends: libc6:i386 (= 2.8) but it is not installed libsm6:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libsqlite3-0:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libstdc++6:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libuuid1:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libx11-6:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libxau6:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libxcb1:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libxdamage1:i386 : Depends: libc6:i386 (= 2.1.3) but it is not installed libxdmcp6:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libxext6:i386 : Depends: libc6:i386 (= 2.4) but it is not installed libxfixes3:i386 : Depends: libc6:i386 (= 2.1.3) but it is not installed libxrender1:i386 : Depends: libc6:i386 (= 2.1.3) but it is not installed libxss1:i386 : Depends: libc6:i386 (= 2.1.3) but it is not installed libxt6:i386 : Depends: libc6:i386 (= 2.7) but it is not installed libxxf86vm1:i386 : Depends: libc6:i386 (= 2.1.3) but it is not installed zlib1g:i386 : Depends: libc6:i386 (= 2.4) but it is not installed E: Unmet dependencies. Try using -f. szikszay@szikszay-Latitude-E5530-non-vPro:~$ sudo apt-get upgrade -f Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... 
Done The following packages will be REMOVED libc-bin The following NEW packages will be installed libc-bin:i386 libc6:i386 libc6-i386 libcups2:i386 libcupsimage2:i386 libcurl3:i386 libdbus-1-3:i386 libexpat1:i386 libfreetype6:i386 libgl1-mesa-dri:i386 libgl1-mesa-glx:i386 libglapi-mesa:i386 libgnutls26:i386 libgssapi-krb5-2:i386 libk5crypto3:i386 libkrb5-3:i386 libkrb5support0:i386 libldap-2.4-2:i386 libnspr4:i386 libnss3:i386 libpng12-0:i386 libqt4-dbus:i386 libqt4-declarative:i386 libqt4-designer:i386 libqt4-network:i386 libqt4-opengl:i386 libqt4-qt3support:i386 libqt4-script:i386 libqt4-scripttools:i386 libqt4-sql:i386 libqt4-svg:i386 libqt4-test:i386 libqt4-xml:i386 libqt4-xmlpatterns:i386 libqtcore4:i386 libqtgui4:i386 libssl1.0.0:i386 libtasn1-3:i386 libtiff4:i386 libxi6:i386 The following packages have been kept back: ginn libgrip0 linux-headers-generic linux-image-generic unity unity-common xserver-xorg-input-evdev xserver-xorg-input-synaptics The following packages will be upgraded: accountsservice acpi-support acpid aisleriot alsa-utils app-install-data-partner apparmor appmenu-qt apport apport-gtk apt apt-transport-https apt-utils aptdaemon aptdaemon-data apturl apturl-common at-spi2-core bamfdaemon banshee banshee-extension-soundmenu banshee-extension-ubuntuonemusicstore baobab bind9-host binutils bluez bluez-alsa bluez-cups bluez-gstreamer brasero brasero-cdrkit brasero-common brltty bzip2 ca-certificates-java checkbox checkbox-gtk colord command-not-found command-not-found-data compiz compiz-core compiz-gnome compiz-plugins-default compiz-plugins-main-default cups cups-bsd cups-client cups-common cups-ppdc dbus dbus-x11 deja-dup desktop-file-utils dnsutils dpkg ecryptfs-utils empathy empathy-common eog evince evince-common evolution-data-server evolution-data-server-common file-roller firefox firefox-globalmenu firefox-gnome-support firefox-locale-en firefox-locale-hu gbrainy gcalctool gconf2 gconf2-common gedit gedit-common ghostscript ghostscript-cups ghostscript-x gir1.2-atspi-2.0 gir1.2-gconf-2.0 gir1.2-gnomebluetooth-1.0 gir1.2-gtk-3.0 gir1.2-gtksource-3.0 gir1.2-totem-1.0 gir1.2-unity-4.0 gir1.2-webkit-3.0 gnome-accessibility-themes gnome-bluetooth gnome-control-center gnome-control-center-data gnome-desktop3-data gnome-font-viewer gnome-games-common gnome-icon-theme gnome-keyring gnome-mahjongg gnome-online-accounts gnome-orca gnome-power-manager gnome-screenshot gnome-search-tool gnome-session gnome-session-bin gnome-session-canberra gnome-session-common gnome-settings-daemon gnome-sudoku gnome-system-log gnome-system-monitor gnome-utils-common gnomine gnupg gpgv grub-common grub-pc grub-pc-bin grub2-common gstreamer0.10-gconf gstreamer0.10-plugins-good gstreamer0.10-pulseaudio gvfs gvfs-backends gvfs-bin gvfs-fuse gwibber gwibber-service gwibber-service-facebook gwibber-service-identica gwibber-service-twitter gzip hpijs hplip hplip-cups hplip-data icedtea-6-jre-cacao icedtea-6-jre-jamvm icedtea-netx ifupdown im-switch indicator-datetime indicator-session indicator-sound initramfs-tools initramfs-tools-bin initscripts insserv isc-dhcp-client isc-dhcp-common iso-codes jockey-common jockey-gtk language-pack-en language-pack-en-base language-pack-gnome-en language-pack-gnome-en-base language-pack-gnome-hu language-pack-gnome-hu-base language-pack-hu language-pack-hu-base language-selector-common language-selector-gnome libaccountsservice0 libapt-inst1.3 libapt-pkg4.11 libarchive1 libasound2-plugins libatk-adaptor libatspi2.0-0 libbamf0 libbamf3-0 libbind9-60 
libbluetooth3 libbrasero-media3-1 libbrlapi0.5 libbz2-1.0 libc-dev-bin libc6 libc6-dev libcamel-1.2-29 libcanberra-gtk-module libcanberra-gtk0 libcanberra-gtk3-0 libcanberra-gtk3-module libcanberra-pulse libcanberra0 libcolord1 libcups2 libcupscgi1 libcupsdriver1 libcupsimage2 libcupsmime1 libcupsppdc1 libcurl3-gnutls libdbus-1-3 libdbus-glib-1-2 libdecoration0 libdns69 libebackend-1.2-1 libebook1.2-12 libecal1.2-10 libecryptfs0 libedata-book-1.2-11 libedata-cal-1.2-13 libedataserver1.2-15 libedataserverui-3.0-1 libevince3-3 libexif12 libexpat1 libfreetype6 libgail-3-0 libgail-3-common libgck-1-0 libgconf2-4 libgcr-3-1 libgdata-common libgdata13 libgl1-mesa-dri libgl1-mesa-glx libglapi-mesa libglu1-mesa libgnome-bluetooth8 libgnome-control-center1 libgnome-desktop-3-2 libgnutls26 libgoa-1.0-0 libgs9 libgs9-common libgssapi-krb5-2 libgtk-3-0 libgtk-3-bin libgtk-3-common libgtksourceview-3.0-0 libgtksourceview-3.0-common libgudev-1.0-0 libgweather-3-0 libgweather-common libgwibber-gtk2 libgwibber2 libhpmud0 libicu44 libimobiledevice2 libisc62 libisccc60 libisccfg62 libjasper1 libjs-jquery libk5crypto3 libkrb5-3 libkrb5support0 libldap-2.4-2 liblightdm-gobject-1-0 liblwres60 libmetacity-private0 libmission-control-plugins0 libmono-cairo4.0-cil libmono-corlib4.0-cil libmono-csharp4.0-cil libmono-i18n-west4.0-cil libmono-i18n4.0-cil libmono-posix4.0-cil libmono-security4.0-cil libmono-sharpzip4.84-cil libmono-system-configuration4.0-cil libmono-system-core4.0-cil libmono-system-drawing4.0-cil libmono-system-security4.0-cil libmono-system-xml4.0-cil libmono-system4.0-cil libmono-zeroconf1.0-cil libmysqlclient16 libnautilus-extension1 libncurses5 libncursesw5 libnm-glib-vpn1 libnm-glib4 libnm-gtk-common libnm-gtk0 libnm-util2 libnotify0.4-cil libnspr4 libnss3 libnss3-1d libnux-1.0-0 libnux-1.0-common libpam-gnome-keyring libpam-modules libpam-modules-bin libpam-runtime libpam0g libperl5.12 libpng12-0 libpoppler-glib6 libpoppler13 libproxy0 libpulse-mainloop-glib0 libpulse0 libpurple-bin libpurple0 libpython2.7 libqt4-dbus libqt4-declarative libqt4-network libqt4-opengl libqt4-script libqt4-sql libqt4-sql-mysql libqt4-svg libqt4-xml libqt4-xmlpatterns libqtcore4 libqtgui4 libreoffice-base-core libreoffice-calc libreoffice-common libreoffice-core libreoffice-draw libreoffice-emailmerge libreoffice-gnome libreoffice-gtk libreoffice-help-en-gb libreoffice-help-en-us libreoffice-help-hu libreoffice-impress libreoffice-l10n-common libreoffice-l10n-en-gb libreoffice-l10n-en-za libreoffice-l10n-hu libreoffice-math libreoffice-style-human libreoffice-writer libsane-hpaio libsmbclient libsnmp-base libsnmp15 libssl1.0.0 libsyncdaemon-1.0-1 libt1-5 libtasn1-3 libtiff4 libtinfo5 libtotem0 libubuntuone-1.0-1 libubuntuone1.0-cil libudev0 libunity-core-4.0-4 libunity6 libusbmuxd1 libv4l-0 libvorbis0a libvorbisenc2 libvorbisfile3 libwbclient0 libwebkitgtk-1.0-0 libwebkitgtk-1.0-common libwebkitgtk-3.0-0 libwebkitgtk-3.0-common libxi6 libxml2 libxslt1.1 lightdm linux-firmware linux-libc-dev mawk metacity metacity-common mobile-broadband-provider-info modemmanager mono-4.0-gac mono-gac mono-runtime mousetweaks multiarch-support mysql-common nautilus nautilus-data nautilus-sendto-empathy ncurses-base ncurses-bin network-manager network-manager-gnome nux-tools onboard openjdk-6-jre openjdk-6-jre-headless openjdk-6-jre-lib openssl perl perl-base perl-modules poppler-utils pulseaudio pulseaudio-esound-compat pulseaudio-module-bluetooth pulseaudio-module-gconf pulseaudio-module-x11 pulseaudio-utils python-apport 
python-aptdaemon python-aptdaemon-gtk python-aptdaemon.gtk3widgets python-aptdaemon.gtkwidgets python-brlapi python-crypto python-cups python-cupshelpers python-egenix-mxdatetime python-egenix-mxtools python-gobject python-gobject-cairo python-httplib2 python-keyring python-launchpadlib python-libproxy python-libxml2 python-pam python-papyon python-pkg-resources python-problem-report python-pyatspi2 python-software-properties python-ubuntuone-client python-ubuntuone-storageprotocol python-uno python2.7 python2.7-minimal qdbus samba-common samba-common-bin seahorse shotwell simple-scan smbclient sni-qt software-center software-properties-common software-properties-gtk sudo system-config-printer-common system-config-printer-gnome system-config-printer-udev sysv-rc sysvinit-utils telepathy-indicator telepathy-mission-control-5 thunderbird thunderbird-globalmenu thunderbird-gnome-support thunderbird-locale-en thunderbird-locale-en-gb thunderbird-locale-en-us thunderbird-locale-hu tomboy totem totem-common totem-mozilla totem-plugins transmission-common transmission-gtk ttf-opensymbol tzdata tzdata-java ubuntu-desktop ubuntu-docs ubuntu-minimal ubuntu-sso-client ubuntu-standard ubuntuone-client ubuntuone-client-gnome ubuntuone-couch udev unity-lens-applications unity-services uno-libs3 update-manager update-manager-core update-notifier update-notifier-common upstart ure usbmuxd vim-common vim-tiny vinagre vino whois x11-common xdiagnose xorg xserver-common xserver-xorg xserver-xorg-core xserver-xorg-input-all xserver-xorg-video-all xserver-xorg-video-intel xserver-xorg-video-openchrome xserver-xorg-video-qxl xul-ext-ubufox WARNING: The following essential packages will be removed. This should NOT be done unless you know exactly what you are doing! libc-bin 498 upgraded, 40 newly installed, 1 to remove and 8 not upgraded. 69 not fully installed or removed. Need to get 439 MB of archives. After this operation, 135 MB of additional disk space will be used. You are about to do something potentially harmful To continue type in the phrase ‘Yes, do as I say!’ ?] I tried to upgrade but it gives me an error, when i try to upgrade-f it says i should delete libc-bin. Thanks for the answers btw. EDIT2: it also says this: The package system is broken If you are using third party repositories then disable them, since they are a common source of problems. Now run the following command in a terminal: apt-get install -f

    Read the article

  • OpenVPN on ec2 bridged mode connects but no Ping, DNS or forwarding

    - by michael
    I am trying to use OpenVPN to access the internet over a secure connection. I have openVPN configured and running on Amazon EC2 in bridge mode with client certs. I can successfully connect from the client, but I cannot get access to the internet or ping anything from the client I checked the following and everything seems to shows a successful connection between the vpn client/server and UDP traffic on 1194 [server] sudo tcpdump -i eth0 udp port 1194 (shows UDP traffic after establishing connection) [server] sudo iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere ACCEPT all -- anywhere anywhere Chain FORWARD (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere Chain OUTPUT (policy ACCEPT) target prot opt source destination [server] sudo iptables -L -t nat Chain PREROUTING (policy ACCEPT) target prot opt source destination Chain POSTROUTING (policy ACCEPT) target prot opt source destination MASQUERADE all -- ip-W-X-Y-0.us-west-1.compute.internal/24 anywhere Chain OUTPUT (policy ACCEPT) target prot opt source destination [server] openvpn.log Wed Oct 19 03:11:26 2011 localhost/a.b.c.d:61905 [localhost] Inactivity timeout (--ping-restart), restarting Wed Oct 19 03:11:26 2011 localhost/a.b.c.d:61905 SIGUSR1[soft,ping-restart] received, client-instance restarting Wed Oct 19 03:41:31 2011 MULTI: multi_create_instance called Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Re-using SSL/TLS context Wed Oct 19 03:41:31 2011 a.b.c.d:57889 LZO compression initialized Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Control Channel MTU parms [ L:1574 D:166 EF:66 EB:0 ET:0 EL:0 ] Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Data Channel MTU parms [ L:1574 D:1450 EF:42 EB:135 ET:32 EL:0 AF:3/1 ] Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Local Options hash (VER=V4): '360696c5' Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Expected Remote Options hash (VER=V4): '13a273ba' Wed Oct 19 03:41:31 2011 a.b.c.d:57889 TLS: Initial packet from [AF_INET]a.b.c.d:57889, sid=dd886604 ab6ebb38 Wed Oct 19 03:41:35 2011 a.b.c.d:57889 VERIFY OK: depth=1, /C=US/ST=CA/L=SanFrancisco/O=EXAMPLE/CN=EXAMPLE_CA/[email protected] Wed Oct 19 03:41:35 2011 a.b.c.d:57889 VERIFY OK: depth=0, /C=US/ST=CA/L=SanFrancisco/O=EXAMPLE/CN=localhost/[email protected] Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA Wed Oct 19 03:41:37 2011 a.b.c.d:57889 [localhost] Peer Connection Initiated with [AF_INET]a.b.c.d:57889 Wed Oct 19 03:41:39 2011 localhost/a.b.c.d:57889 PUSH: Received control message: 'PUSH_REQUEST' Wed Oct 19 03:41:39 2011 localhost/a.b.c.d:57889 SENT CONTROL [localhost]: 'PUSH_REPLY,redirect-gateway def1 bypass-dhcp,route-gateway W.X.Y.Z,ping 10,ping-restart 120,ifconfig W.X.Y.Z 255.255.255.0' (status=1) Wed Oct 19 03:41:40 2011 localhost/a.b.c.d:57889 MULTI: Learn: (IPV6) -> localhost/a.b.c.d:57889 [client] tracert google.com Tracing route to google.com [74.125.71.104] over a maximum of 30 hops: 1 347 ms 349 ms 348 ms PC [w.X.Y.Z] 2 * * * Request timed out. 
I can also successfully ping the server IP address from the client, and ping google.com from an SSH shell on the server. What am I doing wrong? Here is my config (Note: W.X.Y.Z == amazon EC2 private ipaddress) bridge config on br0 ifconfig eth0 0.0.0.0 promisc up brctl addbr br0 brctl addif br0 eth0 ifconfig br0 W.X.Y.X netmask 255.255.255.0 broadcast W.X.Y.255 up route add default gw W.X.Y.1 br0 /etc/openvpn/server.conf (from https://help.ubuntu.com/10.04/serverguide/C/openvpn.html) local W.X.Y.Z dev tap0 up "/etc/openvpn/up.sh br0" down "/etc/openvpn/down.sh br0" ;server W.X.Y.0 255.255.255.0 server-bridge W.X.Y.Z 255.255.255.0 W.X.Y.105 W.X.Y.200 ;push "route W.X.Y.0 255.255.255.0" push "redirect-gateway def1 bypass-dhcp" push "dhcp-option DNS 208.67.222.222" push "dhcp-option DNS 208.67.220.220" tls-auth ta.key 0 # This file is secret user nobody group nogroup log-append openvpn.log iptables config sudo iptables -A INPUT -i tap0 -j ACCEPT sudo iptables -A INPUT -i br0 -j ACCEPT sudo iptables -A FORWARD -i br0 -j ACCEPT sudo iptables -t nat -A POSTROUTING -s W.X.Y.0/24 -o eth0 -j MASQUERADE echo 1 > /proc/sys/net/ipv4/ip_forward Routing Tables added route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface W.X.Y.0 0.0.0.0 255.255.255.0 U 0 0 0 br0 0.0.0.0 W.X.Y.1 0.0.0.0 UG 0 0 0 br0 C:>route print =========================================================================== Interface List 32...00 ff ac d6 f7 04 ......TAP-Win32 Adapter V9 15...00 14 d1 e9 57 49 ......Microsoft Virtual WiFi Miniport Adapter #2 14...00 14 d1 e9 57 49 ......Realtek RTL8191SU Wireless LAN 802.11n USB 2.0 Net work Adapter 10...00 1f d0 50 1b ca ......Realtek PCIe GBE Family Controller 1...........................Software Loopback Interface 1 11...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface 16...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter 17...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2 18...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #3 36...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #5 =========================================================================== IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 10.1.2.1 10.1.2.201 25 10.1.2.0 255.255.255.0 On-link 10.1.2.201 281 10.1.2.201 255.255.255.255 On-link 10.1.2.201 281 10.1.2.255 255.255.255.255 On-link 10.1.2.201 281 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 10.1.2.201 281 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 10.1.2.201 281 =========================================================================== Persistent Routes: Network Address Netmask Gateway Address Metric 0.0.0.0 0.0.0.0 10.1.2.1 Default =========================================================================== C:>tracert google.com Tracing route to google.com [74.125.71.147] over a maximum of 30 hops: 1 344 ms 345 ms 343 ms PC [W.X.Y.221] 2 * * * Request timed out.

    Read the article

  • Making a Case For The Command Line

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2013/06/30/making-a-case-for-the-command-line.aspx I have had an idea percolating in the back of my mind for over a year now that I've just recently started to implement. This idea relates to building out "internal tools" to ease the maintenance and on-going support of a software system. The system that I currently work on is (mostly) web-based, so traditionally we have built these internal tools in the form of pages within the app that are only accessible by our developers and support personnel. These pages allow us to perform tasks within the system that, for one reason or another, we don't want to let our end users perform (e.g. mass create/update/delete operations on data, flipping switches that turn paid modules of the system on or off, etc). When we try to build new tools like this we often struggle with the level of effort required to build them. Effort Required Creating a whole new page in an existing web application can be a fairly large undertaking. You need to create the page and ensure it will have a layout that is consistent with the other pages in the app. You need to decide what types of input controls need to go onto the page. You need to ensure that everything uses the same style as the rest of the site. You need to figure out what the text on the page should say. Then, when you figure out that you forgot about an input that should really be present, you might have to go back and re-work the entire thing. Oh, and in addition to all of that, you still have to, you know, write the code that actually performs the task. Everything other than the code that performs the task at hand is just overhead. We don't need a fancy date picker control in a nicely styled page for the vast majority of our internal tools. We don't even really need a page, for that matter. We just need a way to issue a command to the application and have it, in turn, execute the code that we've written to accomplish a given task. All we really need is a simple console application! Plumbing Problems A former co-worker of mine, John Sonmez, always advocated the Unix philosophy for building internal tools: start with something that runs at the command line, and then build a UI on top of that if you need to. John's idea has a lot of merit, and we tried building out some internal tools as simple Console applications. Unfortunately, this was often easier said than done. Doing a "File -> New Project" to build out a tool for a mature system can be pretty daunting because that new project is totally empty. In our case, the web application code had a lot of "plumbing" built in: it managed authentication and authorization, it handled database connection management for our multi-tenanted architecture, and it managed all of the context that needs to follow a user around the application, such as their timezone and regional/language settings. In addition, the configuration file for the web application (a web.config in our case because this is an ASP.NET application) is large and would need to be reproduced into a similar configuration file for a Console application. While most of these problems could be solved pretty easily with some refactoring of the codebase, building Console applications for internal tools still potentially suffers from one pretty big drawback: you'd have to execute them on a machine with network access to all of the needed resources.
Obviously, our web servers can easily communicate the the database servers and can publish messages to our service bus, but the same is not true for all of our developer and support personnel workstations. We could have everyone run these tools remotely via RDP or SSH, but that’s a bit cumbersome and certainly a lot less convenient than having the tools built into the web application that is so easily accessible. Mix and Match So we need a way to build tools that are easily accessible via the web application but also don’t require the overhead of creating a user interface. This is where my idea comes into play: why not just build a command line interface into the web application? If it’s part of the web application we get all of the plumbing that comes along with that code, and we’re executing everything on the web servers which means we’ll have access to any external resources that we might need. Rather than having to incur the overhead of creating a brand new page for each tool that we want to build, we can create one new page that simply accepts a command in text form and executes it as a request on the web server. In this way, we can focus on writing the code to accomplish the task. If the tool ends up being heavily used, then (and only then) should we consider spending the time to build a better user experience around it. To be clear, I’m not trying to downplay the importance of building great user experiences into your system; we should all strive to provide the best UX possible to our end users. I’m only advocating this sort of bare-bones interface for internal consumption by the technical staff that builds and supports the software. This command line interface should be the “back end” to a highly polished and eye-pleasing public face. Implementation As I mentioned at the beginning of this post, this is an idea that I’ve had for awhile but have only recently started building out. I’ve outlined some general guidelines and design goals for this effort as follows: Text in, text out: In the interest of keeping things as simple as possible, I want this interface to be purely text-based. Users will submit commands as plain text, and the application will provide responses in plain text. Obviously this text will be “wrapped” within the context of HTTP requests and responses, but I don’t want to have to think about HTML or CSS when taking input from the user or displaying responses back to the user. Task-oriented code only: After building the initial “harness” for this interface, the only code that should need to be written to create a new internal tool should be code that is expressly needed to accomplish the task that the tool is intended to support. If we want to encourage and enable ourselves to build good tooling, we need to lower the barriers to entry as much as possible. Built-in documentation: One of the great things about most command line utilities is the ‘help’ switch that provides usage guidelines and details about the arguments that the utility accepts. Our web-based command line utility should allow us to build the documentation for these tools directly into the code of the tools themselves. I finally started trying to implement this idea when I heard about a fantastic open-source library called CLAP (Command Line Auto Parser) that lets me meet the guidelines outlined above. CLAP lets you define classes with public methods that can be easily invoked from the command line. 
    Here's a quick example of the code that would be needed to create a new tool to do something within your system:
    public class CustomerTools
    {
        [Verb]
        public void UpdateName(int customerId, string firstName, string lastName)
        {
            // invoke internal services/domain objects/whatever to perform the update
        }
    }
    This is just a regular class with a single public method (though you could have as many methods as you want). The method is decorated with the 'Verb' attribute that tells the CLAP library that it is a method that can be invoked from the command line. Here is how you would invoke that code:
    Parser.Run(args, new CustomerTools());
    Note that 'args' is just a string[] that would normally be passed in from the static Main method of a Console application. Also, CLAP allows you to pass in multiple classes that define [Verb] methods, so you can opt to organize the code that CLAP will invoke in any way that you like. You can invoke this code from a command line application like this:
    SomeExe UpdateName -customerId:123 -firstName:Jesse -lastName:Taber
    'SomeExe' in this example just represents the name of the .exe that would be created from our Console application. CLAP then interprets the arguments passed in to find the method that should be invoked and automatically parses out the parameters that need to be passed in. After a quick spike, I've found that invoking the 'Parser' class can be done from within the context of a web application just as easily as it can from within the 'Main' method entry point of a Console application. There are, however, a few sticking points that I'm working around:
    Splitting arguments into the 'args' array like the command line: When you invoke a standard .NET console application you get the arguments that were passed in by the user split into a handy array (this is the 'args' parameter referenced above). Generally speaking they get split by whitespace, but it's also clever enough to handle things like ignoring whitespace in a phrase that is surrounded by quotes. We'll need to re-create this logic within our web application so that we can give the 'args' value to CLAP just like a console application would (a rough sketch of one possible approach follows below).
    Providing a response to the user: If you were writing a console application, you might just use Console.WriteLine to provide responses to the user as to the progress and eventual outcome of the command. We can't use Console.WriteLine within a web application, so I'll need to find another way to provide feedback to the user. Preferably this approach would allow me to use the same handler classes from both a Console application and a web application, so some kind of strategy pattern will likely emerge from this effort.
    Submitting files: Often an internal tool needs to support doing some kind of operation in bulk, and the easiest way to submit the data needed to support the bulk operation is in a file. Getting the file uploaded and available to the CLAP handler classes will take a little bit of effort.
    Mimicking the console experience: This isn't really a requirement so much as a "nice to have". To start out, the command-line interface in the web application will probably be a single 'textarea' control with a button to submit the contents to a handler that will pass it along to CLAP to be parsed and run. I think it would be interesting to use some javascript and CSS trickery to change that page into something with more of a "shell" interface look and feel.
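    As a rough illustration of the first sticking point above, here is a minimal sketch (my own, not part of CLAP or the original post; the class and method names are hypothetical) of a splitter that breaks a raw command string into an args array, treating double-quoted phrases as single arguments:
    using System.Collections.Generic;
    using System.Text;

    public static class CommandLineSplitter
    {
        // Splits a raw command string the way a console shell would: whitespace
        // separates arguments, but anything inside double quotes is kept together
        // as a single argument (the quote characters themselves are stripped).
        public static string[] Split(string commandText)
        {
            var args = new List<string>();
            var current = new StringBuilder();
            bool inQuotes = false;

            foreach (char c in commandText ?? string.Empty)
            {
                if (c == '"')
                {
                    inQuotes = !inQuotes;
                }
                else if (char.IsWhiteSpace(c) && !inQuotes)
                {
                    if (current.Length > 0)
                    {
                        args.Add(current.ToString());
                        current.Clear();
                    }
                }
                else
                {
                    current.Append(c);
                }
            }

            if (current.Length > 0)
            {
                args.Add(current.ToString());
            }

            return args.ToArray();
        }
    }
    For example, splitting UpdateName -customerId:123 -lastName:"Van Der Berg" yields three elements, with the quoted surname kept intact as a single argument.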
    I'll be blogging more about this effort in the future and will include some code snippets (or maybe even a full-blown example app) as I progress. I also think that I'll probably end up either submitting some pull requests to the CLAP project or possibly forking/wrapping it into a more web-friendly package and open sourcing that.
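    To show how the pieces might fit together, here is a minimal sketch of the "one page that accepts a command" idea as an ASP.NET MVC controller action. Everything here is illustrative rather than the post's actual implementation: the controller and action names are hypothetical, CommandLineSplitter is the sketch shown earlier, and CustomerTools is assumed to have been given a constructor that accepts a TextWriter so its [Verb] methods can report progress without Console.WriteLine.
    using System.IO;
    using System.Web.Mvc;
    using CLAP;

    public class CommandConsoleController : Controller
    {
        // Accepts a raw command string (e.g. "UpdateName -customerId:123 ...") posted
        // from a plain textarea, hands it to CLAP, and returns the output as plain text.
        [HttpPost]
        public ActionResult Run(string commandText)
        {
            var output = new StringWriter();
            string[] args = CommandLineSplitter.Split(commandText);

            // CLAP locates the matching [Verb] method and parses the parameters;
            // the tool class writes its progress/results to the supplied TextWriter.
            Parser.Run(args, new CustomerTools(output));

            return Content(output.ToString(), "text/plain");
        }
    }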

    Read the article

  • CodePlex Daily Summary for Wednesday, October 10, 2012

    CodePlex Daily Summary for Wednesday, October 10, 2012Popular ReleasesA C# 4.0 Push Notification Helper Library for WP7.0 & WP7.1: Easy Notification 1.0.0: New Feature - Send Tile, Toast & Raw Notifications to Windows Phone Devices. New Feature - Supports Windows Phone 7.0 & Windows Phone 7.1. New Feature - Validation rules are in-built for Push Notification Messages. New Feature - Strongly typed interfaces. New Feature - Supports synchronous & asynchronous methods to send notifications. New Feature - Supports authenticated notifications using X509 Certificates. New Feature - Supports Callback Registration Requests. New Feature - S...D3 Loot Tracker: 1.5.4: Fixed a bug where the server ip was not logged properly in the stats file.Captcha MVC: Captcha Mvc 2.1.2: v 2.1.2: Fixed problem with serialization. Made all classes from a namespace Jetbrains.Annotaions as the internal. Added autocomplete attribute and autocorrect attribute for captcha input element. Minor changes. v 2.1.1: Fixed problem with serialization. Minor changes. v 2.1: Added support for storing captcha in the session or cookie. See the updated example. Updated example. Minor changes. v 2.0.1: Added support for a partial captcha. Now you can easily customize the layout, s...DotNetNuke® Community Edition CMS: 06.02.04: Major Highlights Fixed issue where the module printing function was only visible to administrators Fixed issue where pane level skinning was being assigned to a default container for any content pane Fixed issue when using password aging and FB / Google authentication Fixed issue that was causing the DateEditControl to not load the assigned value Fixed issue that stopped additional profile properties to be displayed in the member directory after modifying the template Fixed er...Online Image Editor: Online Image Editor v1.1: If you have problems with this tool, please email me at amisouvikdas@gmail.com or You can also participate this project to improve this.Advanced DataGridView with Excel-like auto filter: 1.0.0.0: ?????? ??????Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.69: Fix for issue #18766: build task should not build the output if it's newer than all the input files. Fix for Issue #18764: build taks -res switch not working. update build task to concatenate input source and then minify, rather than minify and then concatenate. include resource string-replacement root name in the assumed globals list. Stop replacing new Date().getTime() with +new Date -- the latter is smaller, but turns out it executes up to 45% slower. add CSS support for single-...WinRT XAML Toolkit: WinRT XAML Toolkit - 1.3.3: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features Attachable Behaviors AwaitableUI extensions Controls Converters Debugging helpers Extension methods Imaging helpers IO helpers VisualTree helpers Samples Recent changes NOTE:...DevLib: 69721 binary dll: 69721 binary dllVidCoder: 1.4.4 Beta: Fixed inability to create new presets with "Save As".MCEBuddy 2.x: MCEBuddy 2.3.2: Changelog for 2.3.2 (32bit and 64bit) 1. Added support for generating XBMC XML NFO files for files in the conversion queue (store it along with the source video with source video name.nfo). Right click on the file in queue and select generate XML 2. 
UI bugifx, start and end trim box locations interchanged 3. Added support for removing commercials from non DVRMS/WTV files (MP4, AVI etc) 4. Now checking for Firewall port status before enabling (might help with some firewall problems) 5. User In...DotNetNuke Boards: 01.00.00: This beta release represents the end of training session 1, based on jQuery and Knockout integration in DotNetNuke 6.2.3. It is designed to allow a single user or multiple users to add/edit/delete cards but no cards can be assigned to anyone at this point. This release is not intended for production use!Sandcastle Help File Builder: SHFB v1.9.5.0 with Visual Studio Package: General InformationIMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This release supports the Sandcastle October 2012 Release (v2.7.1.0). It includes full support for generating, installing, and removing MS Help Viewer files. This new release suppor...ClosedXML - The easy way to OpenXML: ClosedXML 0.68.0: ClosedXML now resolves formulas! Yes it finally happened. If you call cell.Value and it has a formula the library will try to evaluate the formula and give you the result. For example: var wb = new XLWorkbook(); var ws = wb.AddWorksheet("Sheet1"); ws.Cell("A1").SetValue(1).CellBelow().SetValue(1); ws.Cell("B1").SetValue(1).CellBelow().SetValue(1); ws.Cell("C1").FormulaA1 = "\"The total value is: \" & SUM(A1:B2)"; var...Json.NET: Json.NET 4.5 Release 10: New feature - Added Portable build to NuGet package New feature - Added GetValue and TryGetValue with StringComparison to JObject Change - Improved duplicate object reference id error message Fix - Fixed error when comparing empty JObjects Fix - Fixed SecAnnotate warnings Fix - Fixed error when comparing DateTime JValue with a DateTimeOffset JValue Fix - Fixed serializer sometimes not using DateParseHandling setting Fix - Fixed error in JsonWriter.WriteToken when writing a DateT...Readable Passphrase Generator: KeePass Plugin 0.7.2: Changes: Tested against KeePass 2.20.1 Tested under Ubuntu 12.10 (and KeePass 2.20) Added GenerateAsUtf8 method returning the encrypted passphrase as a UTF8 byte array.JSLint for Visual Studio 2010: 1.4.2: 1.4.2patterns & practices: Prism: Prism for .NET 4.5: This is a release does not include any functionality changes over Prism 4.1 Desktop. These assemblies target .NET 4.5. These assemblies also were compiled against updated dependencies: Unity 3.0 and Common Service Locator (Portable Class Library).Snoop, the WPF Spy Utility: Snoop 2.8.0: Snoop 2.8.0Announcing Snoop 2.8.0! It's been exactly six months since the last release, and this one has a bunch of goodies in it. In particular, there is now a PowerShell scripting tab, compliments of Bailey Ling. With this tab, the possibilities are limitless. It basically lets you automate/script the application that you are Snooping. Bailey has a couple blog posts (one and two) on his tab already, and I am sure more is to come. Please note that if you do not have PowerShell installed, y....NET Micro Framework: .NET MF 4.3 (Beta): This is the 4.3 Beta version of the .NET Micro Framework. 
Feature List for v4.3 Support for Visual Studio 2012 (including the Windows Desktop Express version) All v4.2 QFEs features and bug fixes (PWM enhancements, lwIP and network driver reliability improvements, Analog Output, WinUSB and latest GCC support) Improved diagnostic information for deployment Decreased boot time Bug fixes Work Item 1736 - Create link for MFDeploy under start menu Work Item 1504 - Customizing lwIP o...New Projectsadcc2: adccAP.Framework: This a asp.net mvc3 of test web site .EFCodeFirst: Projeto criado para experimentos e estudos com o Entity FrameworkEXPS-RAT: HelloFiskalizacija za developere: Projekt je namijenjen razvojnim inženjerima, programerima, developerima i svima ostalima koji se bave razvojem programskih rješenja za fiskalne blagajne.Galleriffic App for SharePoint 2013: Galleriffic App is an app part for SharePoint 2013 to display a picture gallery with cool JQuery animations and effects. Hack.net: Hack.net is a Roguelike clone similar to NetHack or Roguelike.Inno Setup For .NET Application: This is a simple inno setup for .net developerJava Special Functions Library: Java Special Functions Library implemented as a public class part of my larger mathematical package.LocalFileOpener: The LocalFileOpener plugin opens an intent to help you easily open local files on the device under installed applications.localizr - .NET Collaborative Translation & Localization: Localizr is a platform for collaborative localization and translation of .NET projects.Metro UI CSS: Metro UI CSS a set of styles to create a site with an interface similar to Windows 8 Metro UI. MoskieDotNet: A sandbox for me learn some new technologies.Mouse Automation: Allows a user to automate repetitive clicking within EverQuestMS SQL DB Schema Updater: Simple tool for updating MS SQL database schema based on "ideal" database model.n8design Tools: Tools any Source Code by Stefan Bauer.NasosCS: ???? ?? ??????Pokemonochan: ALright this is a Pokemon MMo bound to come out someday!PotatoSoft: ?????????????!Power Mirrors: Leverage the usefulness of SQL Server mirrors using PowerShell and SMO. Create mirrors from scratch, assign witnesses and test failovers, all from PowerShell! Project13251010: sdfdReal Time Data Bus: A collection of real time data bus implementations.Ricky Tailor's ASP.NET Web 2.0 Project: 7COM0203 Web Scripting And Content Creation Task 1. SampleTFS2: Sample Projectslotcarduino - An Arduino based slotcar timing project: Slotcar timing project based on Arduino Uno/Mega 2560. Standalone system with serial enabled graphic display.System.Data.Entity.Repository: Entity framework code first framework wrapper with support for generic repository pattern, N-Tier application and Transaction Management for rapid developmentTest Marron: This project is a test to explore codeplexTmib Video Downloader: A small youtube video downloader. Created in C#Web Scripting & Content Creation: Fhame Rashid MSc Software Engineering Module Log: Web-Scripting and Content Creationwebbase: webbaseWebScriptingandContentCreation: This project is for the MSc module Web Scripting and Content Creation.Zuordnung: Zuordnung

    Read the article

  • New release of Microsoft All-In-One Code Framework is available for download - March 2011

    - by Jialiang
    A new release of Microsoft All-In-One Code Framework is available on March 8th. Download address: http://1code.codeplex.com/releases/view/62267#DownloadId=215627 You can download individual code samples or browse code samples grouped by technology in the updated code sample index. If it's the first time that you hear about Microsoft All-In-One Code Framework, please read this Microsoft News Center article http://www.microsoft.com/presspass/features/2011/jan11/01-13codeframework.mspx, or watch the introduction video on YouTube http://www.youtube.com/watch?v=cO5Li3APU58, or read the introduction on our homepage http://1code.codeplex.com/.
    --------------
    New Silverlight code samples
    CSSLTreeViewCRUDDragDrop Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215808
    The code sample was created by Amit Dey. It demonstrates a custom TreeView with added functionalities of CRUD (Create, Read, Update, Delete) and drag-and-drop operations. A Silverlight TreeView control with CRUD and drag & drop is a frequently asked programming question in Silverlight forums. Many customers also requested this code sample in our code sample request service. We hope that this sample can reduce developers' efforts in handling this typical programming scenario. The following blog article introduces the sample in detail: http://blogs.msdn.com/b/codefx/archive/2011/02/15/silverlight-treeview-control-with-crud-and-drag-amp-drop.aspx.
    CSSL4FileDragDrop and VBSL4FileDragDrop Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215809 http://1code.codeplex.com/releases/view/62253#DownloadId=215810
    The code sample demonstrates the new drag-and-drop feature of Silverlight 4 to implement dragging pictures from the local file system to a Silverlight application.
    Sometimes we want to change the SiteMapPath control's titles and paths according to Query String values. And sometimes we want to create the SiteMapPath dynamically. This code sample shows how to achieve these goals by handling the SiteMap.SiteMapResolve event.
    CSASPNETEncryptAndDecryptConfiguration, VBASPNETEncryptAndDecryptConfiguration Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215027 http://1code.codeplex.com/releases/view/62253#DownloadId=215106
    In this sample, we encrypt and decrypt some sensitive information in the config file of a web application by using RSA asymmetric encryption. This project contains two snippets. The first one demonstrates how to use RSACryptoServiceProvider to generate a public key and the corresponding private key and then encrypt/decrypt a string value on a page. The second part shows how to use the RSA configuration provider to encrypt and decrypt a configuration section in the web.config of a web application. connectionStrings section in plain text: Encrypted connectionString: Note that if you store sensitive data in any of the following configuration sections, we cannot encrypt it by using a protected configuration provider: <processModel> <runtime> <mscorlib> <startup> <system.runtime.remoting> <configProtectedData> <satelliteassemblies> <cryptographySettings> <cryptoNameMapping>
    CSASPNETFileUploadStatus Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215028
    I believe ASP.NET programmers will like this sample, because in many cases we need customers to know the current status of uploading files, including the upload speed and completion percentage and so on. Under normal circumstances, we need to use COM components to accomplish this function, such as Flash, Silverlight, etc.
    The uploading data can be retrieved in two places: the client side and the server side. On the client, for safety reasons, the file upload status information cannot be obtained from JavaScript or server-side code, so we need a COM component, like Flash or Silverlight, to accomplish this. I do not like this approach because the customer needs to install these components, and we also need to learn another programming framework. On the server side, we can get the information through coding, but the key question is how to tell the client the results. In this case, we will combine a custom HTTPModule and AJAX technology to illustrate how to analyze the HTTP protocol, how to break up the file request packets, how to customize the location of the server-side file caching, how to return the file uploading status back to the client, and so on.
    CSASPNETHighlightCodeInPage, VBASPNETHighlightCodeInPage Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215029 http://1code.codeplex.com/releases/view/62253#DownloadId=215108
    This sample imitates a system that needs to display highlighted code in an ASP.NET page. As a matter of fact, sometimes we input code like C# or HTML in a web page and we need this code to be highlighted for a better reading experience. It is convenient for us to keep the code in mind if it is highlighted. So in this case, the sample shows how to highlight the code in an ASP.NET page. It is not difficult to highlight the code in a web page by using the String.Replace method directly. This method can return a new string in which all occurrences of a specified string in the current instance are replaced with another specified string. However, it may not be a good idea, because it's not extremely fast; in fact, it's pretty slow. In addition, it is hard to highlight multiple keywords by using the String.Replace method directly. Sometimes we need to copy source code from Visual Studio to a web page; for readability purposes, highlighting the code is important, and setting the different types of keywords to different colors in a web page by using the String.Replace method directly is not practical. To handle this issue, we use a hashtable variable to store the different languages of code and their related regular expressions with matching options. Furthermore, we define the CSS styles which are used to highlight the code in a web page. The sample project can automatically add the style object to the matching string of code. A step-by-step guide illustrating how to highlight the code in an ASP.NET page:
    1. The HighlightCodePage.aspx page: choose a type of language in the dropdownlist control and paste the code in the textbox control, then click the HighLight button.
    2. Display the highlighted code in an ASP.NET page: after the user clicks the HighLight button, the highlighted code will be displayed at the right side of the page.
    CSASPNETPreventMultipleWindows Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215032
    This sample demonstrates a step-by-step guide illustrating how to detect and prevent multiple window or tab usage in web applications. The sample imitates a system that needs to prevent multiple windows or tabs to solve some problems like session sharing, duplicated logins, data concurrency, etc. In fact, there are many methods of achieving this goal. Here we give a solution using JavaScript; the sample shows how to use the window.name property to check for correct links and redirect other requests to invalid pages.
    This code sample uses two user controls to make a distinction between the base page and the target page; the user only needs to drag the appropriate control onto each web form page. Because the user need not write repetitive code in every page, this keeps the coding work light and makes the code convenient to modify.
    JSVirtualKeyboard Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215093
    This article describes an All-In-One framework sample that demonstrates a step-by-step guide illustrating how to build a virtual keyboard in your HTML page. Sometimes we may need to offer a virtual keyboard to let users input something without their real keyboards. This scenario often occurs when users will enter their password to get access to our sites and we want to protect the password from some kinds of back-door software, a key-logger for example, and a virtual keyboard on the page will be a good choice here. To create a virtual keyboard, we first need to add some buttons to the page. And when users click on a certain button, the JavaScript function handling the onclick event will input the appropriate character into the textbox. That is the simple logic of this feature. However, if we indeed want a virtual keyboard to substitute for the real keyboard completely, we will need more advanced logic to handle keys like Caps-Lock and Shift, etc. That will be complex work to achieve.
    CSASPNETDataListImageGallery Download: http://1code.codeplex.com/releases/view/62261#DownloadId=215267
    This code sample demonstrates how to create an Image Gallery application by using the DataList control in ASP.NET. You may find the Image Gallery is widely used in many social networking sites, personal websites and e-business websites. For example, you may use the Image Gallery to show a library of personally uploaded images on a personal website. A slideshow is also a popular tool to display images on websites. This code sample demonstrates how to use the DataList and ImageButton controls in ASP.NET to create an Image Gallery with image navigation. You can click on a thumbnail image in the DataList control to display a larger version of the image on the page. This sample code reads the image paths from a certain directory into a FileInfo array. Then, the FileInfo array is used to populate a custom DataTable object which is bound to the DataList control. This code sample also implements a custom paging system that allows five images to be displayed horizontally on one page. The following link buttons are used to implement the custom paging system:
    • First
    • Previous
    • Next
    • Last
    Note: We recommend that you use this method to load no more than five images at a time. You can also set the SelectedIndex property for the DataList control to limit the number of thumbnail images that can be selected. To indicate which image is selected, you can set the SelectedStyle property for the DataList control.
    VBASPNETSearchEngine Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215112
    This sample shows how to implement a simple search engine in an ASP.NET web site. It uses a LIKE condition in a SQL statement to search the database. Then it highlights keywords in the search results by using regular expressions and JavaScript.
    New Windows General code samples
    CSCheckEXEType, VBCheckEXEType Downloads: http://1code.codeplex.com/releases/view/62253#DownloadId=215045 http://1code.codeplex.com/releases/view/62253#DownloadId=215120
    The sample demonstrates how to check an executable file type.
    For a given executable file, we can find out:
    1. whether it is a console application;
    2. whether it is a .NET application;
    3. whether it is a 32-bit native application;
    4. the full display name of a .NET application, e.g. System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=MSIL.
    New Internet Explorer code samples
    CSIEExplorerBar, VBIEExplorerBar Downloads: http://1code.codeplex.com/releases/view/62253#DownloadId=215060 http://1code.codeplex.com/releases/view/62253#DownloadId=215133
    The sample demonstrates how to create and deploy an IE Explorer Bar which lists all the images in a web page.
    CSBrowserHelperObject, VBBrowserHelperObject Downloads: http://1code.codeplex.com/releases/view/62253#DownloadId=215044 http://1code.codeplex.com/releases/view/62253#DownloadId=215119
    The sample demonstrates how to create and deploy a Browser Helper Object; the BHO in this sample is used to disable the context menu in IE.
    New Windows Workflow Foundation code samples
    CSWF4ActivitiesCorrelation Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215085
    Consider that there are two such workflow instances:
          start                     start
            |                         |
     Receive activity          Receive activity
            |                         |
     Receive2 activity         Receive2 activity
            |                         |
    A WCF request comes in to call the second Receive2 activity. Which one should take care of the request? The answer is Correlation. This sample will show you how to correlate two workflow services to work together.
    --------------
    New ASP.NET code samples
    CSASPNETBreadcrumbWithQueryString Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215022
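    To make the protected-configuration approach described above for CSASPNETEncryptAndDecryptConfiguration / VBASPNETEncryptAndDecryptConfiguration more concrete, here is a minimal sketch (not taken from the sample itself; the class and method names are hypothetical) that uses the standard System.Configuration / System.Web.Configuration API to encrypt and decrypt the connectionStrings section with the built-in RSA provider:
    using System.Configuration;
    using System.Web.Configuration;

    public static class ConfigProtection
    {
        // Encrypts the connectionStrings section of the given web application's
        // web.config using the built-in RSA protected configuration provider.
        public static void ProtectConnectionStrings(string appVirtualPath)
        {
            Configuration config = WebConfigurationManager.OpenWebConfiguration(appVirtualPath);
            ConfigurationSection section = config.GetSection("connectionStrings");

            if (section != null && !section.SectionInformation.IsProtected)
            {
                section.SectionInformation.ProtectSection("RsaProtectedConfigurationProvider");
                config.Save();
            }
        }

        // Reverses the operation, writing the section back out in plain text.
        public static void UnprotectConnectionStrings(string appVirtualPath)
        {
            Configuration config = WebConfigurationManager.OpenWebConfiguration(appVirtualPath);
            ConfigurationSection section = config.GetSection("connectionStrings");

            if (section != null && section.SectionInformation.IsProtected)
            {
                section.SectionInformation.UnprotectSection();
                config.Save();
            }
        }
    }
    Calling ProtectConnectionStrings("~") from within the application (or running the equivalent aspnet_regiis -pe command) leaves the section readable through ConfigurationManager.ConnectionStrings at runtime while storing it encrypted on disk.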

    Read the article

  • Day 6 - Game Menuing Woes and Future Screen Sneak Peeks

    - by dapostolov
    So, after my last post on Day 5 I dabbled with my game class design. I took the approach where each game object is tightly coupled with a graphic. The good news is I got the menu working, but not without some hard knocks and game growing pains. I'll explain later, but for now...here is a class diagram of my first stab at my class structure and some code...
    Ok, there are a few mistakes, however, I'm going to leave it as is for now... As you can see I created an initial abstract base class called GameSprite. This class, when inherited, will provide a simple virtual default draw method:
    public virtual void DrawSprite(SpriteBatch spriteBatch)
    {
        spriteBatch.Draw(Sprite, Position, Color.White);
    }
    The benefit of coding it this way is that I can inherit the class and utilise the method in the screen draw method...So regardless of what the graphic object type is, it will now have the ability to render a static image on the screen. Example:
    public class MyStaticTreasureChest : GameSprite {}
    If you remember the window draw method from Day 3's post, we could use the above code as follows...
    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);
        spriteBatch.Begin(SpriteBlendMode.AlphaBlend);
        foreach (var gameSprite in ListOfGameObjects)
        {
            gameSprite.DrawSprite(spriteBatch);
        }
        spriteBatch.End();
        base.Draw(gameTime);
    }
    I have to admit the GameSprite object is pretty plain, as is its DrawSprite method... But ... we now have the ability to render 3 static menu items on the screen ... BORING! I want those menu items to do something exciting, which of course involves animation... So, let's have a peek at AnimatedGameSprite in the above game diagram. The idea with the AnimatedGameSprite is that it has an image to animate...such as ... characters, fireballs, and... menus! So after inheriting from the GameSprite class, I added a few more options such as UpdateSprite...
    public virtual void UpdateSprite(float elapsed)
    {
        _totalElapsed += elapsed;
        if (_totalElapsed > _timePerFrame)
        {
            _frame++;
            _frame = _frame % _framecount;
            _totalElapsed -= _timePerFrame;
        }
    }
    And an overridden DrawSprite...
    public override void DrawSprite(SpriteBatch spriteBatch)
    {
        int FrameWidth = Sprite.Width / _framecount;
        Rectangle sourcerect = new Rectangle(FrameWidth * _frame, 0, FrameWidth, Sprite.Height);
        spriteBatch.Draw(Sprite, Position, sourcerect, Color.White, _rotation, _origin, _scale, SpriteEffects.None, _depth);
    }
    With these two methods...I can animate an image; all I had to do was add a few more lines to the screen's Update method (from Day 3), like such:
    float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
    foreach (var item in ListOfAnimatedGameObjects)
    {
        item.UpdateSprite(elapsed);
    }
    And voila! My images begin to animate in one spot on the screen... Hmm, but how do I interact with the menu items using a mouse...well the mouse cursor was easy enough...
    this.IsMouseVisible = true;
    But, to have it "interact" with an image was a bit more tricky...I had to perform collision detection!
    mouseStateCurrent = Mouse.GetState();
    var uiEnabledSprites = (from s in menuItems
                            where s.IsEnabled
                            select s).ToList();
    foreach (var item in uiEnabledSprites)
    {
        var r = new Rectangle((int)item.Position.X, (int)item.Position.Y, item.Sprite.Width, item.Sprite.Height);
        item.MenuState = MenuState.Normal;
        if (r.Intersects(new Rectangle(mouseStateCurrent.X, mouseStateCurrent.Y, 0, 0)))
        {
            item.MenuState = MenuState.Hover;
            if (mouseStatePrevious.LeftButton == ButtonState.Pressed
                && mouseStateCurrent.LeftButton == ButtonState.Released)
            {
                item.MenuState = MenuState.Pressed;
            }
        }
    }
    mouseStatePrevious = mouseStateCurrent;
    So, basically, what the code above is doing is iterating through all of my interactive objects, detecting a rectangle collision with the mouse, and having each object play the matching state animation (or static image).
    Lessons Learned, Time Burned...
    So, I think I did well to start, but after I hammered out my prototype...well...things got sloppy and I began to realise some design flaws... At the time:
    • I couldn't seem to figure out how to open another window, such as the character creation screen
    • Input was not event based and it was bugging me
    • My menu design relied heavily on mouse input and I couldn't use the keyboard.
    • Mouse input is tightly bound with graphic rendering / positioning, so its logic will have to be in each scene.
    • Menu animations would stop mid frame, then continue when the action occurred again. This is bad, because...what if I had a sword sliding on the screen? Then it would slide a quarter of the way, then stop due to another action, then render again mid-slide... it just looked sloppy.
    Menu, Solved!?
    To solve the above problems I did a little research and I found some great code in the XNA forums. The one worth mentioning was the GameStateManagementSample. With this sample, you can create a basic "text based" menu system which allows you to swap screens, popup screens, play the game, and quit....basic game state management... In my next post I'm going to delve a bit more into this code and adapt it with my code from this prototype. Text based menus just won't cut it for me, for now...however, I'm still going to stick with my animated menu item idea. A sneak peek using the Game State Management Sample...with no changes made...
    Cool Things to Mention:
    At work ... I tend to break out in random conversations every so often and I get talking about some of my challenges with this game (or some stupid observation about something... stupid). During one conversation I was discussing how I should animate my images; I explained that I knew I had to use the Update method provided, but I didn't know how (at the time) to render an image at an appropriate "pace" and how many frames to use, etc. I also got thinking that if a machine rendered my images faster / slower, that was surely going to f-up my animations. To which a friend, Sheldon, answered, surely the Draw method is like a camera taking a snapshot of a scene in time. Then it clicked...I understood the big picture of the game engine... After some research I discovered that the Draw method attempts to keep a framerate of 60 fps.
    From what I understand, the game engine will even leave out a few calls to the draw method if it begins to slow down. This is why we want to put our sprite updates in the update method. Then, using a game timer (provided by the engine), we want to render the scene based on real time passed, not framerate. So even if the engine renders at 20 fps, the animations will still animate at the same real-time speed! Which brings up another point. Why 60 fps? I'm speculating that Microsoft capped it because LCDs don't refresh faster than 60 fps? On another note, if the game engine knows it's falling behind in rendering...then surely we can harness this to speed up our games. Maybe I can find some flag which tells me if the game is lagging, and what the current framerate is, etc...(instead of coding it like I did last time). Sheldon suggested maybe I can render like WoW does, in prioritised layers...I think he's onto something, however I don't think I'll have that many graphics to worry about such a problem of graphic latency. We'll see.
    People to Mention:
    Well, as you are aware I hadn't posted in a couple of days and I was surprised to see a few emails and messenger queries about my game progress (and some concern as to why I stopped). I want to thank everyone for their kind words of support and put everyone at ease by stating that I do intend on completing this project. Granted I only have a few hours each night, but, I'll do it. Thank you to Garth for mailing in my next screen! That was a nice surprise! The Sneak Peek you've been waiting for... Garth has also volunteered to render me some wizard images. He was a bit shocked when I asked for them in 2D animated strips. He said I was going backward (and that I have really bad Game Development Lingo). But, I advised Garth that I will use 3D images later...for now...2D images. Garth also had some great game design ideas to add on. I advised him that I will save his ideas and include them in the future design document (for the 3D version?). Lastly, my best friend Alek is going to join me in developing this game. This was a project we started eons ago but never completed because of our careers. Now, priorities change and we have some spare time on our hands. Let's see what trouble Alek and I can get into! Tonight I'll be uploading my prototypes and base game to source control for both of us to work off of. D.
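    As a side note to the speculation above about a "lagging" flag: XNA's GameTime does expose an IsRunningSlowly property, and ElapsedGameTime is the timer the post uses for real-time animation. Here is a minimal sketch (reusing ListOfAnimatedGameObjects and UpdateSprite from the post; the override is assumed to live in the same Game subclass) of how an Update method might use both:
    protected override void Update(GameTime gameTime)
    {
        // Advance animations by wall-clock time so they play at the same speed
        // regardless of the actual frame rate.
        float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
        foreach (var item in ListOfAnimatedGameObjects)
        {
            item.UpdateSprite(elapsed);
        }

        // XNA sets IsRunningSlowly when Update is taking so long that Draw calls
        // are being skipped; a game could use it to shed non-essential work.
        if (gameTime.IsRunningSlowly)
        {
            // e.g. skip particle effects or lower animation detail for a few frames.
        }

        base.Update(gameTime);
    }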

    Read the article

  • Taming Hopping Windows

    - by Roman Schindlauer
    At first glance, hopping windows seem fairly innocuous and obvious. They organize events into windows with a simple periodic definition: the windows have some duration d (e.g. a window covers 5 second time intervals), an interval or period p (e.g. a new window starts every 2 seconds) and an alignment a (e.g. one of those windows starts at 12:00 PM on March 15, 2012 UTC). var wins = xs     .HoppingWindow(TimeSpan.FromSeconds(5),                    TimeSpan.FromSeconds(2),                    new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc)); Logically, there is a window with start time a + np and end time a + np + d for every integer n. That’s a lot of windows. So why doesn’t the following query (always) blow up? var query = wins.Select(win => win.Count()); A few users have asked why StreamInsight doesn’t produce output for empty windows. Primarily it’s because there is an infinite number of empty windows! (Actually, StreamInsight uses DateTimeOffset.MaxValue to approximate “the end of time” and DateTimeOffset.MinValue to approximate “the beginning of time”, so the number of windows is lower in practice.) That was the good news. Now the bad news. Events also have duration. Consider the following simple input: var xs = this.Application                 .DefineEnumerable(() => new[]                     { EdgeEvent.CreateStart(DateTimeOffset.UtcNow, 0) })                 .ToStreamable(AdvanceTimeSettings.IncreasingStartTime); Because the event has no explicit end edge, it lasts until the end of time. So there are lots of non-empty windows if we apply a hopping window to that single event! For this reason, we need to be careful with hopping window queries in StreamInsight. Or we can switch to a custom implementation of hopping windows that doesn’t suffer from this shortcoming. The alternate window implementation produces output only when the input changes. We start by breaking up the timeline into non-overlapping intervals assigned to each window. In figure 1, six hopping windows (“Windows”) are assigned to six intervals (“Assignments”) in the timeline. Next we take input events (“Events”) and alter their lifetimes (“Altered Events”) so that they cover the intervals of the windows they intersect. In figure 1, you can see that the first event e1 intersects windows w1 and w2 so it is adjusted to cover assignments a1 and a2. Finally, we can use snapshot windows (“Snapshots”) to produce output for the hopping windows. Notice however that instead of having six windows generating output, we have only four. The first and second snapshots correspond to the first and second hopping windows. The remaining snapshots however cover two hopping windows each! While in this example we saved only two events, the savings can be more significant when the ratio of event duration to window duration is higher. Figure 1: Timeline The implementation of this strategy is straightforward. We need to set the start times of events to the start time of the interval assigned to the earliest window including the start time. Similarly, we need to modify the end times of events to the end time of the interval assigned to the latest window including the end time. The following snap-to-boundary function that rounds a timestamp value t down to the nearest value t' <= t such that t' is a + np for some integer n will be useful. For convenience, we will represent both DateTime and TimeSpan values using long ticks: static long SnapToBoundary(long t, long a, long p) {     return t - ((t - a) % p) - (t > a ? 
0L : p); } How do we find the earliest window including the start time for an event? It’s the window following the last window that does not include the start time assuming that there are no gaps in the windows (i.e. duration < interval), and limitation of this solution. To find the end time of that antecedent window, we need to know the alignment of window ends: long e = a + (d % p); Using the window end alignment, we are finally ready to describe the start time selector: static long AdjustStartTime(long t, long e, long p) {     return SnapToBoundary(t, e, p) + p; } To find the latest window including the end time for an event, we look for the last window start time (non-inclusive): public static long AdjustEndTime(long t, long a, long d, long p) {     return SnapToBoundary(t - 1, a, p) + p + d; } Bringing it together, we can define the translation from events to ‘altered events’ as in Figure 1: public static IQStreamable<T> SnapToWindowIntervals<T>(IQStreamable<T> source, TimeSpan duration, TimeSpan interval, DateTime alignment) {     if (source == null) throw new ArgumentNullException("source");     // reason about DateTime and TimeSpan in ticks     long d = Math.Min(DateTime.MaxValue.Ticks, duration.Ticks);     long p = Math.Min(DateTime.MaxValue.Ticks, Math.Abs(interval.Ticks));     // set alignment to earliest possible window     var a = alignment.ToUniversalTime().Ticks % p;     // verify constraints of this solution     if (d <= 0L) { throw new ArgumentOutOfRangeException("duration"); }     if (p == 0L || p > d) { throw new ArgumentOutOfRangeException("interval"); }     // find the alignment of window ends     long e = a + (d % p);     return source.AlterEventLifetime(         evt => ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p)),         evt => ToDateTime(AdjustEndTime(evt.EndTime.ToUniversalTime().Ticks, a, d, p)) -             ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p))); } public static DateTime ToDateTime(long ticks) {     // just snap to min or max value rather than under/overflowing     return ticks < DateTime.MinValue.Ticks         ? new DateTime(DateTime.MinValue.Ticks, DateTimeKind.Utc)         : ticks > DateTime.MaxValue.Ticks         ? new DateTime(DateTime.MaxValue.Ticks, DateTimeKind.Utc)         : new DateTime(ticks, DateTimeKind.Utc); } Finally, we can describe our custom hopping window operator: public static IQWindowedStreamable<T> HoppingWindow2<T>(     IQStreamable<T> source,     TimeSpan duration,     TimeSpan interval,     DateTime alignment) {     if (source == null) { throw new ArgumentNullException("source"); }     return SnapToWindowIntervals(source, duration, interval, alignment).SnapshotWindow(); } By switching from HoppingWindow to HoppingWindow2 in the following example, the query returns quickly rather than gobbling resources and ultimately failing! 
public void Main() {     var start = new DateTimeOffset(new DateTime(2012, 6, 28), TimeSpan.Zero);     var duration = TimeSpan.FromSeconds(5);     var interval = TimeSpan.FromSeconds(2);     var alignment = new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc);     var events = this.Application.DefineEnumerable(() => new[]     {         EdgeEvent.CreateStart(start.AddSeconds(0), "e0"),         EdgeEvent.CreateStart(start.AddSeconds(1), "e1"),         EdgeEvent.CreateEnd(start.AddSeconds(1), start.AddSeconds(2), "e1"),         EdgeEvent.CreateStart(start.AddSeconds(3), "e2"),         EdgeEvent.CreateStart(start.AddSeconds(9), "e3"),         EdgeEvent.CreateEnd(start.AddSeconds(3), start.AddSeconds(10), "e2"),         EdgeEvent.CreateEnd(start.AddSeconds(9), start.AddSeconds(10), "e3"),     }).ToStreamable(AdvanceTimeSettings.IncreasingStartTime);     var adjustedEvents = SnapToWindowIntervals(events, duration, interval, alignment);     var query = from win in HoppingWindow2(events, duration, interval, alignment)                 select win.Count();     DisplayResults(adjustedEvents, "Adjusted Events");     DisplayResults(query, "Query"); } As you can see, instead of producing a massive number of windows for the open start edge e0, a single window is emitted from 12:00:15 AM until the end of time: Adjusted Events StartTime EndTime Payload 6/28/2012 12:00:01 AM 12/31/9999 11:59:59 PM e0 6/28/2012 12:00:03 AM 6/28/2012 12:00:07 AM e1 6/28/2012 12:00:05 AM 6/28/2012 12:00:15 AM e2 6/28/2012 12:00:11 AM 6/28/2012 12:00:15 AM e3 Query StartTime EndTime Payload 6/28/2012 12:00:01 AM 6/28/2012 12:00:03 AM 1 6/28/2012 12:00:03 AM 6/28/2012 12:00:05 AM 2 6/28/2012 12:00:05 AM 6/28/2012 12:00:07 AM 3 6/28/2012 12:00:07 AM 6/28/2012 12:00:11 AM 2 6/28/2012 12:00:11 AM 6/28/2012 12:00:15 AM 3 6/28/2012 12:00:15 AM 12/31/9999 11:59:59 PM 1 Regards, The StreamInsight Team
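    As a small aside to the post above: to make the snap-to-boundary arithmetic concrete, here is a standalone check of the SnapToBoundary function from the post, with tick values reduced to tiny illustrative numbers (these specific inputs are not from the original article).
    using System;

    class SnapToBoundaryCheck
    {
        // Copied from the post: rounds t down to the nearest a + n*p that is <= t.
        static long SnapToBoundary(long t, long a, long p)
        {
            return t - ((t - a) % p) - (t > a ? 0L : p);
        }

        static void Main()
        {
            // With alignment a = 0 and period p = 2, t = 7 snaps down to 6 (= 0 + 3 * 2).
            Console.WriteLine(SnapToBoundary(7, 0, 2));   // 6
            // With alignment a = 1 and period p = 5, t = 13 snaps down to 11 (= 1 + 2 * 5).
            Console.WriteLine(SnapToBoundary(13, 1, 5));  // 11
            // A value already on a boundary stays put: t = 6, a = 0, p = 2 -> 6.
            Console.WriteLine(SnapToBoundary(6, 0, 2));   // 6
        }
    }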

    Read the article

  • Combined Likelihood Models

    - by Lukas Vermeer
    In a series of posts on this blog we have already described a flexible approach to recording events, a technique to create analytical models for reporting, a method that uses the same principles to generate extremely powerful facet based predictions and a waterfall strategy that can be used to blend multiple (possibly facet based) models for increased accuracy. This latest, and also last, addition to this sequence of increasing modeling complexity will illustrate an advanced approach to amalgamate models, taking us to a whole new level of predictive modeling and analytical insights; combination models predicting likelihoods using multiple child models. The method described here is far from trivial. We therefore would not recommend you apply these techniques in an initial implementation of Oracle Real-Time Decisions. In most cases, basic RTD models or the approaches described before will provide more than enough predictive accuracy and analytical insight. The following is intended as an example of how more advanced models could be constructed if implementation results warrant the increased implementation and design effort. Keep implemented statistics simple! Combining likelihoods Because facet based predictions are based on metadata attributes of the choices selected, it is possible to generate such predictions for more than one attribute of a choice. We can predict the likelihood of acceptance for a particular product based on the product category (e.g. ‘toys’), as well as based on the color of the product (e.g. ‘pink’). Of course, these two predictions may be completely different (the customer may well prefer toys, but dislike pink products) and we will have to somehow combine these two separate predictions to determine an overall likelihood of acceptance for the choice. Perhaps the simplest way to combine multiple predicted likelihoods into one is to calculate the average (or perhaps maximum or minimum) likelihood. However, this would completely forgo the fact that some facets may have a far more pronounced effect on the overall likelihood than others (e.g. customers may consider the product category more important than its color). We could opt for calculating some sort of weighted average, but this would require us to specify up front the relative importance of the different facets involved. This approach would also be unresponsive to changing consumer behavior in these preferences (e.g. product price bracket may become more important to consumers as a result of economic shifts). Preferably, we would want Oracle Real-Time Decisions to learn, act upon and tell us about, the correlations between the different facet models and the overall likelihood of acceptance. This additional level of predictive modeling, where a single supermodel (no pun intended) combines the output of several (facet based) models into a single prediction, is what we call a combined likelihood model. Facet Based Scores As an example, we have implemented three different facet based models (as described earlier) in a simple RTD inline service. These models will allow us to generate predictions for likelihood of acceptance for each product based on three different metadata fields: Category, Price Bracket and Product Color. We will use an Analytical Scores entity to store these different scores so we can easily pass them between different functions. 
A simple function, creatively named Compute Analytical Scores, will compute for each choice the different facet scores and return an Analytical Scores entity that is stored on the choice itself. For each score, a choice attribute referring to this entity is also added to be returned to the client to facilitate testing. One Offer To Predict Them All In order to combine the different facet based predictions into one single likelihood for each product, we will need a supermodel which can predict the likelihood of acceptance, based on the outcomes of the facet models. This model will not need to consider any of the attributes of the session, because they are already represented in the outcomes of the underlying facet models. For the same reason, the supermodel will not need to learn separately for each product, because the specific combination of facets for this product are also already represented in the output of the underlying models. In other words, instead of learning how session attributes influence acceptance of a particular product, we will learn how the outcomes of facet based models for a particular product influence acceptance at a higher level. We will therefore be using a single All Offers choice to represent all offers in our combined likelihood predictions. This choice has no attribute values configured, no scores and not a single eligibility rule; nor is it ever intended to be returned to a client. The All Offers choice is to be used exclusively by the Combined Likelihood Acceptance model to predict the likelihood of acceptance for all choices; based solely on the output of the facet based models defined earlier. The Switcheroo In Oracle Real-Time Decisions, models can only learn based on attributes stored on the session. Therefore, just before generating a combined prediction for a given choice, we will temporarily copy the facet based scores—stored on the choice earlier as an Analytical Scores entity—to the session. The code for the Predict Combined Likelihood Event function is outlined below. // set session attribute to contain facet based scores. // (this is the only input for the combined model) session().setAnalyticalScores(choice.getAnalyticalScores); // predict likelihood of acceptance for All Offers choice. CombinedLikelihoodChoice c = CombinedLikelihood.getChoice("AllOffers"); Double la = CombinedLikelihoodAcceptance.getChoiceEventLikelihoods(c, "Accepted"); // clear session attribute of facet based scores. session().setAnalyticalScores(null); // return likelihood. return la; This sleight of hand will allow the Combined Likelihood Acceptance model to predict the likelihood of acceptance for the All Offers choice using these choice specific scores. After the prediction is made, we will clear the Analytical Scores session attribute to ensure it does not pollute any of the other (facet) models. To guarantee our combined likelihood model will learn based on the facet based scores—and is not distracted by the other session attributes—we will configure the model to exclude any other inputs, save for the instance of the Analytical Scores session attribute, on the model attributes tab. Recording Events In order for the combined likelihood model to learn correctly, we must ensure that the Analytical Scores session attribute is set correctly at the moment RTD records any events related to a particular choice. We apply essentially the same switching technique as before in a Record Combined Likelihood Event function. 
// set session attribute to contain facet based scores // (this is the only input for the combined model). session().setAnalyticalScores(choice.getAnalyticalScores); // record input event against All Offers choice. CombinedLikelihood.getChoice("AllOffers").recordEvent(event); // force learn at this moment using the Internal Dock entry point. Application.getPredictor().learn(InternalLearn.modelArray, session(), session(), Application.currentTimeMillis()); // clear session attribute of facet based scores. session().setAnalyticalScores(null); In this example, Internal Learn is a special informant configured as the learn location for the combined likelihood model. The informant itself has no particular configuration and does nothing in itself; it is used only to force the model to learn at the exact instant we have set the Analytical Scores session attribute to the correct values. Reporting Results After running a few thousand (artificially skewed) simulated sessions on our ILS, the Decision Center reporting shows some interesting results. In this case, these results reflect perfectly the bias we ourselves had introduced in our tests. In practice, we would obviously use a wider range of customer attributes and expect to see some more unexpected outcomes. The facetted model for categories has clearly picked up on the that fact our simulated youngsters have little interest in purchasing the one red-hot vehicle our ILS had on offer. Also, it would seem that customer age is an excellent predictor for the acceptance of pink products. Looking at the key drivers for the All Offers choice we can see the relative importance of the different facets to the prediction of overall likelihood. The comparative importance of the category facet for overall prediction might, in part, be explained by the clear preference of younger customers for toys over other product types; as evident from the report on the predictiveness of customer age for offer category acceptance. Conclusion Oracle Real-Time Decisions' flexible decisioning framework allows for the construction of exceptionally elaborate prediction models that facilitate powerful targeting, but nonetheless provide insightful reporting. Although few customers will have a direct need for such a sophisticated solution architecture, it is encouraging to see that this lies within the realm of the possible with RTD; and this with limited configuration and customization required. There are obviously numerous other ways in which the predictive and reporting capabilities of Oracle Real-Time Decisions can be expanded upon to tailor to individual customers needs. We will not be able to elaborate on them all on this blog; and finding the right approach for any given problem is often more difficult than implementing the solution. Nevertheless, we hope that these last few posts have given you enough of an understanding of the power of the RTD framework and its models; so that you can take some of these ideas and improve upon your own strategy. As always, if you have any questions about the above—or any Oracle Real-Time Decisions design challenges you might face—please do not hesitate to contact us; via the comments below, social media or directly at Oracle. We are completely multi-channel and would be more than glad to help. :-)

    Read the article

  • CodePlex Daily Summary for Friday, August 10, 2012

    CodePlex Daily Summary for Friday, August 10, 2012Popular ReleasesMugen Injection: Mugen Injection 2.6: Fixed incorrect work with children when creating the MugenInjector. Added the ability to use the IActivator after create object using MethodBinding or CustomBinding. Added new fluent syntax for MethodBinding and CustomBinding. Added new features for working with the ModuleManagerComponent. Fixed some bugs.Windows Uninstaller: Windows Uninstaller v1.0: Delete Windows once You click on it. Your Anti Virus may think It is Virus because it delete Windows. Prepare a installation disc of any operating system before uninstall. ---Steps--- 1. Prepare a installation disc of any operating system. 2. Backup your files if needed. (Optional) 3. Run winuninstall.bat . 4. Your Computer will shut down, When Your Computer is shutting down, it is uninstalling Windows. 5. Re-Open your computer and quickly insert the installation disc and install the new ope...WinRT XAML Toolkit: WinRT XAML Toolkit - 1.1.2 (Win 8 RP) - Source: WinRT XAML Toolkit based on the Release Preview SDK. For compiled version use NuGet. From View/Other Windows/Package Manager Console enter: PM> Install-Package winrtxamltoolkit http://nuget.org/packages/winrtxamltoolkit Features Controls Converters Extensions IO helpers VisualTree helpers AsyncUI helpers New since 1.0.2 WatermarkTextBox control ImageButton control updates ImageToggleButton control WriteableBitmap extensions - darken, grayscale Fade in/out method and prope...WebDAV Test Application: v1.0.0.0: First releaseHTTP Server API Configuration: HttpSysManager 1.0: *Set Url ACL *Bind https endpoint to certificateFluentData -Micro ORM with a fluent API that makes it simple to query a database: FluentData version 2.3.0.0: - Added support for SQLite, PostgreSQL and IBM DB2. - Added new method, QueryDataTable which returns the query result as a datatable. - Fixed some issues. - Some refactoring. - Select builder with support for paging and improved support for auto mapping.jQuery Mobile C# ASP.NET: jquerymobile-18428.zip: Full source with AppHarbor, AppHarbor SQL, SQL Express, Windows Azure & SQL Azure hosting.eel Browser: eel 1.0.2 beta: Bug fixesJSON C# Class Generator: JSON CSharp Class Generator 1.3: Support for native JSON.net serializer/deserializer (POCO) New classes layout option: nested classes Better handling of secondary classesProgrammerTimer: ProgrammerTimer: app stays hidden in tray and periodically pops up (a small form in the bottom left corner) to notify that you need to rest while also displaying the time remaining to rest, once the resting time elapses it hides back to tray while app is hidden in tray you can double click it and it will pop up showing you working time remaining, double click again on the tray and it will hide clicking on the popup will hide it hovering the tray icon will show the state working/resting and time remainin...Axiom 3D Rendering Engine: v0.8.3376.12322: Changes Since v0.8.3102.12095 ===================================================================== Updated ndoc3 binaries to fix bug Added uninstall.ps1 to nuspec packages fixed revision component in version numbering Fixed sln referencing VS 11 Updated OpenTK Assemblies Added CultureInvarient to numeric parsing Added First Visual Studio 2010 Project Template (DirectX9) Updated SharpInputSystem Assemblies Backported fix for OpenGL Auto-created window not responding to input Fixed freeInterna...Captcha MVC: Captcha Mvc 2.1.1: v 2.1.1: Fixed problem with serialization. Minor changes. 
v 2.1: Added support for storing captcha in the session or cookie. See the updated example. Updated example. Minor changes. v 2.0.1: Added support for a partial captcha. Now you can easily customize the layout, see the updated example. Updated example. Minor changes. v 2.0: Completely rewritten the whole code. Now you can easily change or extend the current implementation of the captcha.(In the examples show how to add a...DotSpatial: DotSpatial 1.3: This is a Minor Release. See the changes in the issue tracker. Minimal -- includes DotSpatial core and essential extensions Extended -- includes debugging symbols and additional extensions Tutorials are available. Just want to run the software? End user (non-programmer) version available branded as MapWindow Want to add your own feature? Develop a plugin, using the template and contribute to the extension feed (you can also write extensions that you distribute in other ways). Components ...BugNET Issue Tracker: BugNET 1.0: This release brings performance enhancements, improvements and bug fixes throughout the application. Various parts of the UI have been made consistent with the rest of the application and custom queries have been improved to better handle custom fields. Spanish and Dutch languages were also added in this release. Special thanks to wrhighfield for his many contributions to this release! Upgrade Notes Please see this thread regarding changes to the web.config and files in this release. htt...Iveely Search Engine: Iveely Search Engine (0.1.0): ?????????,???????????。 This is a basic version, So you do not think it is a good Search Engine of this version, but one day it is. only basic on text search. ????: How to use: 1. ?????????IveelySE.Spider.exe ??,????????????,?????????(?????,???????,??????????????。) Find the file which named IveelySE.Spider.exe, and input you link string like "http://www.cnblogs.com",and enter. 2 . ???????,???????IveelySE.Index.exe ????,????。?????。 When the spider finish working,you can run anther file na...Video Frame Explorer: Beta 3: Fix small bugsJson.NET: Json.NET 4.5 Release 8: New feature - Serialize and deserialize multidimensional arrays New feature - Members on dynamic objects with JsonProperty/DataMember will now be included in serialized JSON New feature - LINQ to JSON load methods will read past preceding comments when loading JSON New feature - Improved error handling to return incomplete values upon reaching the end of JSON content Change - Improved performance and memory usage when serializing Unicode characters Change - The serializer now create...????: ????2.0.4: 1、??????????,?????、????、???????????。 2、???????????。RiP-Ripper & PG-Ripper: RiP-Ripper 2.9.32: changes NEW: Added Support for "ImgBox.com" links CHANGES: Switched Installer to InnoSetupjHtmlArea - WYSIWYG HTML Editor for jQuery: 0.7.5: Fixed "html" method Fixed jQuery UI Dialog ExampleNew ProjectsAuthorized Action Link Extension for ASP.NET MVC: ASP.NET HtmlHelper extension to only display links (or other text) if user is authorized for target controller action.AutoShutdown.NET: Automatically Shutdown, Hibernate, Lock, Standby or Logoff your computer with an full featured, easy to use applicationAvian Mortality Detection Entry Application: This application is written for Trimble Yuma rugged computers on Wind Farms. It allows the field staff to create and upload detentions of avian fatalities.China Coordinate: As all know, China's electronic map has specified offset. 
The goal of this project is to fix or correct the offset, and should as easy to use as possible.cnFederal: This project is supposed to show implementing several third party API:s using ASP.NET Web forms.ControlAccessUser: sadfsafadsDeskopeia: Experimental Deskopeia Project, not yet finisheddfect: Defect/Issue Tracking line of business application.Express Workflow: Express WorkflowGavin.Shop: A b2c Of mvc3+entityframework HTTP Server API Configuration: Friendly user interface substitute for netsh http. Configure http.sys easily.labalno: Films and moviesLibNiconico: ?????????????LifeFlow.Servicer: This project is only a day old, but the main goal is to create an easy way to create and maintain .NET-based REST servicesMemcached: Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering.MongoDB: MongoDB (from "humongous") is a scalable, high-performance, open source NoSQL database. Written in C++, MongoDB features:Network Gnome: An attempt to create an open source alternative to "The Dude" by Mikro Tik.PowerShell Study: ??Powershell???sfmltest: Just learning SfmlSharePoint Timerjob and IoC: This is a sample project that explains how we can use IoC container with SharePoint. I have used StructureMap as an IoC container in this project.SignalR: Async library for .NET to help build real-time, multi-user interactive web applications. Sistem LPK Pemkot Semarang: Sistem Laporan Pelaksanaan Kegiatan SKPD Kota SemarangSitecoreInstaller: SitecoreInstaller was initially developed for rapid installation of Sitecore and modules on a local developer machine and still does the job well. SQL 2012 Always On Login Syncing: SQL 2012 Always on Availability Group Login Syncing job.Swagonomics: Input your name, age, yearly income, amount spent monthly on VAT taxable goods, alcohol, cigarettes, and fuel. After hitting submit, the application calls our ATapatalk Module for Dotnetnuke Core Forum: Hi, I work for months in a plug for Dotnetnuke core forums, I have enough work done, the engine RPCXML, the Web service and a skeleton of the main functions, buTesla Analyzer: Given the great need of energy saving due to the rising price of KW, and try to educate the consumer smart energy companies.testdd09082012git01: xctestdd09082012hg01: xctesttfs09082012tfs01: sdThe Ftp Library - Most simple, full-featured .NET Ftp Library: The Ftp Library - Most simple, full-featured .NET Ftp LibraryThe Verge .NET: This is a port of the existing iOS and Android applications to Windows Phone. This project is not sponsored nor endorsed by The Verge or Vox Media . . . yet :)Util SPServices: Usefull Libraries for SPServices SPServices is one of the most usefull libraries for SharePoint 2010 and this util is just a bunch of functions that helpme to Virtual Keyboard: This is a virtual keyboard project. This can do most of what a keyboard does. whatzup.mobi: Provide streams to mobile devices via a webservice that exposes various live broadcasted audio streams from night clubs.WPFSharp.Globalizer: WPFSharp Globalizer - A project deisgned to make localization and styling easier by decoupling both processes from the build.

    Read the article

  • CodePlex Daily Summary for Friday, October 12, 2012

    CodePlex Daily Summary for Friday, October 12, 2012Popular ReleasesHyperCrunch Druid: HyperCrunch Druid 1.0.2102: For all documentation and feedback, visit http://druid.hypercrunch.com.GoogleMap Control: GoogleMap Control 6.1: Some important bug fixes and couple of new features were added. There are no major changes to the sample website. Source code could be downloaded from the Source Code section selecting branch release-6.1. Thus just builds of GoogleMap Control are issued here in this release. NuGet Package GoogleMap Control 6.1 NuGet Package FeaturesBounds property to provide ability to create a map by center and bounds as well; Setting in markup <artem:GoogleMap ID="GoogleMap1" runat="server" MapType="HY...Bundle Transformer - a modular extension for ASP.NET Web Optimization Framework: Bundle Transformer 1.6.5: Version: 1.6.5 Published: 10/12/2012 In the configuration elements css and js added the usePreMinifiedFiles attribute, which enables/disables usage of pre-minified files; Added translator-adapter, which produces translation of TypeScript-code to JS-code; In BundleTransformer.MicrosoftAjax added support of the Microsoft Ajax Minifier 4.69; In BundleTransformer.Yui added support of the YUI Compressor for .NET 2.1.0.0; In BundleTransformer.Csso added support of the CSSO version 1.3.4. ...OrgCharts for SharePoint: OrgChart web part for SharePoint: Key features: • Central License management so no need to remember the license key details when adding to pages. • Builtin-asp.net caching for high performing loading of the Org Chart. • Create Org Chart from SharePoint list data OR SQL server queries. • Multi-level drill through • Use SharePoint lists or SQL Data as datasource • Support node level Images. {Example: Peoples my site photos} • AJAX enabled for fast client side processing • Optionally set the TAB field for an Auto-generated TAB i...mojoPortal: 2.3.9.3: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2393-released Note that we have separate deployment packages for .NET 3.5 and .NET 4.0, but we recommend you to use .NET 4, we will probably drop support for .NET 3.5 once .NET 4.5 is available The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code and are not intended for use in Visual Studio. To download the source code see getting the lates...OstrivDB: OstrivDB 0.1: - Storage configuration: objects serialization (Xml, Json), storage file compressing, data block size. - Caching for Select queries. - Indexing. - Batch of queries. - No special query language (LINQ used). - Integrated sorting and paging. - Multithreaded data processing.D3 Loot Tracker: 1.5.4: Fixed a bug where the server ip was not logged properly in the stats file.Captcha MVC: Captcha Mvc 2.1.2: v 2.1.2: Fixed problem with serialization. Made all classes from a namespace Jetbrains.Annotaions as the internal. Added autocomplete attribute and autocorrect attribute for captcha input element. Minor changes. v 2.1.1: Fixed problem with serialization. Minor changes. v 2.1: Added support for storing captcha in the session or cookie. See the updated example. Updated example. Minor changes. v 2.0.1: Added support for a partial captcha. 
Now you can easily customize the layout, s...DotNetNuke® Community Edition CMS: 06.02.04: Major Highlights Fixed issue where the module printing function was only visible to administrators Fixed issue where pane level skinning was being assigned to a default container for any content pane Fixed issue when using password aging and FB / Google authentication Fixed issue that was causing the DateEditControl to not load the assigned value Fixed issue that stopped additional profile properties to be displayed in the member directory after modifying the template Fixed er...Advanced DataGridView with Excel-like auto filter: 1.0.0.0: ?????? ??????Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.69: Fix for issue #18766: build task should not build the output if it's newer than all the input files. Fix for Issue #18764: build taks -res switch not working. update build task to concatenate input source and then minify, rather than minify and then concatenate. include resource string-replacement root name in the assumed globals list. Stop replacing new Date().getTime() with +new Date -- the latter is smaller, but turns out it executes up to 45% slower. add CSS support for single-...WinRT XAML Toolkit: WinRT XAML Toolkit - 1.3.3: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features Attachable Behaviors AwaitableUI extensions Controls Converters Debugging helpers Extension methods Imaging helpers IO helpers VisualTree helpers Samples Recent changes NOTE:...VidCoder: 1.4.4 Beta: Fixed inability to create new presets with "Save As".MCEBuddy 2.x: MCEBuddy 2.3.2: Changelog for 2.3.2 (32bit and 64bit) 1. Added support for generating XBMC XML NFO files for files in the conversion queue (store it along with the source video with source video name.nfo). Right click on the file in queue and select generate XML 2. UI bugifx, start and end trim box locations interchanged 3. Added support for removing commercials from non DVRMS/WTV files (MP4, AVI etc) 4. Now checking for Firewall port status before enabling (might help with some firewall problems) 5. User In...Sandcastle Help File Builder: SHFB v1.9.5.0 with Visual Studio Package: General InformationIMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This release supports the Sandcastle October 2012 Release (v2.7.1.0). It includes full support for generating, installing, and removing MS Help Viewer files. This new release suppor...ClosedXML - The easy way to OpenXML: ClosedXML 0.68.0: ClosedXML now resolves formulas! Yes it finally happened. If you call cell.Value and it has a formula the library will try to evaluate the formula and give you the result. 
For example: var wb = new XLWorkbook(); var ws = wb.AddWorksheet("Sheet1"); ws.Cell("A1").SetValue(1).CellBelow().SetValue(1); ws.Cell("B1").SetValue(1).CellBelow().SetValue(1); ws.Cell("C1").FormulaA1 = "\"The total value is: \" & SUM(A1:B2)"; var...Json.NET: Json.NET 4.5 Release 10: New feature - Added Portable build to NuGet package New feature - Added GetValue and TryGetValue with StringComparison to JObject Change - Improved duplicate object reference id error message Fix - Fixed error when comparing empty JObjects Fix - Fixed SecAnnotate warnings Fix - Fixed error when comparing DateTime JValue with a DateTimeOffset JValue Fix - Fixed serializer sometimes not using DateParseHandling setting Fix - Fixed error in JsonWriter.WriteToken when writing a DateT...Readable Passphrase Generator: KeePass Plugin 0.7.2: Changes: Tested against KeePass 2.20.1 Tested under Ubuntu 12.10 (and KeePass 2.20) Added GenerateAsUtf8 method returning the encrypted passphrase as a UTF8 byte array.JSLint for Visual Studio 2010: 1.4.2: 1.4.2patterns & practices: Prism: Prism for .NET 4.5: This is a release does not include any functionality changes over Prism 4.1 Desktop. These assemblies target .NET 4.5. These assemblies also were compiled against updated dependencies: Unity 3.0 and Common Service Locator (Portable Class Library).New ProjectsATM Stimulate Using WCF Service: This project is my homework Basic Events: Basic Events makes it easier for developers to use out of the box events with basic properties in their eventargs like string message and datetime.BCCP-DEX: sumary not avaibleBing Maps TypeScript Declarations: Unofficial TypeScript declarations for the Bing Maps v7 AJAX Control.CodePlex Practice: Testing CodePlexDepIC: Dependency Injection Container with a very simple API.DNN TaskManger: This is the a project build with the DotNetNuke task manager tutorial videos series.Document Importer For SharePoint: Quickly import large volumes of documents into SharePoint document libraries using an effective user interface.Drop Me!!: cOMING SOON...wORKING ON THISGesPro: PFE de fin de BAC MB MLGL2DX (OpenGL to DirectX Wrapper Library): GL2DX is a wrapper library that allows you to build your OpenGL app for WinRT.MoControls - XNA 4 GUI Controls Library: XNA 4 GUI Controls Library aimed to simplify porting of a XNA 4.0 GUI interface to various platforms.Morph implementation in Delphi: Morph Protocol implementation written in Delphi XE3. Morph is a very powerful application level protocol. For more about Morph, go to http://morph.codeplex.com/Musicallis: Projeto da disciplina de Técnicas de Programação Aplicada II da Universidade Presbiteriana Mackenzie - SP.NHook - A debugger API for your hooking projects: A debugger API for your hooking projectsOperation Timber System: This is a Timber system aimed at merchants/wholesalers/Mills. It is only in design phase.Oryon .Net Extension Methods Library: This project contains a lot of useful extension methods for .Net Applications.OstrivDB: OstrivDB is an embedded NoSQL database engine.Outreach02: A research project for website development.PARCIAL-CP-2012: Proyecto De Calidad y Pruebas de Software - Examen ParcialProject13261011: dfdsfProject13271011: 11project-3D-fabien: C# project 3D IN4M12 -->http://www.esiee.fr/~mustafan/IN4M12-13/Pulsar Programming Language: Pulsar, is a free open source programming language alternative to Assembly. 
Create your dream Operating System without using Assembly, or C#, or C++!Scoot: A rowing club management application.Sfaar Bytex: an hexadecimal file readerSSIS RAW File Viewer: SSIS 2008 RAW file viewer.testddgit1011201201: ttesttom10112012tfs02: fdsf fsd???WindowsPhone???: ???Windows Phone??????www.yidingcan.com?????。??????、??

    Read the article

  • Taming Hopping Windows

    - by Roman Schindlauer
    At first glance, hopping windows seem fairly innocuous and obvious. They organize events into windows with a simple periodic definition: the windows have some duration d (e.g. a window covers 5 second time intervals), an interval or period p (e.g. a new window starts every 2 seconds) and an alignment a (e.g. one of those windows starts at 12:00 PM on March 15, 2012 UTC). var wins = xs     .HoppingWindow(TimeSpan.FromSeconds(5),                    TimeSpan.FromSeconds(2),                    new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc)); Logically, there is a window with start time a + np and end time a + np + d for every integer n. That’s a lot of windows. So why doesn’t the following query (always) blow up? var query = wins.Select(win => win.Count()); A few users have asked why StreamInsight doesn’t produce output for empty windows. Primarily it’s because there is an infinite number of empty windows! (Actually, StreamInsight uses DateTimeOffset.MaxValue to approximate “the end of time” and DateTimeOffset.MinValue to approximate “the beginning of time”, so the number of windows is lower in practice.) That was the good news. Now the bad news. Events also have duration. Consider the following simple input: var xs = this.Application                 .DefineEnumerable(() => new[]                     { EdgeEvent.CreateStart(DateTimeOffset.UtcNow, 0) })                 .ToStreamable(AdvanceTimeSettings.IncreasingStartTime); Because the event has no explicit end edge, it lasts until the end of time. So there are lots of non-empty windows if we apply a hopping window to that single event! For this reason, we need to be careful with hopping window queries in StreamInsight. Or we can switch to a custom implementation of hopping windows that doesn’t suffer from this shortcoming. The alternate window implementation produces output only when the input changes. We start by breaking up the timeline into non-overlapping intervals assigned to each window. In figure 1, six hopping windows (“Windows”) are assigned to six intervals (“Assignments”) in the timeline. Next we take input events (“Events”) and alter their lifetimes (“Altered Events”) so that they cover the intervals of the windows they intersect. In figure 1, you can see that the first event e1 intersects windows w1 and w2 so it is adjusted to cover assignments a1 and a2. Finally, we can use snapshot windows (“Snapshots”) to produce output for the hopping windows. Notice however that instead of having six windows generating output, we have only four. The first and second snapshots correspond to the first and second hopping windows. The remaining snapshots however cover two hopping windows each! While in this example we saved only two events, the savings can be more significant when the ratio of event duration to window duration is higher. Figure 1: Timeline The implementation of this strategy is straightforward. We need to set the start times of events to the start time of the interval assigned to the earliest window including the start time. Similarly, we need to modify the end times of events to the end time of the interval assigned to the latest window including the end time. The following snap-to-boundary function that rounds a timestamp value t down to the nearest value t' <= t such that t' is a + np for some integer n will be useful. For convenience, we will represent both DateTime and TimeSpan values using long ticks: static long SnapToBoundary(long t, long a, long p) {     return t - ((t - a) % p) - (t > a ? 
0L : p); } How do we find the earliest window including the start time for an event? It’s the window following the last window that does not include the start time assuming that there are no gaps in the windows (i.e. duration < interval), and limitation of this solution. To find the end time of that antecedent window, we need to know the alignment of window ends: long e = a + (d % p); Using the window end alignment, we are finally ready to describe the start time selector: static long AdjustStartTime(long t, long e, long p) {     return SnapToBoundary(t, e, p) + p; } To find the latest window including the end time for an event, we look for the last window start time (non-inclusive): public static long AdjustEndTime(long t, long a, long d, long p) {     return SnapToBoundary(t - 1, a, p) + p + d; } Bringing it together, we can define the translation from events to ‘altered events’ as in Figure 1: public static IQStreamable<T> SnapToWindowIntervals<T>(IQStreamable<T> source, TimeSpan duration, TimeSpan interval, DateTime alignment) {     if (source == null) throw new ArgumentNullException("source");     // reason about DateTime and TimeSpan in ticks     long d = Math.Min(DateTime.MaxValue.Ticks, duration.Ticks);     long p = Math.Min(DateTime.MaxValue.Ticks, Math.Abs(interval.Ticks));     // set alignment to earliest possible window     var a = alignment.ToUniversalTime().Ticks % p;     // verify constraints of this solution     if (d <= 0L) { throw new ArgumentOutOfRangeException("duration"); }     if (p == 0L || p > d) { throw new ArgumentOutOfRangeException("interval"); }     // find the alignment of window ends     long e = a + (d % p);     return source.AlterEventLifetime(         evt => ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p)),         evt => ToDateTime(AdjustEndTime(evt.EndTime.ToUniversalTime().Ticks, a, d, p)) -             ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p))); } public static DateTime ToDateTime(long ticks) {     // just snap to min or max value rather than under/overflowing     return ticks < DateTime.MinValue.Ticks         ? new DateTime(DateTime.MinValue.Ticks, DateTimeKind.Utc)         : ticks > DateTime.MaxValue.Ticks         ? new DateTime(DateTime.MaxValue.Ticks, DateTimeKind.Utc)         : new DateTime(ticks, DateTimeKind.Utc); } Finally, we can describe our custom hopping window operator: public static IQWindowedStreamable<T> HoppingWindow2<T>(     IQStreamable<T> source,     TimeSpan duration,     TimeSpan interval,     DateTime alignment) {     if (source == null) { throw new ArgumentNullException("source"); }     return SnapToWindowIntervals(source, duration, interval, alignment).SnapshotWindow(); } By switching from HoppingWindow to HoppingWindow2 in the following example, the query returns quickly rather than gobbling resources and ultimately failing! 
public void Main() {     var start = new DateTimeOffset(new DateTime(2012, 6, 28), TimeSpan.Zero);     var duration = TimeSpan.FromSeconds(5);     var interval = TimeSpan.FromSeconds(2);     var alignment = new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc);     var events = this.Application.DefineEnumerable(() => new[]     {         EdgeEvent.CreateStart(start.AddSeconds(0), "e0"),         EdgeEvent.CreateStart(start.AddSeconds(1), "e1"),         EdgeEvent.CreateEnd(start.AddSeconds(1), start.AddSeconds(2), "e1"),         EdgeEvent.CreateStart(start.AddSeconds(3), "e2"),         EdgeEvent.CreateStart(start.AddSeconds(9), "e3"),         EdgeEvent.CreateEnd(start.AddSeconds(3), start.AddSeconds(10), "e2"),         EdgeEvent.CreateEnd(start.AddSeconds(9), start.AddSeconds(10), "e3"),     }).ToStreamable(AdvanceTimeSettings.IncreasingStartTime);     var adjustedEvents = SnapToWindowIntervals(events, duration, interval, alignment);     var query = from win in HoppingWindow2(events, duration, interval, alignment)                 select win.Count();     DisplayResults(adjustedEvents, "Adjusted Events");     DisplayResults(query, "Query"); } As you can see, instead of producing a massive number of windows for the open start edge e0, a single window is emitted from 12:00:15 AM until the end of time: Adjusted Events StartTime EndTime Payload 6/28/2012 12:00:01 AM 12/31/9999 11:59:59 PM e0 6/28/2012 12:00:03 AM 6/28/2012 12:00:07 AM e1 6/28/2012 12:00:05 AM 6/28/2012 12:00:15 AM e2 6/28/2012 12:00:11 AM 6/28/2012 12:00:15 AM e3 Query StartTime EndTime Payload 6/28/2012 12:00:01 AM 6/28/2012 12:00:03 AM 1 6/28/2012 12:00:03 AM 6/28/2012 12:00:05 AM 2 6/28/2012 12:00:05 AM 6/28/2012 12:00:07 AM 3 6/28/2012 12:00:07 AM 6/28/2012 12:00:11 AM 2 6/28/2012 12:00:11 AM 6/28/2012 12:00:15 AM 3 6/28/2012 12:00:15 AM 12/31/9999 11:59:59 PM 1 Regards, The StreamInsight Team
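
    The SnapToBoundary arithmetic above is easy to sanity-check in isolation. The short C# console sketch below uses arbitrary small tick values (not taken from the article) purely to illustrate how the helper snaps a timestamp onto the a + np boundary grid; the SnapDemo wrapper class is just scaffolding for the example.

        using System;

        static class SnapDemo
        {
            // Same helper as in the post: snap t onto the boundary grid defined by
            // alignment a and period p (all values in ticks).
            static long SnapToBoundary(long t, long a, long p)
            {
                return t - ((t - a) % p) - (t > a ? 0L : p);
            }

            static void Main()
            {
                // alignment a = 10, period p = 4  =>  boundaries at ..., 6, 10, 14, 18, ...
                Console.WriteLine(SnapToBoundary(17, 10, 4)); // 14 (largest boundary <= 17)
                Console.WriteLine(SnapToBoundary(18, 10, 4)); // 18 (already on a boundary)
                Console.WriteLine(SnapToBoundary(9, 10, 4));  // 6  (t <= a takes the extra -p branch)
            }
        }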

    Read the article

  • Failed to Install Xdebug

    - by burnt1ce
    've registered xdebug in php.ini (as per http://xdebug.org/docs/install) but it's not showing up when i run "php -m" or when i get a test page to run "phpinfo()". I've just installed the latest version of XAMPP. I've used both "zend_extention" and "zend_extention_ts" to specify the path of the xdebug dll. I ensured that my apache server restarted and used the latest change of my php.ini by executing "httpd -k restart". Can anyone provide any suggestions in getting xdebug to show up? Here are the contents of my php.ini file. [PHP] ;;;;;;;;;;;;;;;;;;; ; About php.ini ; ;;;;;;;;;;;;;;;;;;; ; PHP's initialization file, generally called php.ini, is responsible for ; configuring many of the aspects of PHP's behavior. ; PHP attempts to find and load this configuration from a number of locations. ; The following is a summary of its search order: ; 1. SAPI module specific location. ; 2. The PHPRC environment variable. (As of PHP 5.2.0) ; 3. A number of predefined registry keys on Windows (As of PHP 5.2.0) ; 4. Current working directory (except CLI) ; 5. The web server's directory (for SAPI modules), or directory of PHP ; (otherwise in Windows) ; 6. The directory from the --with-config-file-path compile time option, or the ; Windows directory (C:\windows or C:\winnt) ; See the PHP docs for more specific information. ; http://php.net/configuration.file ; The syntax of the file is extremely simple. Whitespace and Lines ; beginning with a semicolon are silently ignored (as you probably guessed). ; Section headers (e.g. [Foo]) are also silently ignored, even though ; they might mean something in the future. ; Directives following the section heading [PATH=/www/mysite] only ; apply to PHP files in the /www/mysite directory. Directives ; following the section heading [HOST=www.example.com] only apply to ; PHP files served from www.example.com. Directives set in these ; special sections cannot be overridden by user-defined INI files or ; at runtime. Currently, [PATH=] and [HOST=] sections only work under ; CGI/FastCGI. ; http://php.net/ini.sections ; Directives are specified using the following syntax: ; directive = value ; Directive names are *case sensitive* - foo=bar is different from FOO=bar. ; Directives are variables used to configure PHP or PHP extensions. ; There is no name validation. If PHP can't find an expected ; directive because it is not set or is mistyped, a default value will be used. ; The value can be a string, a number, a PHP constant (e.g. E_ALL or M_PI), one ; of the INI constants (On, Off, True, False, Yes, No and None) or an expression ; (e.g. E_ALL & ~E_NOTICE), a quoted string ("bar"), or a reference to a ; previously set variable or directive (e.g. ${foo}) ; Expressions in the INI file are limited to bitwise operators and parentheses: ; | bitwise OR ; ^ bitwise XOR ; & bitwise AND ; ~ bitwise NOT ; ! boolean NOT ; Boolean flags can be turned on using the values 1, On, True or Yes. ; They can be turned off using the values 0, Off, False or No. ; An empty string can be denoted by simply not writing anything after the equal ; sign, or by using the None keyword: ; foo = ; sets foo to an empty string ; foo = None ; sets foo to an empty string ; foo = "None" ; sets foo to the string 'None' ; If you use constants in your value, and these constants belong to a ; dynamically loaded extension (either a PHP extension or a Zend extension), ; you may only use these constants *after* the line that loads the extension. 
;;;;;;;;;;;;;;;;;;; ; About this file ; ;;;;;;;;;;;;;;;;;;; ; PHP comes packaged with two INI files. One that is recommended to be used ; in production environments and one that is recommended to be used in ; development environments. ; php.ini-production contains settings which hold security, performance and ; best practices at its core. But please be aware, these settings may break ; compatibility with older or less security conscience applications. We ; recommending using the production ini in production and testing environments. ; php.ini-development is very similar to its production variant, except it's ; much more verbose when it comes to errors. We recommending using the ; development version only in development environments as errors shown to ; application users can inadvertently leak otherwise secure information. ;;;;;;;;;;;;;;;;;;; ; Quick Reference ; ;;;;;;;;;;;;;;;;;;; ; The following are all the settings which are different in either the production ; or development versions of the INIs with respect to PHP's default behavior. ; Please see the actual settings later in the document for more details as to why ; we recommend these changes in PHP's behavior. ; allow_call_time_pass_reference ; Default Value: On ; Development Value: Off ; Production Value: Off ; display_errors ; Default Value: On ; Development Value: On ; Production Value: Off ; display_startup_errors ; Default Value: Off ; Development Value: On ; Production Value: Off ; error_reporting ; Default Value: E_ALL & ~E_NOTICE ; Development Value: E_ALL | E_STRICT ; Production Value: E_ALL & ~E_DEPRECATED ; html_errors ; Default Value: On ; Development Value: On ; Production value: Off ; log_errors ; Default Value: Off ; Development Value: On ; Production Value: On ; magic_quotes_gpc ; Default Value: On ; Development Value: Off ; Production Value: Off ; max_input_time ; Default Value: -1 (Unlimited) ; Development Value: 60 (60 seconds) ; Production Value: 60 (60 seconds) ; output_buffering ; Default Value: Off ; Development Value: 4096 ; Production Value: 4096 ; register_argc_argv ; Default Value: On ; Development Value: Off ; Production Value: Off ; register_long_arrays ; Default Value: On ; Development Value: Off ; Production Value: Off ; request_order ; Default Value: None ; Development Value: "GP" ; Production Value: "GP" ; session.bug_compat_42 ; Default Value: On ; Development Value: On ; Production Value: Off ; session.bug_compat_warn ; Default Value: On ; Development Value: On ; Production Value: Off ; session.gc_divisor ; Default Value: 100 ; Development Value: 1000 ; Production Value: 1000 ; session.hash_bits_per_character ; Default Value: 4 ; Development Value: 5 ; Production Value: 5 ; short_open_tag ; Default Value: On ; Development Value: Off ; Production Value: Off ; track_errors ; Default Value: Off ; Development Value: On ; Production Value: Off ; url_rewriter.tags ; Default Value: "a=href,area=href,frame=src,form=,fieldset=" ; Development Value: "a=href,area=href,frame=src,input=src,form=fakeentry" ; Production Value: "a=href,area=href,frame=src,input=src,form=fakeentry" ; variables_order ; Default Value: "EGPCS" ; Development Value: "GPCS" ; Production Value: "GPCS" ;;;;;;;;;;;;;;;;;;;; ; php.ini Options ; ;;;;;;;;;;;;;;;;;;;; ; Name for user-defined php.ini (.htaccess) files. Default is ".user.ini" ;user_ini.filename = ".user.ini" ; To disable this feature set this option to empty value ;user_ini.filename = ; TTL for user-defined php.ini files (time-to-live) in seconds. 
Default is 300 seconds (5 minutes) ;user_ini.cache_ttl = 300 ;;;;;;;;;;;;;;;;;;;; ; Language Options ; ;;;;;;;;;;;;;;;;;;;; ; Enable the PHP scripting language engine under Apache. ; http://php.net/engine engine = On ; This directive determines whether or not PHP will recognize code between ; <? and ?> tags as PHP source which should be processed as such. It's been ; recommended for several years that you not use the short tag "short cut" and ; instead to use the full <?php and ?> tag combination. With the wide spread use ; of XML and use of these tags by other languages, the server can become easily ; confused and end up parsing the wrong code in the wrong context. But because ; this short cut has been a feature for such a long time, it's currently still ; supported for backwards compatibility, but we recommend you don't use them. ; Default Value: On ; Development Value: Off ; Production Value: Off ; http://php.net/short-open-tag short_open_tag = Off ; Allow ASP-style <% %> tags. ; http://php.net/asp-tags asp_tags = Off ; The number of significant digits displayed in floating point numbers. ; http://php.net/precision precision = 14 ; Enforce year 2000 compliance (will cause problems with non-compliant browsers) ; http://php.net/y2k-compliance y2k_compliance = On ; Output buffering is a mechanism for controlling how much output data ; (excluding headers and cookies) PHP should keep internally before pushing that ; data to the client. If your application's output exceeds this setting, PHP ; will send that data in chunks of roughly the size you specify. ; Turning on this setting and managing its maximum buffer size can yield some ; interesting side-effects depending on your application and web server. ; You may be able to send headers and cookies after you've already sent output ; through print or echo. You also may see performance benefits if your server is ; emitting less packets due to buffered output versus PHP streaming the output ; as it gets it. On production servers, 4096 bytes is a good setting for performance ; reasons. ; Note: Output buffering can also be controlled via Output Buffering Control ; functions. ; Possible Values: ; On = Enabled and buffer is unlimited. (Use with caution) ; Off = Disabled ; Integer = Enables the buffer and sets its maximum size in bytes. ; Note: This directive is hardcoded to Off for the CLI SAPI ; Default Value: Off ; Development Value: 4096 ; Production Value: 4096 ; http://php.net/output-buffering output_buffering = Off ; You can redirect all of the output of your scripts to a function. For ; example, if you set output_handler to "mb_output_handler", character ; encoding will be transparently converted to the specified encoding. ; Setting any output handler automatically turns on output buffering. ; Note: People who wrote portable scripts should not depend on this ini ; directive. Instead, explicitly set the output handler using ob_start(). ; Using this ini directive may cause problems unless you know what script ; is doing. ; Note: You cannot use both "mb_output_handler" with "ob_iconv_handler" ; and you cannot use both "ob_gzhandler" and "zlib.output_compression". ; Note: output_handler must be empty if this is set 'On' !!!! ; Instead you must use zlib.output_handler. 
; http://php.net/output-handler ;output_handler = ; Transparent output compression using the zlib library ; Valid values for this option are 'off', 'on', or a specific buffer size ; to be used for compression (default is 4KB) ; Note: Resulting chunk size may vary due to nature of compression. PHP ; outputs chunks that are few hundreds bytes each as a result of ; compression. If you prefer a larger chunk size for better ; performance, enable output_buffering in addition. ; Note: You need to use zlib.output_handler instead of the standard ; output_handler, or otherwise the output will be corrupted. ; http://php.net/zlib.output-compression zlib.output_compression = Off ; http://php.net/zlib.output-compression-level ;zlib.output_compression_level = -1 ; You cannot specify additional output handlers if zlib.output_compression ; is activated here. This setting does the same as output_handler but in ; a different order. ; http://php.net/zlib.output-handler ;zlib.output_handler = ; Implicit flush tells PHP to tell the output layer to flush itself ; automatically after every output block. This is equivalent to calling the ; PHP function flush() after each and every call to print() or echo() and each ; and every HTML block. Turning this option on has serious performance ; implications and is generally recommended for debugging purposes only. ; http://php.net/implicit-flush ; Note: This directive is hardcoded to On for the CLI SAPI implicit_flush = Off ; The unserialize callback function will be called (with the undefined class' ; name as parameter), if the unserializer finds an undefined class ; which should be instantiated. A warning appears if the specified function is ; not defined, or if the function doesn't include/implement the missing class. ; So only set this entry, if you really want to implement such a ; callback-function. unserialize_callback_func = ; When floats & doubles are serialized store serialize_precision significant ; digits after the floating point. The default value ensures that when floats ; are decoded with unserialize, the data will remain the same. serialize_precision = 100 ; This directive allows you to enable and disable warnings which PHP will issue ; if you pass a value by reference at function call time. Passing values by ; reference at function call time is a deprecated feature which will be removed ; from PHP at some point in the near future. The acceptable method for passing a ; value by reference to a function is by declaring the reference in the functions ; definition, not at call time. This directive does not disable this feature, it ; only determines whether PHP will warn you about it or not. These warnings ; should enabled in development environments only. ; Default Value: On (Suppress warnings) ; Development Value: Off (Issue warnings) ; Production Value: Off (Issue warnings) ; http://php.net/allow-call-time-pass-reference allow_call_time_pass_reference = On ; Safe Mode ; http://php.net/safe-mode safe_mode = Off ; By default, Safe Mode does a UID compare check when ; opening files. If you want to relax this to a GID compare, ; then turn on safe_mode_gid. ; http://php.net/safe-mode-gid safe_mode_gid = Off ; When safe_mode is on, UID/GID checks are bypassed when ; including files from this directory and its subdirectories. 
; (directory must also be in include_path or full path must ; be used when including) ; http://php.net/safe-mode-include-dir safe_mode_include_dir = ; When safe_mode is on, only executables located in the safe_mode_exec_dir ; will be allowed to be executed via the exec family of functions. ; http://php.net/safe-mode-exec-dir safe_mode_exec_dir = ; Setting certain environment variables may be a potential security breach. ; This directive contains a comma-delimited list of prefixes. In Safe Mode, ; the user may only alter environment variables whose names begin with the ; prefixes supplied here. By default, users will only be able to set ; environment variables that begin with PHP_ (e.g. PHP_FOO=BAR). ; Note: If this directive is empty, PHP will let the user modify ANY ; environment variable! ; http://php.net/safe-mode-allowed-env-vars safe_mode_allowed_env_vars = PHP_ ; This directive contains a comma-delimited list of environment variables that ; the end user won't be able to change using putenv(). These variables will be ; protected even if safe_mode_allowed_env_vars is set to allow to change them. ; http://php.net/safe-mode-protected-env-vars safe_mode_protected_env_vars = LD_LIBRARY_PATH ; open_basedir, if set, limits all file operations to the defined directory ; and below. This directive makes most sense if used in a per-directory ; or per-virtualhost web server configuration file. This directive is ; *NOT* affected by whether Safe Mode is turned On or Off. ; http://php.net/open-basedir ;open_basedir = ; This directive allows you to disable certain functions for security reasons. ; It receives a comma-delimited list of function names. This directive is ; *NOT* affected by whether Safe Mode is turned On or Off. ; http://php.net/disable-functions disable_functions = ; This directive allows you to disable certain classes for security reasons. ; It receives a comma-delimited list of class names. This directive is ; *NOT* affected by whether Safe Mode is turned On or Off. ; http://php.net/disable-classes disable_classes = ; Colors for Syntax Highlighting mode. Anything that's acceptable in ; <span style="color: ???????"> would work. ; http://php.net/syntax-highlighting ;highlight.string = #DD0000 ;highlight.comment = #FF9900 ;highlight.keyword = #007700 ;highlight.bg = #FFFFFF ;highlight.default = #0000BB ;highlight.html = #000000 ; If enabled, the request will be allowed to complete even if the user aborts ; the request. Consider enabling it if executing long requests, which may end up ; being interrupted by the user or a browser timing out. PHP's default behavior ; is to disable this feature. ; http://php.net/ignore-user-abort ;ignore_user_abort = On ; Determines the size of the realpath cache to be used by PHP. This value should ; be increased on systems where PHP opens many files to reflect the quantity of ; the file operations performed. ; http://php.net/realpath-cache-size ;realpath_cache_size = 16k ; Duration of time, in seconds for which to cache realpath information for a given ; file or directory. For systems with rarely changing files, consider increasing this ; value. ; http://php.net/realpath-cache-ttl ;realpath_cache_ttl = 120 ;;;;;;;;;;;;;;;;; ; Miscellaneous ; ;;;;;;;;;;;;;;;;; ; Decides whether PHP may expose the fact that it is installed on the server ; (e.g. by adding its signature to the Web server header). It is no security ; threat in any way, but it makes it possible to determine whether you use PHP ; on your server or not. 
; http://php.net/expose-php expose_php = On ;;;;;;;;;;;;;;;;;;; ; Resource Limits ; ;;;;;;;;;;;;;;;;;;; ; Maximum execution time of each script, in seconds ; http://php.net/max-execution-time ; Note: This directive is hardcoded to 0 for the CLI SAPI max_execution_time = 60 ; Maximum amount of time each script may spend parsing request data. It's a good ; idea to limit this time on productions servers in order to eliminate unexpectedly ; long running scripts. ; Note: This directive is hardcoded to -1 for the CLI SAPI ; Default Value: -1 (Unlimited) ; Development Value: 60 (60 seconds) ; Production Value: 60 (60 seconds) ; http://php.net/max-input-time max_input_time = 60 ; Maximum input variable nesting level ; http://php.net/max-input-nesting-level ;max_input_nesting_level = 64 ; Maximum amount of memory a script may consume (128MB) ; http://php.net/memory-limit memory_limit = 128M ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ; Error handling and logging ; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ; This directive informs PHP of which errors, warnings and notices you would like ; it to take action for. The recommended way of setting values for this ; directive is through the use of the error level constants and bitwise ; operators. The error level constants are below here for convenience as well as ; some common settings and their meanings. ; By default, PHP is set to take action on all errors, notices and warnings EXCEPT ; those related to E_NOTICE and E_STRICT, which together cover best practices and ; recommended coding standards in PHP. For performance reasons, this is the ; recommend error reporting setting. Your production server shouldn't be wasting ; resources complaining about best practices and coding standards. That's what ; development servers and development settings are for. ; Note: The php.ini-development file has this setting as E_ALL | E_STRICT. This ; means it pretty much reports everything which is exactly what you want during ; development and early testing. ; ; Error Level Constants: ; E_ALL - All errors and warnings (includes E_STRICT as of PHP 6.0.0) ; E_ERROR - fatal run-time errors ; E_RECOVERABLE_ERROR - almost fatal run-time errors ; E_WARNING - run-time warnings (non-fatal errors) ; E_PARSE - compile-time parse errors ; E_NOTICE - run-time notices (these are warnings which often result ; from a bug in your code, but it's possible that it was ; intentional (e.g., using an uninitialized variable and ; relying on the fact it's automatically initialized to an ; empty string) ; E_STRICT - run-time notices, enable to have PHP suggest changes ; to your code which will ensure the best interoperability ; and forward compatibility of your code ; E_CORE_ERROR - fatal errors that occur during PHP's initial startup ; E_CORE_WARNING - warnings (non-fatal errors) that occur during PHP's ; initial startup ; E_COMPILE_ERROR - fatal compile-time errors ; E_COMPILE_WARNING - compile-time warnings (non-fatal errors) ; E_USER_ERROR - user-generated error message ; E_USER_WARNING - user-generated warning message ; E_USER_NOTICE - user-generated notice message ; E_DEPRECATED - warn about code that will not work in future versions ; of PHP ; E_USER_DEPRECATED - user-generated deprecation warnings ; ; Common Values: ; E_ALL & ~E_NOTICE (Show all errors, except for notices and coding standards warnings.) 
; E_ALL & ~E_NOTICE | E_STRICT (Show all errors, except for notices) ; E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR (Show only errors) ; E_ALL | E_STRICT (Show all errors, warnings and notices including coding standards.) ; Default Value: E_ALL & ~E_NOTICE ; Development Value: E_ALL | E_STRICT ; Production Value: E_ALL & ~E_DEPRECATED ; http://php.net/error-reporting error_reporting = E_ALL & ~E_NOTICE & ~E_DEPRECATED ; This directive controls whether or not and where PHP will output errors, ; notices and warnings too. Error output is very useful during development, but ; it could be very dangerous in production environments. Depending on the code ; which is triggering the error, sensitive information could potentially leak ; out of your application such as database usernames and passwords or worse. ; It's recommended that errors be logged on production servers rather than ; having the errors sent to STDOUT. ; Possible Values: ; Off = Do not display any errors ; stderr = Display errors to STDERR (affects only CGI/CLI binaries!) ; On or stdout = Display errors to STDOUT ; Default Value: On ; Development Value: On ; Production Value: Off ; http://php.net/display-errors display_errors = On ; The display of errors which occur during PHP's startup sequence are handled ; separately from display_errors. PHP's default behavior is to suppress those ; errors from clients. Turning the display of startup errors on can be useful in ; debugging configuration problems. But, it's strongly recommended that you ; leave this setting off on production servers. ; Default Value: Off ; Development Value: On ; Production Value: Off ; http://php.net/display-startup-errors display_startup_errors = On ; Besides displaying errors, PHP can also log errors to locations such as a ; server-specific log, STDERR, or a location specified by the error_log ; directive found below. While errors should not be displayed on productions ; servers they should still be monitored and logging is a great way to do that. ; Default Value: Off ; Development Value: On ; Production Value: On ; http://php.net/log-errors log_errors = Off ; Set maximum length of log_errors. In error_log information about the source is ; added. The default is 1024 and 0 allows to not apply any maximum length at all. ; http://php.net/log-errors-max-len log_errors_max_len = 1024 ; Do not log repeated messages. Repeated errors must occur in same file on same ; line unless ignore_repeated_source is set true. ; http://php.net/ignore-repeated-errors ignore_repeated_errors = Off ; Ignore source of message when ignoring repeated messages. When this setting ; is On you will not log errors with repeated messages from different files or ; source lines. ; http://php.net/ignore-repeated-source ignore_repeated_source = Off ; If this parameter is set to Off, then memory leaks will not be shown (on ; stdout or in the log). This has only effect in a debug compile, and if ; error reporting includes E_WARNING in the allowed list ; http://php.net/report-memleaks report_memleaks = On ; This setting is on by default. ;report_zend_debug = 0 ; Store the last error/warning message in $php_errormsg (boolean). Setting this value ; to On can assist in debugging and is appropriate for development servers. It should ; however be disabled on production servers. 
; Default Value: Off ; Development Value: On ; Production Value: Off ; http://php.net/track-errors track_errors = Off ; Turn off normal error reporting and emit XML-RPC error XML ; http://php.net/xmlrpc-errors ;xmlrpc_errors = 0 ; An XML-RPC faultCode ;xmlrpc_error_number = 0 ; When PHP displays or logs an error, it has the capability of inserting html ; links to documentation related to that error. This directive controls whether ; those HTML links appear in error messages or not. For performance and security ; reasons, it's recommended you disable this on production servers. ; Note: This directive is hardcoded to Off for the CLI SAPI ; Default Value: On ; Development Value: On ; Production value: Off ; http://php.net/html-errors html_errors = On ; If html_errors is set On PHP produces clickable error messages that direct ; to a page describing the error or function causing the error in detail. ; You can download a copy of the PHP manual from http://php.net/docs ; and change docref_root to the base URL of your local copy including the ; leading '/'. You must also specify the file extension being used including ; the dot. PHP's default behavior is to leave these settings empty. ; Note: Never use this feature for production boxes. ; http://php.net/docref-root ; Examples ;docref_root = "/phpmanual/" ; http://php.net/docref-ext ;docref_ext = .html ; String to output before an error message. PHP's default behavior is to leave ; this setting blank. ; http://php.net/error-prepend-string ; Example: ;error_prepend_string = "<font color=#ff0000>" ; String to output after an error message. PHP's default behavior is to leave ; this setting blank. ; http://php.net/error-append-string ; Example: ;error_append_string = "</font>" ; Log errors to specified file. PHP's default behavior is to leave this value ; empty. ; http://php.net/error-log ; Example: ;error_log = php_errors.log ; Log errors to syslog (Event Log on NT, not valid in Windows 95). ;error_log = syslog ;error_log = "C:\xampp\apache\logs\php_error.log" ;;;;;;;;;;;;;;;;; ; Data Handling ; ;;;;;;;;;;;;;;;;; ; Note - track_vars is ALWAYS enabled ; The separator used in PHP generated URLs to separate arguments. ; PHP's default setting is "&". ; http://php.net/arg-separator.output ; Example: arg_separator.output = "&amp;" ; List of separator(s) used by PHP to parse input URLs into variables. ; PHP's default setting is "&
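
    For reference, a working Xdebug block in php.ini on a Windows/XAMPP setup usually looks roughly like the sketch below. Two things are worth double-checking against the description above: the directive must be spelled zend_extension (or zend_extension_ts for pre-5.3 thread-safe builds); PHP silently ignores unknown keys such as "zend_extention", which would leave the module missing from php -m. The DLL must also match the PHP build (same version, same thread-safety). The path and settings below are placeholders, not taken from the question:

        [xdebug]
        ; Full path to the DLL that matches this PHP build.
        zend_extension = "C:\xampp\php\ext\php_xdebug.dll"

        ; Optional Xdebug 2.x remote-debugging settings.
        xdebug.remote_enable = 1
        xdebug.remote_host = "localhost"
        xdebug.remote_port = 9000

    Also confirm that Apache actually loads the php.ini being edited (the "Loaded Configuration File" line in phpinfo()) before restarting with httpd -k restart.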

    Read the article

  • MSBuild: TlbImp error since upgrading to VS 2010

    - by floele
    Hi, since upgrading my project to VS2010, including the use of MSBuild v4 instead of 3.5 (and not making any other changes), I get the following build error and have no clue how to fix it (log from CC.NET): <target name="ResolveComReferences" success="false"> <message level="high"><![CDATA[C:\Programme\Microsoft SDKs\Windows\v7.0A\bin\TlbImp.exe c:\Assemblies\NMSDVDXU.dll /namespace:NMSDVDXLib /machine:X64 /out:obj\x64\Release\Interop.NMSDVDXLib.dll /sysarray /transform:DispRet /reference:c:\Assemblies\Bass.Net.dll /reference:c:\Assemblies\LogicNP.FileView.dll /reference:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\mscorlib.dll /reference:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Data.dll /reference:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Design.dll /reference:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.dll /reference:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Drawing.dll /reference:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Management.dll /reference:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Windows.Forms.dll /reference:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Xml.dll /reference:C:\WINDOWS\assembly\GAC\stdole\7.0.3300.0__b03f5f7f11d50a3a\stdole.dll ]]></message> <error code="TI0000" file="TlbImp"><![CDATA[A single valid machine type compatible with the input type library must be specified.]]></error> <warning code="MSB3283" file="C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets" line="1558" column="9"><![CDATA[Die Wrapperassembly für die Typbibliothek "NMSDVDXLib" wurde nicht gefunden.]]></warning> <message level="high"><![CDATA[C:\Programme\Microsoft SDKs\Windows\v7.0A\bin\TlbImp.exe c:\Assemblies\StarBurnX12.dll /namespace:RocketDivision.StarBurnX /machine:X64 /out:obj\x64\Release\Interop.RocketDivision.StarBurnX.dll /sysarray /transform:DispRet /reference:c:\Assemblies\Bass.Net.dll /reference:c:\Assemblies\LogicNP.FileView.dll /reference:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\mscorlib.dll /reference:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Data.dll /reference:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Design.dll /reference:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.dll /reference:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Drawing.dll /reference:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Management.dll /reference:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Windows.Forms.dll /reference:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Xml.dll /reference:C:\WINDOWS\assembly\GAC\stdole\7.0.3300.0__b03f5f7f11d50a3a\stdole.dll ]]></message> <error code="TI0000" file="TlbImp"><![CDATA[A single valid machine type compatible with the input type library must be specified.]]></error> <warning code="MSB3283" file="C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets" line="1558" column="9"><![CDATA[Die Wrapperassembly für die Typbibliothek "RocketDivision.StarBurnX" wurde nicht gefunden.]]></warning> </target> Problem: A single valid machine type compatible with the input type library must be specified. It only applies to the x64 build of my project, x86 still works fine. Apparently, it tries to build a x64 interop assembly from the x86 DLL located in "C:\Assemblies". When executing the TlbImp command with the x64 DLL which is located in a different directory, it works fine. However, I don't know how I can configure my project to use different COM references for the x86 and x64 build. 
The OS on which the project is being compiled is Windows XP x86. Building worked fine when using VS 2005 and MSBuild 3.5. Any help would be highly appreciated. I tried building the upgraded project with MSBuild 3.5 as well, but that doesn't work either: it complains about unknown NoWarn codes (probably new in 4.0).
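
    A common workaround in this situation (offered here as a sketch, not verified against this particular project) is to stop letting ResolveComReferences run TlbImp at build time and instead pre-generate one interop assembly per platform with TlbImp.exe (/machine:X86 and /machine:X64, each run against the matching native DLL), then reference those assemblies conditionally in the .csproj. The folder layout below is a placeholder:

        <!-- Replace the <COMReference> items with per-platform file references. -->
        <ItemGroup Condition=" '$(Platform)' == 'x86' ">
          <Reference Include="Interop.NMSDVDXLib">
            <HintPath>c:\Assemblies\x86\Interop.NMSDVDXLib.dll</HintPath>
            <Private>True</Private>
          </Reference>
        </ItemGroup>
        <ItemGroup Condition=" '$(Platform)' == 'x64' ">
          <Reference Include="Interop.NMSDVDXLib">
            <HintPath>c:\Assemblies\x64\Interop.NMSDVDXLib.dll</HintPath>
            <Private>True</Private>
          </Reference>
        </ItemGroup>

    The same pattern applies to the RocketDivision.StarBurnX interop. Because these are plain file references, MSBuild 4.0 no longer has to pick a machine type for the type library during the build.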

    Read the article

  • Making an asynchronous Client with boost::asio

    - by tag
    Hello, i'm trying to make an asynchronous Client with boost::asio, i use the daytime asynchronous Server(in the tutorial). However sometimes the Client don't receive the Message, sometimes it do :O I'm sorry if this is too much Code, but i don't know what's wrong :/ Client: #include <iostream> #include <stdio.h> #include <ostream> #include <boost/thread.hpp> #include <boost/bind.hpp> #include <boost/array.hpp> #include <boost/asio.hpp> using namespace std; using boost::asio::ip::tcp; class TCPClient { public: TCPClient(boost::asio::io_service& IO_Service, tcp::resolver::iterator EndPointIter); void Write(); void Close(); private: boost::asio::io_service& m_IOService; tcp::socket m_Socket; boost::array<char, 128> m_Buffer; size_t m_BufLen; private: void OnConnect(const boost::system::error_code& ErrorCode, tcp::resolver::iterator EndPointIter); void OnReceive(const boost::system::error_code& ErrorCode); void DoClose(); }; TCPClient::TCPClient(boost::asio::io_service& IO_Service, tcp::resolver::iterator EndPointIter) : m_IOService(IO_Service), m_Socket(IO_Service) { tcp::endpoint EndPoint = *EndPointIter; m_Socket.async_connect(EndPoint, boost::bind(&TCPClient::OnConnect, this, boost::asio::placeholders::error, ++EndPointIter)); } void TCPClient::Close() { m_IOService.post( boost::bind(&TCPClient::DoClose, this)); } void TCPClient::OnConnect(const boost::system::error_code& ErrorCode, tcp::resolver::iterator EndPointIter) { if (ErrorCode == 0) // Successful connected { m_Socket.async_receive(boost::asio::buffer(m_Buffer.data(), m_BufLen), boost::bind(&TCPClient::OnReceive, this, boost::asio::placeholders::error)); } else if (EndPointIter != tcp::resolver::iterator()) { m_Socket.close(); tcp::endpoint EndPoint = *EndPointIter; m_Socket.async_connect(EndPoint, boost::bind(&TCPClient::OnConnect, this, boost::asio::placeholders::error, ++EndPointIter)); } } void TCPClient::OnReceive(const boost::system::error_code& ErrorCode) { if (ErrorCode == 0) { std::cout << m_Buffer.data() << std::endl; m_Socket.async_receive(boost::asio::buffer(m_Buffer.data(), m_BufLen), boost::bind(&TCPClient::OnReceive, this, boost::asio::placeholders::error)); } else { DoClose(); } } void TCPClient::DoClose() { m_Socket.close(); } int main() { try { boost::asio::io_service IO_Service; tcp::resolver Resolver(IO_Service); tcp::resolver::query Query("127.0.0.1", "daytime"); tcp::resolver::iterator EndPointIterator = Resolver.resolve(Query); TCPClient Client(IO_Service, EndPointIterator); boost::thread ClientThread( boost::bind(&boost::asio::io_service::run, &IO_Service)); std::cout << "Client started." << std::endl; std::string Input; while (Input != "exit") { std::cin >> Input; } Client.Close(); ClientThread.join(); } catch (std::exception& e) { std::cerr << e.what() << std::endl; } } Server: http://www.boost.org/doc/libs/1_39_0/doc/html/boost_asio/tutorial/tutdaytime3/src.html Regards :)
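
    One likely culprit in the client above (a guess from reading the code, not a confirmed diagnosis): m_BufLen is never initialized, yet it is passed to boost::asio::buffer(m_Buffer.data(), m_BufLen) as the receive-buffer size, so the size is whatever happens to be in that field; if it is 0, the read completes with no data, which would match the intermittent behaviour. Below is a minimal sketch of the two affected handlers, passing the whole array and using the bytes_transferred count instead (note this also changes the OnReceive signature, so the class declaration would need the same change):

        // Sketch of the two affected members only; the rest of TCPClient stays as posted.
        void TCPClient::OnConnect(const boost::system::error_code& ErrorCode,
                                  tcp::resolver::iterator EndPointIter)
        {
            if (!ErrorCode)
            {
                // buffer(m_Buffer) deduces the full 128-byte size of the boost::array,
                // so no separate length field is needed.
                m_Socket.async_receive(boost::asio::buffer(m_Buffer),
                    boost::bind(&TCPClient::OnReceive, this,
                                boost::asio::placeholders::error,
                                boost::asio::placeholders::bytes_transferred));
            }
            // ... otherwise retry with the next endpoint, as in the original code.
        }

        void TCPClient::OnReceive(const boost::system::error_code& ErrorCode,
                                  std::size_t BytesReceived)
        {
            if (!ErrorCode)
            {
                // Only the first BytesReceived bytes are valid, and asio does not
                // NUL-terminate the buffer, so print exactly that many characters.
                std::cout.write(m_Buffer.data(), BytesReceived);
                std::cout << std::endl;
                m_Socket.async_receive(boost::asio::buffer(m_Buffer),
                    boost::bind(&TCPClient::OnReceive, this,
                                boost::asio::placeholders::error,
                                boost::asio::placeholders::bytes_transferred));
            }
            else
            {
                DoClose();
            }
        }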

    Read the article

  • .NET SerialPort DataReceived event not firing

    - by Klay
    I have a WPF test app for evaluating event-based serial port communication (vs. polling the serial port). The problem is that the DataReceived event doesn't seem to be firing at all. I have a very basic WPF form with a TextBox for user input, a TextBlock for output, and a button to write the input to the serial port. Here's the code: public partial class Window1 : Window { SerialPort port; public Window1() { InitializeComponent(); port = new SerialPort("COM2", 9600, Parity.None, 8, StopBits.One); port.DataReceived += new SerialDataReceivedEventHandler(port_DataReceived); port.Open(); } void port_DataReceived(object sender, SerialDataReceivedEventArgs e) { Debug.Print("receiving!"); string data = port.ReadExisting(); Debug.Print(data); outputText.Text = data; } private void Button_Click(object sender, RoutedEventArgs e) { Debug.Print("sending: " + inputText.Text); port.WriteLine(inputText.Text); } } Now, here are the complicating factors: The laptop I'm working on has no serial ports, so I'm using a piece of software called Virtual Serial Port Emulator to setup a COM2. VSPE has worked admirably in the past, and it's not clear why it would only malfunction with .NET's SerialPort class, but I mention it just in case. When I hit the button on my form to send the data, my Hyperterminal window (connected on COM2) shows that the data is getting through. Yes, I disconnect Hyperterminal when I want to test my form's ability to read the port. I've tried opening the port before wiring up the event. No change. I've read through another post here where someone else is having a similar problem. None of that info has helped me in this case. EDIT: Here's the console version (modified from http://mark.michaelis.net/Blog/TheBasicsOfSystemIOPortsSerialPort.aspx): class Program { static SerialPort port; static void Main(string[] args) { port = new SerialPort("COM2", 9600, Parity.None, 8, StopBits.One); port.DataReceived += new SerialDataReceivedEventHandler(port_DataReceived); port.Open(); string text; do { text = Console.ReadLine(); port.Write(text + "\r\n"); } while (text.ToLower() != "q"); } public static void port_DataReceived(object sender, SerialDataReceivedEventArgs args) { string text = port.ReadExisting(); Console.WriteLine("received: " + text); } } This should eliminate any concern that it's a Threading issue (I think). This doesn't work either. Again, Hyperterminal reports the data sent through the port, but the console app doesn't seem to fire the DataReceived event. EDIT #2: I realized that I had two separate apps that should both send and receive from the serial port, so I decided to try running them simultaneously... If I type into the console app, the WPF app DataReceived event fires, with the expected threading error (which I know how to deal with). If I type into the WPF app, the console app DataReceived event fires, and it echoes the data. I'm guessing the issue is somewhere in my use of the VSPE software, which is set up to treat one serial port as both input and output. And through some weirdness of the SerialPort class, one instance of a serial port can't be both the sender and receiver. Anyway, I think it's solved.
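
    For completeness, the "expected threading error" mentioned at the end is the usual cross-thread access problem: DataReceived is raised on a background thread, so the WPF control has to be updated through the Dispatcher. A minimal sketch of that pattern, reusing the port and outputText names from the post:

        void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            string data = port.ReadExisting();

            // DataReceived runs on a thread-pool thread; marshal the UI update
            // back onto the WPF dispatcher thread before touching outputText.
            Dispatcher.Invoke(new Action(() =>
            {
                outputText.Text = data;
            }));
        }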

    Read the article

< Previous Page | 316 317 318 319 320 321 322 323 324 325 326 327  | Next Page >