Search Results

Search found 25284 results on 1012 pages for 'test driven'.


  • Android: Having trouble creating a subclass of application to share data with multiple Activities

    - by Mike
    Hello, I just finished a couple of activities in my game and now I was going to start to wire them both up to use real game data, instead of the test data I was using just to make sure each piece worked. Since multiple Activities will need access to this game data, I started researching the best way to pass this data to my Activities. I know about using putExtra with intents, but my GameData class has quite a bit of data and not just simple key value pairs. Besides quite a few basic data types, it also has large arrays. I didn't really want to try and pass all that, unless I can pass the entire object, instead of just key/data pairs. I read the following post and thought it would be the way to go, but so far, I haven't got it to work: Android: How to declare global variables?

    I created a simple test app to try this method out, but it keeps crashing and my code seems to look the same as in the post above - except I changed the names. Here is the error I am getting. Can someone help me understand what I am doing wrong?

        12-23 00:50:49.762: ERROR/AndroidRuntime(608): Caused by: java.lang.ClassCastException: android.app.Application

    It crashes on the following statement:

        GameData newGameData = ((GameData)getApplicationContext());

    Here is my code:

        package mrk.examples.StaticGameData;

        import android.app.Application;

        public class GameData extends Application {
            private int intTest;

            GameData() {
                intTest = 0;
            }

            public int getIntTest() {
                return intTest;
            }

            public void setIntTest(int value) {
                intTest = value;
            }
        }

        // My main activity
        package mrk.examples.StaticGameData;

        import android.app.Activity;
        import android.content.Intent;
        import android.os.Bundle;
        import android.util.Log;

        public class StaticGameData extends Activity {
            int intStaticTest;

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                GameData newGameData = ((GameData)getApplicationContext());
                newGameData.setIntTest(0);
                intStaticTest = newGameData.getIntTest();
                Log.d("StaticGameData", "Well: IntStaticTest = " + intStaticTest);
                newGameData.setIntTest(1);
                Log.d("StaticGameData", "Well: IntStaticTest = " + intStaticTest + " newGameData: " + newGameData.getIntTest());
                Intent intentNew = new Intent(this, PassData2Activity.class);
                startActivity(intentNew);
            }
        }

        // My test Activity to see if it can access the data and its previous state from the last activity
        package mrk.examples.StaticGameData;

        import android.app.Activity;
        import android.os.Bundle;
        import android.util.Log;

        public class PassData2Activity extends Activity {
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                GameData gamedataPass = ((GameData)getApplicationContext());
                Log.d("PassData2Activity", "IntTest = " + gamedataPass.getIntTest());
            }
        }

    Below is the relevant portion of my manifest:

        <application android:icon="@drawable/icon" android:label="@string/app_name">
            <activity android:name=".StaticGameData" android:label="@string/app_name">
                <intent-filter>
                    <action android:name="android.intent.action.MAIN" />
                    <category android:name="android.intent.category.LAUNCHER" />
                </intent-filter>
            </activity>
            <activity android:name=".PassData2Activity"></activity>
        </application>
        <application android:name=".GameData" android:icon="@drawable/icon" android:label="@string/app_name">
        </application>

    Thanks in advance for helping me understand why this code is crashing.

    Also, if you think this is just the wrong approach to let multiple activities have access to the same data, please give your suggestion. Please keep in mind that I am talking about quite a few variables and some large arrays.
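    A likely culprit, for anyone hitting the same ClassCastException: the manifest above declares two application elements, and a manifest may only contain one. The one Android actually uses here has no android:name attribute, so the stock android.app.Application is instantiated and the cast to GameData fails. A sketch of the merged declaration, assuming the icons and labels above:

        <application android:name=".GameData"
                     android:icon="@drawable/icon"
                     android:label="@string/app_name">
            <activity android:name=".StaticGameData" android:label="@string/app_name">
                <intent-filter>
                    <action android:name="android.intent.action.MAIN" />
                    <category android:name="android.intent.category.LAUNCHER" />
                </intent-filter>
            </activity>
            <activity android:name=".PassData2Activity"></activity>
        </application>

    With that single declaration in place, getApplicationContext() (or getApplication()) should return the GameData instance.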

    Read the article

  • TDD - beginner problems and stumbling blocks

    - by Noufal Ibrahim
    While I've written unit tests for most of the code I've done, I only recently got my hands on a copy of "TDD by Example" by Kent Beck. I have always regretted certain design decisions I made since they prevented the application from being 'testable'. I read through the book, and while some of it looks alien, I felt that I could manage it and decided to try it out on my current project, which is basically a client/server system where the two pieces communicate via USB — one on the gadget and the other on the host. The application is in Python.

    I started off and very soon got entangled in a mess of rewrites and tiny tests which I later figured didn't really test anything. I threw away most of them and now have a working application for which the tests have all coagulated into just 2. Based on my experiences, I have a few questions which I'd like to ask. I gained some information from http://stackoverflow.com/questions/1146218/new-to-tdd-are-there-sample-applications-with-tests-to-show-how-to-do-tdd but have some specific questions which I'd like answers to/discussion on.

    1. Kent Beck uses a list which he adds to and strikes out from to guide the development process. How do you make such a list? I initially had a few items like "server should start up", "server should abort if channel is not available" etc., but they got mixed, and finally now it's just something like "client should be able to connect to server" (which subsumed server startup etc.).

    2. How do you handle rewrites? I initially selected a half-duplex system based on named pipes so that I could develop the application logic on my own machine and then later add the USB communication part. It then moved to become a socket-based thing, and then moved from using raw sockets to using the Python SocketServer module. Each time things changed, I found that I had to rewrite considerable parts of the tests, which was annoying. I'd figured that the tests would be a somewhat invariable guide during my development. They just felt like more code to handle.

    3. I needed a client and a server to communicate through the channel to test either side. I could mock one of the sides to test the other, but then the whole channel wouldn't be tested, and I worry that I'd miss that. This detracted from the whole red/green/refactor rhythm. Is this just lack of experience or am I doing something wrong?

    4. The "fake it till you make it" approach left me with a lot of messy code that I later spent a lot of time refactoring and cleaning up. Is this the way things work?

    5. At the end of the session, I now have my client and server running with around 3 or 4 unit tests. It took me around a week to do it. I think I could have done it in a day if I were writing the unit tests after the code. I fail to see the gain.

    I'm looking for comments and advice from people who have implemented large non-trivial projects completely (or almost completely) using this methodology. It makes sense to me to follow this way after I have something already running and want to add a new feature, but doing it from scratch seems too tiresome and not worth the effort.

    P.S.: Please let me know if this should be community wiki and I'll mark it like that.

    Update 0: All the answers were equally helpful. I picked the one I did because it resonated with my experiences the most.

    Update 1: Practice Practice Practice!
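    For readers trying to get the red/green/refactor rhythm under their fingers, here is a minimal, self-contained cycle in Python's unittest. The frame codec is a made-up example (thematically close to framing messages over a channel), not code from the poster's project:

        import unittest

        # Red: write the tests first and watch them fail before encode/decode exist.
        class TestFrameCodec(unittest.TestCase):
            def test_round_trip(self):
                payload = b"hello"
                self.assertEqual(decode(encode(payload)), payload)

            def test_empty_payload(self):
                self.assertEqual(decode(encode(b"")), b"")

        # Green: the least code that makes the tests pass.
        def encode(payload: bytes) -> bytes:
            # 4-byte big-endian length prefix, then the raw bytes
            return len(payload).to_bytes(4, "big") + payload

        def decode(frame: bytes) -> bytes:
            length = int.from_bytes(frame[:4], "big")
            return frame[4:4 + length]

        # Refactor: with the tests green, reshape the code safely.
        if __name__ == "__main__":
            unittest.main()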

    Read the article

  • Inconsistent accessibility

    - by user1312412
    I am getting the following error:

        Inconsistent accessibility: parameter type 'Db.Form1.ConnectionString' is less accessible than method 'Db.Form1.BuildConnectionString(Db.Form1.ConnectionString)'

        //Name spaces
        using System;
        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Data;
        using System.Drawing;
        using System.Linq;
        using System.Text;
        using System.Windows.Forms;
        using Microsoft.VisualBasic;
        using System.Collections;
        using System.Diagnostics;
        using System.Data.OleDb;
        using System.IO;
        using System.Drawing.Printing;

        namespace Db
        {
            public partial class Form1 : Form
            {
                public Form1()
                {
                    InitializeComponent();
                }

                public void SetBusy()
                {
                    this.Cursor = Cursors.WaitCursor;
                    Application.DoEvents();
                }

                public void SetFree()
                {
                    this.Cursor = Cursors.Default;
                    Application.DoEvents();
                }

                //connection string into parts
                struct ConnectionString
                {
                    public string Provider;
                    public string DataSource;
                    public string UserId;
                    public string Password;
                    public string Database;
                }

                //Declare
                public string BuildConnectionString(ConnectionString connStr) // ------> getting error here
                {
                    string[] parts = new string[5];
                    parts[0] = "Provider=" + connStr.Provider;
                    parts[1] = "Data Source=" + connStr.DataSource;
                    parts[2] = "User Id=" + connStr.UserId;
                    parts[3] = "Password=" + connStr.Password;
                    parts[4] = "Initial Catalog=" + connStr.Database;
                    return string.Join(";", parts);
                }

                // settings
                public bool IsValidConnectionForPrinting()
                {
                    SetBusy();
                    ConnectionString connStr = new ConnectionString();
                    connStr.Provider = cboProvider.Text;
                    connStr.DataSource = cboDataSource.Text;
                    connStr.UserId = txtUserId.Text;
                    connStr.Password = txtPassword.Text;
                    connStr.Database = cboDatabase.Text;

                    //connection string to database
                    string connectionString = BuildConnectionString(connStr);
                    OleDbConnection conn = new OleDbConnection(connectionString);
                    try
                    {
                        conn.Open();
                        OleDbCommand cmd = conn.CreateCommand;
                        cmd.CommandType = CommandType.TableDirect;
                        cmd.CommandText = "vw_pr_DL";
                        cmd.ExecuteScalar();
                        cmd.CommandText = "vw_pr_VR";
                        cmd.ExecuteScalar();
                        //cmd.CommandText = "vw_pr_VR"
                        //cmd.ExecuteScalar()
                        conn.Close();
                    }
                    //Exception messages
                    catch (Exception ex)
                    {
                        SetFree();
                        if (ex.Message.StartsWith("Invalid object name"))
                        {
                            MessageBox.Show(ex.Message.Replace("Invalid object name", "Table or view not found"), "Connection Test");
                        }
                        else
                        {
                            MessageBox.Show(ex.GetBaseException().Message, "Connection Test");
                        }
                        return false;
                    }
                    SetFree();
                    return true;
                }

                // when user click testbutton
                private void btnConnTest_Click(object sender, EventArgs e)
                {
                    if (IsValidConnectionForPrinting())
                    {
                        MessageBox.Show("Connection succeeded", "Connection Test");
                    }
                }

    Read the article

  • QThread - trouble shutting down threads

    - by Bryan Greenway
    For the last few days, I've been trying out the new preferred approach for using QThreads without subclassing QThread. The trouble I'm having is when I try to shut down a set of threads that I created. I regularly get a "Destroyed while thread is still running" message (if I'm running in Debug mode, I also get a Segmentation Fault dialog).

    My code is very simple, and I've tried to follow the examples that I've been able to find on the internet. My basic setup is as follows: I have a simple class that I want to run in a separate thread; in fact, I want to run 5 instances of this class, each in a separate thread. I have a simple dialog with a button to start each thread, and a button to stop each thread (10 buttons). When I click one of the "start" buttons, a new instance of the test class is created, a new QThread is created, and moveToThread is called to get the test class object to the thread. Also, since I have a couple of other members in the test class that need to move to the thread, I call moveToThread a few additional times with these other items. Note that one of these items is a QUdpSocket, and although this may not make sense, I wanted to make sure that sockets could be moved to a separate thread in this fashion... I haven't tested the use of the socket in the thread at this point.

    Starting of the threads all seems to work fine. When I use the Linux top command to see if the threads are created and running, they show up as expected. The problem occurs when I begin stopping the threads. I randomly (or it appears to be random) get the error described above.

    Class that is to run in a separate thread:

        // Declaration
        class TestClass : public QObject
        {
            Q_OBJECT
        public:
            explicit TestClass(QObject *parent = 0);
            QTimer m_workTimer;
            QUdpSocket m_socket;
        Q_SIGNALS:
            void finished();
        public Q_SLOTS:
            void start();
            void stop();
            void doWork();
        };

        // Implementation
        TestClass::TestClass(QObject *parent) : QObject(parent)
        {
        }

        void TestClass::start()
        {
            connect(&m_workTimer, SIGNAL(timeout()), this, SLOT(doWork()));
            m_workTimer.start(50);
        }

        void TestClass::stop()
        {
            m_workTimer.stop();
            emit finished();
        }

        void TestClass::doWork()
        {
            int j;
            for (int i = 0; i < 10000; i++)
            {
                j = i;
            }
        }

    Inside my main app, code called to start the first thread (similar code exists for each of the other threads):

        mp_thread1 = new QThread();
        mp_testClass1 = new TestClass();
        mp_testClass1->moveToThread(mp_thread1);
        mp_testClass1->m_socket.moveToThread(mp_thread1);
        mp_testClass1->m_workTimer.moveToThread(mp_thread1);

        connect(mp_thread1, SIGNAL(started()), mp_testClass1, SLOT(start()));
        connect(mp_testClass1, SIGNAL(finished()), mp_thread1, SLOT(quit()));
        connect(mp_testClass1, SIGNAL(finished()), mp_testClass1, SLOT(deleteLater()));
        connect(mp_testClass1, SIGNAL(finished()), mp_thread1, SLOT(deleteLater()));
        connect(this, SIGNAL(stop1()), mp_testClass1, SLOT(stop()));

        mp_thread1->start();

    Also inside my main app, this code is called when a stop button is clicked for a specific thread (in this case thread 1):

        emit stop1();

    Sometimes it appears that threads are stopped and destroyed without issue. Other times, I get the error described above. Any guidance would be greatly appreciated. Thanks, Bryan

    Read the article

  • gtk draw "expose-event" and redraw

    - by warem
    I want to use expose-event to draw something, then update or redraw. That is to say, there are a drawing area and a button in a window; when clicking the button, the drawing area should be redrawn accordingly. My problems are:

    1. The following code worked, but it only had a drawing area, no button. If I add the button (cancel the comment for the button), nothing is drawn. What's the reason?

    2. In the following code, if I change gtk_container_add (GTK_CONTAINER (box), canvas); to gtk_box_pack_start(GTK_BOX(box), canvas, FALSE, FALSE, 0);, nothing is drawn. Usually we use gtk_box_pack_start to add something into a box. Why doesn't it work this time?

    3. The function build_ACC_axis refreshes the drawing area and prepares for a new draw. I googled it, but I don't know if it works. Could you please comment on it?

    If the source file is test.c, then compilation is:

        gcc -o test test.c `pkg-config --cflags --libs gtk+-2.0`

    The code is below:

        #include <gtk/gtk.h>
        #include <glib.h>

        static void draw (GdkDrawable *d, GdkGC *gc)
        {
            /* Draw with GDK */
            gdk_draw_line (d, gc, 0, 0, 50, 50);
            gdk_draw_line (d, gc, 50, 50, 50, 150);
            gdk_draw_line (d, gc, 50, 150, 0, 200);
            gdk_draw_line (d, gc, 200, 0, 150, 50);
            gdk_draw_line (d, gc, 150, 50, 150, 150);
            gdk_draw_line (d, gc, 150, 150, 200, 200);
            gdk_draw_line (d, gc, 50, 50, 150, 50);
            gdk_draw_line (d, gc, 50, 150, 150, 150);
        }

        static gboolean expose_cb (GtkWidget *canvas, GdkEventExpose *event, gpointer user_data)
        {
            GdkGC *gc;
            gc = gdk_gc_new (canvas->window);
            draw (canvas->window, gc);
            g_object_unref (gc);
            return FALSE;
        }

        void build_ACC_axis (GtkWidget *button, GtkWidget *widget)
        {
            GdkRegion *region;
            GtkWidget *canvas = g_object_get_data(G_OBJECT(widget), "plat_GA_canvas");
            region = gdk_drawable_get_visible_region(canvas->window);
            gdk_window_invalidate_region(canvas->window, region, TRUE);
            gtk_widget_queue_draw(canvas);
            /* gdk_window_process_updates(canvas->window, TRUE); */
            gdk_region_destroy (region);
        }

        int main (int argc, char **argv)
        {
            GtkWidget *window;
            GtkWidget *canvas, *box, *button;

            gtk_init (&argc, &argv);
            window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
            g_signal_connect (G_OBJECT (window), "destroy", G_CALLBACK (gtk_main_quit), NULL);

            box = gtk_vbox_new(FALSE, 0);
            gtk_container_add (GTK_CONTAINER (window), box);

            canvas = gtk_drawing_area_new ();
            g_object_set_data(G_OBJECT(window), "plat_GA_canvas", canvas);
            /* gtk_box_pack_start(GTK_BOX(box), canvas, FALSE, FALSE, 0); */
            gtk_container_add (GTK_CONTAINER (box), canvas);
            g_signal_connect (G_OBJECT (canvas), "expose-event", G_CALLBACK (expose_cb), NULL);

            /*
            button = gtk_button_new_with_label ("ok");
            gtk_box_pack_start(GTK_BOX(box), button, FALSE, FALSE, 0);
            |+ gtk_container_add (GTK_CONTAINER (box), button); +|
            gtk_signal_connect(GTK_OBJECT(button), "clicked", GTK_SIGNAL_FUNC(build_ACC_axis), window);
            */

            gtk_widget_show_all (window);
            gtk_main ();
        }

    Read the article

  • Rails: (Devise) Two different methods for new users?

    - by neezer
    I have a Rails 3 app with authentication setup using Devise with the registerable module enabled. I want to have new users who sign up using our outside register form to use the full Devise registerable module, which is happening now. However, I also want the admin user to be able to create new users directly, bypassing (I think) Devise's registerable module.

    With registerable disabled, my standard UsersController works as I want it to for the admin user, just like any other Rails scaffold. However, now new users can't register on their own. With registerable enabled, my standard UsersController is never called for the new user action (calling Devise::RegistrationsController instead), and my CRUD actions don't seem to work at all (I get dumped back onto my root page with no new user created and no flash message). Here's the log from the request:

        Started POST "/users" for 127.0.0.1 at 2010-12-20 11:49:31 -0500
          Processing by Devise::RegistrationsController#create as HTML
          Parameters: {"utf8"=>"?", "authenticity_token"=>"18697r4syNNWHfMTkDCwcDYphjos+68rPFsaYKVjo8Y=", "user"=>{"email"=>"[email protected]", "password"=>"[FILTERED]", "password_confirmation"=>"[FILTERED]", "role"=>"manager"}, "commit"=>"Create User"}
          SQL (0.9ms) ...
          User Load (0.6ms) SELECT "users".* FROM "users" WHERE ("users"."id" = 2) LIMIT 1
          SQL (0.9ms) ...
        Redirected to http://test-app.local/
        Completed 302 Found in 192ms

    ... but I am able to register new users through the outside form. How can I get both of these methods to work together, such that my admin user can manually create new users and guest users can register on their own?

    I have my Users controller setup for standard CRUD:

        class UsersController < ApplicationController
          load_and_authorize_resource

          def index
            @users = User.where("id NOT IN (?)", current_user.id) # don't display the current user in the users list; go to account management to edit current user details
          end

          def new
            @user = User.new
          end

          def create
            @user = User.new(params[:user])
            if @user.save
              flash[:notice] = "#{ @user.email } created."
              redirect_to users_path
            else
              render :action => 'new'
            end
          end

          def edit
          end

          def update
            params[:user].delete(:password) if params[:user][:password].blank?
            params[:user].delete(:password_confirmation) if params[:user][:password].blank? and params[:user][:password_confirmation].blank?
            if @user.update_attributes(params[:user])
              flash[:notice] = "Successfully updated User."
              redirect_to users_path
            else
              render :action => 'edit'
            end
          end

          def delete
          end

          def destroy
            redirect_to users_path and return if params[:cancel]
            if @user.destroy
              flash[:notice] = "#{ @user.email } deleted."
              redirect_to users_path
            end
          end
        end

    And my routes setup as follows:

        TestApp::Application.routes.draw do
          devise_for :users

          devise_scope :user do
            get "/login", :to => "devise/sessions#new", :as => :new_user_session
            get "/logout", :to => "devise/sessions#destroy", :as => :destroy_user_session
          end

          resources :users do
            get :delete, :on => :member
          end

          authenticate :user do
            root :to => "application#index"
          end

          root :to => "devise/session#new"
        end

    Read the article

  • How to generate a random unique string with more than 2^30 combinations? I also want to be able to reverse the process. Is this possible?

    - by Yusuf S
    I have a string which contains 3 elements:

    - a 3 digit code (example: SIN, ABD, SMS, etc.)
    - a 1 digit code type (example: 1, 2, 3, etc.)
    - a 3 digit number (example: 500, 123, 345)

    Example strings: SIN1500, ABD2123, SMS3345, etc.

    I want to generate a UNIQUE 10 digit alphanumeric and case sensitive string (only 0-9/a-z/A-Z is allowed), with more than 2^30 (about 1 billion) unique combinations per string supplied. The generated code must follow a particular algorithm so that I can reverse the process. For example:

        public static void main(String[] args) {
            String test = "ABD2123";
            String result = generateData(test);
            System.out.println(generateOutput(test));   // for example, the output of this is: 1jS8g4GDn0
            System.out.println(generateOutput(result)); // the output of this will be ABD2123 (the original string supplied)
        }

    What I wanted to ask is: are there any ideas/examples/libraries in Java that can do this? Or at least any hint on what keyword I should put into Google? I tried googling with the keywords java checksum, rng, security, random number, etc. and also tried looking at some random number solutions (Java SecureRandom, xorshift RNG, java.util.zip's checksums, etc.), but I can't seem to find one. Thanks!

    EDIT: My use case for this program is to generate some kind of unique voucher number to be used by specific customers. The string supplied will contain a 3 digit code for the company ID, a 1 digit code for the voucher type, and a 3 digit number for the voucher nominal. I also tried adding 3 random alphanumerics (so the final string is 7 + 3 = 10 digits). This is what I've done so far, but the result is not very good (only about 100 thousand combinations):

        public static String in = "somerandomstrings";
        public static String out = "someotherrandomstrings";

        public static String encrypt(String kata) throws Exception {
            String result = "";
            String ina = in;
            String outa = out;
            Random ran = new Random();
            Integer modulus = in.length();
            Integer offset = ((Integer.parseInt(Utils.convertDateToString(new Date(), "SS"))) + ran.nextInt(60)) / 2 % modulus;
            result = ina.substring(offset, offset + 1);
            ina = ina + ina;
            ina = ina.substring(offset, offset + modulus);
            result = result + translate(kata, ina, outa);
            return result;
        }

    EDIT: I'm sorry, I forgot to include the "translate" function:

        public static String translate(String kata, String seq1, String seq2) {
            String result = "";
            if (kata != null & seq1 != null & seq2 != null) {
                String[] a = kata.split("");
                for (int j = 1; j < a.length; j++) {
                    String b = a[j];
                    String[] seq1split = seq1.split("");
                    String[] seq2split = seq2.split("");
                    int hint = seq1.indexOf(b) + 1;
                    String sq = "";
                    if (seq1split.length > hint)
                        sq = seq1split[hint];
                    String sq1 = "";
                    if (seq2split.length > hint)
                        sq1 = seq2split[hint];
                    b = b.replace(sq, sq1);
                    result = result + b;
                }
            }
            return result;
        }
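    One common recipe for this kind of reversible, case-sensitive token: pack the 7-character voucher string into an integer, scramble it with a keyed, invertible step, and render it in base 62. Ten base-62 characters give 62^10 ≈ 8.4 × 10^17 combinations, far more than 2^30. A sketch of the idea in Python — the XOR key is an illustrative placeholder, not a secure cipher:

        ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyz" \
                   "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

        def to_base62(n: int, width: int = 10) -> str:
            digits = []
            while n:
                n, r = divmod(n, 62)
                digits.append(ALPHABET[r])
            return "".join(reversed(digits)).rjust(width, "0")

        def from_base62(s: str) -> int:
            n = 0
            for ch in s:
                n = n * 62 + ALPHABET.index(ch)
            return n

        KEY = 0x5A17C3B2D9E4F1  # 56-bit placeholder; XOR is reversible but NOT secure

        def encode_voucher(code: str) -> str:
            packed = int.from_bytes(code.encode("ascii"), "big")  # 7 chars -> < 2^56
            return to_base62(packed ^ KEY)                        # fits in 62^10

        def decode_voucher(token: str) -> str:
            packed = from_base62(token) ^ KEY
            return packed.to_bytes(7, "big").decode("ascii")

        token = encode_voucher("ABD2123")
        print(token, decode_voucher(token))  # 10-char token, then ABD2123

    For real vouchers a proper format-preserving or block cipher would replace the XOR step, but the pack/scramble/base-62 shape stays the same.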

    Read the article

  • Why is my NSURLConnection so slow?

    - by Bama91
    I have set up an NSURLConnection using the guidelines in "Using NSURLConnection" from the Mac OS X Reference Library, creating an NSMutableURLRequest as a POST to a PHP script with a custom body to upload 20 MB of data (see code below). Note that the post body is what it is (line breaks and all) to exactly match an existing desktop implementation.

    When I run this in the iPhone simulator, the post is successful, but it takes an order of magnitude longer than if I run the equivalent code locally on my Mac in C++ (20 minutes vs. 20 seconds, respectively). Any ideas why the difference is so dramatic? I understand that the simulator will give different results than the actual device, but I would expect at least similar results. Thanks

        const NSUInteger kDataSizePOST = 20971520;
        const NSString* kServerBDC = @"WWW.SOMEURL.COM";
        const NSString* kUploadURL = @"http://WWW.SOMEURL.COM/php/test/upload.php";
        const NSString* kUploadFilename = @"test.data";
        const NSString* kUsername = @"testuser";
        const NSString* kHostname = @"testhost";

        srandom(time(NULL));
        NSMutableData *uniqueData = [[NSMutableData alloc] initWithCapacity:kDataSizePOST];
        for (unsigned int i = 0; i < kDataSizePOST; ++i) {
            u_int32_t randomByte = ((random() % 95) + 32);
            [uniqueData appendBytes:(void*)&randomByte length:1];
        }

        NSMutableURLRequest *request = [[NSMutableURLRequest alloc] init];
        [request setURL:[NSURL URLWithString:kUploadURL]];
        [request setHTTPMethod:@"POST"];

        NSString *boundary = @"aBcDd";
        NSString *contentType = [NSString stringWithFormat:@"multipart/form-data; boundary=%@", boundary];
        [request addValue:contentType forHTTPHeaderField:@"Content-Type"];

        NSMutableData *postbody = [NSMutableData data];
        [postbody appendData:[[NSString stringWithFormat:@"--%@\nContent-Size:%d", boundary, [uniqueData length]] dataUsingEncoding:NSUTF8StringEncoding]];
        [postbody appendData:[[NSString stringWithFormat:@"\nContent-Disposition: form-data; name=test; filename=%@", kUploadFilename] dataUsingEncoding:NSUTF8StringEncoding]];
        [postbody appendData:[[NSString stringWithString:@";\nContent-Type: multipart/mixed;\n\r\n"] dataUsingEncoding:NSUTF8StringEncoding]];
        [postbody appendData:[NSData dataWithData:uniqueData]];
        [postbody appendData:[[NSString stringWithFormat:@"--%@\nContent-Size:%d", boundary, [kUsername length]] dataUsingEncoding:NSUTF8StringEncoding]];
        [postbody appendData:[[NSString stringWithFormat:@"\nContent-Disposition: inline; name=Username;\n\r\n%@", kUsername] dataUsingEncoding:NSUTF8StringEncoding]];
        [postbody appendData:[[NSString stringWithFormat:@"--%@\nContent-Size:%d", boundary, [kHostname length]] dataUsingEncoding:NSUTF8StringEncoding]];
        [postbody appendData:[[NSString stringWithFormat:@"\nContent-Disposition: inline; name=Hostname;\n\r\n%@", kHostname] dataUsingEncoding:NSUTF8StringEncoding]];
        [postbody appendData:[[NSString stringWithFormat:@"\n--%@--", boundary] dataUsingEncoding:NSUTF8StringEncoding]];
        [request setHTTPBody:postbody];

        NSURLConnection *theConnection = [[NSURLConnection alloc] initWithRequest:request delegate:self];
        if (theConnection) {
            _receivedData = [[NSMutableData data] retain];
        } else {
            // Inform the user that the connection failed.
        }
        [request release];
        [uniqueData release];

    Read the article

  • How to develop an English .com domain value rating algorithm?

    - by Tom
    I've been thinking about an algorithm that should roughly be able to guess the value of an English .com domain in most cases. For this to work I want to perform tests that consider the strengths and weaknesses of an English .com domain. A simple point-based system is what I had in mind, where each domain property can be given a certain weight to factor in its importance. I had these properties in mind:

    Domain character length. E.g. initially 20 points are added. If the domain has 4 or fewer characters, no points are subtracted. For each extra character, one or more points are subtracted on an exponential basis (the more characters, the higher the penalty).

    Domain characters. E.g. initially 20 points are added. If the domain is only alphabetic, no points are subtracted. For each non-alphabetic character, X points are subtracted (an exponential increase again).

    Domain name words. Scans through a big offline English database, including non-formal speech; e.g. words like "tweet" should be recognized. Question 1: where can I get a modern list of English words for use in such an application? Are these lists available for free? Are there lists like these with non-formal words? The more words found per character, the more points are added. So, a domain with a lot of characters will still not get a lot of points.

    Word hype-level. I believe this is a tricky one, but this should be the way to differentiate perfect-but-boring domains from perfect-and-interesting domains. For example, the following domain is probably not that valuable: www.peanutgalaxy.com. The algorithm should identify that peanuts and galaxies are not very popular topics on the web. This is just an example. On the other side, a domain like www.shopdeals.com should ring a bell in the hype test, as shops and deals are quite popular on the web. My initial thought would be to see how often these keywords are referenced on the web, preferably with some database. Question 2: is this logic flawed, or does this hype-level test have merit? Question 3: are such "hype databases" available? Or is there anything else that could work offline? The problem with e.g. a query to Google is that it requires a lot of requests due to the many domains to be tested.

    Domain name spelling mistakes. Domains like "freemoneyz.com" etc. are generally (notice I am making a lot of assumptions in this post, but that's necessary I believe) not valuable due to the spelling mistakes. Question 4: are there any offline APIs available to check for spelling mistakes, preferably in javascript, or some database that I can interact with myself? Or should a word list help here as well?

    Use of consonants, vowels, etc. A domain that is easy to pronounce (e.g. Google) is usually much more valuable than one that is not (e.g. Gkyld). Question 5: how does one test for such pronounceability? Do you check for consonants, vowels, etc.? What does a valuable domain have? Has there been any work in this field, and where should I look?

    That is what I came up with, which leads me to my final two questions. Question 6: can you think of any more English .com domain strengths or weaknesses? Which? How would you implement these? Question 7: do you believe this idea has any merit at all, or am I too naive? Anything I should know, read or hear about? Suggestions/comments? Thanks!
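    To make the point system concrete, here is a minimal skeleton of the scorer described above, in Python. The weights, penalty curves, and four-word dictionary are illustrative assumptions only, not recommendations (removesuffix needs Python 3.9+):

        ENGLISH_WORDS = {"shop", "deals", "peanut", "galaxy"}  # stand-in for a real word list

        def score_length(name: str) -> float:
            # 20 points, exponential penalty for every character past the 4th
            extra = max(0, len(name) - 4)
            return max(0.0, 20.0 - (1.3 ** extra - 1))

        def score_charset(name: str) -> float:
            # 20 points, exponential penalty per non-alphabetic character
            non_alpha = sum(1 for c in name if not c.isalpha())
            return max(0.0, 20.0 - (2.0 ** non_alpha - 1))

        def score_words(name: str) -> float:
            # naive greedy segmentation: reward characters covered by known words
            covered, i = 0, 0
            while i < len(name):
                for j in range(len(name), i, -1):
                    if name[i:j] in ENGLISH_WORDS:
                        covered += j - i
                        i = j
                        break
                else:
                    i += 1
            return 20.0 * covered / len(name)

        def rate(domain: str) -> float:
            name = domain.removesuffix(".com")
            return score_length(name) + score_charset(name) + score_words(name)

        print(round(rate("shopdeals.com"), 1))   # scores well: short-ish, alphabetic, two real words
        print(round(rate("freem0neyz.com"), 1))  # penalised: a digit, no dictionary words

    The hype-level and pronounceability tests from the post would slot in as two more score_* functions summed inside rate().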

    Read the article

  • Dot Net Nuke module works in "Edit" mode but not for "View": cache problem?

    - by Godeke
    I have a DNN task that simply runs some Javascript to compute a price based on a few input fields. This module works fine on our production site, but we had a company do a skin for us to improve the look of the site, and the module fails under this new system. (DNN 05.06.00 (459), although it was 5.5 prior... I updated in a futile hope that it was a bug in the old revision.)

    What is incredibly odd about this is that the module works fine when I'm logged in to DNN and using the Edit mode as an administrator. In this case the small snippet of JavaScript loads fine and filling the fields results in a price. On the other hand, if I click "View" (or more importantly, if I'm not logged in at all) the page loads a cached copy. Even odder, I have found the cache files in \Portals\2\Cache\Pages are generated and then only the cached data is being used. When the cached copy is loaded, the JavaScript doesn't appear (it is normally created via a Page.ClientScript.RegisterClientScriptBlock()). Additionally, the button which posts the data to the server doesn't execute any of the server side code (confirmed with a debugger) but instead just reloads the cached copy.

    If I manually delete the files in \Portals\2\Cache\Pages then everything works properly, but I have to do so after every page load: failing to do so simply loads the page as it was last generated, repeatedly. Resetting the application (either via the UI or editing web.config) doesn't change this, and clearing the cache from the Host Settings page doesn't actually clear these cached pages.

    I'm guessing that Edit mode bypasses the cache in some way, but I have gone as far as turning off all caching on the site (which is horrible for performance) and the cached version is still loaded. Has anyone seen anything like this? Shouldn't clearing the cache clear the files (I'm using the File provider for caching)? Shouldn't even a cached page go back to the server if the user posts back?

    EDIT: I should point out that permissions don't appear to be a problem on the cache directory... other pages' cached output is deleted from this folder; just this page has this issue.

    EDIT 2: Clarifying some settings and conditions which I didn't provide. First, this module works fine in production under DNN 5.6.0. In our test environment with the consulting company's changes it fails (the changes are skin and page layout only in theory: the module source itself verifies as unchanged). All cache settings and the like have been verified the same between the two, and we only resorted to setting the module cache to 0 and -1 (and disabling the test site's cache entirely) when we couldn't find another cause for the problem. I have watched the cache work correctly on many other pages in test: there is something about this page that is causing the problem. We have punted and are creating an installable skin based on the consultant's work, as I suspect they have somehow corrupted the DNN install (database side, I think).

    Read the article

  • Select dropdown with fixed width cutting off content in IE

    - by aaandre
    The issue: some of the items in the select require more than the specified width of 145px in order to display fully.

    Firefox behavior: clicking on the select reveals the dropdown elements list, adjusted to the width of the longest element.

    IE6 & IE7 behavior: clicking on the select reveals the dropdown elements list restricted to 145px width, making it impossible to read the longer elements.

    The current UI requires us to fit this dropdown in 145px and have it host items with longer descriptions. Any advice on resolving the issue with IE? The top element should remain 145px wide even when the list is expanded. Thank you!

    The css:

        select.center_pull {
            background:#eeeeee none repeat scroll 0 0;
            border:1px solid #7E7E7E;
            color:#333333;
            font-size:12px;
            margin-bottom:4px;
            margin-right:4px;
            margin-top:4px;
            width:145px;
        }

    Here's the select input code (there's no definition for the backend_dropbox style at this time):

        <select id="select_1" class="center_pull backend_dropbox" name="select_1">
            <option value="-1" selected="selected">Browse options</option>
            <option value="-1">------------------------------------</option>
            <option value="224">Option 1</option>
            <option value="234">Longer title for option 2</option>
            <option value="242">Very long and extensively descriptive title for option 3</option>
        </select>

    Full html page in case you want to quickly test in a browser:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>dropdown test</title>
        <style type="text/css">
        <!--
        select.center_pull {
            background:#eeeeee none repeat scroll 0 0;
            border:1px solid #7E7E7E;
            color:#333333;
            font-size:12px;
            margin-bottom:4px;
            margin-right:4px;
            margin-top:4px;
            width:145px;
        }
        -->
        </style>
        </head>

        <body>
        <p>Select width test</p>
        <form id="form1" name="form1" method="post" action="">
            <select id="select_1" class="center_pull backend_dropbox" name="select_1">
                <option value="-1" selected="selected">Browse options</option>
                <option value="-1">------------------------------------</option>
                <option value="224">Option 1</option>
                <option value="234">Longer title for option 2</option>
                <option value="242">Very long and extensively descriptive title for option 3</option>
            </select>
        </form>
        </body>
        </html>

    Read the article

  • Want to receive dynamic-length data from a message queue in IPC?

    - by user1089679
    Here I have to send and receive dynamic data using a SysV message queue, so in the structure field I have a dynamically allocated char *, because its size may vary. How can I receive this type of message on the receiver side? Please let me know how I can send a dynamic length of data with a message queue. I am getting a problem with this; I posted my code below.

    send.c:

        /* filename : send.c
         * To compile : gcc send.c -o send
         */
        #include <stdio.h>
        #include <stdlib.h>
        #include <errno.h>
        #include <string.h>
        #include <sys/types.h>
        #include <sys/ipc.h>
        #include <sys/msg.h>

        struct my_msgbuf {
            long mtype;
            char *mtext;
        };

        int main(void)
        {
            struct my_msgbuf buf;
            int msqid;
            key_t key;
            static int count = 0;
            char temp[5];
            int run = 1;

            if ((key = ftok("send.c", 'B')) == -1) {
                perror("ftok");
                exit(1);
            }
            printf("send.c Key is = %d\n", key);

            if ((msqid = msgget(key, 0644 | IPC_CREAT)) == -1) {
                perror("msgget");
                exit(1);
            }

            printf("Enter lines of text, ^D to quit:\n");
            buf.mtype = 1; /* we don't really care in this case */
            int ret = -1;

            while (run) {
                count++;
                buf.mtext = malloc(50);
                strcpy(buf.mtext, "Hi hello test message here");
                snprintf(temp, sizeof(temp), "%d", count);
                strcat(buf.mtext, temp);
                int len = strlen(buf.mtext);

                /* ditch newline at end, if it exists */
                if (buf.mtext[len - 1] == '\n')
                    buf.mtext[len - 1] = '\0';

                if (msgsnd(msqid, &buf, len + 1, IPC_NOWAIT) == -1) /* +1 for '\0' */
                    perror("msgsnd");

                if (count == 100)
                    run = 0;
                usleep(1000000);
            }

            if (msgctl(msqid, IPC_RMID, NULL) == -1) {
                perror("msgctl");
                exit(1);
            }
            return 0;
        } /* main */

    receive.c:

        /* filename : receive.c
         * To compile : gcc receive.c -o receive
         */
        #include <stdio.h>
        #include <stdlib.h>
        #include <errno.h>
        #include <sys/types.h>
        #include <sys/ipc.h>
        #include <sys/msg.h>

        struct my_msgbuf {
            long mtype;
            char *mtext;
        };

        int main(void)
        {
            struct my_msgbuf buf;
            int msqid;
            key_t key;

            if ((key = ftok("send.c", 'B')) == -1) { /* same key as send.c */
                perror("ftok");
                exit(1);
            }

            if ((msqid = msgget(key, 0644)) == -1) { /* connect to the queue */
                perror("msgget");
                exit(1);
            }

            printf("test: ready to receive messages, captain.\n");

            for (;;) { /* receive never quits! */
                buf.mtext = malloc(50);
                if (msgrcv(msqid, &buf, 50, 0, 0) == -1) {
                    perror("msgrcv");
                    exit(1);
                }
                printf("test: \"%s\"\n", buf.mtext);
            }
            return 0;
        }
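    For anyone hitting the same wall: msgsnd copies the bytes of the structure after mtype, so what crosses the queue here is the pointer value stored in mtext, not the text it points to — the receiver gets an address that means nothing in its own process. The payload has to travel inside the message buffer itself (in C, typically a char array or flexible array member in the struct), usually with a length prefix for variable-size data. The framing principle, sketched language-neutrally with Python's struct module (not SysV-specific code):

        import struct

        HEADER = "=lI"  # native long mtype + unsigned int payload length

        def pack_message(mtype: int, text: bytes) -> bytes:
            # the payload travels inside the buffer: [type][length][bytes], no pointers
            return struct.pack(HEADER, mtype, len(text)) + text

        def unpack_message(buf: bytes):
            mtype, length = struct.unpack_from(HEADER, buf)
            offset = struct.calcsize(HEADER)
            return mtype, buf[offset:offset + length]

        msg = pack_message(1, b"Hi hello test message here42")
        print(unpack_message(msg))  # (1, b'Hi hello test message here42')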

    Read the article

  • SSRS 2008: printing a single page renders differently for print

    - by user270437
    I have a problem with SSRS 2008 reports rendering differently on the reporting server than the way they render when you print the report. I'm trying to figure out how to print a single page and have the print show the same records as I see in the report on screen.

    As a test, I created a simple report with no headers or footers and just added a Tablix table to display the records (no groupings). My data set for this test displays 2 ¼ pages of records when I deploy it to our Reporting Services server and run it. If I click the print icon and preview, the report is 2 ¾ pages. I haven't found anything searching on this, so it makes me think it is something simple I'm missing. Basically, I want the report to render the same records on each page in Report Manager as it does when it prints. How do I accomplish this?

    (In response to the answer posted by Chris)... If that is the case then it is disappointing. Customers are accustomed to WYSIWYG and will have a hard time understanding that; I imagine we will be getting a lot of support calls. This still leaves an issue. I tried using print preview and could not find any way to single out a page. If I select a page up front to print, or preview it, it renders differently, so I get different records. And if I preview the entire document, I can only print the entire document.

    You mentioned the Excel render; we have customers that will want that also. The problem I have found with Excel exports is that even a basic report winds up merging some cells, and that messes up sorting. I'm going to try your tip about grouping to see if I can get a clean export to a page. It would have been nice if they had created a property for certain controls, like the Tablix table, called "ExcelSheet". Then all you would have to do is give it a name and it would create a new sheet for each named control, the name becoming the sheet title. Thanks for the information you supplied; it is very useful as I'm new to SSRS. If you know how I can preview in print render and select individual pages to print from the render, let me know.

    Update 02/19/2010: After testing this more, I now realize it is just a bad design of Report Manager's print driver, or a limitation because it is server based. The options work differently from Windows apps' print drivers. But I did find a workaround. Here is the test I performed comparing Excel to Report Manager.

    I bring up a report that will render more than 1 page when printed. I then export to Excel; in Excel I select print preview. I can navigate the pages in preview and then select a single page, like page 3. I can then print just page 3 without leaving print preview, and it prints just like it rendered. I cannot do this using print in Report Manager. If I select print preview in Report Manager and then try to print while in preview, it always prints the entire document. However, if I close out of print preview, I can then select page 3 and print it as rendered. It is just one additional step once you know what to do, but it took some time to figure it out.

    Read the article

  • XML Outputting - PHP vs JS vs Anything Else?

    - by itsphil
    Hi everyone, I am working on developing a travel website which uses XML APIs to get the data. However, I am relatively new to XML and outputting it. I have been experimenting with using PHP to output a test XML file, but currently the furthest I've gotten is to output only a few records. As the question states, I need to know which technology will be best for this project. Below I've included some points to take into consideration:

    - The website is going to be a large, heavy-traffic site (expedia/lastminute size).
    - My skillset is PHP (intermediate/high skilled) & Javascript (intermediate/high skilled).

    Below is an example of the XML that the API is outputting:

        <?xml version="1.0"?>
        <response method="###" success="Y">
            <errors> </errors>
            <request>
                <auth password="test" username="test" />
                <method action="###" sitename="###" />
            </request>
            <results>
                <line id="6" logourl="###" name="Line 1" smalllogourl="###">
                    <ships>
                        <ship id="16" name="Ship 1" />
                        <ship id="453" name="Ship 2" />
                        <ship id="468" name="Ship 3" />
                        <ship id="356" name="Ship 4" />
                    </ships>
                </line>
                <line id="63" logourl="###" name="Line 2" smalllogourl="###">
                    <ships>
                        <ship id="492" name="Ship 1" />
                        <ship id="454" name="Ship 2" />
                        <ship id="455" name="Ship 3" />
                        <ship id="421" name="Ship 4" />
                        <ship id="401" name="Ship 5" />
                        <ship id="404" name="Ship 6" />
                        <ship id="405" name="Ship 7" />
                        <ship id="406" name="Ship 8" />
                        <ship id="407" name="Ship 9" />
                        <ship id="408" name="Ship 10" />
                    </ships>
                </line>
                <line id="41" logourl="###">
                    <ships>
                        <ship id="229" name="Ship 1" />
                        <ship id="230" name="Ship 2" />
                        <ship id="231" name="Ship 3" />
                        <ship id="445" name="Ship 4" />
                        <ship id="570" name="Ship 5" />
                        <ship id="571" name="Ship 6" />
                    </ships>
                </line>
            </results>
        </response>

    If possible, when suggesting which technology is best for this project, could you provide some getting-started guides or any information? It would be very much appreciated. Thank you for taking the time to read this.
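    Since the question leaves the door open to other options, it is worth seeing how little code a server-side tree-building API needs; most stacks (PHP's DOMDocument included) look much like this Python ElementTree sketch, with made-up line and ship data:

        import xml.etree.ElementTree as ET

        # made-up line/ship data mirroring the API sample above
        lines = {"Line 1": ["Ship 1", "Ship 2", "Ship 3"],
                 "Line 2": ["Ship 1", "Ship 2"]}

        response = ET.Element("response", method="###", success="Y")
        ET.SubElement(response, "errors")
        results = ET.SubElement(response, "results")

        for line_id, (name, ships) in enumerate(lines.items(), start=1):
            line = ET.SubElement(results, "line", id=str(line_id), name=name)
            container = ET.SubElement(line, "ships")
            for ship_id, ship in enumerate(ships, start=1):
                ET.SubElement(container, "ship", id=str(ship_id), name=ship)

        print(ET.tostring(response, encoding="unicode"))

    Whatever the language, generating the XML on the server and treating the browser-side Javascript as a consumer tends to scale better for a high-traffic site than assembling XML in the client.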

    Read the article

  • Windows App. Thread Aborting Issue

    - by Patrick
    I'm working on an application that has to make specific decisions based on files that are placed into a folder being watched by a file watcher. Part of this decision-making process involves renaming files before moving them off to another folder to be processed. Since I'm working with files of all different sizes, I created an object that checks the file in a separate thread to verify that it is "available", and when it is, it fires an event. When I run the rename code from inside this available event, it works.

        public void RenameFile_Test()
        {
            string psFilePath = @"C:\File1.xlsx";
            tgt_File target = new FileObject(psFilePath);
            target.FileAvailable += new FileEventHandler(OnFileAvailable);
            target.FileUnAvailable += new FileEventHandler(OnFileUnavailable);
        }

        private void OnFileAvailable(object source, FileEventArgs e)
        {
            ((FileObject)source).RenameFile(@"C:\File2.xlsx");
        }

    The problem I'm running into is that when the extensions are different between the source file and the rename-to file, I make a call to a conversion factory that returns a factory object based on the type of conversion and then converts the file accordingly before doing the rename. When I run that particular piece of code in a unit test it works: the factory object is returned, and the conversion happens correctly. But when I run it within the process, I get up to the

        moExcelApp = new Application();

    part of converting an .xls or .xlsx to a .csv, and I get a "Thread was being Aborted" error. Any thoughts?

    Update: Here is a bit more information and a map of how the application works currently.

    The client application runs the FSW. The OnCreated event creates a FileObject, passing in the path of the file. On construction the file is validated: if the file exists, then

        Thread toAvailableCheck = new Thread(new ThreadStart(AvailableCheck));
        toAvailableCheck.Start();

    The AvailableCheck method repeatedly tries to open a StreamReader to the file until the reader is either created or the number of attempts times out. If the reader is opened, it fires the FileAvailable event; if not, it fires the FileUnAvailable event, passing back itself in the event. The client application is wired to catch those events from inside the OnCreated event of the FSW. The OnFileAvailable method then calls the rename functionality, which contains the Excel interop call. If the file is being renamed (not converted, extensions stay the same), it does a move to change the name from the old file name to the new; if it's a conversion, it runs a conversion factory object which returns the correct type of conversion based on the extensions of the source file and the destination file name. If it is a simple rename, it works without a problem. If it's a conversion (which is the XLS-to-CSV object that is returned as part of the factory), the very first thing it does is create a new Application object. That is where the application bombs. When I test the factory and conversion/rename process outside of the thread, in its own unit test, the process works without a problem.

    Update: I tested the Excel interop inside a thread by doing this:

        [TestMethod()]
        public void ExcelInteropTest()
        {
            Thread toExcelInteropThreadTest = new Thread(new ThreadStart(Instantiate_App));
            toExcelInteropThreadTest.Start();
        }

        private void Instantiate_App()
        {
            Application moExcelApp = new Application();
            moExcelApp.Quit();
        }

    And on the line where the Application is instantiated I got the 'A first chance exception of type System.Threading.ThreadAbortException' error. So I added

        toExcelInteropThreadTest.SetApartmentState(ApartmentState.MTA);

    after the thread instantiation and before the thread start call, and still got the same error. I'm getting the notion that I'm going to have to reconsider the design.

    Read the article

  • Java JNI leak in C++ process

    - by user662056
    Hi all. I am a beginner in Java. My problem is: I am calling a Java class's method from C++. For this I am using JNI. Everything works correctly, but I have some memory LEAKS in the C++ program's process... So I made a simple example:

    1. I create a Java machine (jint res = JNI_CreateJavaVM(&jvm, (void**)&env, &vm_args);).
    2. Then I take a pointer to a Java class (jclass cls = env->FindClass("test_jni");).
    3. After that I create a Java class object by calling the constructor (testJavaObject = env->NewObject(cls, testConstruct);). AT THIS very moment 10 MB of memory is allocated in the C++ program's process.
    4. Next I delete the class, the object, and the Java machine. AT THIS very moment the 10 MB of memory is not freed.

    Below are a few lines of code of the C++ program:

        void main()
        {
            {
                // Env
                JNIEnv *env;
                // java virtual machine
                JavaVM *jvm;
                JavaVMOption* options = new JavaVMOption[1];
                // class paths
                options[0].optionString = "-Djava.class.path=C:/Sun/SDK/jdk/lib;D:/jms_test/java_jni_leak;";
                // other options
                JavaVMInitArgs vm_args;
                vm_args.version = JNI_VERSION_1_6;
                vm_args.options = options;
                vm_args.nOptions = 1;
                vm_args.ignoreUnrecognized = false;

                // alloc part of memory (for test) before CreateJavaVM
                char* testMem0 = new char[1000];
                for (int i = 0; i < 1000; ++i) testMem0[i] = 'a';

                // create java VM
                jint res = JNI_CreateJavaVM(&jvm, (void**)&env, &vm_args);

                // alloc part of memory (for test) after CreateJavaVM
                char* testMem1 = new char[1000];
                for (int i = 0; i < 1000; ++i) testMem1[i] = 'b';

                // find the Java class
                jclass cls = env->FindClass("test_jni");
                // Id of the class constructor
                jmethodID testConstruct = env->GetMethodID(cls, "<init>", "()V");
                // The Java Object
                // Calling the constructor allocates 10 MB of memory in the c++ process
                jobject testJavaObject = env->NewObject(cls, testConstruct);

                // function DeleteLocalRef
                // At this very moment the memory is not freed
                env->DeleteLocalRef(testJavaObject);
                env->DeleteLocalRef(cls);

                // 1!!!!!!!!!!!!!
                res = jvm->DestroyJavaVM();

                delete[] testMem0;
                delete[] testMem1;
                // At this very moment the memory is not freed.
            }
            int gg = 0;
        }

    The Java class (it just allocates some memory):

        import java.util.*;

        public class test_jni
        {
            ArrayList<String> testStringList;

            test_jni()
            {
                System.out.println("start constructor");
                testStringList = new ArrayList<String>();
                for (int i = 0; i < 1000000; ++i)
                {
                    testStringList.add("TEEEEEEEEEEEEEEEEST");
                }
            }
        }

    Process memory view after creating the JavaVM and the Java object (testMem0 and testMem1 are the test blocks allocated by C++):

        ************** testMem0
        ************** JNI_CreateJavaVM
        ************** testMem1
        ************** jobject testJavaObject = env->NewObject(cls, testConstruct);
        **************

    Process memory view after destroying the JavaVM and deleting the ref to the Java object (testMem0 and testMem1 are deleted too):

        ************** JNI_CreateJavaVM
        ************** jobject testJavaObject = env->NewObject(cls, testConstruct);
        **************

    So testMem0 and testMem1 are deleted, but the JavaVM and the Java object are not... What am I doing wrong, and how can I free memory in the C++ program's process?

    Read the article

  • Get backreference values and modify these values

    - by roasted
    Could you please explain why I'm not able to get the values of backreferences from a matched regex result and apply some modification to these values before the effective replacement?

    The expected result is replacing, for example, the string ".coord('X','Y')" by "X * Y". But if X is above some value, divide this value by 2 and then use this new value in the replacement.

    Here is the code I'm currently testing. See /*>>1<<*/, /*>>2<<*/ and /*>>3<<*/: this is where I'm stuck! I would like to be able to apply a modification to backreferences before the replacement, depending on the backreference values. The difference between /*>>2<<*/ and /*>>3<<*/ is just the self-calling anonymous function param. The method at /*>>2<<*/ is the expected working solution as I understand it. But strangely, the replacement is not working correctly, replacing by the alias $1 * $2 and not by value...? You can test the jsfiddle.

        // string to test ".coord('125','255')"
        // array of regex pattern and replacement
        // just one for the example
        // for this example, a pattern matching alphanumerics is not necessary (only decimals in coord) but keep it as is
        var regexes = [
            // FORMAT is array of [PATTERN, REPLACEMENT]
            /* .coord("X","Y") */
            [/\.coord\(['"]([\w]+)['"],['"]?([\w:\.\\]+)['"]?\)/g, '$1 * $2']
        ];

        function testReg(inputText, $output) {
            // using regex
            for (var i = 0; i < regexes.length; i++) {
                /*>>1<<*/ // this one works as usual but doesn't let me get backreference values
                $output.val(inputText.replace(regexes[i][0], regexes[i][1]));

                /*>>2<<*/ // this one should work as I understand it
                $output.val(inputText.replace(regexes[i][0], function(match, $1, $2, $3, $4) {
                    $1 = checkReplace(match, $1, $2, $3, $4);
                    // here I want to use the modified $1 value in the replacement
                    return regexes[i][1];
                }));

                /*>>3<<*/ // this one is just a test by self-calling anonymous function
                $output.val(inputText.replace(regexes[i][0], function(match, $1, $2, $3, $4) {
                    $1 = checkReplace(match, $1, $2, $3, $4);
                    // here I want to use the modified $1 value in the replacement
                    return regexes[i][1];
                }()));

                inputText = $output.val();
            }
        }

        function checkReplace(match, $1, $2, $3, $4) {
            console.log(match + ':::' + $1 + ':::' + $2 + ':::' + $3 + ':::' + $4);
            // HERE I should be able, if let's say $1 > 200, to divide it by 2
            // and then return the $1 value
            if ($1 > 200) $1 = parseInt($1 / 2);
            return $1;
        }

    I'm sure I'm missing something, but I cannot get it! Thanks for your help, regards.

    EDIT — WORKING METHOD: Finally got it. As mentioned by Eric: the key thing is that the function returns the literal text to substitute, not a string which is parsed for backreferences. JSFIDDLE. So, the complete working code (please note the pattern replacement will change for each matched pattern, and speed optimisation is not an issue here, so I will keep it like this):

        $('#btn').click(function() {
            testReg($('#input').val(), $('#output'));
        });

        // array of regex pattern and replacement
        // just one for the example
        var regexes = [
            // FORMAT is array of [PATTERN, REPLACEMENT]
            /* .coord("X","Y") */
            [/\.coord\(['"]([\w]+)['"],['"]?([\w:\.\\]+)['"]?\)/g, '$1 * $2']
        ];

        function testReg(inputText, $output) {
            // using regex
            for (var i = 0; i < regexes.length; i++) {
                $output.val(inputText.replace(regexes[i][0], function(match, $1, $2, $3, $4) {
                    var checkedValues = checkReplace(match, $1, $2, $3, $4);
                    $1 = checkedValues[0];
                    $2 = checkedValues[1];
                    regexes[i][1] = regexes[i][1].replace('$1', $1).replace('$2', $2);
                    return regexes[i][1];
                }));
                inputText = $output.val();
            }
        }

        function checkReplace(match, $1, $2, $3, $4) {
            console.log(match + ':::' + $1 + ':::' + $2 + ':::' + $3 + ':::' + $4);
            if ($1 > 200) $1 = parseInt($1 / 2);
            if ($2 > 200) $2 = parseInt($2 / 2);
            return [$1, $2];
        }
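    The callback technique that finally worked above is not jQuery- or JavaScript-specific — most regex engines accept a function whose return value is used as the literal substitution text. For comparison, the same .coord example as a runnable Python sketch using re.sub with a replacement function:

        import re

        PATTERN = re.compile(r"\.coord\('(\d+)','(\d+)'\)")

        def repl(match):
            x, y = int(match.group(1)), int(match.group(2))
            if x > 200:          # same rule as above: halve values over 200
                x //= 2
            if y > 200:
                y //= 2
            return f"{x} * {y}"  # the literal text to substitute, no $1/$2 parsing

        print(re.sub(PATTERN, repl, ".coord('125','255')"))  # -> 125 * 127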

    Read the article

  • Testing shared memory, strange things happen

    - by barfatchen
    I have 2 programs, compiled with GCC 4.1.2, running on RedHat 5.5. It is a simple job to test shared memory. shmem1.c is as follows:

        #define STATE_FILE "/program.shared"
        #define NAMESIZE 1024
        #define MAXNAMES 100

        typedef struct
        {
            char name[MAXNAMES][NAMESIZE];
            int heartbeat;
            int iFlag;
        } SHARED_VAR;

        int main(void)
        {
            int first = 0;
            int shm_fd;
            static SHARED_VAR *conf;

            if ((shm_fd = shm_open(STATE_FILE, (O_CREAT | O_EXCL | O_RDWR), (S_IREAD | S_IWRITE))) > 0)
            {
                first = 1; /* We are the first instance */
            }
            else if ((shm_fd = shm_open(STATE_FILE, (O_CREAT | O_RDWR), (S_IREAD | S_IWRITE))) < 0)
            {
                printf("Could not create shm object. %s\n", strerror(errno));
                return errno;
            }

            if ((conf = mmap(0, sizeof(SHARED_VAR), (PROT_READ | PROT_WRITE), MAP_SHARED, shm_fd, 0)) == MAP_FAILED)
            {
                return errno;
            }

            if (first)
            {
                for (idx = 0; idx < 1000000000; idx++)
                {
                    conf->heartbeat = conf->heartbeat + 1;
                }
            }
            printf("conf->heartbeat=(%d)\n", conf->heartbeat);
            close(shm_fd);
            shm_unlink(STATE_FILE);
            exit(0);
        } // main

    And shmem2.c is as follows:

        #define STATE_FILE "/program.shared"
        #define NAMESIZE 1024
        #define MAXNAMES 100

        typedef struct
        {
            char name[MAXNAMES][NAMESIZE];
            int heartbeat;
            int iFlag;
        } SHARED_VAR;

        int main(void)
        {
            int first = 0;
            int shm_fd;
            static SHARED_VAR *conf;

            if ((shm_fd = shm_open(STATE_FILE, (O_RDWR), (S_IREAD | S_IWRITE))) < 0)
            {
                printf("Could not create shm object. %s\n", strerror(errno));
                return errno;
            }
            ftruncate(shm_fd, sizeof(SHARED_VAR));

            if ((conf = mmap(0, sizeof(SHARED_VAR), (PROT_READ | PROT_WRITE), MAP_SHARED, shm_fd, 0)) == MAP_FAILED)
            {
                return errno;
            }

            int idx;
            for (idx = 0; idx < 1000000000; idx++)
            {
                conf->heartbeat = conf->heartbeat + 1;
            }
            printf("conf->heartbeat=(%d)\n", conf->heartbeat);
            close(shm_fd);
            exit(0);
        }

    After compiling:

        gcc shmem1.c -lpthread -lrt -o shmem1.exe
        gcc shmem2.c -lpthread -lrt -o shmem2.exe

    and running both programs almost at the same time in 2 terminals:

        [test]$ ./shmem1.exe
        First creation of the shm. Setting up default values
        conf->heartbeat=(840825951)

        [test]$ ./shmem2.exe
        conf->heartbeat=(1215083817)

    I feel confused!! Since shmem1.c loops 1,000,000,000 times, how can it be possible to get an answer like 840,825,951? When I run shmem1.exe and shmem2.exe this way, most of the results show conf->heartbeat larger than 1,000,000,000, but seldom and randomly I see a result where conf->heartbeat is less than 1,000,000,000, either in shmem1.exe or in shmem2.exe!! If I run shmem1.exe alone, it always prints 1,000,000,000. My question is: what causes conf->heartbeat=(840825951) in shmem1.exe?

    Update: Although I am not sure, I think I have figured out what is going on. Suppose shmem1.exe has run 10 increments, so conf->heartbeat = 10; at this moment shmem1.exe takes a rest, and when it comes back it reads conf->heartbeat = 8 from shared memory, so it continues from 8. Why is conf->heartbeat = 8? I think it is because shmem2.exe updated the shared memory data to 8, and shmem1.exe did not write 10 back to shared memory before it took a rest... that is just my theory... I don't know how to prove it!!
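    The poster's closing theory is exactly right and easy to demonstrate: conf->heartbeat = conf->heartbeat + 1 is a read-modify-write, so two unsynchronized writers overwrite each other's updates and the final count falls short of the total number of increments. The same lost-update effect, reproduced with a shared counter across two processes in a few lines of Python (an illustrative demo, not SysV shared memory):

        from multiprocessing import Process, Value

        def bump(counter, n):
            for _ in range(n):
                counter.value += 1   # read-modify-write: not atomic without a lock

        if __name__ == "__main__":
            counter = Value("i", 0, lock=False)  # raw shared int, deliberately unlocked
            procs = [Process(target=bump, args=(counter, 500_000)) for _ in range(2)]
            for p in procs:
                p.start()
            for p in procs:
                p.join()
            # Almost always prints less than 1000000: updates were lost,
            # exactly like the heartbeat counts above.
            print(counter.value)

    In the C programs, guarding the increment with a process-shared mutex or semaphore (or using an atomic increment) would make the counts come out exact.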

    Read the article

  • mysql connect error issue

    - by Alex
    I have a PHP page which updates a MySQL DB. I don't understand why my following PHP code says "Could not update marker. No database selected". Strange!! Can you please tell me why it shows this error message? Thanks.

    PHP code:

        <?php
        // database settings
        $db_username = 'root';
        $db_password = '';
        $db_name = 'parkool';
        $db_host = 'localhost';

        // mysqli
        $mysqli = new mysqli($db_host, $db_username, $db_password, $db_name);
        if (mysqli_connect_errno()) {
            header('HTTP/1.1 500 Error: Could not connect to db!');
            exit();
        }

        ################ Save & delete markers #################
        if ($_POST) // run only if there's post data
        {
            // make sure request is coming from Ajax
            $xhr = $_SERVER['HTTP_X_REQUESTED_WITH'] == 'XMLHttpRequest';
            if (!$xhr) {
                header('HTTP/1.1 500 Error: Request must come from Ajax!');
                exit();
            }

            // get marker position and split it for database
            $mLatLang = explode(',', $_POST["latlang"]);
            $mLat = filter_var($mLatLang[0], FILTER_VALIDATE_FLOAT);
            $mLng = filter_var($mLatLang[1], FILTER_VALIDATE_FLOAT);
            $mName = filter_var($_POST["name"], FILTER_SANITIZE_STRING);
            $mAddress = filter_var($_POST["address"], FILTER_SANITIZE_STRING);
            $mId = filter_var($_POST["id"], FILTER_SANITIZE_STRING);

            /*
            $result = mysql_query("SELECT id FROM test.markers WHERE test.markers.lat=$mLat AND test.markers.lng=$mLng");
            if (!$result) {
                echo 'Could not run query: ' . mysql_error();
                exit;
            }
            $row = mysql_fetch_row($result);
            $id = $row[0];
            */

            //$output = '<h1 class="marker-heading">'.$mId.'</h1><p>'.$mAddress.'</p>';
            //exit($output);

            // Update Marker
            if (isset($_POST["update"]) && $_POST["update"] == true)
            {
                $results = mysql_query("UPDATE parkings SET latitude = '$mLat', longitude = '$mLng' WHERE locId = '94' ");
                if (!$results) {
                    //header('HTTP/1.1 500 Error: Could not Update Markers! $mId');
                    echo "coudld not update marker." . mysql_error();
                    exit();
                }
                exit("Done!");
            }

            $output = '<h1 class="marker-heading">'.$mName.'</h1><p>'.$mAddress.'</p>';
            exit($output);
        }

        ############### Continue generating Map XML #################
        // Create a new DOMDocument object
        $dom = new DOMDocument("1.0");
        $node = $dom->createElement("markers"); // Create new element node
        $parnode = $dom->appendChild($node);    // make the node show up

        // Select all the rows in the markers table
        $results = $mysqli->query("SELECT * FROM parkings WHERE 1");
        if (!$results) {
            header('HTTP/1.1 500 Error: Could not get markers!');
            exit();
        }

        // set document header to text/xml
        header("Content-type: text/xml");

        // Iterate through the rows, adding XML nodes for each
        while ($obj = $results->fetch_object())
        {
            $node = $dom->createElement("marker");
            $newnode = $parnode->appendChild($node);
            $newnode->setAttribute("name", $obj->name);
            $newnode->setAttribute("locId", $obj->locId);
            $newnode->setAttribute("address", $obj->address);
            $newnode->setAttribute("latitude", $obj->latitude);
            $newnode->setAttribute("longitude", $obj->longitude);
            //$newnode->setAttribute("type", $obj->type);
        }
        echo $dom->saveXML();

    Read the article

  • LSI 9285-8e and Supermicro SC837E26-RJBOD1 duplicate enclosure ID and slot numbers

    - by Andy Shinn
    I am working with 2 x Supermicro SC837E26-RJBOD1 chassis connected to a single LSI 9285-8e card in a Supermicro 1U host. There are 28 drives in each chassis for a total of 56 drives in 28 RAID1 mirrors. The problem I am running in to is that there are duplicate slots for the 2 chassis (the slots list twice and only go from 0 to 27). All the drives also show the same enclosure ID (ID 36). However, MegaCLI -encinfo lists the 2 enclosures correctly (ID 36 and ID 65). My question is, why would this happen? Is there an option I am missing to use 2 enclosures effectively? This is blocking me rebuilding a drive that failed in slot 11 since I can only specify enclosure and slot as parameters to replace a drive. When I do this, it picks the wrong slot 11 (device ID 46 instead of device ID 19). Adapter #1 is the LSI 9285-8e, adapter #0 (which I removed due to space limitations) is the onboard LSI. Adapter information: Adapter #1 ============================================================================== Versions ================ Product Name : LSI MegaRAID SAS 9285-8e Serial No : SV12704804 FW Package Build: 23.1.1-0004 Mfg. Data ================ Mfg. Date : 06/30/11 Rework Date : 00/00/00 Revision No : 00A Battery FRU : N/A Image Versions in Flash: ================ BIOS Version : 5.25.00_4.11.05.00_0x05040000 WebBIOS Version : 6.1-20-e_20-Rel Preboot CLI Version: 05.01-04:#%00001 FW Version : 3.140.15-1320 NVDATA Version : 2.1106.03-0051 Boot Block Version : 2.04.00.00-0001 BOOT Version : 06.253.57.219 Pending Images in Flash ================ None PCI Info ================ Vendor Id : 1000 Device Id : 005b SubVendorId : 1000 SubDeviceId : 9285 Host Interface : PCIE ChipRevision : B0 Number of Frontend Port: 0 Device Interface : PCIE Number of Backend Port: 8 Port : Address 0 5003048000ee8e7f 1 5003048000ee8a7f 2 0000000000000000 3 0000000000000000 4 0000000000000000 5 0000000000000000 6 0000000000000000 7 0000000000000000 HW Configuration ================ SAS Address : 500605b0038f9210 BBU : Present Alarm : Present NVRAM : Present Serial Debugger : Present Memory : Present Flash : Present Memory Size : 1024MB TPM : Absent On board Expander: Absent Upgrade Key : Absent Temperature sensor for ROC : Present Temperature sensor for controller : Absent ROC temperature : 70 degree Celcius Settings ================ Current Time : 18:24:36 3/13, 2012 Predictive Fail Poll Interval : 300sec Interrupt Throttle Active Count : 16 Interrupt Throttle Completion : 50us Rebuild Rate : 30% PR Rate : 30% BGI Rate : 30% Check Consistency Rate : 30% Reconstruction Rate : 30% Cache Flush Interval : 4s Max Drives to Spinup at One Time : 2 Delay Among Spinup Groups : 12s Physical Drive Coercion Mode : Disabled Cluster Mode : Disabled Alarm : Enabled Auto Rebuild : Enabled Battery Warning : Enabled Ecc Bucket Size : 15 Ecc Bucket Leak Rate : 1440 Minutes Restore HotSpare on Insertion : Disabled Expose Enclosure Devices : Enabled Maintain PD Fail History : Enabled Host Request Reordering : Enabled Auto Detect BackPlane Enabled : SGPIO/i2c SEP Load Balance Mode : Auto Use FDE Only : No Security Key Assigned : No Security Key Failed : No Security Key Not Backedup : No Default LD PowerSave Policy : Controller Defined Maximum number of direct attached drives to spin up in 1 min : 10 Any Offline VD Cache Preserved : No Allow Boot with Preserved Cache : No Disable Online Controller Reset : No PFK in NVRAM : No Use disk activity for locate : No Capabilities ================ RAID Level Supported : RAID0, RAID1, RAID5, RAID6, 
RAID00, RAID10, RAID50, RAID60, PRL 11, PRL 11 with spanning, SRL 3 supported, PRL11-RLQ0 DDF layout with no span, PRL11-RLQ0 DDF layout with span Supported Drives : SAS, SATA Allowed Mixing: Mix in Enclosure Allowed Mix of SAS/SATA of HDD type in VD Allowed Status ================ ECC Bucket Count : 0 Limitations ================ Max Arms Per VD : 32 Max Spans Per VD : 8 Max Arrays : 128 Max Number of VDs : 64 Max Parallel Commands : 1008 Max SGE Count : 60 Max Data Transfer Size : 8192 sectors Max Strips PerIO : 42 Max LD per array : 16 Min Strip Size : 8 KB Max Strip Size : 1.0 MB Max Configurable CacheCade Size: 0 GB Current Size of CacheCade : 0 GB Current Size of FW Cache : 887 MB Device Present ================ Virtual Drives : 28 Degraded : 0 Offline : 0 Physical Devices : 59 Disks : 56 Critical Disks : 0 Failed Disks : 0 Supported Adapter Operations ================ Rebuild Rate : Yes CC Rate : Yes BGI Rate : Yes Reconstruct Rate : Yes Patrol Read Rate : Yes Alarm Control : Yes Cluster Support : No BBU : No Spanning : Yes Dedicated Hot Spare : Yes Revertible Hot Spares : Yes Foreign Config Import : Yes Self Diagnostic : Yes Allow Mixed Redundancy on Array : No Global Hot Spares : Yes Deny SCSI Passthrough : No Deny SMP Passthrough : No Deny STP Passthrough : No Support Security : No Snapshot Enabled : No Support the OCE without adding drives : Yes Support PFK : Yes Support PI : No Support Boot Time PFK Change : Yes Disable Online PFK Change : No PFK TrailTime Remaining : 0 days 0 hours Support Shield State : Yes Block SSD Write Disk Cache Change: Yes Supported VD Operations ================ Read Policy : Yes Write Policy : Yes IO Policy : Yes Access Policy : Yes Disk Cache Policy : Yes Reconstruction : Yes Deny Locate : No Deny CC : No Allow Ctrl Encryption: No Enable LDBBM : No Support Breakmirror : No Power Savings : Yes Supported PD Operations ================ Force Online : Yes Force Offline : Yes Force Rebuild : Yes Deny Force Failed : No Deny Force Good/Bad : No Deny Missing Replace : No Deny Clear : No Deny Locate : No Support Temperature : Yes Disable Copyback : No Enable JBOD : No Enable Copyback on SMART : No Enable Copyback to SSD on SMART Error : Yes Enable SSD Patrol Read : No PR Correct Unconfigured Areas : Yes Enable Spin Down of UnConfigured Drives : Yes Disable Spin Down of hot spares : No Spin Down time : 30 T10 Power State : Yes Error Counters ================ Memory Correctable Errors : 0 Memory Uncorrectable Errors : 0 Cluster Information ================ Cluster Permitted : No Cluster Active : No Default Settings ================ Phy Polarity : 0 Phy PolaritySplit : 0 Background Rate : 30 Strip Size : 64kB Flush Time : 4 seconds Write Policy : WB Read Policy : Adaptive Cache When BBU Bad : Disabled Cached IO : No SMART Mode : Mode 6 Alarm Disable : Yes Coercion Mode : None ZCR Config : Unknown Dirty LED Shows Drive Activity : No BIOS Continue on Error : No Spin Down Mode : None Allowed Device Type : SAS/SATA Mix Allow Mix in Enclosure : Yes Allow HDD SAS/SATA Mix in VD : Yes Allow SSD SAS/SATA Mix in VD : No Allow HDD/SSD Mix in VD : No Allow SATA in Cluster : No Max Chained Enclosures : 16 Disable Ctrl-R : Yes Enable Web BIOS : Yes Direct PD Mapping : No BIOS Enumerate VDs : Yes Restore Hot Spare on Insertion : No Expose Enclosure Devices : Yes Maintain PD Fail History : Yes Disable Puncturing : No Zero Based Enclosure Enumeration : No PreBoot CLI Enabled : Yes LED Show Drive Activity : Yes Cluster Disable : Yes SAS Disable : No Auto Detect BackPlane Enable 
: SGPIO/i2c SEP Use FDE Only : No Enable Led Header : No Delay during POST : 0 EnableCrashDump : No Disable Online Controller Reset : No EnableLDBBM : No Un-Certified Hard Disk Drives : Allow Treat Single span R1E as R10 : No Max LD per array : 16 Power Saving option : Don't Auto spin down Configured Drives Max power savings option is not allowed for LDs. Only T10 power conditions are to be used. Default spin down time in minutes: 30 Enable JBOD : No TTY Log In Flash : No Auto Enhanced Import : No BreakMirror RAID Support : No Disable Join Mirror : No Enable Shield State : Yes Time taken to detect CME : 60s Exit Code: 0x00 Enclosure information: # /opt/MegaRAID/MegaCli/MegaCli64 -encinfo -a1 Number of enclosures on adapter 1 -- 3 Enclosure 0: Device ID : 36 Number of Slots : 28 Number of Power Supplies : 2 Number of Fans : 3 Number of Temperature Sensors : 1 Number of Alarms : 1 Number of SIM Modules : 0 Number of Physical Drives : 28 Status : Normal Position : 1 Connector Name : Port B Enclosure type : SES VendorId is LSI CORP and Product Id is SAS2X36 VendorID and Product ID didnt match FRU Part Number : N/A Enclosure Serial Number : N/A ESM Serial Number : N/A Enclosure Zoning Mode : N/A Partner Device Id : 65 Inquiry data : Vendor Identification : LSI CORP Product Identification : SAS2X36 Product Revision Level : 0718 Vendor Specific : x36-55.7.24.1 Number of Voltage Sensors :2 Voltage Sensor :0 Voltage Sensor Status :OK Voltage Value :5020 milli volts Voltage Sensor :1 Voltage Sensor Status :OK Voltage Value :11820 milli volts Number of Power Supplies : 2 Power Supply : 0 Power Supply Status : OK Power Supply : 1 Power Supply Status : OK Number of Fans : 3 Fan : 0 Fan Speed :Low Speed Fan Status : OK Fan : 1 Fan Speed :Low Speed Fan Status : OK Fan : 2 Fan Speed :Low Speed Fan Status : OK Number of Temperature Sensors : 1 Temp Sensor : 0 Temperature : 48 Temperature Sensor Status : OK Number of Chassis : 1 Chassis : 0 Chassis Status : OK Enclosure 1: Device ID : 65 Number of Slots : 28 Number of Power Supplies : 2 Number of Fans : 3 Number of Temperature Sensors : 1 Number of Alarms : 1 Number of SIM Modules : 0 Number of Physical Drives : 28 Status : Normal Position : 1 Connector Name : Port A Enclosure type : SES VendorId is LSI CORP and Product Id is SAS2X36 VendorID and Product ID didnt match FRU Part Number : N/A Enclosure Serial Number : N/A ESM Serial Number : N/A Enclosure Zoning Mode : N/A Partner Device Id : 36 Inquiry data : Vendor Identification : LSI CORP Product Identification : SAS2X36 Product Revision Level : 0718 Vendor Specific : x36-55.7.24.1 Number of Voltage Sensors :2 Voltage Sensor :0 Voltage Sensor Status :OK Voltage Value :5020 milli volts Voltage Sensor :1 Voltage Sensor Status :OK Voltage Value :11760 milli volts Number of Power Supplies : 2 Power Supply : 0 Power Supply Status : OK Power Supply : 1 Power Supply Status : OK Number of Fans : 3 Fan : 0 Fan Speed :Low Speed Fan Status : OK Fan : 1 Fan Speed :Low Speed Fan Status : OK Fan : 2 Fan Speed :Low Speed Fan Status : OK Number of Temperature Sensors : 1 Temp Sensor : 0 Temperature : 47 Temperature Sensor Status : OK Number of Chassis : 1 Chassis : 0 Chassis Status : OK Enclosure 2: Device ID : 252 Number of Slots : 8 Number of Power Supplies : 0 Number of Fans : 0 Number of Temperature Sensors : 0 Number of Alarms : 0 Number of SIM Modules : 1 Number of Physical Drives : 0 Status : Normal Position : 1 Connector Name : Unavailable Enclosure type : SGPIO Failed in first Inquiry commnad FRU Part Number : 
N/A Enclosure Serial Number : N/A ESM Serial Number : N/A Enclosure Zoning Mode : N/A Partner Device Id : Unavailable Inquiry data : Vendor Identification : LSI Product Identification : SGPIO Product Revision Level : N/A Vendor Specific : Exit Code: 0x00 Now, notice that each slot 11 device shows an enclosure ID of 36, I think this is where the discrepancy happens. One should be 36. But the other should be on enclosure 65. Drives in slot 11: Enclosure Device ID: 36 Slot Number: 11 Drive's postion: DiskGroup: 5, Span: 0, Arm: 1 Enclosure position: 0 Device Id: 48 WWN: Sequence Number: 11 Media Error Count: 0 Other Error Count: 0 Predictive Failure Count: 0 Last Predictive Failure Event Seq Number: 0 PD Type: SATA Raw Size: 2.728 TB [0x15d50a3b0 Sectors] Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors] Coerced Size: 2.728 TB [0x15d400000 Sectors] Firmware state: Online, Spun Up Is Commissioned Spare : YES Device Firmware Level: A5C0 Shield Counter: 0 Successful diagnostics completion on : N/A SAS Address(0): 0x5003048000ee8a53 Connected Port Number: 1(path0) Inquiry Data: MJ1311YNG6YYXAHitachi HDS5C3030ALA630 MEAOA5C0 FDE Enable: Disable Secured: Unsecured Locked: Unlocked Needs EKM Attention: No Foreign State: None Device Speed: 6.0Gb/s Link Speed: 6.0Gb/s Media Type: Hard Disk Device Drive Temperature :30C (86.00 F) PI Eligibility: No Drive is formatted for PI information: No PI: No PI Drive's write cache : Disabled Drive's NCQ setting : Enabled Port-0 : Port status: Active Port's Linkspeed: 6.0Gb/s Drive has flagged a S.M.A.R.T alert : No Enclosure Device ID: 36 Slot Number: 11 Drive's postion: DiskGroup: 19, Span: 0, Arm: 1 Enclosure position: 0 Device Id: 19 WWN: Sequence Number: 4 Media Error Count: 0 Other Error Count: 0 Predictive Failure Count: 0 Last Predictive Failure Event Seq Number: 0 PD Type: SATA Raw Size: 2.728 TB [0x15d50a3b0 Sectors] Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors] Coerced Size: 2.728 TB [0x15d400000 Sectors] Firmware state: Online, Spun Up Is Commissioned Spare : NO Device Firmware Level: A580 Shield Counter: 0 Successful diagnostics completion on : N/A SAS Address(0): 0x5003048000ee8e53 Connected Port Number: 0(path0) Inquiry Data: MJ1313YNG1VA5CHitachi HDS5C3030ALA630 MEAOA580 FDE Enable: Disable Secured: Unsecured Locked: Unlocked Needs EKM Attention: No Foreign State: None Device Speed: 6.0Gb/s Link Speed: 6.0Gb/s Media Type: Hard Disk Device Drive Temperature :30C (86.00 F) PI Eligibility: No Drive is formatted for PI information: No PI: No PI Drive's write cache : Disabled Drive's NCQ setting : Enabled Port-0 : Port status: Active Port's Linkspeed: 6.0Gb/s Drive has flagged a S.M.A.R.T alert : No Update 06/28/12: I finally have some new information about (what we think) the root cause of this problem so I thought I would share. After getting in contact with a very knowledgeable Supermicro tech, they provided us with a tool called Xflash (doesn't appear to be readily available on their FTP). When we gathered some information using this utility, my colleague found something very strange: root@mogile2 test]# ./xflash.dat -i get avail Initializing Interface. Expander: SAS2X36 (SAS2x36) 1) SAS2X36 (SAS2x36) (50030480:00EE917F) (0.0.0.0) 2) SAS2X36 (SAS2x36) (50030480:00E9D67F) (0.0.0.0) 3) SAS2X36 (SAS2x36) (50030480:0112D97F) (0.0.0.0) This lists the connected enclosures. You see the 3 connected (we have since added a 3rd and a 4th which is not yet showing up) with their respective SAS address / WWN (50030480:00EE917F). 
    Now we can use this address to get information on the individual enclosures: [root@mogile2 test]# ./xflash.dat -i 5003048000EE917F get exp Initializing Interface. Expander: SAS2X36 (SAS2x36) Reading the expander information.......... Expander: SAS2X36 (SAS2x36) B3 SAS Address: 50030480:00EE917F Enclosure Logical Id: 50030480:0000007F IP Address: 0.0.0.0 Component Identifier: 0x0223 Component Revision: 0x05 [root@mogile2 test]# ./xflash.dat -i 5003048000E9D67F get exp Initializing Interface. Expander: SAS2X36 (SAS2x36) Reading the expander information.......... Expander: SAS2X36 (SAS2x36) B3 SAS Address: 50030480:00E9D67F Enclosure Logical Id: 50030480:0000007F IP Address: 0.0.0.0 Component Identifier: 0x0223 Component Revision: 0x05 [root@mogile2 test]# ./xflash.dat -i 500304800112D97F get exp Initializing Interface. Expander: SAS2X36 (SAS2x36) Reading the expander information.......... Expander: SAS2X36 (SAS2x36) B3 SAS Address: 50030480:0112D97F Enclosure Logical Id: 50030480:0112D97F IP Address: 0.0.0.0 Component Identifier: 0x0223 Component Revision: 0x05 Did you catch it? The first two enclosures' logical IDs are partially masked out, while the third one (which has a correct, unique enclosure logical ID) is not. We pointed this out to Supermicro and were able to confirm that this address is supposed to be set during manufacturing, and that there was a problem with a certain batch of these enclosures where the logical ID was not set. We believe that the RAID controller is determining the enclosure ID based on the logical ID, and since our first two enclosures have the same logical ID, they get the same enclosure ID. We also confirmed that 0000007F is the default ID that comes from LSI. Another pointer that suggests this is a manufacturing problem with a run of JBODs is the fact that all 6 of the enclosures that have this problem begin with 00E. I believe that between 00E8 and 00EE Supermicro forgot to program the logical IDs correctly and neglected to recall or fix the problem post production. Fortunately for us, there is a tool from Supermicro to manage the WWN and logical ID of the devices: ftp://ftp.supermicro.com/utility/ExpanderXtools_Lite/. Our next step is to schedule a shutdown of these JBODs (after data migration), reprogram the logical IDs, and see if it solves the problem. Update 06/28/12 #2: I just discovered this FAQ at Supermicro while Google searching for "lsi 0000007f": http://www.supermicro.com/support/faqs/faq.cfm?faq=11805. I still don't understand why, in the several times we contacted Supermicro, they never directed us to this article :\

    Read the article

  • Using JSON.NET for dynamic JSON parsing

    - by Rick Strahl
    With the release of ASP.NET Web API as part of .NET 4.5 and MVC 4.0, JSON.NET has effectively pushed out the .NET native serializers to become the default serializer for Web API. JSON.NET is vastly more flexible than the built in DataContractJsonSerializer or the older JavaScript serializer. The DataContractSerializer in particular has been very problematic in the past because it can't deal with untyped objects for serialization - like values of type object, or anonymous types which are quite common these days. The JavaScript Serializer that came before it actually does support non-typed objects for serialization, but it can't do anything with untyped data coming in from JavaScript, and its overall model of extensibility was pretty limited (JavaScript Serializer is what MVC uses for JSON responses). JSON.NET provides a robust JSON serializer that has both high level and low level components, supports binary JSON, JSON contracts, Xml to JSON conversion, LINQ to JSON and many, many more features than either of the built in serializers. ASP.NET Web API now uses JSON.NET as its default serializer, and it is now pulled in as a NuGet dependency into Web API projects, which is great. Dynamic JSON Parsing One of the features that I think is getting ever more important is the ability to serialize and deserialize arbitrary JSON content dynamically - that is without mapping the JSON captured directly into a .NET type as DataContractSerializer or the JavaScript Serializers do. Sometimes it isn't possible to map types due to the differences in languages (think collections, dictionaries etc), and other times you simply don't have the structures in place or don't want to create them to actually import the data. If this topic sounds familiar - you're right! I wrote about dynamic JSON parsing a few months back before JSON.NET was added to Web API and when Web API and the System.Net HttpClient libraries included the System.Json classes like JsonObject and JsonArray. With the inclusion of JSON.NET in Web API these classes are now obsolete and didn't ship with Web API or the client libraries. I re-linked my original post to this one. In this post I'll discuss JToken, JObject and JArray which are the dynamic JSON objects that make it very easy to create and retrieve JSON content on the fly without underlying types. Why Dynamic JSON? So, why Dynamic JSON parsing rather than strongly typed parsing? Since applications are interacting more and more with third party services it becomes ever more important to have easy access to those services with easy JSON parsing. Sometimes it just makes a lot of sense to pull just a small amount of data out of a large JSON document received from a service, because the third party service isn't directly related to your application's logic most of the time - and it makes little sense to map the entire service structure in your application. For example, recently I worked with the Google Maps Places API to return information about businesses close to my (or rather the app's) location. The Google API returns a ton of information that my application had no interest in - all I needed was a few values out of the data. Dynamic JSON parsing makes it possible to map this data, without having to map the entire API to a C# data structure. Instead I could pull out the three or four values I needed from the API and directly store it on my business entities that needed to receive the data - no need to map the entire Maps API structure.
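    To make that concrete, here's a small sketch of this kind of partial parsing with JSON.NET. The JSON shape below is hypothetical (loosely modeled on a places-style response, not the actual Google Maps API), but it shows how you can walk a large document and pull out only the few values you care about:

    using System;
    using Newtonsoft.Json.Linq;

    class PartialParseExample
    {
        static void Main()
        {
            // Hypothetical service response - only a fraction of it is interesting to us
            string json = @"{
                ""status"": ""OK"",
                ""results"": [{
                    ""name"": ""Kona Brewing Co."",
                    ""vicinity"": ""Kailua-Kona"",
                    ""rating"": 4.5,
                    ""geometry"": { ""location"": { ""lat"": 19.639, ""lng"": -155.996 } }
                }]
            }";

            JObject doc = JObject.Parse(json);

            // Drill down to just the values we need; the rest of the document is ignored
            JToken first = doc["results"][0];
            string name = (string)first["name"];
            double lat = (double)first["geometry"]["location"]["lat"];
            double lng = (double)first["geometry"]["location"]["lng"];

            Console.WriteLine("{0}: {1}, {2}", name, lat, lng);
        }
    }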
Getting JSON.NET The easiest way to use JSON.NET is to grab it via NuGet and add it as a reference to your project. You can add it to your project with: PM> Install-Package Newtonsoft.Json From the Package Manager Console or by using Manage NuGet Packages in your project References. As mentioned if you're using ASP.NET Web API or MVC 4 JSON.NET will be automatically added to your project. Alternately you can also go to the CodePlex site and download the latest version including source code: http://json.codeplex.com/ Creating JSON on the fly with JObject and JArray Let's start with creating some JSON on the fly. It's super easy to create a dynamic object structure with any of the JToken derived JSON.NET objects. The most common JToken derived classes you are likely to use are JObject and JArray. JToken implements IDynamicMetaProvider and so uses the dynamic  keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs using JObject for the base object and songs and JArray for the actual collection of songs:[TestMethod] public void JObjectOutputTest() { // strong typed instance var jsonObject = new JObject(); // you can explicitly add values here using class interface jsonObject.Add("Entered", DateTime.Now); // or cast to dynamic to dynamically add/read properties dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; album.Artist = "AC/DC"; album.YearReleased = 1976; album.Songs = new JArray() as dynamic; dynamic song = new JObject(); song.SongName = "Dirty Deeds Done Dirt Cheap"; song.SongLength = "4:11"; album.Songs.Add(song); song = new JObject(); song.SongName = "Love at First Feel"; song.SongLength = "3:10"; album.Songs.Add(song); Console.WriteLine(album.ToString()); } This produces a complete JSON structure: { "Entered": "2012-08-18T13:26:37.7137482-10:00", "AlbumName": "Dirty Deeds Done Dirt Cheap", "Artist": "AC/DC", "YearReleased": 1976, "Songs": [ { "SongName": "Dirty Deeds Done Dirt Cheap", "SongLength": "4:11" }, { "SongName": "Love at First Feel", "SongLength": "3:10" } ] } Notice that JSON.NET does a nice job formatting the JSON, so it's easy to read and paste into blog posts :-). JSON.NET includes a bunch of configuration options that control how JSON is generated. Typically the defaults are just fine, but you can override with the JsonSettings object for most operations. The important thing about this code is that there's no explicit type used for holding the values to serialize to JSON. Rather the JSON.NET objects are the containers that receive the data as I build up my JSON structure dynamically, simply by adding properties. This means this code can be entirely driven at runtime without compile time restraints of structure for the JSON output. Here I use JObject to create a album 'object' and immediately cast it to dynamic. JObject() is kind of similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JObject values are stored in pseudo collections of key value pairs that are exposed as properties through the IDynamicMetaObject interface exposed in JSON.NET's JToken base class. For objects the syntax is very clean - you add simple typed values as properties. For objects and arrays you have to explicitly create new JObject or JArray, cast them to dynamic and then add properties and items to them. 
    Always remember though these values are dynamic - which means no Intellisense and no compiler type checking. It's up to you to ensure that the names and values you create are accessed consistently and without typos in your code. Note that you can also access the JObject instance directly (not as dynamic) and get access to the underlying JObject type. This means you can assign properties by string, which can be useful for fully data driven JSON generation from other structures. Below you can see both styles of access next to each other:// strong type instance var jsonObject = new JObject(); // you can explicitly add values here jsonObject.Add("Entered", DateTime.Now); // expando style instance you can just 'use' properties dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; JContainer (the base class for JObject and JArray) is a collection so you can also iterate over the properties at runtime easily:foreach (var item in jsonObject) { Console.WriteLine(item.Key + " " + item.Value.ToString()); } The functionality of the JSON objects is very similar to .NET's ExpandoObject, and if you've used it before, you're already familiar with how the dynamic interfaces to the JSON objects work. Importing JSON with JObject.Parse() and JArray.Parse() The JValue structure supports importing JSON via the Parse() and Load() methods which can read JSON data from a string or various streams respectively. Essentially JValue includes the core JSON parsing to turn a JSON string into a collection of JsonValue objects that can then be referenced using familiar dynamic object syntax. Here's a simple example:public void JValueParsingTest() { var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"", ""Entered"":""2012-03-16T00:03:33.245-10:00""}"; dynamic json = JValue.Parse(jsonString); // values require casting string name = json.Name; string company = json.Company; DateTime entered = json.Entered; Assert.AreEqual(name, "Rick"); Assert.AreEqual(company, "West Wind"); } The JSON string represents an object with three properties which is parsed into a JObject class and cast to dynamic. Once cast to dynamic I can then go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are actually of type JToken and I have to cast them to their appropriate types first before I can do type comparisons as in the Asserts at the end of the test method. This is required because of the way that dynamic types work which can't determine the type based on the method signature of the Assert.AreEqual(object,object) method. I have to either assign the dynamic value to a variable as I did above, or explicitly cast ( (string) json.Name) in the actual method call. The JSON structure can be much more complex than this simple example.
Here's another example of an array of albums serialized to JSON and then parsed through with JsonValue():[TestMethod] public void JsonArrayParsingTest() { var jsonString = @"[ { ""Id"": ""b3ec4e5c"", ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"", ""Artist"": ""AC/DC"", ""YearReleased"": 1976, ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/gp/product/…ASIN=B00008BXJ4"", ""Songs"": [ { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Dirty Deeds Done Dirt Cheap"", ""SongLength"": ""4:11"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Love at First Feel"", ""SongLength"": ""3:10"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Big Balls"", ""SongLength"": ""2:38"" } ] }, { ""Id"": ""7b919432"", ""AlbumName"": ""End of the Silence"", ""Artist"": ""Henry Rollins Band"", ""YearReleased"": 1992, ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"", ""Songs"": [ { ""AlbumId"": ""7b919432"", ""SongName"": ""Low Self Opinion"", ""SongLength"": ""5:24"" }, { ""AlbumId"": ""7b919432"", ""SongName"": ""Grip"", ""SongLength"": ""4:51"" } ] } ]"; JArray jsonVal = JArray.Parse(jsonString) as JArray; dynamic albums = jsonVal; foreach (dynamic album in albums) { Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")"); foreach (dynamic song in album.Songs) { Console.WriteLine("\t" + song.SongName); } } Console.WriteLine(albums[0].AlbumName); Console.WriteLine(albums[0].Songs[1].SongName); } JObject and JArray in ASP.NET Web API Of course these types also work in ASP.NET Web API controller methods. If you want you can accept parameters using these object or return them back to the server. The following contrived example receives dynamic JSON input, and then creates a new dynamic JSON object and returns it based on data from the first:[HttpPost] public JObject PostAlbumJObject(JObject jAlbum) { // dynamic input from inbound JSON dynamic album = jAlbum; // create a new JSON object to write out dynamic newAlbum = new JObject(); // Create properties on the new instance // with values from the first newAlbum.AlbumName = album.AlbumName + " New"; newAlbum.NewProperty = "something new"; newAlbum.Songs = new JArray(); foreach (dynamic song in album.Songs) { song.SongName = song.SongName + " New"; newAlbum.Songs.Add(song); } return newAlbum; } The raw POST request to the server looks something like this: POST http://localhost/aspnetwebapi/samples/PostAlbumJObject HTTP/1.1User-Agent: FiddlerContent-type: application/jsonHost: localhostContent-Length: 88 {AlbumName: "Dirty Deeds",Songs:[ { SongName: "Problem Child"},{ SongName: "Squealer"}]} and the output that comes back looks like this: {  "AlbumName": "Dirty Deeds New",  "NewProperty": "something new",  "Songs": [    {      "SongName": "Problem Child New"    },    {      "SongName": "Squealer New"    }  ]} The original values are echoed back with something extra appended to demonstrate that we're working with a new object. 
When you receive or return a JObject, JValue, JToken or JArray instance in a Web API method, Web API ignores normal content negotiation and assumes your content is going to be received and returned as JSON, so effectively the parameter and result type explicitly determines the input and output format which is nice. Dynamic to Strong Type Mapping You can also map JObject and JArray instances to a strongly typed object, so you can mix dynamic and static typing in the same piece of code. Using the 2 Album jsonString shown earlier, the code below takes an array of albums and picks out only a single album and casts that album to a static Album instance.[TestMethod] public void JsonParseToStrongTypeTest() { JArray albums = JArray.Parse(jsonString) as JArray; // pick out one album JObject jalbum = albums[0] as JObject; // Copy to a static Album instance Album album = jalbum.ToObject<Album>(); Assert.IsNotNull(album); Assert.AreEqual(album.AlbumName,jalbum.Value<string>("AlbumName")); Assert.IsTrue(album.Songs.Count > 0); } This is pretty damn useful for the scenario I mentioned earlier - you can read a large chunk of JSON and dynamically walk the property hierarchy down to the item you want to access, and then either access the specific item dynamically (as shown earlier) or map a part of the JSON to a strongly typed object. That's very powerful if you think about it - it leaves you in total control to decide what's dynamic and what's static. Strongly typed JSON Parsing With all this talk of dynamic let's not forget that JSON.NET of course also does strongly typed serialization which is drop dead easy. Here's a simple example on how to serialize and deserialize an object with JSON.NET:[TestMethod] public void StronglyTypedSerializationTest() { // Demonstrate deserialization from a raw string var album = new Album() { AlbumName = "Dirty Deeds Done Dirt Cheap", Artist = "AC/DC", Entered = DateTime.Now, YearReleased = 1976, Songs = new List<Song>() { new Song() { SongName = "Dirty Deeds Done Dirt Cheap", SongLength = "4:11" }, new Song() { SongName = "Love at First Feel", SongLength = "3:10" } } }; // serialize to string string json2 = JsonConvert.SerializeObject(album,Formatting.Indented); Console.WriteLine(json2); // make sure we can serialize back var album2 = JsonConvert.DeserializeObject<Album>(json2); Assert.IsNotNull(album2); Assert.IsTrue(album2.AlbumName == "Dirty Deeds Done Dirt Cheap"); Assert.IsTrue(album2.Songs.Count == 2); } JsonConvert is a high level static class that wraps lower level functionality, but you can also use the JsonSerializer class, which allows you to serialize/parse to and from streams. It's a little more work, but gives you a bit more control. The functionality available is easy to discover with Intellisense, and that's good because there's not a lot in the way of documentation that's actually useful. Summary JSON.NET is a pretty complete JSON implementation with lots of different choices for JSON parsing from dynamic parsing to static serialization, to complex querying of JSON objects using LINQ. It's good to see this open source library getting integrated into .NET, and pushing out the old and tired stock .NET parsers so that we finally have a bit more flexibility - and extensibility - in our JSON parsing. Good to go! 
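    One footnote on the strongly typed examples above: the Album and Song classes they reference aren't shown in the post. A minimal sketch that would satisfy those tests, assuming property names that match the JSON shown earlier, might look like this:

    using System;
    using System.Collections.Generic;

    public class Album
    {
        public string Id { get; set; }
        public string AlbumName { get; set; }
        public string Artist { get; set; }
        public int YearReleased { get; set; }
        public DateTime Entered { get; set; }
        public string AlbumImageUrl { get; set; }
        public string AmazonUrl { get; set; }
        public List<Song> Songs { get; set; }
    }

    public class Song
    {
        public string AlbumId { get; set; }
        public string SongName { get; set; }
        public string SongLength { get; set; }
    }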
    Resources Sample Test Project http://json.codeplex.com/ © Rick Strahl, West Wind Technologies, 2005-2012 Posted in .NET  Web Api  AJAX

    Read the article

  • Setting up and using Bing Translate API Service for Machine Translation

    - by Rick Strahl
    Last week I spent quite a bit of time trying to set up the Bing Translate API service. I can honestly say this was one of the most screwed up developer experiences I've had in a long while - specifically related to the byzantine sign up process that Microsoft has in place. Not only is it nearly impossible to find decent documentation on the required signup process, some of the links in the docs are just plain wrong, and some of the account pages you need to access the actual account information once signed up are not linked anywhere from the administration UI. To make things even harder, the APIs changed a while back to a completely new authentication scheme, and the fact that the scheme is described in a documentation topic that isn't directly linked made for a very frustrating search experience. It's a bummer that this is the case too, because the actual API itself is easy to use and works very well - fast and reasonably accurate (as accurate as you can expect machine translation to be). But the sign up process is a pain in the ass, doubtless leaving many people to give up in frustration. In this post I'll try to hit all the points needed to set up and use the Bing Translate API in one place, since such a document seems to be missing from Microsoft. Hopefully the API folks at Microsoft will get their shit together and actually provide this sort of info on their site… Signing Up The first step required is to create a Windows Azure MarketPlace account. Go to: https://datamarket.azure.com/ Sign in with your Windows Live Id If you don't have an account you will be taken to a registration page which you have to fill out. Follow the links and complete the registration. Once you're signed in you can start adding services. Click on the Data Link on the main page Select Microsoft Translator from the list This adds the Microsoft Bing Translator to your services. Pricing The page shows the pricing matrix and the free service which provides 2 megabytes for translations a month for free. Prices go up steeply from there. Pricing is determined by actual bytes of the result translations used. Max translations are 1000 characters, so at minimum this means you get around 2000 translations a month for free. However, most translations are probably much shorter, so you can expect a larger number of translations to go through. For testing or low volume translations this should be just fine. Once signed up there are no further instructions and you're left in limbo on the MS site. Register your Application Once you've created the Data association with Translator the next step is registering your application. To do this you need to access your developer account. Go to https://datamarket.azure.com/developer/applications/register Provide a ClientId, which is effectively the unique string identifier for your application (not your customer id!) Provide your name The client secret was auto-created and this becomes your 'password' For the redirect url provide any https url: https://microsoft.com works Give this application a description of your choice so you can identify it in the list of apps Now, once you've registered your application, keep track of the ClientId and ClientSecret - those are the two keys you need to authenticate before you can call the Translate API. Oddly the applications page is hidden from the Azure Portal UI. I couldn't find a direct link from anywhere on the site back to this page where I can examine my developer application keys.
    To find them you can go to: https://datamarket.azure.com/developer/applications You can come back here to look at your registered applications and pick up the ClientID and ClientSecret. Fun eh? But we're now ready to actually call the API and do some translating. Using the Bing Translate API The good news is that after this signup hell, using the API is pretty straightforward. To use the translation API you'll need to actually use two services: You need to call an authentication API service first, before you can call the actual translator API. These two APIs live on different domains, and the authentication API returns JSON data while the translator service returns XML. So much for consistency. Authentication The first step is authentication. The service uses oAuth authentication with a bearer token that has to be passed to the translator API. The authentication call retrieves the oAuth token that you can then use with the translate API call. The bearer token has a short 10 minute lifetime, so while you can cache it for successive calls, the token can't be cached for long periods. This means for Web backend requests you typically will have to authenticate each time unless you build a more elaborate caching scheme that takes the timeout into account (perhaps using the ASP.NET Cache object). For low volume operations you can probably get away with simply calling the auth API for every translation you do. To call the Authentication API use code like this:/// /// Retrieves an oAuth authentication token to be used on the translate /// API request. The result string needs to be passed as a bearer token /// to the translate API. /// /// You can find client ID and Secret (or register a new one) at: /// https://datamarket.azure.com/developer/applications/ /// /// The client ID of your application /// The client secret or password /// public string GetBingAuthToken(string clientId = null, string clientSecret = null) { string authBaseUrl = "https://datamarket.accesscontrol.windows.net/v2/OAuth2-13"; if (string.IsNullOrEmpty(clientId) || string.IsNullOrEmpty(clientSecret)) { ErrorMessage = Resources.Resources.Client_Id_and_Client_Secret_must_be_provided; return null; } var postData = string.Format("grant_type=client_credentials&client_id={0}" + "&client_secret={1}" + "&scope=http://api.microsofttranslator.com", HttpUtility.UrlEncode(clientId), HttpUtility.UrlEncode(clientSecret)); // POST Auth data to the oauth API string res, token; try { var web = new WebClient(); web.Encoding = Encoding.UTF8; res = web.UploadString(authBaseUrl, postData); } catch (Exception ex) { ErrorMessage = ex.GetBaseException().Message; return null; } var ser = new JavaScriptSerializer(); var auth = ser.Deserialize<BingAuth>(res); if (auth == null) return null; token = auth.access_token; return token; } private class BingAuth { public string token_type { get; set; } public string access_token { get; set; } } This code basically takes the client id and secret and posts it at the oAuth endpoint which returns a JSON string. Here I use the JavaScript serializer to deserialize the JSON into a custom object I created just for deserialization. You can also use JSON.NET and dynamic deserialization if you are already using JSON.NET in your app, in which case you don't need the extra type. In my library that houses this component I don't, so I just rely on the built in serializer. The auth method returns a long base64 encoded string which can be used as a bearer token in the translate API call.
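    For completeness, here's a sketch of the kind of caching scheme mentioned above - reusing the bearer token across calls but refreshing it before the 10 minute lifetime runs out. The wrapper class and the 9 minute refresh window are my assumptions, not part of the original code:

    using System;

    public class BingTokenCache
    {
        static string cachedToken;
        static DateTime tokenExpires = DateTime.MinValue;
        static readonly object tokenLock = new object();

        // getFreshToken would be a call to the GetBingAuthToken() method shown above
        public static string GetToken(Func<string> getFreshToken)
        {
            lock (tokenLock)
            {
                if (cachedToken == null || DateTime.UtcNow >= tokenExpires)
                {
                    cachedToken = getFreshToken();
                    // refresh a minute early so we never present an expired token
                    tokenExpires = DateTime.UtcNow.AddMinutes(9);
                }
                return cachedToken;
            }
        }
    }

    A call site would then use BingTokenCache.GetToken(() => translate.GetBingAuthToken(clientId, clientSecret)) and pass the result on to the translate call.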
Translation Once you have the authentication token you can use it to pass to the translate API. The auth token is passed as an Authorization header and the value is prefixed with a 'Bearer ' prefix for the string. Here's what the simple Translate API call looks like:/// /// Uses the Bing API service to perform translation /// Bing can translate up to 1000 characters. /// /// Requires that you provide a CLientId and ClientSecret /// or set the configuration values for these two. /// /// More info on setup: /// http://www.west-wind.com/weblog/ /// /// Text to translate /// Two letter culture name /// Two letter culture name /// Pass an access token retrieved with GetBingAuthToken. /// If not passed the default keys from .config file are used if any /// public string TranslateBing(string text, string fromCulture, string toCulture, string accessToken = null) { string serviceUrl = "http://api.microsofttranslator.com/V2/Http.svc/Translate"; if (accessToken == null) { accessToken = GetBingAuthToken(); if (accessToken == null) return null; } string res; try { var web = new WebClient(); web.Headers.Add("Authorization", "Bearer " + accessToken); string ct = "text/plain"; string postData = string.Format("?text={0}&from={1}&to={2}&contentType={3}", HttpUtility.UrlEncode(text), fromCulture, toCulture, HttpUtility.UrlEncode(ct)); web.Encoding = Encoding.UTF8; res = web.DownloadString(serviceUrl + postData); } catch (Exception e) { ErrorMessage = e.GetBaseException().Message; return null; } // result is a single XML Element fragment var doc = new XmlDocument(); doc.LoadXml(res); return doc.DocumentElement.InnerText; } The first of this code deals with ensuring the auth token exists. You can either pass the token into the method manually or let the method automatically retrieve the auth code on its own. In my case I'm using this inside of a Web application and in that situation I simply need to re-authenticate every time as there's no convenient way to manage the lifetime of the auth cookie. The auth token is added as an Authorization HTTP header prefixed with 'Bearer ' and attached to the request. The text to translate, the from and to language codes and a result format are passed on the query string of this HTTP GET request against the Translate API. The translate API returns an XML string which contains a single element with the translated string. Using the Wrapper Methods It should be pretty obvious how to use these two methods but here are a couple of test methods that demonstrate the two usage scenarios:[TestMethod] public void TranslateBingWithAuthTest() { var translate = new TranslationServices(); string clientId = DbResourceConfiguration.Current.BingClientId; string clientSecret = DbResourceConfiguration.Current.BingClientSecret; string auth = translate.GetBingAuthToken(clientId, clientSecret); Assert.IsNotNull(auth); string text = translate.TranslateBing("Hello World we're back home!", "en", "de",auth); Assert.IsNotNull(text, translate.ErrorMessage); Console.WriteLine(text); } [TestMethod] public void TranslateBingIntegratedTest() { var translate = new TranslationServices(); string text = translate.TranslateBing("Hello World we're back home!","en","de"); Assert.IsNotNull(text, translate.ErrorMessage); Console.WriteLine(text); } Other API Methods The Translate API has a number of methods available and this one is the simplest one but probably also the most common one that translates a single string. 
    You can find additional methods for this API here: http://msdn.microsoft.com/en-us/library/ff512419.aspx Soap and AJAX APIs are also available and documented on MSDN: http://msdn.microsoft.com/en-us/library/dd576287.aspx These links will be your starting points for calling other methods in this API. Dual Interface I've talked about my database driven localization provider here in the past, and it's for this tool that I added the Bing localization support. Basically I have a localization administration form that allows me to translate individual strings right out of the UI, using both Google and Bing APIs: As you can see in this example, the results from Google and Bing can vary quite a bit - in this case Google is stumped while Bing actually generated a valid translation. At other times it's the other way around - it's pretty useful to see multiple translations at the same time. Here I can choose from one of the values and directly embed them into the translated text field. Lost in Translation There you have it. As I mentioned, once you have all the bureaucratic crap out of the way, calling the APIs is fairly straightforward and reasonably fast, even if you have to call the Auth API for every call. Hopefully this post will help out a few of you trying to navigate the Microsoft bureaucracy, at least until next time Microsoft upends everything and introduces new ways to sign up again. Until then - happy translating… Related Posts Translation method Source on Github Translating with Google Translate without Google API Keys Creating a data-driven ASP.NET Resource Provider © Rick Strahl, West Wind Technologies, 2005-2013 Posted in Localization  ASP.NET  .NET

    Read the article

  • Building a jQuery Plug-in to make an HTML Table scrollable

    - by Rick Strahl
    Today I got a call from a customer and we were looking over an older application that uses a lot of tables to display financial and other assorted data. The application is mostly meta-data driven with lots of layout formatting automatically driven through meta data rather than through explicit hand coded HTML layouts. One of the problems in this app is tables that display a non-fixed amount of data. The users of this app don't want to use paging to see more data, but instead want to display overflow data using a scrollbar. Many of the forms are very densely populated, often with multiple data tables that display a few rows of data in the UI at the most. This sort of layout does not lend itself well to paging, but works much better with scrollable data. Unfortunately scrollable tables are not easily created. HTML Tables are mangy beasts as anybody who's done any sort of Web development knows. Tables are finicky when it comes to styling and layout, and they have many funky quirks, especially when it comes to scrolling, both of the table rows themselves or even the child columns. There's no built-in way to make tables scroll and to lock headers while you do, and while you can embed a table (or anything really) into a scrolling div with something like this: <div style="position:relative; overflow: hidden; overflow-y: scroll; height: 200px; width: 400px;"> <table id="table" style="width: 100%" class="blackborder" > <thead> <tr class="gridheader"> <th>Column 1</th> <th>Column 2</th> <th>Column 3</th> <th >Column 4</th> </tr> </thead> <tbody> <tr> <td>Column 1 Content</td> <td>Column 2 Content</td> <td>Column 3 Content</td> <td>Column 4 Content</td> </tr> <tr> <td>Column 1 Content</td> <td>Column 2 Content</td> <td>Column 3 Content</td> <td>Column 4 Content</td> </tr> … </tbody> </table> </div> that won't give a very satisfying visual experience: Both the header and body scroll, which looks odd. You lose context as soon as the header scrolls off the top, and when you reach the bottom of the list the bottom outline of the table shows, which also looks off. The side bar shows all the way down the length of the table - yet another visual miscue. In a pinch this will work, but it's ugly. What's out there? Before we go further here you should know that there are a few capable grid plug-ins out there already. Among them: Flexigrid (can work off any table as well as with AJAX data) jQuery Scrollable Table Plug-in (features similar to what I need, but not quite) jqGrid (mostly an Ajax Grid which is very powerful and works very well) But in the end none of them fit the bill of what I needed in this situation. All of these require custom CSS and some of them are fairly complex to restyle. Others are AJAX only or work better with AJAX loaded data. However, I needed to try (as much as possible) to maintain the original styling of the tables without requiring extensive re-styling. Building the makeTableScrollable() Plug-in To make a table scrollable requires rearranging the table a bit. In the plug-in I built I create two <div> tags and split the table into two: one for the table header and one for the table body. The bottom <div> tag then contains only the table's row data and can be scrolled while the header stays fixed. Using jQuery the basic idea is pretty simple: You create the divs, copy the original table into the bottom one, then clone the table, clear all content, append the <thead> section into the new table, and then copy that table into the second header <div>. Easy as pie, right?
    Unfortunately it's a bit more complicated than that as it's tricky to get the width of the table right to account for the scrollbar (by adding a small column) and making sure the borders properly line up for the two tables. A lot of style settings have to be made to ensure the table is a fixed size, to remove and reattach borders, to add extra space to allow for the scrollbar and so forth. The end result of my plug-in is a table with a scrollbar. Using the same table I used earlier the result looks like this: To create it, I use the following jQuery plug-in logic to select my table and run the makeTableScrollable() plug-in against the selector: $("#table").makeTableScrollable( { cssClass:"blackborder"} ); Without much further ado, here's the short code for the plug-in: (function ($) { $.fn.makeTableScrollable = function (options) { return this.each(function () { var $table = $(this); var opt = { // height of the table height: "250px", // right padding added to support the scrollbar rightPadding: "10px", // cssclass used for the wrapper div cssClass: "" }; $.extend(opt, options); var $thead = $table.find("thead"); var $ths = $thead.find("th"); var id = $table.attr("id"); var cssClass = $table.attr("class"); if (!id) id = "_table_" + new Date().getMilliseconds().toString(); $table.width("+=" + opt.rightPadding); $table.css("border-width", 0); // add a column to all rows of the table var first = true; $table.find("tr").each(function () { var row = $(this); if (first) { row.append($("<th>").width(opt.rightPadding)); first = false; } else row.append($("<td>").width(opt.rightPadding)); }); // force full sizing on each of the th elements $ths.each(function () { var $th = $(this); $th.css("width", $th.width()); }); // Create the table wrapper div var $tblDiv = $("<div>").css({ position: "relative", overflow: "hidden", overflowY: "scroll" }) .addClass(opt.cssClass); var width = $table.width(); $tblDiv.width(width).height(opt.height) .attr("id", id + "_wrapper") .css("border-top", "none"); // Insert before $tblDiv $tblDiv.insertBefore($table); // then move the table into it $table.appendTo($tblDiv); // Clone the div for header var $hdDiv = $tblDiv.clone(); $hdDiv.empty(); var width = $table.width(); $hdDiv.attr("style", "") .css("border-bottom", "none") .width(width) .attr("id", id + "_wrapper_header"); // create a copy of the table and remove all children var $newTable = $($table).clone(); $newTable.empty() .attr("id", $table.attr("id") + "_header"); $thead.appendTo($newTable); $hdDiv.insertBefore($tblDiv); $newTable.appendTo($hdDiv); $table.css("border-width", 0); }); } })(jQuery); Oh sweet spaghetti code :-) The code starts out by dealing with the parameters that can be passed in the options object map: height The height of the full table/structure. The height of the outside wrapper container. Defaults to 250px. rightPadding The padding that is added to the right of the table to account for the scrollbar. Creates a column of this width and injects it into the table. If too small, the rightmost column might get truncated; if too large, the empty column might show. cssClass The CSS class of the wrapping container that appears to wrap the table. If you want a border around your table this class should probably provide it since the plug-in removes the table border. The rest of the code is obtuse, but pretty straightforward. It starts by creating a new column in the table to accommodate the width of the scrollbar and avoid clipping of text in the rightmost column.
The width of the columns is explicitly set in the header elements to force the size of the table to be fixed and to provide the same sizing when the THEAD section is moved to a new copied table later. The table wrapper div is created, formatted and the table is moved into it. The new wrapper div is cloned for the header wrapper and configured. Finally the actual table is cloned and cleared of all elements. The original table's THEAD section is then moved into the new table. At last the new table is added to the header <div>, and the header <div> is inserted before the table wrapper <div>. I'm always amazed how easy jQuery makes it to do this sort of re-arranging, and given of what's happening the amount of code is rather small. Disclaimer: Your mileage may vary A word of warning: I make no guarantees about the code above. It's a first cut and I provided this here mainly to demonstrate the concepts of decomposing and reassembling an HTML layout :-) which jQuery makes so nice and easy. I tested this component against the typical scenarios we plan on using it for which are tables that use a few well known styles (or no styling at all). I suspect if you have complex styling on your <table> tag that things might not go so well. If you plan on using this plug-in you might want to minimize your styling of the table tag and defer any border formatting using the class passed in via the cssClass parameter, which ends up on the two wrapper div's that wrap the header and body rows. There's also no explicit support for footers. I rarely if ever use footers (when not using paging that is), so I didn't feel the need to add footer support. However, if you need that it's not difficult to add - the logic is the same as adding the header. The plug-in relies on a well-formatted table that has THEAD and TBODY sections along with TH tags in the header. Note that ASP.NET WebForm DataGrids and GridViews by default do not generate well-formatted table HTML. You can look at my Adding proper THEAD sections to a GridView post for more info on how to get a GridView to render properly. The plug-in has no dependencies other than jQuery. Even with the limitations in mind I hope this might be useful to some of you. I know I've already identified a number of places in my own existing applications where I will be plugging this in almost immediately. Resources Download Sample and Plug-in code Latest version in the West Wind Web & AJAX Toolkit Repository © Rick Strahl, West Wind Technologies, 2005-2011Posted in jQuery  HTML  ASP.NET  

    Read the article

  • How to create a new WCF/MVC/jQuery application from scratch

    - by pjohnson
    As a corporate developer by trade, I don't get much opportunity to create from-the-ground-up web sites; usually it's tweaks, fixes, and new functionality to existing sites. And with hobby sites, I often don't find the challenges I run into with enterprise systems; usually it's starting from Visual Studio's boilerplate project and adding whatever functionality I want to play around with, rarely deploying outside my own machine. So my experience creating a new enterprise-level site was a bit dated, and the technologies to do so have come a long way, and are much more ready to go out of the box. My intention with this post isn't so much to provide any groundbreaking insights, but to just tie together a lot of information in one place to make it easy to create a new site from scratch. Architecture One site I created earlier this year had an MVC 3 front end and a WCF 4-driven service layer. Using Visual Studio 2010, these project types are easy enough to add to a new solution. I created a third Class Library project to store common functionality the front end and services layers both needed to access, for example, the DataContract classes that the front end uses to call services in the service layer. By keeping DataContract classes in a separate project, I avoided the need for the front end to have an assembly/project reference directly to the services code, a bit cleaner and more flexible of an SOA implementation. Consuming the service Even by this point, VS has given you a lot. You have a working web site and a working service, neither of which do much but are great starting points. To wire up the front end and the services, I needed to create proxy classes and WCF client configuration information. I decided to use the SvcUtil.exe utility provided as part of the Windows SDK, which you should have installed if you installed VS. VS also provides an Add Service Reference command since the .NET 1.x ASMX days, which I've never really liked; it creates several .cs/.disco/etc. files, some of which contained hardcoded URL's, adding duplicate files (*1.cs, *2.cs, etc.) without doing a good job of cleaning up after itself. I've found SvcUtil much cleaner, as it outputs one C# file (containing several proxy classes) and a config file with settings, and it's easier to use to regenerate the proxy classes when the service changes, and to then maintain all your configuration in one place (your Web.config, instead of the Service Reference files). I provided it a reference to a copy of my common assembly so it doesn't try to recreate the data contract classes, had it use the type List<T> for collections, and modified the output files' names and .NET namespace, ending up with a command like: svcutil.exe /l:cs /o:MyService.cs /config:MyService.config /r:MySite.Common.dll /ct:System.Collections.Generic.List`1 /n:*,MySite.Web.ServiceProxies http://localhost:59999/MyService.svc I took the generated MyService.cs file and drop it in the web project, under a ServiceProxies folder, matching the namespace and keeping it separate from classes I coded manually. Integrating the config file took a little more work, but only needed to be done once as these settings didn't often change. A great thing Microsoft improved with WCF 4 is configuration; namely, you can use all the default settings and not have to specify them explicitly in your config file. Unfortunately, SvcUtil doesn't generate its config file this way. 
User interface

The MVC 3 project template includes jQuery and some other common JavaScript libraries by default. I updated the ones I used to the latest versions using NuGet, available in VS via Tools > Library Package Manager > Manage NuGet Packages for Solution... > Updates. I also used this dialog to remove packages I wasn't using. Given that it's smart enough to know the difference between the .js and .min.js files, I was hoping it would be smart enough to know which to include during build and publish operations, but this doesn't seem to be the case. I ended up using Cassette to perform the minification and bundling of my JavaScript and CSS files; ASP.NET 4.5 includes this functionality out of the box.

The web client to web server link via jQuery was easy enough. In my JavaScript function, unobtrusively wired up to a button's click event, I called $.ajax against an action method that returns a JsonResult, accomplished by passing my model class to the Controller.Json() method, which jQuery helpfully translates from JSON to a JavaScript object. The $.ajax calls weren't perfectly straightforward. I tried using the simpler $.post method instead, but ran into trouble without specifying the contentType parameter, which $.post doesn't have. The url parameter is simple enough, though for flexibility in how the site is deployed, I used MVC's Url.Action method to get the URL, then sent this to JavaScript in a string variable. If the request needed input data, I used the JSON.stringify function to convert a JavaScript object holding the parameters into a JSON string, which MVC then parses into strongly typed C# parameters. I also specified "json" for dataType, and "application/json; charset=utf-8" for contentType. For success and error, I provided my success and error handling functions, though success is a bit hairier.
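Before unpacking what "success" means here, this hypothetical sketch shows the general shape of a controller action such a $.ajax call might target; the WidgetModel type and all names and values are illustrative stand-ins, not the original site's code:

using System.Web.Mvc;

namespace MySite.Web.Controllers
{
    // Hypothetical model returned to the browser as JSON.
    public class WidgetModel
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class WidgetController : Controller
    {
        // MVC binds the JSON.stringify'd request body to the id parameter,
        // and Json() serializes the model back to the client.
        [HttpPost]
        public JsonResult GetWidget(int id)
        {
            var model = new WidgetModel { Id = id, Name = "Sample widget" };
            return Json(model);
        }
    }
}

On the JavaScript side, Url.Action("GetWidget", "Widget") would supply the url parameter for this action.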
"Success" in this context indicates whether the HTTP request succeeds, not whether what you wanted the AJAX call to do on the web server was successful. For example, if you make an AJAX call to retrieve a piece of data, the success handler will be invoked for any 200 OK response, and the error handler will be invoked for failed requests, e.g. a 404 Not Found (if the server rejected the URL you provided in the url parameter) or 500 Internal Server Error (e.g. if your C# code threw an exception that wasn't caught). If an exception was caught and handled, or if the data requested wasn't found, this would likely go through the success handler, which would need to do further examination to verify it did in fact get back the data for which it asked. I discuss this more in the next section. Logging and exception handling At this point, I had a working application. If I ran into any errors or unexpected behavior, debugging was easy enough, but of course that's not an option on public web servers. Microsoft Enterprise Library 5.0 filled this gap nicely, with its Logging and Exception Handling functionality. First I installed Enterprise Library; NuGet as outlined above is probably the best way to do so. I needed a total of three assembly references--Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging, and Microsoft.Practices.EnterpriseLibrary.Logging. VS links with the handy Enterprise Library 5.0 Configuration Console, accessible by right-clicking your Web.config and choosing Edit Enterprise Library V5 Configuration. In this console, under Logging Settings, I set up a Rolling Flat File Trace Listener to write to log files but not let them get too large, using a Text Formatter with a simpler template than that provided by default. Logging to a different (or additional) destination is easy enough, but a flat file suited my needs. At this point, I verified it wrote as expected by calling the Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write method from my C# code. With those settings verified, I went on to wire up Exception Handling with Logging. Back in the EntLib Configuration Console, under Exception Handling, I used a LoggingExceptionHandler, setting its Logging Category to the category I already had configured in the Logging Settings. Then, from code (e.g. a controller's OnException method, or any action method's catch block), I called the Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicy.HandleException method, providing the exception and the exception policy name I had configured in the Exception Handling Settings. Before I got this configured correctly, when I tried it out, nothing was logged. In working with .NET, I'm used to seeing an exception if something doesn't work or isn't set up correctly, but instead working with these EntLib modules reminds me more of JavaScript (before the "use strict" v5 days)--it just does nothing and leaves you to figure out why, I presume due in part to the listener pattern Microsoft followed with the Enterprise Library. First, I verified logging worked on its own. Then, verifying/correcting where each piece wires up to the next resolved my problem. 
Your C# code calls into the Exception Handling module, referencing the policy you pass to the HandleException method; that policy's configuration contains a LoggingExceptionHandler that references a logCategory; that logCategory should be added in the loggingConfiguration's categorySources section; that category references a listener; and that listener should be added in the loggingConfiguration's listeners section, which specifies the name of the log file.

One final note on error handling, as the proper way to handle WCF and MVC errors is a whole other, very lengthy discussion. For AJAX calls to MVC action methods, depending on your configuration, an exception thrown there will result in ASP.NET's Yellow Screen Of Death being sent back as the response, which is at best unnecessarily and uselessly verbose, and at worst a security risk, as the internals of your application are exposed to potential hackers. I mitigated this by overriding my controller's OnException method and passing the exception off to the Exception Handling module as above. I created an ErrorModel class with as few properties as possible (e.g. an Error string), sending as little information to the client as possible, both to conserve bandwidth and to mitigate risk. I then return an ErrorModel in JSON format for AJAX requests:

if (filterContext.HttpContext.Request.IsAjaxRequest())
{
    filterContext.Result = Json(new ErrorModel(...));
    filterContext.ExceptionHandled = true;
}

My $.ajax calls from the browser get a valid 200 OK response and go into the success handler. Before assuming everything is OK, I check whether the result is an ErrorModel or a model containing what I requested. If it's an ErrorModel, or null, I pass it to my error handler. If the client needs to handle different errors differently, ErrorModel can contain a flag, an error code, a string, etc. to differentiate, but again, sending as little information back as possible is ideal.
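Putting those pieces together, a minimal sketch of the ErrorModel class and the OnException override might look like this; the "Web Policy" name, the ErrorModel constructor, and the error message are assumptions for illustration, as the post leaves them unspecified:

using System.Web.Mvc;
using Microsoft.Practices.EnterpriseLibrary.ExceptionHandling;

namespace MySite.Web.Controllers
{
    // Deliberately sparse: the client only ever sees a short message.
    public class ErrorModel
    {
        public ErrorModel(string error) { Error = error; }
        public string Error { get; set; }
    }

    public class BaseController : Controller
    {
        protected override void OnException(ExceptionContext filterContext)
        {
            // Log through the configured EntLib exception policy.
            ExceptionPolicy.HandleException(filterContext.Exception, "Web Policy");

            // For AJAX calls, suppress the Yellow Screen Of Death and
            // return a terse JSON error the success handler can detect.
            if (filterContext.HttpContext.Request.IsAjaxRequest())
            {
                filterContext.Result = Json(new ErrorModel("An error occurred."));
                filterContext.ExceptionHandled = true;
            }

            base.OnException(filterContext);
        }
    }
}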
Summary

As any experienced ASP.NET developer knows, this is a far cry from where ASP.NET started when I began working with it 11 years ago. WCF services are far more powerful than ASMX ones, MVC is in many ways cleaner and certainly more unit-test-friendly than Web Forms (if you don't consider the code/markup commingling you're doing again), the Enterprise Library makes error handling and logging almost entirely configuration-driven, AJAX makes a responsive UI more feasible, and jQuery makes JavaScript coding much less painful. It doesn't take much work to get a functional, maintainable, flexible application, though having it actually do something useful is a whole other matter.

    Read the article

  • CodePlex Daily Summary for Saturday, November 12, 2011

CodePlex Daily Summary for Saturday, November 12, 2011

Popular Releases

- Dynamic PagedCollection (Silverlight / WPF Pagination): PagedCollection: All classes which facilitate your dynamic pagination in Silverlight or WPF!
- Media Companion: MC 3.422b Weekly: Ensure the .NET 4.0 Full Framework is installed (available from http://www.microsoft.com/download/en/details.aspx?id=17718). Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here.) TV show resolutions... Made the TV Shows folder list sorted. Made 'Manually Add Path' visible again in Root Folders. Sorted the list to process during a new TV episode search. Rebuild Movies now processes through folders alphabetically. Fix for issue #208 - Display Missing Episodes is not popu...
- DotSpatial: DotSpatial Release Candidate 1 (1.0.823): Supports loading extensions using System.ComponentModel.Composition. DemoMap compiled as x86 so that GDAL runs on x64 machines. How to: Use an Assembly from the Web. Be aware that your browser may add an identifier to downloaded files which results in "blocked" dll files. To learn how to "unblock" files: right-click the zip file before unzipping, choose Properties, go to the General tab, and click the Unblock button. http://msdn.microsoft.com/en-us/library...
- XPath Visualizer: XPathVisualizer v1.3 Latest: This is v1.3.0.6 of XPathVisualizer, an update release for v1.3. These work items have been fixed since v1.3.0.5: 7429, 7432, 7427.
- MSBuild Extension Pack: November 2011: Release blog post. The MSBuild Extension Pack November 2011 release provides a collection of over 415 MSBuild tasks. A high-level summary of what the tasks currently cover includes the following. System items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound. Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GU...
- Quick Performance Monitor: Version 1.8.0: Now you can add multiple counters/instances at once! Yes, I know it's overdue...
- CODE Framework: 4.0.11110.0: Various minor fixes and tweaks.
- Extensions for Reactive Extensions (Rxx): Rxx 1.2: What's new / related work items: please read the latest release notes for details about what's new. Content summary: Rxx provides the following features (see the documentation for details): many IObservable<T> extension methods and IEnumerable<T> extension methods; many useful types such as ViewModel, CommandSubject, ListSubject, DictionarySubject, ObservableDynamicObject, Either<TLeft, TRight>, Maybe<T> and others; various interactive labs that illustrate the runtime behavior of the extensio...
- Player Framework by Microsoft: HTML5 Player Framework 1.0: Additional downloads: HTML5 Player Framework Examples - a set of examples showing how to set up and initialize the HTML5 Player Framework, including examples of how to use the Player Framework with both the HTML5 video tag and the Silverlight player. Note: be sure to unblock the zip file before using. Note: in order to test Silverlight fallback in the included sample app, you need to run the html and xap files over http (e.g. over localhost). Silverlight Players - visit the Silverlig...
- MapWindow 4: MapWindow GIS v4.8.6 - Final release - 64Bit: What's new in 4.8.6 (final release): a few minor issues have been fixed. What's new in 4.8.5 (beta release): assign projection tool (Sergei Leschinsky); projection dialects (Sergei Leschinsky); projections database converted to SQLite format (Sergei Leschinsky); basic code for database support, to be developed further (ShapefileDataClient class, IDataProvider interface) (Sergei Leschinsky); 'Export shapefile to database' tool (Sergei Leschinsky); made the GEOS library static. geos.dl...
- NewLife XCode (Chinese release notes): XCode v8.2.2011.1107, XCoder v4.5.2011.1108: v8.2.2011.1107 adds a forEdit parameter to IEntityOperate.Create and Entity.CreateInstance (used by FindByKeyForEdit; defaults to false). v8.2.2011.1103 improves MS SQL paging (MaxMin, NotIn, Top, RowNumber). v8.2.2011.1101 adds a DataPath setting for SQL Server, a DllPath/OCI setting for Oracle (honoring ORACLE_HOME), and the XCode.Oracle.IsUseOwner option...
- Facebook C# SDK: v5.3.2: This is an RTW release which adds new features and bug fixes to v5.2.1. Query/QueryAsync methods use the Graph API instead of the legacy REST API; removed the dependency on Code Contracts; enabled Task Parallel support in .NET 4.0+ (experimental); added support for an early preview of .NET 4.5 (binaries not distributed on CodePlex or nuget.org; you will need to build manually from Facebook-Net45.sln); added additional method overloads for .NET 4.5 to support IProgress<T> for upload progress; added ne...
- Delete Inactive TS Ports: List and delete the Inactive TS Ports: UPDATE: added support for Windows 2003 servers and removed some null reference errors when the registry key was not present. The InactiveTSPortList.EXE accepts command line arguments; the InactiveTSPortList.Standalone.WithoutPrompt.exe runs as a standalone exe without the need for any command line arguments.
- Ribbon Editor for Microsoft Dynamics CRM 2011: Ribbon Editor (0.1.2207.267): Bug fixes: cannot add multiple JavaScript and Url entries under Actions; cannot add an <Or> node under <OrGroup>; adding a rule under an <Or> node put the new rule node in the wrong place.
- WF Designer Express: WF Designer Express v0.6: Multi-language support (now Japanese and English).
- ClosedXML - The easy way to OpenXML: ClosedXML 0.60.0: Added almost full support for auto filters (missing custom date filters). See the examples Filter Values and Custom Filters. Fixed issues 7016, 7391, 7388, 7389, 7198, 7196, 7194, 7186, 7067, 7115, 7144.
- Microsoft Research Boogie: Nightly builds: This download category contains automatically released nightly builds, reflecting the current state of Boogie's development. We try to make sure each nightly build passes the test suite. If you suspect that was not the case, please try the previous nightly build to see if that really is the problem. Also, please see the installation instructions.
- GoogleMap Control: GoogleMap Control 6.0: Major design changes to the control in order to achieve better scalability and extensibility for the new features coming with the Google Maps API. GoogleMap control switched to Google Maps API v3 and .NET 4.0. GoogleMap control is 100% ScriptControl now; it requires a ScriptManager to be registered on the pages where, and before, it is used. Markers, polylines, polygons and directions were implemented as ExtenderControls instead of inner properties of the GoogleMap control. Better performance. Better...
- WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: V2.1: Version 2.1 (click on the right) uses V4.0 of .NET. Version 2.1 adds the following features (apologies if I forget some; a lot of little things were added): Manual Lookup with TV or Movie (finally, huh!) - you can look up a movie or TV episode directly; right-click on anything, choose manual lookup, type anything you want to look up, and it will be assigned to the file you right-clicked. No Rename: a very popular request; this is an option you can set so that t...
- SubExtractor: Release 1020: Feature: added "baseline double quotes" character to the selector box. Feature: added an option to save SRT files as ANSI (instead of the previous UTF-8 only). Feature: made "Save Sup files to Source directory" apply to both Sup and Idx source files. Fix: removed SDH text (...) or [...] that is split over two lines. Fix: better decision-making on when to prefix a line with a '-' because SDH was removed.

New Projects

- Gemini - code branch: the branch for all Hzj_jie's code.
- (MPLF) MediaPortal LoveFilm Plugin: MediaPortal LoveFilm (Love Film) plugin. Browse films, add/remove from lists, streams, etc.
- _: _
- Batalha Naval em C#: A naval battle (Battleship) game for a C# course.
- BusyHoliday - Making Outlook's Calendar Busy for Holidays: When users add holidays to their Outlook calendar, these are added as all-day events, but as free time. With users in other countries, someone who invites a user in a country with a public holiday can't see that the user is on a public holiday or busy. BusyHoliday updates existing holidays as busy time, which may improve users' ability to collaborate efficiently across geographical boundaries.
- C# and MCU: C# and MCU and web.
- Campo Minado C Sharp: C# source code for a Minesweeper game.
- CP2011test: CP2011 test.
- Dynamic PagedCollection (Silverlight / WPF Pagination): PagedCollection helps a developer easily control the pagination of a collection. This collection doesn't need all data in memory; it can be attached to RIA/SQL/XML data and load the needed data one page at a time. The PagedCollection can be used with Silverlight or WPF.
- Fast MVVM: A faster way to build your MVVM (Model, View, ViewModel) applications. It's a basic and educational way to learn and work with MVVM: more than just a pack of classes or templates, it's a new concept that makes things a little easier.
- Habitat: Core game services, frameworks and templates for desktop, mobile and web game developers.
- Historical Data Repository: The repository stores serializable timestamped data items in compressed files, structured to allow quickly finding data by time and quickly reading and writing data items in chronological order. Key features: all data is stored under a single root folder; like a file system, the repository can contain a tree of folders, each of which contains its own data stream; when reading, multiple streams can be combined into a single output stream maintaining chronological order; the tree structure can b...
- Immunity Buster: This game is about HIV. We will be modeling the virus to help spread knowledge about it and have fun among peers.
- JoeCo Blog Application: JoeCo blog application for ECE 574.
- jweis sso user center: jweis is a single sign-on (SSO) user center, intended for country-wide use.
- Kinect Shape Game Connect with Windows Phone: The classic Shape Game for Microsoft Kinect, extended to allow players to join the game and deploy shapes from their Windows Phone. Uses TCP sockets. For educational purposes; free to download and distribute.
- Liekhus Behavior Driven DevExpress XAF: Using the BDD (Behavior Driven Development) techniques of SpecFlow and the EasyTest scriptability of DevExpress XAF (eXpress Application Framework), you can leverage a test-first style of application development at a speed never thought imaginable.
- Linq to BBYOpen: A C# LINQ provider to access the REST-based BBYOpen services from Best Buy.
- Mi clima: The source code for the application "Mi clima" published in the Windows Phone Marketplace.
- MySqlQueryToolkit: This project bundles all of the common performance tests into a single handy user interface. Simply drop your query in and use the tools to make it faster. Includes index suggestion, column data type and length suggestion, and common profiling options.
- OData Validation Framework: A project showing how to validate data using client- and server-side OData/WCF Data Services.
- OMR Table Data Syncronizer - Only Data: This tool compares and synchronizes table data; no schema synchronization is included in this project. Data comparison is fast: comparing 1,000,000 rows takes ~200 ms.
- OrchardPo: The .po file management module used on http://orchardproject.net.
- Resistance Calculator: A calculator for parallel and/or series resistance.
- SMS Team 5: An automatic SMS sending system.
- SportsStore: A TFS test of SportsStore.
- UDP Transport: An attempt at implementing a WCF UDP transport compliant with the SOAP-over-UDP OASIS standard.
- WebUtils: A library of generic functions useful with ASP.NET MVC.
- Diavgeia - Greek Government Open Data Program - Windows Mobile Client: The Open Government application gives you direct access from your mobile phone to all laws, decisions, and expenditures of the Greek state and public services, via the Greek government's Diavgeia (transparency) program.

    Read the article

< Previous Page | 390 391 392 393 394 395 396 397 398 399 400 401  | Next Page >