Search Results

Search found 24721 results on 989 pages for 'int tostring'.


  • What is the way to pass parameters to a DrawableGameComponent in XNA 4.0?

    - by cad
    I have a small demo and I want to create a class that draws messages on screen, such as the FPS rate. I am reading an XNA book and it covers GameComponents. I have created a class that inherits from DrawableGameComponent:

```csharp
public class ScreenMessagesComponent : Microsoft.Xna.Framework.DrawableGameComponent
```

    I override some methods like Initialize or LoadContent. But when I want to override Draw I have a problem: I would like to pass some parameters to it, and the overridden method does not allow me to pass parameters.

```csharp
public override void Draw(GameTime gameTime)
{
    StringBuilder buffer = new StringBuilder();
    buffer.AppendFormat("FPS: {0}\n", framesPerSecond); // Where to get framesPerSecond from???
    spriteBatch.DrawString(spriteFont, buffer.ToString(), fontPos, Color.Yellow);
    base.Draw(gameTime);
}
```

    If I create a method with parameters, then I cannot override it and it will not be called automatically:

```csharp
public void Draw(SpriteBatch spriteBatch, int framesPerSecond)
{
    StringBuilder buffer = new StringBuilder();
    buffer.AppendFormat("FPS: {0}\n", framesPerSecond);
    spriteBatch.DrawString(spriteFont, buffer.ToString(), fontPos, Color.Yellow);
    base.Draw(gameTime);
}
```

    So my questions are: Is there a mechanism to pass parameters to a DrawableGameComponent? What is the best practice? In general, is it good practice to use GameComponents?
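
    One common approach (sketched below, not from the original post; all names are illustrative) is to keep the parameterless Draw override and feed the component its data through a public property, updated from the game's Update loop:

```csharp
// Sketch: the component owns a FramesPerSecond property; the Game sets it every
// frame, and the standard parameterless Draw override reads it.
public class ScreenMessagesComponent : Microsoft.Xna.Framework.DrawableGameComponent
{
    private SpriteBatch spriteBatch;
    private SpriteFont spriteFont;
    private Vector2 fontPos = new Vector2(10f, 10f);

    public int FramesPerSecond { get; set; }   // set from Game.Update

    public ScreenMessagesComponent(Game game) : base(game) { }

    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);
        spriteFont = Game.Content.Load<SpriteFont>("MessageFont"); // illustrative asset name
        base.LoadContent();
    }

    public override void Draw(GameTime gameTime)
    {
        spriteBatch.Begin();
        spriteBatch.DrawString(spriteFont, "FPS: " + FramesPerSecond, fontPos, Color.Yellow);
        spriteBatch.End();
        base.Draw(gameTime);
    }
}
```

    In the Game subclass, `screenMessages.FramesPerSecond = currentFps;` inside Update(gameTime) is then enough; the component system calls Draw for you. A variant with the same shape is registering the data provider as a game service and querying it from the component.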


  • Class Mapping Error: 'T' must be a non-abstract type with a public parameterless constructor

    - by Amit Ranjan
    Hi, while mapping a class I am getting the error: 'T' must be a non-abstract type with a public parameterless constructor in order to use it as parameter 'T' in the generic type or method. Below is my SqlReaderBase class:

```csharp
public abstract class SqlReaderBase<T> : ConnectionProvider
{
    #region Abstract Methods
    protected abstract string commandText { get; }
    protected abstract CommandType commandType { get; }
    protected abstract Collection<IDataParameter> GetParameters(IDbCommand command);
    protected abstract MapperBase<T> GetMapper();
    #endregion

    #region Non Abstract Methods
    /// <summary>
    /// Method to execute select queries and retrieve a list of results.
    /// </summary>
    public Collection<T> ExecuteReader()
    {
        // collection of the type the template is applied to
        Collection<T> collection = new Collection<T>();

        // initializing connection
        using (IDbConnection connection = GetConnection())
        {
            try
            {
                // create a command for sql operations and assign the connection and query
                IDbCommand command = connection.CreateCommand();
                command.Connection = connection;
                command.CommandText = commandText;

                // state what type of query is used: text, table or SP
                command.CommandType = commandType;

                // retrieve the parameters from the IDataParameter collection and assign them to the command
                foreach (IDataParameter param in GetParameters(command))
                    command.Parameters.Add(param);

                // establish the connection with the database server
                connection.Open();

                // Designed for executing select statements that return a list of results,
                // so call the command's ExecuteReader method, which returns a forward-only
                // reader with the list of results inside.
                using (IDataReader reader = command.ExecuteReader())
                {
                    try
                    {
                        // call the mapper class of the template to map the data
                        // to its respective fields
                        MapperBase<T> mapper = GetMapper();
                        collection = mapper.MapAll(reader);
                    }
                    catch (Exception ex)
                    {
                        throw ex; // log error
                    }
                    finally
                    {
                        reader.Close();
                        reader.Dispose();
                    }
                }
            }
            catch (Exception ex)
            {
                throw ex;
            }
            finally
            {
                connection.Close();
                connection.Dispose();
            }
        }
        return collection;
    }
    #endregion
}
```

    What I am trying to do is execute a command and fill my class dynamically. The class is given below:

```csharp
namespace FooZo.Core
{
    public class Restaurant
    {
        #region Private Member Variables
        private int _restaurantId = 0;
        private string _email = string.Empty;
        private string _website = string.Empty;
        private string _name = string.Empty;
        private string _address = string.Empty;
        private string _phone = string.Empty;
        private bool _hasMenu = false;
        private string _menuImagePath = string.Empty;
        private int _cuisine = 0;
        private bool _hasBar = false;
        private bool _hasHomeDelivery = false;
        private bool _hasDineIn = false;
        private int _type = 0;
        private string _restaurantImagePath = string.Empty;
        private string _serviceAvailableTill = string.Empty;
        private string _serviceAvailableFrom = string.Empty;

        public string Name { get { return _name; } set { _name = value; } }
        public string Address { get { return _address; } set { _address = value; } }
        public int RestaurantId { get { return _restaurantId; } set { _restaurantId = value; } }
        public string Website { get { return _website; } set { _website = value; } }
        public string Email { get { return _email; } set { _email = value; } }
        public string Phone { get { return _phone; } set { _phone = value; } }
        public bool HasMenu { get { return _hasMenu; } set { _hasMenu = value; } }
        public string MenuImagePath { get { return _menuImagePath; } set { _menuImagePath = value; } }
        public string RestaurantImagePath { get { return _restaurantImagePath; } set { _restaurantImagePath = value; } }
        public int Type { get { return _type; } set { _type = value; } }
        public int Cuisine { get { return _cuisine; } set { _cuisine = value; } }
        public bool HasBar { get { return _hasBar; } set { _hasBar = value; } }
        public bool HasHomeDelivery { get { return _hasHomeDelivery; } set { _hasHomeDelivery = value; } }
        public bool HasDineIn { get { return _hasDineIn; } set { _hasDineIn = value; } }
        public string ServiceAvailableFrom { get { return _serviceAvailableFrom; } set { _serviceAvailableFrom = value; } }
        public string ServiceAvailableTill { get { return _serviceAvailableTill; } set { _serviceAvailableTill = value; } }
        #endregion

        public Restaurant() { }
    }
}
```

    For filling my class properties dynamically I have another class called MapperBase with the following methods:

```csharp
public abstract class MapperBase<T> where T : new()
{
    protected T Map(IDataRecord record)
    {
        T instance = new T();
        string fieldName;
        PropertyInfo[] properties = typeof(T).GetProperties();
        for (int i = 0; i < record.FieldCount; i++)
        {
            fieldName = record.GetName(i);
            foreach (PropertyInfo property in properties)
            {
                if (property.Name == fieldName)
                {
                    property.SetValue(instance, record[i], null);
                }
            }
        }
        return instance;
    }

    public Collection<T> MapAll(IDataReader reader)
    {
        Collection<T> collection = new Collection<T>();
        while (reader.Read())
        {
            collection.Add(Map(reader));
        }
        return collection;
    }
}
```

    There is another class, DefaultSearch, which inherits SqlReaderBase. The code is below:

```csharp
public class DefaultSearch : SqlReaderBase<Restaurant>
{
    protected override string commandText
    {
        get { return "Select Name from vw_Restaurants"; }
    }

    protected override CommandType commandType
    {
        get { return CommandType.Text; }
    }

    protected override Collection<IDataParameter> GetParameters(IDbCommand command)
    {
        Collection<IDataParameter> parameters = new Collection<IDataParameter>();
        parameters.Clear();
        return parameters;
    }

    protected override MapperBase<Restaurant> GetMapper()
    {
        MapperBase<Restaurant> mapper = new RMapper();
        return mapper;
    }
}
```

    But whenever I try to build, I get the error 'T' must be a non-abstract type with a public parameterless constructor in order to use it as parameter 'T' in the generic type or method, even though T here is Restaurant, which has a public parameterless constructor.
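
    The likely cause (sketched below; this diagnosis is not from the original post): generic constraints are not inherited, so SqlReaderBase&lt;T&gt; has to restate the new() constraint that MapperBase&lt;T&gt; declares before it may forward its own T to MapperBase&lt;T&gt;:

```csharp
// MapperBase<T> demands a public parameterless constructor on T:
public abstract class MapperBase<T> where T : new() { /* ... */ }

// Any generic type that forwards its T to MapperBase<T> must repeat the
// constraint -- the compiler does not pick it up from the usage:
public abstract class SqlReaderBase<T> : ConnectionProvider
    where T : new()   // without this line, GetMapper()'s return type is rejected
{
    protected abstract MapperBase<T> GetMapper();
    // ...
}
```

    Restaurant itself is fine; the error is reported against the unconstrained T of SqlReaderBase, not against the concrete type.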


  • Project Euler #15

    - by Aistina
    Hey everyone, last night I was trying to solve challenge #15 from Project Euler: Starting in the top left corner of a 2×2 grid, there are 6 routes (without backtracking) to the bottom right corner. How many routes are there through a 20×20 grid? I figured this shouldn't be so hard, so I wrote a basic recursive function:

```csharp
const int gridSize = 20;

// call with progress(0, 0)
static int progress(int x, int y)
{
    int i = 0;
    if (x < gridSize)
        i += progress(x + 1, y);
    if (y < gridSize)
        i += progress(x, y + 1);
    if (x == gridSize && y == gridSize)
        return 1;
    return i;
}
```

    I verified that it worked for smaller grids such as 2×2 or 3×3, and then set it to run for a 20×20 grid. Imagine my surprise when, 5 hours later, the program was still happily crunching the numbers, and was only about 80% done (based on examining its current position/route in the grid). Clearly I'm going about this the wrong way. How would you solve this problem? I'm thinking it should be solved using an equation rather than a method like mine, but that's unfortunately not a strong side of mine.

    Update: I now have a working version. Basically it caches results obtained before, keyed on the n×m block that still remains to be traversed. Here is the code along with some comments:

```csharp
// the size of our grid
static int gridSize = 20;

// the number of paths available for an "NxM" block, e.g. "2x2" => 4
static Dictionary<string, long> pathsByBlock = new Dictionary<string, long>();

// calculate the surface of the block to the finish line
static long calcsurface(long x, long y)
{
    return (gridSize - x) * (gridSize - y);
}

// call using progress(0, 0)
static long progress(long x, long y)
{
    // first calculate the surface of the block remaining
    long surface = calcsurface(x, y);
    long i = 0;

    // zero surface means only 1 path remains
    // (we either go only right, or only down)
    if (surface == 0)
        return 1;

    // create a textual representation of the remaining
    // block, for use in the dictionary
    string block = (gridSize - x) + "x" + (gridSize - y);

    // if the same block has not been processed before
    if (!pathsByBlock.ContainsKey(block))
    {
        // calculate it in the right direction
        if (x < gridSize)
            i += progress(x + 1, y);

        // and in the down direction
        if (y < gridSize)
            i += progress(x, y + 1);

        // and cache the result!
        pathsByBlock[block] = i;
    }

    // self-explanatory :)
    return pathsByBlock[block];
}
```

    Calling it 20 times, for grids of size 1×1 through 20×20, produces the following output:

```text
There are 2 paths in a 1 sized grid (0,0110006 seconds)
There are 6 paths in a 2 sized grid (0,0030002 seconds)
There are 20 paths in a 3 sized grid (0 seconds)
There are 70 paths in a 4 sized grid (0 seconds)
There are 252 paths in a 5 sized grid (0 seconds)
There are 924 paths in a 6 sized grid (0 seconds)
There are 3432 paths in a 7 sized grid (0 seconds)
There are 12870 paths in a 8 sized grid (0,001 seconds)
There are 48620 paths in a 9 sized grid (0,0010001 seconds)
There are 184756 paths in a 10 sized grid (0,001 seconds)
There are 705432 paths in a 11 sized grid (0 seconds)
There are 2704156 paths in a 12 sized grid (0 seconds)
There are 10400600 paths in a 13 sized grid (0,001 seconds)
There are 40116600 paths in a 14 sized grid (0 seconds)
There are 155117520 paths in a 15 sized grid (0 seconds)
There are 601080390 paths in a 16 sized grid (0,0010001 seconds)
There are 2333606220 paths in a 17 sized grid (0,001 seconds)
There are 9075135300 paths in a 18 sized grid (0,001 seconds)
There are 35345263800 paths in a 19 sized grid (0,001 seconds)
There are 137846528820 paths in a 20 sized grid (0,0010001 seconds)
0,0390022 seconds in total
```

    I'm accepting danben's answer, because his helped me find this solution the most. But upvotes also to Tim Goodman and Agos :)

    Bonus update: After reading Eric Lippert's answer, I took another look and rewrote it somewhat. The basic idea is still the same, but the caching part has been taken out and put in a separate function, like in Eric's example. The result is some much more elegant looking code.

```csharp
// the size of our grid
const int gridSize = 20;

// magic.
static Func<A1, A2, R> Memoize<A1, A2, R>(this Func<A1, A2, R> f)
{
    // Return a function which is f with caching.
    var dictionary = new Dictionary<string, R>();
    return (A1 a1, A2 a2) =>
    {
        R r;
        string key = a1 + "x" + a2;
        if (!dictionary.TryGetValue(key, out r))
        {
            // not in cache yet
            r = f(a1, a2);
            dictionary.Add(key, r);
        }
        return r;
    };
}

// calculate the surface of the block to the finish line
static long calcsurface(long x, long y)
{
    return (gridSize - x) * (gridSize - y);
}

// call using progress(0, 0)
static Func<long, long, long> progress = ((Func<long, long, long>)((long x, long y) =>
{
    // first calculate the surface of the block remaining
    long surface = calcsurface(x, y);
    long i = 0;

    // zero surface means only 1 path remains
    // (we either go only right, or only down)
    if (surface == 0)
        return 1;

    // calculate it in the right direction
    if (x < gridSize)
        i += progress(x + 1, y);

    // and in the down direction
    if (y < gridSize)
        i += progress(x, y + 1);

    // self-explanatory :)
    return i;
})).Memoize();
```

    By the way, I couldn't think of a better way to use the two arguments as a key for the dictionary. I googled around a bit, and it seems this is a common solution. Oh well.
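
    For reference, the count also has a closed form: the number of monotone routes through an n×n grid is the central binomial coefficient C(2n, n), since each route is some ordering of n moves right and n moves down. A small sketch (not from the thread) that evaluates it directly, staying within a long for n = 20:

```csharp
using System;

class LatticePaths
{
    // C(2n, n) computed incrementally; each intermediate value is itself a
    // binomial coefficient C(n+i, i), so the division is always exact.
    static long Routes(int n)
    {
        long result = 1;
        for (int i = 1; i <= n; i++)
            result = result * (n + i) / i;
        return result;
    }

    static void Main()
    {
        Console.WriteLine(Routes(20)); // prints 137846528820
    }
}
```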


  • What is the problem in my ATM machine program?

    - by Have alook
    In this program, when the account number is incorrect it should display a message asking the user to enter it again, but when I then enter a correct account number it always displays the result for the first account. There is also a problem with the PIN number: the user has only three tries, and if he enters a wrong number three times the program should stop, but instead it continues to the last part. I don't know why; please help me. This is my program:

```java
import java.util.*;

class assignment2_70307 {
    public static void main(String args[]) {
        Scanner m = new Scanner(System.in);
        int i = 0;

        int[] accountNo = new int[7]; // declare the account number array
        accountNo[0] = 1111;
        accountNo[1] = 2222;
        accountNo[2] = 3333;
        accountNo[3] = 4444;
        accountNo[4] = 5555;
        accountNo[5] = 6666;
        accountNo[6] = 7777;

        int[] PINno = new int[7]; // declare the PIN number array
        PINno[0] = 1234;
        PINno[1] = 5678;
        PINno[2] = 9874;
        PINno[3] = 6523;
        PINno[4] = 1236;
        PINno[5] = 4569;
        PINno[6] = 8521;

        String[] CusomerNm = new String[7]; // declare the customer names
        CusomerNm[0] = "Ali";
        CusomerNm[1] = "Ahmed";
        CusomerNm[2] = "Amal";
        CusomerNm[3] = "Said";
        CusomerNm[4] = "Rashid";
        CusomerNm[5] = "Fatema";
        CusomerNm[6] = "Mariam";

        double[] Balance = new double[7]; // declare the balance array
        Balance[0] = 100.50;
        Balance[1] = 5123.00;
        Balance[2] = 12.00;
        Balance[3] = 4569.00;
        Balance[4] = 1020.25;
        Balance[5] = 0.00;
        Balance[6] = 44.10;

        System.out.println("Wellcome to mini ATM Machine");
        int accountno, pino;
        accountno = 0;
        pino = 0;

        System.out.println("Please Enter your account number: or -1 to stop");
        accountno = m.nextInt();
        if (accountno == accountNo[0])
            System.out.print("Customer Name: " + CusomerNm[0] + "\n");
        else if (accountno == accountNo[1])
            System.out.print("Customer Name: " + CusomerNm[1] + "\n");
        else if (accountno == accountNo[2])
            System.out.print("Customer Name: " + CusomerNm[2] + "\n");
        else if (accountno == accountNo[3])
            System.out.print("Customer Name: " + CusomerNm[3] + "\n");
        else if (accountno == accountNo[4])
            System.out.print("Customer Name: " + CusomerNm[4] + "\n");
        else if (accountno == accountNo[5])
            System.out.print("Customer Name: " + CusomerNm[5] + "\n");
        else if (accountno == accountNo[6])
            System.out.print("Customer Name: " + CusomerNm[6] + "\n");
        // else if (accountNo[0] == -1)
        //     break;
        else {
            System.out.println("The account dose not exist,please try again");
            //accountNo[i] = m.nextInt();
            accountno = m.nextInt();
            if (accountNo[i] == accountno)
                System.out.println("Customer Name: " + CusomerNm[i]);
            else
                System.out.println("The account dose not exist,please try again");
            accountno = m.nextInt();
            System.out.println("Customer Name: " + CusomerNm[i]);
        }

        System.out.print("Enter your PIN number:");
        PINno[i] = m.nextInt();
        if (PINno[i] == 1234) {
            System.out.println(PINno[i]);
            System.out.println("Balance:" + Balance[0] + "Rial");
            //return 0;
        } else if (PINno[i] == 5678) {
            System.out.println(PINno[i]);
            System.out.println("Balance:" + Balance[1] + "Rial");
            //return 1;
        } else if (PINno[i] == 9874) {
            System.out.println(PINno[i]);
            System.out.println("Balance:" + Balance[2] + "Rial");
            //return 2;
        } else if (PINno[i] == 6523) {
            System.out.println(PINno[i]);
            System.out.println("Balance:" + Balance[3] + "Rial");
            //return 3;
        } else if (PINno[i] == 1236) {
            System.out.println(PINno[i]);
            System.out.println("Balance:" + Balance[4] + "Rial");
            //return 4;
        } else if (PINno[i] == 4569) {
            System.out.println(PINno[i]);
            System.out.println("Balance:" + Balance[5] + "Rial");
            //return 5;
        } else if (PINno[i] == 8521) {
            System.out.println(PINno[i]);
            System.out.println("Balance:" + Balance[6] + "Rial");
            //return 6;
        } else {
            System.out.println("try again");
            //return 7;
            // if it is wrong the user can enter the PIN number three times only
            for (i = 0; i < 2; i++) {
                System.out.println("enter pin again");
                PINno[i] = m.nextInt();
                String ss;
                //ss = "MAnal";
                // goto ss;
            }
        }
        //ss = "m";

        int x = 0;
        System.out.println("Enter the option from the list /n 1.Deposit /n 2.Withdraw /n 3.Balance");
        x = m.nextInt();
        double balance = 0, amount = 0;
        double deposit = 0, Withdraw = 0;
        if (x == 1) {
            System.out.println("Enter the amont you want to deposit:" + amount);
            amount = m.nextDouble();
            Balance[i] = Balance[i] + amount;
            System.out.println("your balance =" + Balance[i]);
        } else if (x == 2) {
            System.out.println("Enter the amont to withdraw:");
            amount = m.nextDouble();
            System.out.print(amount);
            if (Withdraw <= Balance[i]) {
                Balance[i] = Balance[i] - amount;
                System.out.println("your balance =" + Balance[i]);
            } else {
                System.out.println("sorry,please enter the amont less or equal your balance");
                System.out.println(Balance[i]);
            }
        } else {
            if (x == 1) {
                Balance[i] = Balance[i] + deposit;
                System.out.println("your current balance is :" + Balance[i]);
            } else {
                Balance[i] = Balance[i] - Withdraw;
                System.out.println("your current balance is :" + Balance[i]);
            }
            System.out.println("Thank you");
            // err()
        }
    }
}
```


  • Communicating between a C# application and an Android app via Bluetooth

    - by Akki
    The Android application acts as the server in this case. I have a main activity which creates a thread to handle the server socket, and a different thread to handle the socket connection. I am using a UUID common to C# and Android, and the 32feet Bluetooth library for C#. The errors I am facing are:

    1) My logcat shows this debug log:

```text
Error while doing socket.connect()1
java.io.IOException: File descriptor in bad state
Message: File descriptor in bad state
Localized Message: File descriptor in bad state
Received : Testing Connection
Count of Thread is : 1
```

    2) When I try to send something via the C# app a second time, this exception is thrown:

```text
A first chance exception of type 'System.InvalidOperationException' occurred in System.dll
System.InvalidOperationException: BeginConnect cannot be called while another asynchronous operation is in progress on the same Socket.
   at System.Net.Sockets.Socket.DoBeginConnect(EndPoint endPointSnapshot, SocketAddress socketAddress, LazyAsyncResult asyncResult)
   at System.Net.Sockets.Socket.BeginConnect(EndPoint remoteEP, AsyncCallback callback, Object state)
   at InTheHand.Net.Bluetooth.Msft.SocketBluetoothClient.BeginConnect(BluetoothEndPoint remoteEP, AsyncCallback requestCallback, Object state)
   at InTheHand.Net.Sockets.BluetoothClient.BeginConnect(BluetoothEndPoint remoteEP, AsyncCallback requestCallback, Object state)
   at InTheHand.Net.Sockets.BluetoothClient.BeginConnect(BluetoothAddress address, Guid service, AsyncCallback requestCallback, Object state)
   at BTSyncClient.Form1.connect() in c:\users\----\documents\visual studio 2010\Projects\TestClient\TestClient\Form1.cs:line 154
```

    I only know Android application programming and I put the C# together by learning bits and pieces. FYI, my Android phone is a Galaxy S running ICS. Please point out my mistakes. Source code follows.

    C# code:

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Threading;
using System.Net.Sockets;
using InTheHand.Net.Bluetooth;
using InTheHand.Windows.Forms;
using InTheHand.Net.Sockets;
using InTheHand.Net;

namespace BTSyncClient
{
    public partial class Form1 : Form
    {
        BluetoothClient myself;
        BluetoothDeviceInfo bTServerDevice;
        private Guid uuid = Guid.Parse("00001101-0000-1000-8000-00805F9B34FB");
        bool isConnected;

        public Form1()
        {
            InitializeComponent();
            if (BluetoothRadio.IsSupported)
            {
                myself = new BluetoothClient();
            }
        }

        private void Form1_Load(object sender, EventArgs e)
        {
        }

        private void button1_Click(object sender, EventArgs e)
        {
            connect();
        }

        private void Form1_FormClosing(object sender, FormClosingEventArgs e)
        {
            try
            {
                myself.GetStream().Close();
                myself.Dispose();
            }
            catch (Exception ex)
            {
                Console.Out.WriteLine(ex);
            }
        }

        private void button2_Click(object sender, EventArgs e)
        {
            SelectBluetoothDeviceDialog dialog = new SelectBluetoothDeviceDialog();
            DialogResult result = dialog.ShowDialog(this);
            if (result.Equals(DialogResult.OK))
            {
                bTServerDevice = dialog.SelectedDevice;
            }
        }

        private void callback(IAsyncResult ar)
        {
            String msg = (String)ar.AsyncState;
            if (ar.IsCompleted)
            {
                isConnected = myself.Connected;
                if (myself.Connected)
                {
                    UTF8Encoding encoder = new UTF8Encoding();
                    NetworkStream stream = myself.GetStream();
                    if (!stream.CanWrite)
                    {
                        MessageBox.Show("Stream is not Writable");
                    }
                    System.IO.StreamWriter mywriter = new System.IO.StreamWriter(stream, Encoding.UTF8);
                    mywriter.WriteLine(msg);
                    mywriter.Flush();
                }
                else
                    MessageBox.Show("Damn thing isnt connected");
            }
        }

        private void connect()
        {
            try
            {
                if (bTServerDevice != null)
                {
                    myself.BeginConnect(bTServerDevice.DeviceAddress, uuid, new AsyncCallback(callback), message.Text);
                }
            }
            catch (Exception e)
            {
                Console.Out.WriteLine(e);
            }
        }
    }
}
```

    Server thread:

```java
import java.io.IOException;
import java.util.UUID;
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothServerSocket;
import android.bluetooth.BluetoothSocket;
import android.util.Log;

public class ServerSocketThread extends Thread {

    private static final String TAG = "TestApp";
    private BluetoothAdapter btAdapter;
    private BluetoothServerSocket serverSocket;
    private boolean stopMe;
    private static final UUID uuid = UUID.fromString("00001101-0000-1000-8000-00805F9B34FB");
    //private static final UUID uuid = UUID.fromString("6e58c9d5-b0b6-4009-ad9b-fd9481aef9b3");
    private static final String SERVICE_NAME = "TestService";

    public ServerSocketThread() {
        stopMe = false;
        btAdapter = BluetoothAdapter.getDefaultAdapter();
        try {
            serverSocket = btAdapter.listenUsingRfcommWithServiceRecord(SERVICE_NAME, uuid);
        } catch (IOException e) {
            Log.d(TAG, e.toString());
        }
    }

    public void signalStop() {
        stopMe = true;
    }

    public void run() {
        Log.d(TAG, "In ServerThread");
        BluetoothSocket socket = null;
        while (!stopMe) {
            try {
                socket = serverSocket.accept();
            } catch (IOException e) {
                break;
            }
            if (socket != null) {
                AcceptThread newClientConnection = new AcceptThread(socket);
                newClientConnection.start();
            }
        }
        Log.d(TAG, "Server Thread now dead");
    }

    // Will cancel the listening socket and cause the thread to finish
    public void cancel() {
        try {
            serverSocket.close();
        } catch (IOException e) {
        }
    }
}
```

    Accept thread:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Scanner;
import android.bluetooth.BluetoothSocket;
import android.util.Log;

public class AcceptThread extends Thread {

    private BluetoothSocket socket;
    private String TAG = "TestApp";
    static int count = 0;

    public AcceptThread(BluetoothSocket Socket) {
        socket = Socket;
    }

    volatile boolean isError;
    String output;
    String error;

    public void run() {
        Log.d(TAG, "AcceptThread Started");
        isError = false;
        try {
            socket.connect();
        } catch (IOException e) {
            Log.d(TAG, "Error while doing socket.connect()" + ++count);
            Log.d(TAG, e.toString());
            Log.d(TAG, "Message: " + e.getLocalizedMessage());
            Log.d(TAG, "Localized Message: " + e.getMessage());
            isError = true;
        }
        InputStream in = null;
        try {
            in = socket.getInputStream();
        } catch (IOException e) {
            Log.d(TAG, "Error while doing socket.getInputStream()");
            Log.d(TAG, e.toString());
            Log.d(TAG, "Message: " + e.getLocalizedMessage());
            Log.d(TAG, "Localized Message: " + e.getMessage());
            isError = true;
        }
        Scanner istream = new Scanner(in);
        if (istream.hasNextLine()) {
            Log.d(TAG, "Received : " + istream.nextLine());
            Log.d(TAG, "Count of Thread is : " + count);
        }
        istream.close();
        try {
            in.close();
        } catch (IOException e) {
            Log.d(TAG, "Error while doing in.close()");
            Log.d(TAG, e.toString());
            Log.d(TAG, "Message: " + e.getLocalizedMessage());
            Log.d(TAG, "Localized Message: " + e.getMessage());
            isError = true;
        }
        try {
            socket.close();
        } catch (IOException e) {
            Log.d(TAG, "Error while doing socket.close()");
            Log.d(TAG, e.toString());
            Log.d(TAG, "Message: " + e.getLocalizedMessage());
            Log.d(TAG, "Localized Message: " + e.getMessage());
            isError = true;
        }
    }
}
```
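
    Two observations, offered as a sketch rather than a verified fix. First, BluetoothServerSocket.accept() already returns a connected socket, so the extra socket.connect() in AcceptThread is the likely source of the "File descriptor in bad state" IOException; it can simply be removed. Second, the InvalidOperationException on the C# side comes from reusing one BluetoothClient for a second BeginConnect; one way around it is a fresh client per attempt (this assumes 32feet's BluetoothClient follows the usual Begin/End pattern with EndConnect):

```csharp
// Sketch: one BluetoothClient per connection attempt, so BeginConnect is never
// called while an earlier asynchronous connect is still pending on the same socket.
private void connect()
{
    if (bTServerDevice == null) return;

    string msg = message.Text;                  // capture on the UI thread
    var client = new BluetoothClient();
    client.BeginConnect(bTServerDevice.DeviceAddress, uuid, ar =>
    {
        client.EndConnect(ar);                  // completes (or throws) here
        var writer = new System.IO.StreamWriter(client.GetStream(), Encoding.UTF8);
        writer.WriteLine(msg);
        writer.Flush();
        client.Close();
    }, null);
}
```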


  • How to assign an RSS feed to each ImageButton

    - by L?c Song
    I have a feed_layout.xml:

```xml
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent" android:layout_height="fill_parent"
    android:orientation="vertical">

    <LinearLayout android:baselineAligned="false" android:layout_width="fill_parent"
        android:layout_height="wrap_content" android:layout_marginTop="10dp"
        android:orientation="horizontal">
        <LinearLayout android:layout_width="0dp" android:layout_height="wrap_content"
            android:layout_weight="1" android:orientation="vertical">
            <ImageButton android:layout_width="138dp" android:layout_height="138dp"
                android:layout_gravity="center" android:onClick="homeImageButton"
                android:scaleType="fitStart" android:src="@drawable/home" android:tag="1" />
        </LinearLayout>
        <LinearLayout android:layout_width="0dp" android:layout_height="wrap_content"
            android:layout_weight="1" android:orientation="vertical">
            <ImageButton android:layout_width="138dp" android:layout_height="138dp"
                android:layout_gravity="center" android:scaleType="centerCrop"
                android:onClick="thegioiImageButton" android:src="@drawable/home" android:tag="2" />
        </LinearLayout>
    </LinearLayout>

    <LinearLayout android:baselineAligned="false" android:layout_width="fill_parent"
        android:layout_height="wrap_content" android:layout_marginTop="10dp"
        android:orientation="horizontal">
        <LinearLayout android:layout_width="0dp" android:layout_height="wrap_content"
            android:layout_weight="1" android:orientation="vertical">
            <ImageButton android:layout_width="138dp" android:layout_height="138dp"
                android:layout_gravity="center" android:scaleType="centerCrop"
                android:onClick="giaitriImageButton" android:src="@drawable/home" android:tag="3" />
        </LinearLayout>
        <LinearLayout android:layout_width="0dp" android:layout_height="wrap_content"
            android:layout_weight="1" android:orientation="vertical">
            <ImageButton android:layout_width="138dp" android:layout_height="138dp"
                android:layout_gravity="center" android:scaleType="centerCrop"
                android:onClick="thethaoImageButton" android:src="@drawable/home" android:tag="4" />
        </LinearLayout>
    </LinearLayout>

    <LinearLayout android:baselineAligned="false" android:layout_width="fill_parent"
        android:layout_height="wrap_content" android:layout_marginTop="10dp"
        android:orientation="horizontal">
        <LinearLayout android:layout_width="0dp" android:layout_height="wrap_content"
            android:layout_weight="1" android:orientation="vertical">
            <ImageButton android:layout_width="138dp" android:layout_height="138dp"
                android:layout_gravity="center" android:scaleType="centerCrop"
                android:onClick="khoahocImageButton" android:src="@drawable/home" android:tag="5" />
        </LinearLayout>
        <LinearLayout android:layout_width="0dp" android:layout_height="wrap_content"
            android:layout_weight="1" android:orientation="vertical">
            <ImageButton android:layout_width="138dp" android:layout_height="138dp"
                android:layout_gravity="center" android:scaleType="centerCrop"
                android:onClick="xeImageButton" android:src="@drawable/home" android:tag="6" />
        </LinearLayout>
    </LinearLayout>
</LinearLayout>
```

    and FeedActivity.java:

```java
package com.dqh.vnexpressrssreader;

import android.R.string;
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.util.Log;
import android.view.View;
import android.widget.ImageButton;
import android.widget.Toast;

public class FeedActivity extends Activity {

    public String tagImg;
    private static final String TAG = "FeedActivity";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.feed_layout);
    }

    public void homeImageButton(View v) {
        ImageButton imageButtonClicked = (ImageButton) v;
        tagImg = imageButtonClicked.getTag().toString();
        setTagImg(tagImg);
        String tt = getTagImg();
        Log.d(TAG, "FeedId: " + tt);
        Intent intent = new Intent(getApplicationContext(), ItemsActivity.class);
        startActivityForResult(intent, 0);
    }

    public void thegioiImageButton(View v) {
        ImageButton imageButtonClicked = (ImageButton) v;
        tagImg = imageButtonClicked.getTag().toString();
        //Log.d(TAG, "FeedId: " + imageButtonClicked.getTag());
        Log.d(TAG, "FeedId: " + tagImg);
        Intent intent = new Intent(getApplicationContext(), ItemsActivity.class);
        startActivityForResult(intent, 0);
    }
}
```

    and RssReader.java:

```java
package com.dqh.vnexpressrssreader.reader;

import java.util.ArrayList;
import java.util.List;
import org.json.JSONException;
import org.json.JSONObject;
import com.dqh.vnexpressrssreader.FeedActivity;
import com.dqh.vnexpressrssreader.NewsRssReaderDB;
import com.dqh.vnexpressrssreader.util.RSSHandler;
import com.dqh.vnexpressrssreader.util.Tintuc;
import android.content.Context;
import android.text.Html;
import android.util.Log;

/**
 * @author rob
 */
public class RssReader {

    private final static String TAG = "RssReader";
    private final static String BOLD_OPEN = "<B>";
    private final static String BOLD_CLOSE = "</B>";
    private final static String BREAK = "<BR>";
    private final static String ITALIC_OPEN = "<I>";
    private final static String ITALIC_CLOSE = "</I>";
    private final static String SMALL_OPEN = "<SMALL>";
    private final static String SMALL_CLOSE = "</SMALL>";

    /**
     * This method defines a feed URL and then calls our SAX handler to read
     * the tintuc list from the stream.
     *
     * @return List<JSONObject> - suitable for the list view activity
     */
    public static List<JSONObject> getLatestRssFeed(Context context) {
        NewsRssReaderDB newsRssReaderDB = new NewsRssReaderDB(context);
        List<Tintuc> tintucsFromDB = newsRssReaderDB.getLists();
        return fillData(tintucsFromDB);
    }

    public static void getLatestRssFeed(Context context, String feed) {
        NewsRssReaderDB newsRssReaderDB = new NewsRssReaderDB(context);
        feed = "http://vnexpress.net/rss/the-gioi.rss";
        //RSS 2
        feed = "http://vnexpress.net/rss/the-thao.rss";
        //RSS 3
        feed = "http://vnexpress.net/rss/home.rss";
        RSSHandler rh = new RSSHandler();
        List<Tintuc> tintucs = rh.getLatestTintucs(feed);
        if ((tintucs != null) && (tintucs.size() > 0)) {
            for (Tintuc tintuc : tintucs) {
                if ((tintuc.getUrl() != null) && !newsRssReaderDB.checkUrlExist(tintuc.getUrl().toString())) {
                    long tintucId = newsRssReaderDB.insertTintuc(tintuc);
                    if (tintucId > 0) {
                        Log.d(TAG, "saved tintucId: " + tintucId);
                    } else {
                        Log.e(TAG, "saved tintucId fail");
                    }
                } else {
                    Log.e(TAG, "tintucs exist!");
                }
            }
        }
    }

    /**
     * This method takes a list of Tintuc objects and converts them into the
     * correct JSON format so the info can be processed by our list view.
     *
     * @param tintucs - List<Tintuc>
     * @return List<JSONObject> - suitable for the list view activity
     */
    private static List<JSONObject> fillData(List<Tintuc> tintucs) {
        List<JSONObject> items = new ArrayList<JSONObject>();
        for (Tintuc tintuc : tintucs) {
            JSONObject current = new JSONObject();
            try {
                buildJsonObject(tintuc, current);
            } catch (JSONException e) {
                Log.e("RSS ERROR", "Error creating JSON Object from RSS feed");
            }
            items.add(current);
        }
        return items;
    }

    /**
     * This method takes a single Tintuc object and converts it into a single
     * JSON object, including some additional HTML formatting so it can be
     * displayed nicely.
     *
     * @param tintuc
     * @param current
     * @throws JSONException
     */
    private static void buildJsonObject(Tintuc tintuc, JSONObject current) throws JSONException {
        String title = tintuc.getTieude();
        String description = tintuc.getMota();
        String date = tintuc.getPubDate();
        String imgLink = tintuc.getImgLink();

        StringBuffer sb = new StringBuffer();
        sb.append(BOLD_OPEN).append(title).append(BOLD_CLOSE);
        sb.append(BREAK);
        sb.append(description);
        sb.append(BREAK);
        sb.append(SMALL_OPEN).append(ITALIC_OPEN).append(date).append(ITALIC_CLOSE).append(SMALL_CLOSE);

        current.put("text", Html.fromHtml(sb.toString()));
        current.put("imageLink", imgLink);
        current.put("url", tintuc.getUrl().toString());
        current.put("title", tintuc.getTieude());
    }
}
```

    I have an array of RSS feeds, and I want each ImageButton to be assigned one feed. I have attempted to call FeedActivity from RssReader, but it did not help me!
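
    A sketch of one way to wire this up (names are illustrative): let all six buttons share one onClick handler, carry the clicked button's android:tag to ItemsActivity as an Intent extra, and translate the tag into a feed URL there. Note that the hard-coded reassignments of feed inside getLatestRssFeed would need to be removed for the parameter to take effect.

```java
// In FeedActivity: one handler for every ImageButton; set
// android:onClick="feedButtonClicked" on each button in feed_layout.xml.
public void feedButtonClicked(View v) {
    Intent intent = new Intent(this, ItemsActivity.class);
    intent.putExtra("feedTag", v.getTag().toString());
    startActivity(intent);
}

// In ItemsActivity.onCreate(): map the tag to its RSS URL.
String tag = getIntent().getStringExtra("feedTag");
Map<String, String> feeds = new HashMap<String, String>();
feeds.put("1", "http://vnexpress.net/rss/home.rss");
feeds.put("2", "http://vnexpress.net/rss/the-gioi.rss");
feeds.put("4", "http://vnexpress.net/rss/the-thao.rss");
// ... one entry per button tag ...
String feedUrl = feeds.get(tag);
RssReader.getLatestRssFeed(this, feedUrl);
```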


  • SocketChannel in Java sends data, but it doesn't get to destination application

    - by Peterson
    Hi everybody, I'm suffering a lot trying to create a simple chat server in Java using the NIO libraries. I wonder if someone could help me. I am doing it by using SocketChannel and Selector to handle multiple clients in a single thread. The problem is: I am able to accept new connections and get their data, but when I try to send data back, the SocketChannel simply doesn't work. The write() method returns an integer that is the same size as the data I'm passing to it, but the client never receives that data. Strangely, when I close the server application, the client receives the data. It's as if the SocketChannel maintained a buffer that only gets flushed when I close the application. Here are some more details, to give you more information to help. I'm handling the events in this piece of code:

```java
private void run() throws IOException {
    ServerSocketChannel ssc = ServerSocketChannel.open();

    // Set it to non-blocking, so we can use select
    ssc.configureBlocking(false);

    // Get the socket connected to this channel, and bind it to the listening port
    this.serverSocket = ssc.socket();
    InetSocketAddress isa = new InetSocketAddress(this.port);
    serverSocket.bind(isa);

    // Create a new Selector for selecting
    this.masterSelector = Selector.open();

    // Register the ServerSocketChannel, so we can listen for incoming connections
    ssc.register(masterSelector, SelectionKey.OP_ACCEPT);

    while (true) {
        // See if we've had any activity -- either an incoming connection,
        // or incoming data on an existing connection
        int num = masterSelector.select();

        // If we don't have any activity, loop around and wait again
        if (num == 0) {
            continue;
        }

        // Get the keys corresponding to the activity that has been
        // detected, and process them one by one
        Set keys = masterSelector.selectedKeys();
        Iterator it = keys.iterator();
        while (it.hasNext()) {
            // Get a key representing one of the bits of I/O activity
            SelectionKey key = (SelectionKey) it.next();

            // What kind of activity is it?
            if ((key.readyOps() & SelectionKey.OP_ACCEPT) == SelectionKey.OP_ACCEPT) {
                // Accept the connection
                Socket s = serverSocket.accept();
                System.out.println("LOG: TCP connection accepted from " + s.getInetAddress() + ":" + s.getPort());

                // Make sure to make it non-blocking, so we can use a selector on it
                SocketChannel sc = s.getChannel();
                sc.configureBlocking(false);

                // Register the connection with the selector, for reading only
                sc.register(masterSelector, SelectionKey.OP_READ);
            } else if (key.isReadable()) {
                SocketChannel sc = null;

                // It's incoming data on a connection, so process it
                sc = (SocketChannel) key.channel();

                // Check whether the connection corresponds to an existing client
                if ((clientsMap.getClient(key)) != null) {
                    boolean closedConnection = !processIncomingClientData(key);
                    if (closedConnection) {
                        int id = clientsMap.getClient(key);
                        closeClient(id);
                    }
                } else {
                    boolean clientAccepted = processIncomingDataFromNewClient(key);
                    if (!clientAccepted) {
                        // If the client was not accepted, its connection is simply closed
                        sc.socket().close();
                        sc.close();
                        key.cancel();
                    }
                }
            }
        }

        // We remove the selected keys, because we've dealt with them
        keys.clear();
    }
}
```

    This piece of code simply handles new clients that want to connect to the chat. A client makes a TCP connection to the server, and once it gets accepted, it sends data to the server following a simple text protocol, informing its id and asking to get registered with the server. I handle this in the method processIncomingDataFromNewClient(key). I'm also keeping a map of clients and their connections in a data structure similar to a hashtable, because I need to recover a client id from a connection and a connection from a client id; this shows up as clientsMap.getClient(key). But the problem itself resides in processIncomingDataFromNewClient(key). There I simply read the data the client sent me, validate it, and if it's OK I send a message back to the client to tell it that it is connected to the chat server. Here is a similar piece of code:

```java
private boolean processIncomingDataFromNewClient(SelectionKey key) {
    SocketChannel sc = (SocketChannel) key.channel();
    String connectionOrigin = sc.socket().getInetAddress() + ":" + sc.socket().getPort();
    int id = 0; // id of the client
    buf.clear();
    int bytesRead = 0;
    String msg = "";
    try {
        bytesRead = sc.read(buf);
        if (bytesRead <= 0) {
            System.out.println("Connection closed by: " + connectionOrigin);
            return false;
        }
        System.out.println("LOG: " + bytesRead + " bytes read from " + connectionOrigin);
        msg = new String(buf.array(), 0, bytesRead);
        try {
            // do the validations on what the client sent here,
            // and extract the client id
        } catch (Exception e) {
            e.printStackTrace();
            System.out.println("LOG: Oops. The client does not know the protocol. Closing the connection: " + connectionOrigin);
            System.out.println("LOG: First 10 characters sent by the client: " + msg);
            return false;
        }
    } catch (IOException e) {
        System.out.println("LOG: Error reading data from connection: " + connectionOrigin);
        System.out.println("LOG: " + e.getLocalizedMessage());
        System.out.println("LOG: Closing the connection...");
        return false;
    }

    // If it gets here, the protocol is OK and we can add the client
    boolean inserted = clientsMap.addClient(key, id);
    if (!inserted) {
        System.out.println("LOG: Could not add the client. Either it is already connected or there are too many clients. Id: " + id);
        System.out.println("LOG: Closing the connection: " + connectionOrigin);
        return false;
    }

    System.out.println("LOG: New client connected! Sending confirmation message. Id: " + id + " Connection: " + connectionOrigin);

    /* Here is the error */
    sendMessage(id, "Servidor pet: connection accepted");

    System.out.println("LOG: New client connected! Id: " + id + " Connection: " + connectionOrigin);
    return true;
}
```

    And finally, the method sendMessage(destId, msg) looks like this:

```java
private void sendMessage(int destId, String msg) {
    Charset charset = Charset.forName("ISO-8859-1");
    CharBuffer charBuffer = CharBuffer.wrap(msg, 0, msg.length());
    ByteBuffer bf = charset.encode(charBuffer);
    //bf.flip();
    int bytesSent = 0;
    SelectionKey key = clientsMap.getClient(destId);
    SocketChannel sc = (SocketChannel) key.channel();
    try {
        int total_bytes_sent = 0;
        while (total_bytes_sent < msg.length()) {
            bytesSent = sc.write(bf);
            total_bytes_sent += bytesSent;
        }
        System.out.println("LOG: Bytes sent to client " + destId + ": " + total_bytes_sent + " Message size: " + msg.length());
    } catch (IOException e) {
        System.out.println("LOG: Error sending message to: " + destId);
        System.out.println("LOG: " + e.getLocalizedMessage());
    }
}
```

    So what is happening is that when the server sends a message, it prints something like this:

```text
LOG: Bytes sent to the client: 28 Size of the message: 28
```

    It says that it sent the data, but the chat client keeps blocking, waiting in its recv() call; the data never gets to it. When I close the server application, though, all the data appears at the client. I wonder why. It is important to say that the client is written in C and the server in Java, and that I'm running both on the same machine, an Ubuntu guest in VirtualBox under Windows. I have also run both under a Windows host and under Linux hosts, and I keep getting the same strange problem. I'm sorry for the great length of this question, but I have already searched a lot of places for an answer, found a lot of tutorials and questions, including here at StackOverflow, and couldn't find a reasonable explanation. I am really not liking this Java NIO, and I saw a lot of people complaining about it too. I am thinking that if I had done this in C it would have been a lot easier :-D So, if someone could help me and even discuss this behavior, it would be great! :-) Thanks everybody in advance, Péterson
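
    One detail worth fixing regardless of the buffering symptom (a sketch, and it may not be the whole story): the send loop compares bytes written against msg.length(), which counts characters rather than encoded bytes, and it ignores the ByteBuffer's own position. Letting the buffer do the bookkeeping is safer:

```java
// Drop-in sketch for sendMessage, assuming the same clientsMap field as above.
// write() may send fewer bytes than remain in the buffer; hasRemaining()
// tracks exactly how much of the encoded message is still unsent.
private void sendMessage(int destId, String msg) {
    ByteBuffer bf = Charset.forName("ISO-8859-1").encode(msg);
    SelectionKey key = clientsMap.getClient(destId);
    SocketChannel sc = (SocketChannel) key.channel();
    try {
        while (bf.hasRemaining()) {
            sc.write(bf);
        }
        System.out.println("LOG: message sent to client " + destId);
    } catch (IOException e) {
        System.out.println("LOG: error sending message to " + destId + ": " + e.getLocalizedMessage());
    }
}
```

    If the data still only shows up when the server exits, the C client's read loop is the next suspect: a recv() that waits for more bytes than the server actually sent, or for a terminator the server never writes, will block until the connection closes.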


  • Adding Polynomials (Linked Lists): Bug Help

    - by Brian
    I have written a program that creates nodes that, in this class, are parts of polynomials; the two polynomials then get added together to become one polynomial (a list of nodes). All my code compiles, so the only problem I am having is that the nodes are not inserting into the polynomial via the insert method I have in Polynomial.java. When running the program it does create nodes and displays them in the 2x^2 format, but when it comes to adding the polynomials together it displays 0 for the polynomials. If anyone can figure out what's wrong and what I can do to fix it, it would be much appreciated. Here is the code:

```java
import java.util.Scanner;

class Polynomial {

    public termNode head;

    public Polynomial() {
        head = null;
    }

    public boolean isEmpty() {
        return (head == null);
    }

    public void display() {
        if (head == null)
            System.out.print("0");
        else
            for (termNode cur = head; cur != null; cur = cur.getNext()) {
                System.out.println(cur);
            }
    }

    public void insert(termNode newNode) {
        termNode prev = null;
        termNode cur = head;
        while (cur != null && (newNode.compareTo(cur) < 0)) {
            prev = null;
            cur = cur.getNext();
        }
        if (prev == null) {
            newNode.setNext(head);
            head = newNode;
        } else {
            newNode.setNext(cur);
            prev.setNext(newNode);
        }
    }

    public void readPolynomial(Scanner kb) {
        boolean done = false;
        double coefficient;
        int exponent;
        termNode term;

        head = null; // unlink any previous polynomial

        System.out.println("Enter 0 and 0 to end.");
        System.out.print("coefficient: ");
        coefficient = kb.nextDouble();
        System.out.println(coefficient);
        System.out.print("exponent: ");
        exponent = kb.nextInt();
        System.out.println(exponent);
        done = (coefficient == 0 && exponent == 0);

        while (!done) {
            Polynomial poly = new Polynomial();
            term = new termNode(coefficient, exponent);
            System.out.println(term);
            poly.insert(term);
            System.out.println("Enter 0 and 0 to end.");
            System.out.print("coefficient: ");
            coefficient = kb.nextDouble();
            System.out.println(coefficient);
            System.out.print("exponent: ");
            exponent = kb.nextInt();
            System.out.println(exponent);
            done = (coefficient == 0 && exponent == 0);
        }
    }

    public static Polynomial add(Polynomial p, Polynomial q) {
        Polynomial r = new Polynomial();
        double coefficient;
        int exponent;
        termNode first = p.head;
        termNode second = q.head;
        termNode sum = r.head;
        termNode term;

        while (first != null && second != null) {
            if (first.getExp() == second.getExp()) {
                if (first.getCoeff() != 0 && second.getCoeff() != 0);
                {
                    double addCoeff = first.getCoeff() + second.getCoeff();
                    term = new termNode(addCoeff, first.getExp());
                    sum.setNext(term);
                    first.getNext();
                    second.getNext();
                }
            } else if (first.getExp() < second.getExp()) {
                sum.setNext(second);
                term = new termNode(second.getCoeff(), second.getExp());
                sum.setNext(term);
                second.getNext();
            } else {
                sum.setNext(first);
                term = new termNode(first.getNext());
                sum.setNext(term);
                first.getNext();
            }
        }
        while (first != null) {
            sum.setNext(first);
        }
        while (second != null) {
            sum.setNext(second);
        }
        return r;
    }
}
```

    Here is my node class:

```java
class termNode implements Comparable {

    private int exp;
    private double coeff;
    private termNode next;

    public termNode(double coefficient, int exponent) {
        coeff = coefficient;
        exp = exponent;
        next = null;
    }

    public termNode(termNode inTermNode) {
        coeff = inTermNode.coeff;
        exp = inTermNode.exp;
    }

    public void setData(double coefficient, int exponent) {
        coefficient = coeff;
        exponent = exp;
    }

    public double getCoeff() {
        return coeff;
    }

    public int getExp() {
        return exp;
    }

    public void setNext(termNode link) {
        next = link;
    }

    public termNode getNext() {
        return next;
    }

    public String toString() {
        if (exp == 0) {
            return (coeff + " ");
        } else if (exp == 1) {
            return (coeff + "x");
        } else {
            return (coeff + "x^" + exp);
        }
    }

    public int compareTo(Object other) {
        if (exp == ((termNode) other).exp)
            return 0;
        else if (exp < ((termNode) other).exp)
            return -1;
        else
            return 1;
    }
}
```

    And here is my test class to run the program:

```java
import java.util.Scanner;

class PolyTest {
    public static void main(String[] args) {
        Scanner kb = new Scanner(System.in);
        Polynomial r;
        Polynomial p = new Polynomial();
        System.out.println("Enter first polynomial.");
        p.readPolynomial(kb);
        Polynomial q = new Polynomial();
        System.out.println();
        System.out.println("Enter second polynomial.");
        q.readPolynomial(kb);
        r = Polynomial.add(p, q);
        System.out.println();
        System.out.print("The sum of ");
        p.display();
        System.out.print(" and ");
        q.display();
        System.out.print(" is ");
        r.display();
    }
}
```
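
    Two spots in the listing explain the "displays 0" symptom (sketch below, not a full rewrite). First, readPolynomial inserts every term into a fresh local Polynomial instead of the one being read, so p and q stay empty and display() prints 0. Second, insert() resets prev to null on every step instead of letting it trail cur, so a node can never be linked anywhere but the head:

```java
// Sketch of the corrected insert: prev must follow one node behind cur.
public void insert(termNode newNode) {
    termNode prev = null;
    termNode cur = head;
    while (cur != null && newNode.compareTo(cur) < 0) {
        prev = cur;              // was: prev = null;
        cur = cur.getNext();
    }
    if (prev == null) {          // newNode belongs at the head
        newNode.setNext(head);
        head = newNode;
    } else {                     // splice newNode between prev and cur
        newNode.setNext(cur);
        prev.setNext(newNode);
    }
}
```

    And in readPolynomial, the loop body should insert into the current object: `term = new termNode(coefficient, exponent); this.insert(term);` with the local `Polynomial poly` removed. The add method has related issues (sum starts out null, and first.getNext()/second.getNext() discard their results instead of advancing the pointers), which will surface once insertion works.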


  • ASP.NET GridView hyperlink: pass values to a new window

    - by srihari
    Hi guys, here is my question; please help me. I have a GridView with hyperlink fields. My requirement is: if I click on the hyperlink of a particular row, I need to display that row's values on another page, where the user will edit the record values. After that, if he clicks the update button, I need to update the record values and go back to the previous home page. When clicking the hyperlink in the first window, a new window should open without any URL toolbar or minimize/close buttons, like a popup modal (Ajax ModalPopupExtender), but I am unable to get that working. The following code is Default.aspx:

```aspx
<head runat="server">
    <title>PassGridviewRow values</title>
    <style type="text/css">
        #gvrecords tr.rowHover:hover
        {
            background-color: Yellow;
            font-family: Arial;
        }
    </style>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <asp:GridView runat="server" ID="gvrecords" AutoGenerateColumns="false"
            HeaderStyle-BackColor="#7779AF" HeaderStyle-ForeColor="White"
            DataKeyNames="UserId" RowStyle-CssClass="rowHover">
            <Columns>
                <asp:TemplateField HeaderText="Change Password">
                    <ItemTemplate>
                        <a href='<%# "UpdateGridviewvalues.aspx?UserId=" + DataBinder.Eval(Container.DataItem, "UserId") %>'>
                            <%# Eval("UserName") %>
                        </a>
                    </ItemTemplate>
                </asp:TemplateField>
                <asp:BoundField DataField="FirstName" HeaderText="FirstName" />
                <asp:BoundField DataField="LastName" HeaderText="LastName" />
                <asp:BoundField DataField="Email" HeaderText="Email" />
            </Columns>
        </asp:GridView>
    </div>
    </form>
</body>
</html>
```

    The following is the Default.aspx.cs code:

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            BindGridview();
        }
    }

    protected void BindGridview()
    {
        SqlConnection con = new SqlConnection("Data Source=.;Integrated Security=SSPI;Initial Catalog=testdb1");
        con.Open();
        SqlCommand cmd = new SqlCommand("select * from UserDetails", con);
        SqlDataAdapter da = new SqlDataAdapter(cmd);
        cmd.ExecuteNonQuery();
        con.Close();
        DataSet ds = new DataSet();
        da.Fill(ds);
        gvrecords.DataSource = ds;
        gvrecords.DataBind();
    }
}
```

    This is UpdateGridviewvalues.aspx:

```aspx
<head runat="server">
    <title>Update Gridview Row Values</title>
    <script type="text/javascript">
        function Showalert(username) {
            alert(username + ' details updated successfully.');
            if (alert) {
                window.location = 'Default.aspx';
            }
        }
    </script>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <table>
            <tr>
                <td colspan="2" align="center"><b>Edit User Details</b></td>
            </tr>
            <tr>
                <td>User Name:</td>
                <td><asp:Label ID="lblUsername" runat="server" /></td>
            </tr>
            <tr>
                <td>First Name:</td>
                <td><asp:TextBox ID="txtfname" runat="server"></asp:TextBox></td>
            </tr>
            <tr>
                <td>Last Name:</td>
                <td><asp:TextBox ID="txtlname" runat="server"></asp:TextBox></td>
            </tr>
            <tr>
                <td>Email:</td>
                <td><asp:TextBox ID="txtemail" runat="server"></asp:TextBox></td>
            </tr>
            <tr>
                <td></td>
                <td>
                    <asp:Button ID="btnUpdate" runat="server" Text="Update" onclick="btnUpdate_Click" />
                    <asp:Button ID="btnCancel" runat="server" Text="Cancel" onclick="btnCancel_Click" />
                </td>
            </tr>
        </table>
    </div>
    </form>
</body>
</html>
```

    And my UpdateGridviewvalues.aspx.cs code is as follows:

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class UpdateGridviewvalues : System.Web.UI.Page
{
    SqlConnection con = new SqlConnection("Data Source=.;Integrated Security=SSPI;Initial Catalog=testdb1");
    private int userid = 0;

    protected void Page_Load(object sender, EventArgs e)
    {
        userid = Convert.ToInt32(Request.QueryString["UserId"].ToString());
        if (!IsPostBack)
        {
            BindControlvalues();
        }
    }

    private void BindControlvalues()
    {
        con.Open();
        SqlCommand cmd = new SqlCommand("select * from UserDetails where UserId=" + userid, con);
        SqlDataAdapter da = new SqlDataAdapter(cmd);
        cmd.ExecuteNonQuery();
        con.Close();
        DataSet ds = new DataSet();
        da.Fill(ds);
        lblUsername.Text = ds.Tables[0].Rows[0][1].ToString();
        txtfname.Text = ds.Tables[0].Rows[0][2].ToString();
        txtlname.Text = ds.Tables[0].Rows[0][3].ToString();
        txtemail.Text = ds.Tables[0].Rows[0][4].ToString();
    }

    protected void btnUpdate_Click(object sender, EventArgs e)
    {
        con.Open();
        SqlCommand cmd = new SqlCommand("update UserDetails set FirstName='" + txtfname.Text + "',LastName='" + txtlname.Text + "',Email='" + txtemail.Text + "' where UserId=" + userid, con);
        SqlDataAdapter da = new SqlDataAdapter(cmd);
        int result = cmd.ExecuteNonQuery();
        con.Close();
        if (result == 1)
        {
            ScriptManager.RegisterStartupScript(this, this.GetType(), "ShowSuccess",
                "javascript:Showalert('" + lblUsername.Text + "')", true);
        }
    }

    protected void btnCancel_Click(object sender, EventArgs e)
    {
        Response.Redirect("~/Default.aspx");
    }
}
```

    My requirement is that the new window should open like a popup, without a URL bar or close button. My database table is:

```text
ColumnName   DataType
-------------------------------------------
UserId       int (set identity property = true)
UserName     varchar(50)
FirstName    varchar(50)
LastName     varchar(50)
Email        varchar(50)
```
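
    A sketch of the popup part (illustrative, not the only approach): replace the plain anchor in the TemplateField with one that calls window.open, which suppresses the toolbars and menu bar. Note that browsers do not allow hiding the close button, and modern browsers force the address bar to stay visible in popups.

```javascript
// Opens the edit page in a minimal popup window.
// Wire it up in the TemplateField, e.g.:
//   <a href="#" onclick='return openEditWindow(<%# Eval("UserId") %>);'><%# Eval("UserName") %></a>
function openEditWindow(userId) {
    window.open(
        'UpdateGridviewvalues.aspx?UserId=' + userId,
        'EditUser',
        'width=420,height=360,toolbar=no,menubar=no,location=no,status=no,resizable=no'
    );
    return false;  // suppress the anchor's default navigation
}
```

    Separately, the update statement concatenates textbox values straight into SQL; switching to SqlParameter values would be safer regardless of the popup question.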


  • Different behavior of functors (copies, assignments) in VS2010 (compared with VS2005)

    - by Patrick
    When moving from VS2005 to VS2010 we noticed a performance decrease, which seemed to be caused by additional copies of a functor. The following code illustrates the problem. It is essential to have a map where the value itself is a set. On both the map and the set we defined a comparison functor (which is templated in the example).

```cpp
#include <iostream>
#include <map>
#include <set>

class A
{
public:
    A(int i, char c) : m_i(i), m_c(c) { std::cout << "Construct object " << m_c << m_i << std::endl; }
    A(const A &a) : m_i(a.m_i), m_c(a.m_c) { std::cout << "Copy object " << m_c << m_i << std::endl; }
    ~A() { std::cout << "Destruct object " << m_c << m_i << std::endl; }
    void operator=(const A &a)
    {
        m_i = a.m_i;
        m_c = a.m_c;
        std::cout << "Assign object " << m_c << m_i << std::endl;
    }
    int m_i;
    char m_c;
};

class B : public A
{
public:
    B(int i) : A(i, 'B') { }
    static const char s_c = 'B';
};

class C : public A
{
public:
    C(int i) : A(i, 'C') { }
    static const char s_c = 'C';
};

template <class X>
class compareA
{
public:
    compareA() : m_i(999) { std::cout << "Construct functor " << X::s_c << m_i << std::endl; }
    compareA(const compareA &a) : m_i(a.m_i) { std::cout << "Copy functor " << X::s_c << m_i << std::endl; }
    ~compareA() { std::cout << "Destruct functor " << X::s_c << m_i << std::endl; }
    void operator=(const compareA &a)
    {
        m_i = a.m_i;
        std::cout << "Assign functor " << X::s_c << m_i << std::endl;
    }
    bool operator()(const X &x1, const X &x2) const
    {
        std::cout << "Comparing object " << x1.m_i << " with " << x2.m_i << std::endl;
        return x1.m_i < x2.m_i;
    }
private:
    int m_i;
};

typedef std::set<C, compareA<C> > SetTest;
typedef std::map<B, SetTest, compareA<B> > MapTest;

int main()
{
    int i = 0;
    std::cout << "--- " << i++ << std::endl;
    MapTest mapTest;
    std::cout << "--- " << i++ << std::endl;
    SetTest &setTest = mapTest[0];
    std::cout << "--- " << i++ << std::endl;
}
```

    If I compile this code with VS2005 I get the following output:

```text
--- 0
Construct functor B999
Copy functor B999
Copy functor B999
Destruct functor B999
Destruct functor B999
--- 1
Construct object B0
Construct functor C999
Copy functor C999
Copy functor C999
Destruct functor C999
Destruct functor C999
Copy object B0
Copy functor C999
Copy functor C999
Copy functor C999
Destruct functor C999
Destruct functor C999
Copy object B0
Copy functor C999
Copy functor C999
Copy functor C999
Destruct functor C999
Destruct functor C999
Destruct functor C999
Destruct object B0
Destruct functor C999
Destruct object B0
--- 2
```

    If I compile this with VS2010, I get the following output:

```text
--- 0
Construct functor B999
Copy functor B999
Copy functor B999
Destruct functor B999
Destruct functor B999
--- 1
Construct object B0
Construct functor C999
Copy functor C999
Copy functor C999
Destruct functor C999
Destruct functor C999
Copy object B0
Copy functor C999
Copy functor C999
Copy functor C999
Destruct functor C999
Destruct functor C999
Copy functor C999
Assign functor C999
Assign functor C999
Destruct functor C999
Copy object B0
Copy functor C999
Copy functor C999
Copy functor C999
Destruct functor C999
Destruct functor C999
Copy functor C999
Assign functor C999
Assign functor C999
Destruct functor C999
Destruct functor C999
Destruct object B0
Destruct functor C999
Destruct object B0
--- 2
```

    The output for the first statement (constructing the map) is identical. The output for the second statement (creating the first element in the map and getting a reference to it) is much bigger in the VS2010 case:

    - Copy constructor of functor: 10 times vs 8 times
    - Assignment of functor: 2 times vs. 0 times
    - Destructor of functor: 10 times vs 8 times

    My questions are: Why does the STL copy a functor? Isn't it enough to construct it once for every instantiation of the set? Why is the functor constructed more often in the VS2010 case than in the VS2005 case? (I didn't check VS2008.) And why is it assigned two times in VS2010 and not at all in VS2005? Are there any tricks to avoid these copies of functors? I saw a similar question at http://stackoverflow.com/questions/2216041/prevent-unnecessary-copies-of-c-functor-objects but I'm not sure that's the same question. Thanks in advance, Patrick
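
    One general trick (a sketch, not specific to the VS2010 behavior): the standard containers are allowed to copy and assign their comparator freely, so the usual workaround is to make copying trivially cheap by keeping any expensive state outside the functor, behind a pointer:

```cpp
#include <iostream>
#include <set>

// Sketch: the comparator holds only a pointer, so however many times the
// container copies or assigns it, the cost is one word per copy.
struct Item { int value; };

struct Shared { int bigState[1000]; };   // expensive payload, created once

struct CompareItem {
    const Shared* shared;                // copying the functor copies one pointer
    explicit CompareItem(const Shared* s) : shared(s) {}
    bool operator()(const Item& a, const Item& b) const { return a.value < b.value; }
};

int main() {
    Shared shared = Shared();
    std::set<Item, CompareItem> items((CompareItem(&shared)));
    Item a = { 2 }, b = { 1 };
    items.insert(a);
    items.insert(b);
    for (std::set<Item, CompareItem>::iterator it = items.begin(); it != items.end(); ++it)
        std::cout << it->value << '\n';  // prints 1 then 2
}
```

    The same idea applies to the nested map-of-sets: the number of comparator copies stays whatever the library makes of it, but each copy stops being expensive.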


  • jQuery Session & Table Filtering

    - by Bry4n
    This is my jQuery:

```html
<script type="text/javascript">
$(function() {
    var from = $.session("from");
    var to = $.session("to");

    var $th = $('#theTable').find('th');
    // had to add the classes here to not grab the "td" inside those tables
    var $td = $('#theTable').find('td.bluedata,td.yellowdata');

    $th.hide();
    $td.hide();

    if (to == "Select" || from == "Select") {
        // shortcut - nothing set, show everything
        $th.add($td).show();
        return;
    }

    var filterArray = new Array();
    filterArray[0] = to;
    filterArray[1] = from;
    $.each(filterArray, function(i) {
        if (filterArray[i].toString() == "Select") {
            filterArray[i] = "";
        }
    });

    $($th).each(function() {
        if ($(this, ":eq(0):contains('" + filterArray[0].toString() + "')") != null &&
            $(this, ":eq(1):contains('" + filterArray[1].toString() + "')") != null) {
            $(this).show();
        }
    });

    $($td).each(function() {
        if ($(this, ":eq(0):contains('" + filterArray[0].toString() + "')") != null &&
            $(this, ":eq(1):contains('" + filterArray[1].toString() + "')") != null) {
            $(this).show();
        }
    });
});
</script>
```

    This is my table:

```html
<table border="1" id="theTable">
    <tr class="headers">
        <th class="bluedata" height="20px" valign="top">63rd St. &amp; Malvern Av. Loop<BR/></th>
        <th class="yellowdata" height="20px" valign="top">52nd St. &amp; Lansdowne Av.<BR/></th>
        <th class="bluedata" height="20px" valign="top">Lancaster &amp; Girard Avs<BR/></th>
        <th class="yellowdata" height="20px" valign="top">40th St. &amp; Lancaster Av.<BR/></th>
        <th class="bluedata" height="20px" valign="top">36th &amp; Market Sts<BR/></th>
        <th class="bluedata" height="20px" valign="top">6th &amp; Market Sts<BR/></th>
        <th class="yellowdata" height="20px" valign="top">Juniper Station<BR/></th>
    </tr>
    <tr>
        <td class="bluedata" height="20px" title="63rd St. &amp; Malvern Av. Loop">
            <table width="100%"><tr><td>12:17am</td></tr><tr><td>12:17am</td></tr><tr><td>12:47am</td></tr></table>
        </td>
        <td class="yellowdata" height="20px" title="52nd St. &amp; Lansdowne Av.">
            <table width="100%"><tr><td>12:17am</td></tr><tr><td>12:17am</td></tr><tr><td>12:47am</td></tr></table>
        </td>
        <td class="bluedata" height="20px" title="Lancaster &amp; Girard Avs">
            <table width="100%"><tr><td>12:17am</td></tr><tr><td>12:17am</td></tr><tr><td>12:47am</td></tr></table>
        </td>
        <td class="yellowdata" height="20px" title="40th St. &amp; Lancaster Av.">
            <table width="100%"><tr><td>12:17am</td></tr><tr><td>12:17am</td></tr><tr><td>12:47am</td></tr></table>
        </td>
        <td class="bluedata" height="20px" title="36th &amp; Market Sts">
            <table width="100%"><tr><td>12:17am</td></tr><tr><td>12:17am</td></tr><tr><td>12:47am</td></tr></table>
        </td>
        <td class="bluedata" height="20px" title="6th &amp; Market Sts">
            <table width="100%"><tr><td>12:17am</td></tr><tr><td>12:17am</td></tr><tr><td>12:47am</td></tr></table>
        </td>
        <td class="bluedata" height="20px" title="Juniper Station">
            <table width="100%"><tr><td>12:17am</td></tr><tr><td>12:17am</td></tr><tr><td>12:47am</td></tr></table>
        </td>
    </tr>
</table>
```

    I have asked questions on here before and I have had success in converting textbox values to dropdown changes. However, this is a bit different. I am using the sessions plugin (which works fine). On one page I have a set of normal dropdowns; on submit you get taken to a separate page, which runs the function above. However, the rows/columns all show, and they don't seem to filter at all.
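
    One thing worth checking (sketch below, with assumptions): expressions like `$(this, ":eq(0):contains('...')")` do not filter anything, because the second argument is a context and a jQuery object is never null, so both conditions always pass and .show() runs on every cell. A filter that inspects the header text directly, and shows the data cell at the same column position, behaves as intended; here 'Select' means "no filter", matching the shortcut at the top of the original handler:

```javascript
// Show only the columns whose header mentions either chosen stop.
// Assumes the same #theTable markup and $.session values as above.
function filterStops(from, to) {
    var $th = $('#theTable tr.headers th');
    var $td = $('#theTable td.bluedata, #theTable td.yellowdata');
    $th.hide();
    $td.hide();
    $th.each(function (i) {
        var text = $(this).text();
        var matches = (from !== 'Select' && text.indexOf(from) !== -1) ||
                      (to !== 'Select' && text.indexOf(to) !== -1);
        if (matches) {
            $(this).show();
            $td.eq(i).show();   // the data cell in the same column position
        }
    });
}
```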

    Read the article

  • C sockets chat server and client: problem echoing back

    - by wretrOvian
    Hi This is my chat server : #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <sys/types.h> #include <sys/socket.h> #include <netdb.h> #include <string.h> #define LISTEN_Q 20 #define MSG_SIZE 1024 struct userlist { int sockfd; struct sockaddr addr; struct userlist *next; }; int main(int argc, char *argv[]) { // declare. int listFD, newFD, fdmax, i, j, bytesrecvd; char msg[MSG_SIZE], ipv4[INET_ADDRSTRLEN]; struct addrinfo hints, *srvrAI; struct sockaddr_storage newAddr; struct userlist *users, *uptr, *utemp; socklen_t newAddrLen; fd_set master_set, read_set; // clear sets FD_ZERO(&master_set); FD_ZERO(&read_set); // create a user list users = (struct userlist *)malloc(sizeof(struct userlist)); users->sockfd = -1; //users->addr = NULL; users->next = NULL; // clear hints memset(&hints, 0, sizeof hints); // prep hints hints.ai_family = AF_INET; hints.ai_socktype = SOCK_STREAM; hints.ai_flags = AI_PASSIVE; // get srver info if(getaddrinfo("localhost", argv[1], &hints, &srvrAI) != 0) { perror("* ERROR | getaddrinfo()\n"); exit(1); } // get a socket if((listFD = socket(srvrAI->ai_family, srvrAI->ai_socktype, srvrAI->ai_protocol)) == -1) { perror("* ERROR | socket()\n"); exit(1); } // bind socket bind(listFD, srvrAI->ai_addr, srvrAI->ai_addrlen); // listen on socket if(listen(listFD, LISTEN_Q) == -1) { perror("* ERROR | listen()\n"); exit(1); } // add listfd to master_set FD_SET(listFD, &master_set); // initialize fdmax fdmax = listFD; while(1) { // equate read_set = master_set; // run select if(select(fdmax+1, &read_set, NULL, NULL, NULL) == -1) { perror("* ERROR | select()\n"); exit(1); } // query all sockets for(i = 0; i <= fdmax; i++) { if(FD_ISSET(i, &read_set)) { // found active sockfd if(i == listFD) { // new connection // accept newAddrLen = sizeof newAddr; if((newFD = accept(listFD, (struct sockaddr *)&newAddr, &newAddrLen)) == -1) { perror("* ERROR | select()\n"); exit(1); } // resolve ip if(inet_ntop(AF_INET, &(((struct sockaddr_in *)&newAddr)->sin_addr), ipv4, INET_ADDRSTRLEN) == -1) { perror("* ERROR | inet_ntop()"); exit(1); } fprintf(stdout, "* Client Connected | %s\n", ipv4); // add to master list FD_SET(newFD, &master_set); // create new userlist component utemp = (struct userlist*)malloc(sizeof(struct userlist)); utemp->next = NULL; utemp->sockfd = newFD; utemp->addr = *((struct sockaddr *)&newAddr); // iterate to last node for(uptr = users; uptr->next != NULL; uptr = uptr->next) { } // add uptr->next = utemp; // update fdmax if(newFD > fdmax) fdmax = newFD; } else { // existing sockfd transmitting data // read if((bytesrecvd = recv(i, msg, MSG_SIZE, 0)) == -1) { perror("* ERROR | recv()\n"); exit(1); } msg[bytesrecvd] = '\0'; // find out who sent? for(uptr = users; uptr->next != NULL; uptr = uptr->next) { if(i == uptr->sockfd) break; } // resolve ip if(inet_ntop(AF_INET, &(((struct sockaddr_in *)&(uptr->addr))->sin_addr), ipv4, INET_ADDRSTRLEN) == -1) { perror("* ERROR | inet_ntop()"); exit(1); } // print fprintf(stdout, "%s\n", msg); // send to all for(j = 0; j <= fdmax; j++) { if(FD_ISSET(j, &master_set)) { if(send(j, msg, strlen(msg), 0) == -1) perror("* ERROR | send()"); } } } // handle read from client } // end select result handle } // end looping fds } // end while return 0; } This is my client: #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <sys/types.h> #include <sys/socket.h> #include <netdb.h> #include <string.h> #define MSG_SIZE 1024 int main(int argc, char *argv[]) { // declare. 
int newFD, bytesrecvd, fdmax; char msg[MSG_SIZE]; fd_set master_set, read_set; struct addrinfo hints, *srvrAI; // clear sets FD_ZERO(&master_set); FD_ZERO(&read_set); // clear hints memset(&hints, 0, sizeof hints); // prep hints hints.ai_family = AF_INET; hints.ai_socktype = SOCK_STREAM; hints.ai_flags = AI_PASSIVE; // get srver info if(getaddrinfo(argv[1], argv[2], &hints, &srvrAI) != 0) { perror("* ERROR | getaddrinfo()\n"); exit(1); } // get a socket if((newFD = socket(srvrAI->ai_family, srvrAI->ai_socktype, srvrAI->ai_protocol)) == -1) { perror("* ERROR | socket()\n"); exit(1); } // connect to server if(connect(newFD, srvrAI->ai_addr, srvrAI->ai_addrlen) == -1) { perror("* ERROR | connect()\n"); exit(1); } // add to master, and add keyboard FD_SET(newFD, &master_set); FD_SET(STDIN_FILENO, &master_set); // initialize fdmax if(newFD > STDIN_FILENO) fdmax = newFD; else fdmax = STDIN_FILENO; while(1) { // equate read_set = master_set; if(select(fdmax+1, &read_set, NULL, NULL, NULL) == -1) { perror("* ERROR | select()"); exit(1); } // check server if(FD_ISSET(newFD, &read_set)) { // read data if((bytesrecvd = recv(newFD, msg, MSG_SIZE, 0)) < 0 ) { perror("* ERROR | recv()"); exit(1); } msg[bytesrecvd] = '\0'; // print fprintf(stdout, "%s\n", msg); } // check keyboard if(FD_ISSET(STDIN_FILENO, &read_set)) { // read data from stdin if((bytesrecvd = read(STDIN_FILENO, msg, MSG_SIZE)) < 0) { perror("* ERROR | read()"); exit(1); } msg[bytesrecvd] = '\0'; // send if((send(newFD, msg, bytesrecvd, 0)) == -1) { perror("* ERROR | send()"); exit(1); } } } return 0; } The problem is with the part where the server recv()s data from an FD, then tries echoing back to all [send() ]; it just dies, w/o errors, and my client is left looping :(
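    A likely cause, offered as a guess: send() to a client that has already disconnected raises SIGPIPE, which by default terminates the process silently, and the echo loop also send()s on the listening socket, since listFD sits in master_set. A sketch of both fixes, meant to be spliced into the server above rather than compiled standalone:

    #include <signal.h>

    /* once, early in main(): send() to a closed peer now returns -1 with
       errno == EPIPE instead of killing the whole server */
    signal(SIGPIPE, SIG_IGN);

    /* in the echo loop: never send() on the listening socket */
    for (j = 0; j <= fdmax; j++) {
        if (FD_ISSET(j, &master_set) && j != listFD) {
            if (send(j, msg, strlen(msg), 0) == -1)
                perror("* ERROR | send()");
        }
    }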

    Read the article

  • What would be a correct implementation of a JSF Converter if I need to get an Integer to run a query?

    - by Ignacio
    HI here's my code: List.xhmtl <h:selectOneMenu value="#{produtosController.items}"> <f:selectItems value="#{produtosController.itemsAvailableSelectOne}"/> </h:selectOneMenu> <h:commandButton action="#{produtosController.createByCodigos}" value="Buscar" /> My Controller Class with innner Converter implemantation @ManagedBean (name="produtosController") @SessionScoped public class ProdutosController { private Produtos current; private DataModel items = null; @EJB private controladores.ProdutosFacade ejbFacade; private PaginationHelper pagination; private int selectedItemIndex; public ProdutosController() { } public Produtos getSelected() { if (current == null) { current = new Produtos(); selectedItemIndex = -1; } return current; } private ProdutosFacade getFacade() { return ejbFacade; } public PaginationHelper getPagination() { if (pagination == null) { pagination = new PaginationHelper(10) { @Override public int getItemsCount() { return getFacade().count(); } @Override public DataModel createPageDataModel() { return new ListDataModel(getFacade().findRange(new int[]{getPageFirstItem(), getPageFirstItem()+getPageSize()})); } }; } return pagination; } public String prepareList() { recreateModel(); return "List"; } public String prepareView() { current = (Produtos)getItems().getRowData(); selectedItemIndex = pagination.getPageFirstItem() + getItems().getRowIndex(); return "View"; } public String prepareCreate() { current = new Produtos(); selectedItemIndex = -1; return "Create"; } public String create() { try { getFacade().create(current); JsfUtil.addSuccessMessage(ResourceBundle.getBundle("/Bundle").getString("ProdutosCreated")); return prepareCreate(); } catch (Exception e) { JsfUtil.addErrorMessage(e, ResourceBundle.getBundle("/Bundle").getString("PersistenceErrorOccured")); return null; } } public String createByMarcas() { items = new ListDataModel(ejbFacade.findByMarcas(current.getIdMarca())); updateCurrentItem(); return "List"; } public String createByModelos() { items = new ListDataModel(ejbFacade.findByModelos(current.getIdModelo())); updateCurrentItem(); return "List"; } public String createByCodigos(){ items = new ListDataModel(ejbFacade.findByCodigo(current.getCodigo())); updateCurrentItem(); return "List"; } public String prepareEdit() { current = (Produtos)getItems().getRowData(); selectedItemIndex = pagination.getPageFirstItem() + getItems().getRowIndex(); return "Edit"; } public String update() { try { getFacade().edit(current); JsfUtil.addSuccessMessage(ResourceBundle.getBundle("/Bundle").getString("ProdutosUpdated")); return "View"; } catch (Exception e) { JsfUtil.addErrorMessage(e, ResourceBundle.getBundle("/Bundle").getString("PersistenceErrorOccured")); return null; } } public String destroy() { current = (Produtos)getItems().getRowData(); selectedItemIndex = pagination.getPageFirstItem() + getItems().getRowIndex(); performDestroy(); recreateModel(); return "List"; } public String destroyAndView() { performDestroy(); recreateModel(); updateCurrentItem(); if (selectedItemIndex >= 0) { return "View"; } else { // all items were removed - go back to list recreateModel(); return "List"; } } private void performDestroy() { try { getFacade().remove(current); JsfUtil.addSuccessMessage(ResourceBundle.getBundle("/Bundle").getString("ProdutosDeleted")); } catch (Exception e) { JsfUtil.addErrorMessage(e, ResourceBundle.getBundle("/Bundle").getString("PersistenceErrorOccured")); } } private void updateCurrentItem() { int count = getFacade().count(); if (selectedItemIndex >= count) { 
// selected index cannot be bigger than number of items: selectedItemIndex = count-1; // go to previous page if last page disappeared: if (pagination.getPageFirstItem() >= count) { pagination.previousPage(); } } if (selectedItemIndex >= 0) { current = getFacade().findRange(new int[]{selectedItemIndex, selectedItemIndex+1}).get(0); } } public DataModel getItems() { if (items == null) { items = getPagination().createPageDataModel(); } return items; } private void recreateModel() { items = null; } public String next() { getPagination().nextPage(); recreateModel(); return "List"; } public String previous() { getPagination().previousPage(); recreateModel(); return "List"; } public SelectItem[] getItemsAvailableSelectMany() { return JsfUtil.getSelectItems(ejbFacade.findAll(), false); } public SelectItem[] getItemsAvailableSelectOne() { return JsfUtil.getSelectItems(ejbFacade.findAll(), true); } @FacesConverter(forClass=Produtos.class) public static class ProdutosControllerConverter implements Converter{ public Object getAsObject(FacesContext facesContext, UIComponent component, String value) { if (value == null || value.length() == 0) { return null; } ProdutosController controller = (ProdutosController)facesContext.getApplication().getELResolver(). getValue(facesContext.getELContext(), null, "produtosController"); return controller.ejbFacade.find(getKey(value)); } java.lang.Integer getKey(String value) { java.lang.Integer key; key = Integer.decode(value); return key; } String getStringKey(java.lang.Integer value) { StringBuffer sb = new StringBuffer(); sb.append(value); return sb.toString(); } public String getAsString(FacesContext facesContext, UIComponent component, Object object) { if (object == null) { return null; } if (object instanceof Produtos) { Produtos o = (Produtos) object; return getStringKey(o.getCodigo()); } else { throw new IllegalArgumentException("object " + object + " is of type " + object.getClass().getName() + "; expected type: "+ProdutosController.class.getName()); } } } } and my EJB @Entity @ViewScoped @Table(name = "produtos") @NamedQueries({ @NamedQuery(name = "Produtos.findAll", query = "SELECT p FROM Produtos p"), @NamedQuery(name = "Produtos.findById", query = "SELECT p FROM Produtos p WHERE p.id = :id"), @NamedQuery(name = "Produtos.findByCodigo", query = "SELECT p FROM Produtos p WHERE p.codigo = :codigo"), @NamedQuery(name = "Produtos.findByDescripcion", query = "SELECT p FROM Produtos p WHERE p.descripcion = :descripcion"), @NamedQuery(name = "Produtos.findByImagen", query = "SELECT p FROM Produtos p WHERE p.imagen = :imagen"), @NamedQuery(name = "Produtos.findByMarcas", query="SELECT m FROM Produtos m WHERE m.idMarca.id = :idMarca"), @NamedQuery(name = "Produtos.findByModelos", query="SELECT m FROM Produtos m WHERE m.idModelo.id = :idModelo")}) public class Produtos implements Serializable { private static final long serialVersionUID = 1L; @Id @GeneratedValue(strategy = GenerationType.IDENTITY) @Basic(optional = false) @Column(name = "id") private Integer id; @Column(name = "codigo") private Integer codigo; @Column(name = "descripcion") private String descripcion; @Column(name = "imagen") private String imagen; @JoinColumn(name = "id_modelo", referencedColumnName = "id") @ManyToOne(optional = false) private Modelos idModelo; @JoinColumn(name = "id_marca", referencedColumnName = "id") @ManyToOne(optional = false) private Marcas idMarca; public Produtos() { } public Produtos(Integer id) { this.id = id; } public Integer getId() { return id; } public void 
setId(Integer id) { this.id = id; } public Integer getCodigo() { return codigo; } public void setCodigo(Integer codigo) { this.codigo = codigo; } public String getDescripcion() { return descripcion; } public void setDescripcion(String descripcion) { this.descripcion = descripcion; } public String getImagen() { return imagen; } public void setImagen(String imagen) { this.imagen = imagen; } public Modelos getIdModelo() { return idModelo; } public void setIdModelo(Modelos idModelo) { this.idModelo = idModelo; } public Marcas getIdMarca() { return idMarca; } public void setIdMarca(Marcas idMarca) { this.idMarca = idMarca; } @Override public int hashCode() { int hash = 0; hash += (id != null ? id.hashCode() : 0); return hash; } @Override public boolean equals(Object object) { // TODO: Warning - this method won't work in the case the id fields are not set if (!(object instanceof Produtos)) { return false; } Produtos other = (Produtos) object; if ((this.id == null && other.id != null) || (this.id != null && !this.id.equals(other.id))) { return false; } return true; } @Override public String toString() { return "" + codigo + ""; } }
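    If all the query needs is the Integer codigo, a standalone converter can be much smaller than the inner entity-based one. A minimal sketch, assuming the select items carry the codigo as their value; the converter id and class name are illustrative, not from the post:

    import javax.faces.component.UIComponent;
    import javax.faces.context.FacesContext;
    import javax.faces.convert.Converter;
    import javax.faces.convert.ConverterException;
    import javax.faces.convert.FacesConverter;

    @FacesConverter("produtosCodigoConverter")
    public class ProdutosCodigoConverter implements Converter {

        @Override
        public Object getAsObject(FacesContext ctx, UIComponent comp, String value) {
            if (value == null || value.trim().isEmpty()) {
                return null;
            }
            try {
                return Integer.valueOf(value);   // the Integer handed to findByCodigo
            } catch (NumberFormatException e) {
                throw new ConverterException("Not a valid codigo: " + value, e);
            }
        }

        @Override
        public String getAsString(FacesContext ctx, UIComponent comp, Object value) {
            return value == null ? "" : value.toString();
        }
    }

    The h:selectOneMenu would then bind its value to an Integer property on the controller and declare converter="produtosCodigoConverter", leaving createByCodigos() free to run the findByCodigo query with that Integer.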

    Read the article

  • Flash AS3 Mysterious Blinking MovieClip

    - by Ben
    This is the strangest problem I've faced in flash so far. I have no idea what's causing it. I can provide a .swf if someone wants to actually see it, but I'll describe it as best I can. I'm creating bullets for a tank object to shoot. The tank is a child of the document class. The way I am creating the bullet is: var bullet:Bullet = new Bullet(); (parent as MovieClip).addChild(bullet); The bullet itself simply moves itself in a direction using code like this.x += 5; The problem is the bullets will trace for their creation and destruction at the correct times, however the bullet is sometimes not visible until half way across the screen, sometimes not at all, and sometimes for the whole traversal. Oddly removing the timer I have on bullet creation seems to solve this. The timer is implemented as such: if(shot_timer == 0) { shoot(); // This contains the aforementioned bullet creation method shot_timer = 10; My enter frame handler for the tank object controls the timer and decrements it every frame if it is greater than zero. Can anyone suggest why this could be happening? EDIT: As requested, full code: Bullet.as package { import flash.display.MovieClip; import flash.events.Event; public class Bullet extends MovieClip { public var facing:int; private var speed:int; public function Bullet():void { trace("created"); speed = 10; addEventListener(Event.ADDED_TO_STAGE,addedHandler); } private function addedHandler(e:Event):void { addEventListener(Event.ENTER_FRAME,enterFrameHandler); removeEventListener(Event.ADDED_TO_STAGE,addedHandler); } private function enterFrameHandler(e:Event):void { //0 - up, 1 - left, 2 - down, 3 - right if(this.x > 720 || this.x < 0 || this.y < 0 || this.y > 480) { removeEventListener(Event.ENTER_FRAME,enterFrameHandler); trace("destroyed"); (parent as MovieClip).removeChild(this); return; } switch(facing) { case 0: this.y -= speed; break; case 1: this.x -= speed; break; case 2: this.y += speed; break; case 3: this.x += speed; break; } } } } Tank.as: package { import flash.display.MovieClip; import flash.events.KeyboardEvent; import flash.events.Event; import flash.ui.Keyboard; public class Tank extends MovieClip { private var right:Boolean = false; private var left:Boolean = false; private var up:Boolean = false; private var down:Boolean = false; private var facing:int = 0; //0 - up, 1 - left, 2 - down, 3 - right private var horAllowed:Boolean = true; private var vertAllowed:Boolean = true; private const GRID_SIZE:int = 100; private var shooting:Boolean = false; private var shot_timer:int = 0; private var speed:int = 2; public function Tank():void { addEventListener(Event.ADDED_TO_STAGE,stageAddHandler); addEventListener(Event.ENTER_FRAME, enterFrameHandler); } private function stageAddHandler(e:Event):void { stage.addEventListener(KeyboardEvent.KEY_DOWN,checkKeys); stage.addEventListener(KeyboardEvent.KEY_UP,keyUps); removeEventListener(Event.ADDED_TO_STAGE,stageAddHandler); } public function checkKeys(event:KeyboardEvent):void { if(event.keyCode == 32) { //trace("Spacebar is down"); shooting = true; } if(event.keyCode == 39) { //trace("Right key is down"); right = true; } if(event.keyCode == 38) { //trace("Up key is down"); // lol up = true; } if(event.keyCode == 37) { //trace("Left key is down"); left = true; } if(event.keyCode == 40) { //trace("Down key is down"); down = true; } } public function keyUps(event:KeyboardEvent):void { if(event.keyCode == 32) { event.keyCode = 0; shooting = false; //trace("Spacebar is not down"); } if(event.keyCode == 39) { 
event.keyCode = 0; right = false; //trace("Right key is not down"); } if(event.keyCode == 38) { event.keyCode = 0; up = false; //trace("Up key is not down"); } if(event.keyCode == 37) { event.keyCode = 0; left = false; //trace("Left key is not down"); } if(event.keyCode == 40) { event.keyCode = 0; down = false; //trace("Down key is not down") // O.o } } public function checkDirectionPermissions(): void { if(this.y % GRID_SIZE < 5 || GRID_SIZE - this.y % GRID_SIZE < 5) { horAllowed = true; } else { horAllowed = false; } if(this.x % GRID_SIZE < 5 || GRID_SIZE - this.x % GRID_SIZE < 5) { vertAllowed = true; } else { vertAllowed = false; } if(!horAllowed && !vertAllowed) { realign(); } } public function realign():void { if(!horAllowed) { if(this.x % GRID_SIZE < GRID_SIZE / 2) { this.x -= this.x % GRID_SIZE; } else { this.x += (GRID_SIZE - this.x % GRID_SIZE); } } if(!vertAllowed) { if(this.y % GRID_SIZE < GRID_SIZE / 2) { this.y -= this.y % GRID_SIZE; } else { this.y += (GRID_SIZE - this.y % GRID_SIZE); } } } public function enterFrameHandler(Event):void { //trace(shot_timer); if(shot_timer > 0) { shot_timer--; } movement(); firing(); } public function firing():void { if(shooting) { if(shot_timer == 0) { shoot(); shot_timer = 10; } } } public function shoot():void { var bullet = new Bullet(); bullet.facing = facing; //0 - up, 1 - left, 2 - down, 3 - right switch(facing) { case 0: bullet.x = this.x; bullet.y = this.y - this.height / 2; break; case 1: bullet.x = this.x - this.width / 2; bullet.y = this.y; break; case 2: bullet.x = this.x; bullet.y = this.y + this.height / 2; break; case 3: bullet.x = this.x + this.width / 2; bullet.y = this.y; break; } (parent as MovieClip).addChild(bullet); } public function movement():void { //0 - up, 1 - left, 2 - down, 3 - right checkDirectionPermissions(); if(horAllowed) { if(right) { orient(3); realign(); this.x += speed; } if(left) { orient(1); realign(); this.x -= speed; } } if(vertAllowed) { if(up) { orient(0); realign(); this.y -= speed; } if(down) { orient(2); realign(); this.y += speed; } } } public function orient(dest:int):void { //trace("facing: " + facing); //trace("dest: " + dest); var angle = facing - dest; this.rotation += (90 * angle); facing = dest; } } }

    Read the article

  • On screen orientation change, data loads again with AsyncTask

    - by Zookey
    I make Android application with master/detail pattern. So I have ListActivity class which is FragmentActivity and ListFragment class which is Fragment It all works perfect, but when I change screen orientation it calls again AsyncTask and reload all data. Here is the code for ListActivity class where I handle all logic: @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_list); getActionBar().setDisplayHomeAsUpEnabled(true); getActionBar().setHomeButtonEnabled(true); getActionBar().setTitle("Dnevni horoskop"); if(findViewById(R.id.details_container) != null){ //Tablet mTwoPane = true; //Fragment stuff FragmentManager fm = getSupportFragmentManager(); FragmentTransaction ft = fm.beginTransaction(); DetailsFragment df = new DetailsFragment(); ft.add(R.id.details_container, df); ft.commit(); } pb = (ProgressBar) findViewById(R.id.pb_list); tvNoConnection = (TextView) findViewById(R.id.tv_no_internet); ivNoConnection = (ImageView) findViewById(R.id.iv_no_connection); list = (GridView) findViewById(R.id.gv_list); if(mTwoPane == true){ list.setNumColumns(1); //list.setPadding(16,16,16,16); } adapter = new CustomListAdapter(); list.setOnItemClickListener(new OnItemClickListener() { @Override public void onItemClick(AdapterView<?> arg0, View arg1, int position, long arg3) { pos = position; if(mTwoPane == false){ Bundle bundle = new Bundle(); bundle.putSerializable("zodiac", zodiacFeed); Intent i = new Intent(getApplicationContext(), DetailsActivity.class); i.putExtra("position", position); i.putExtras(bundle); startActivity(i); overridePendingTransition(R.anim.right_in, R.anim.right_out); } else if(mTwoPane == true){ DetailsFragment fragment = (DetailsFragment) getSupportFragmentManager().findFragmentById(R.id.details_container); fragment.setHoroscopeText(zodiacFeed.getItem(position).getText()); fragment.setLargeImage(zodiacFeed.getItem(position).getLargeImage()); fragment.setSign("Dnevni horoskop - "+zodiacFeed.getItem(position).getName()); fragment.setSignDuration(zodiacFeed.getItem(position).getDuration()); // inflate menu from xml /*if(menu != null){ MenuItem item = menu.findItem(R.id.share); Toast.makeText(getApplicationContext(), item.getTitle().toString(), Toast.LENGTH_SHORT).show(); }*/ } } }); if(!Utils.isConnected(getApplicationContext())){ pb.setVisibility(View.GONE); tvNoConnection.setVisibility(View.VISIBLE); ivNoConnection.setVisibility(View.VISIBLE); } //Calling AsyncTask to load data Log.d("TAG", "loading"); HoroscopeAsyncTask task = new HoroscopeAsyncTask(pb); task.execute(); } @Override public void onConfigurationChanged(Configuration newConfig) { // TODO Auto-generated method stub super.onConfigurationChanged(newConfig); } class CustomListAdapter extends BaseAdapter { private LayoutInflater layoutInflater; public CustomListAdapter() { layoutInflater = (LayoutInflater) getBaseContext().getSystemService( Context.LAYOUT_INFLATER_SERVICE); } public int getCount() { // TODO Auto-generated method stub // Set the total list item count return names.length; } public Object getItem(int arg0) { // TODO Auto-generated method stub return null; } public long getItemId(int arg0) { // TODO Auto-generated method stub return 0; } public View getView(int position, View convertView, ViewGroup parent) { // Inflate the item layout and set the views View listItem = convertView; int pos = position; zodiacItem = zodiacList.get(pos); if (listItem == null && mTwoPane == false) { listItem = 
layoutInflater.inflate(R.layout.list_item, null); } else if(mTwoPane == true){ listItem = layoutInflater.inflate(R.layout.tablet_list_item, null); } // Initialize the views in the layout ImageView iv = (ImageView) listItem.findViewById(R.id.iv_horoscope); iv.setScaleType(ScaleType.CENTER_CROP); TextView tvName = (TextView) listItem.findViewById(R.id.tv_zodiac_name); TextView tvDuration = (TextView) listItem.findViewById(R.id.tv_duration); iv.setImageResource(zodiacItem.getImage()); tvName.setText(zodiacItem.getName()); tvDuration.setText(zodiacItem.getDuration()); Animation animation = AnimationUtils.loadAnimation(getBaseContext(), R.anim.push_up); listItem.startAnimation(animation); animation = null; return listItem; } } private void getHoroscope() { String urlString = "http://balkanandroid.com/download/horoskop/examples/dnevnihoroskop.php"; try { HttpClient client = new DefaultHttpClient(); HttpPost post = new HttpPost(urlString); HttpResponse response = client.execute(post); resEntity = response.getEntity(); response_str = EntityUtils.toString(resEntity); if (resEntity != null) { Log.i("RESPONSE", response_str); runOnUiThread(new Runnable() { public void run() { try { Log.d("TAG", "Response from server : n " + response_str); } catch (Exception e) { e.printStackTrace(); } } }); } } catch (Exception ex) { Log.e("TAG", "error: " + ex.getMessage(), ex); } } private class HoroscopeAsyncTask extends AsyncTask<String, Void, Void> { public HoroscopeAsyncTask(ProgressBar pb1){ pb = pb1; } @Override protected void onPreExecute() { pb.setVisibility(View.VISIBLE); super.onPreExecute(); } @Override protected Void doInBackground(String... params) { getHoroscope(); try { Log.d("TAG", "test u try"); JSONObject jsonObject = new JSONObject(response_str); JSONArray jsonArray = jsonObject.getJSONArray("horoscope"); for(int i=0;i<jsonArray.length();i++){ Log.d("TAG", "test u for"); JSONObject horoscopeObj = jsonArray.getJSONObject(i); String horoscopeSign = horoscopeObj.getString("name_sign"); String horoscopeText = horoscopeObj.getString("txt_hrs"); zodiacItem = new ZodiacItem(horoscopeSign, horoscopeText, duration[i], images[i], largeImages[i]); zodiacList.add(zodiacItem); zodiacFeed.addItem(zodiacItem); //Treba u POJO klasu ubaciti sve. Log.d("TAG", "ZNAK: "+zodiacItem.getName()+" HOROSKOP: "+zodiacItem.getText()); } } catch (JSONException e) { // TODO Auto-generated catch block e.printStackTrace(); Log.e("TAG", "error: " + e.getMessage(), e); } return null; } @Override protected void onPostExecute(Void result) { pb.setVisibility(View.GONE); list.setAdapter(adapter); adapter.notifyDataSetChanged(); super.onPostExecute(result); } } Here is the code for ListFragment class: public class ListFragment extends Fragment { @Override public void onCreate(Bundle savedInstanceState) { // TODO Auto-generated method stub // Retain this fragment across configuration changes. setRetainInstance(true); super.onCreate(savedInstanceState); } @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { // TODO Auto-generated method stub View view = inflater.inflate(R.layout.fragment_list, container, false); return view; } }
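    The usual fix is to kick off the task only on the very first creation. A minimal sketch of the relevant part of onCreate(), assuming the loaded zodiac data survives rotation (for instance via the retained ListFragment above or an application-scoped cache):

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_list);
        // ... view, adapter and listener wiring as above ...

        if (savedInstanceState == null) {
            // first launch: load from the network
            new HoroscopeAsyncTask(pb).execute();
        } else {
            // rotation: reuse the retained data, just rebind the adapter
            list.setAdapter(adapter);
            adapter.notifyDataSetChanged();
        }
    }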

    Read the article

  • File transfer problem using .NET sockets

    - by Sergei
    I have a client and a server; the client sends a file to the server. When I transfer files on my own computer (locally) everything is OK (I tried a file over 700 MB). When I send a file over the Internet to my friend, an error appears on the server at the end of the transfer: "Input string is not in correct format". The error occurs in the expression fSize = Convert::ToUInt64(tokenes[0]); and I don't understand why it happens. The file should be transferred and the server should then wait for the next transfer. PS: sorry for so much code, but I want to find a solution. private: void CreateServer() { try{ IPAddress ^ipAddres = IPAddress::Parse(ipAdress); listener = gcnew System::Net::Sockets::TcpListener(ipAddres, port); listener->Start(); clientsocket =listener->AcceptSocket(); bool keepalive = true; array<wchar_t,1> ^split = gcnew array<wchar_t>(1){ '\0' }; array<wchar_t,1> ^split2 = gcnew array<wchar_t>(1){ '|' }; statusBar1->Text = "Connected" ; // while (keepalive) { array<Byte>^ size1 = gcnew array<Byte>(1024); clientsocket->Receive(size1); System::String ^notSplited = System::Text::Encoding::GetEncoding(1251)->GetString(size1); array<String^> ^ tokenes = notSplited->Split(split2); System::String ^fileName = tokenes[1]->ToString(); statusBar1->Text = "Receiving file" ; unsigned long fSize = 0; //THE ERROR APPEARS IN THIS EXPRESSION fSize = Convert::ToUInt64(tokenes[0]); if (!Directory::Exists("Received")) Directory::CreateDirectory("Received"); System::String ^path = "Received\\"+ fileName; while (File::Exists(path)) { int dotPos = path->LastIndexOf('.'); if (dotPos == -1) { path += "[1]"; } else { path = path->Insert(dotPos, "[1]"); } } FileStream ^fs = gcnew FileStream(path, FileMode::CreateNew, FileAccess::Write); BinaryWriter ^f = gcnew BinaryWriter(fs); //bytes received unsigned long processed = 0; pBarFilesTr->Visible = true; pBarFilesTr->Minimum = 0; pBarFilesTr->Maximum = (int)fSize; // Set the initial value of the ProgressBar.
pBarFilesTr->Value = 0; pBarFilesTr->Step = 1024; //loop for receive file array<Byte>^ buffer = gcnew array<Byte>(1024); while (processed < fSize) { if ((fSize - processed) < 1024) { int bytes ; array<Byte>^ buf = gcnew array<Byte>(1024); bytes = clientsocket->Receive(buf); if (bytes != 0) { f->Write(buf, 0, bytes); processed = processed + (unsigned long)bytes; pBarFilesTr->PerformStep(); } break; } else { int bytes = clientsocket->Receive(buffer); if (bytes != 0) { f->Write(buffer, 0, 1024); processed = processed + 1024; pBarFilesTr->PerformStep(); } else break; } } statusBar1->Text = "File was received" ; array<Byte>^ buf = gcnew array<Byte>(1); clientsocket->Send(buf,buf->Length,SocketFlags::None); f->Close(); fs->Close(); SystemSounds::Beep->Play(); } }catch(System::Net::Sockets::SocketException ^es) { MessageBox::Show(es->ToString()); } catch(System::Exception ^es) { MessageBox::Show(es->ToString()); } } private: void CreateClient() { clientsock = gcnew System::Net::Sockets::TcpClient(ipAdress, port); ns = clientsock->GetStream(); sr = gcnew StreamReader(ns); statusBar1->Text = "Connected" ; } private:void Send() { try{ OpenFileDialog ^openFileDialog1 = gcnew OpenFileDialog(); System::String ^filePath = ""; System::String ^fileName = ""; //file choose dialog if (openFileDialog1->ShowDialog() == System::Windows::Forms::DialogResult::OK) { filePath = openFileDialog1->FileName; fileName = openFileDialog1->SafeFileName; } else { MessageBox::Show("You must select a file", "Error", MessageBoxButtons::OK, MessageBoxIcon::Exclamation); return; } statusBar1->Text = "Sending file" ; NetworkStream ^writerStream = clientsock->GetStream(); System::Runtime::Serialization::Formatters::Binary::BinaryFormatter ^format = gcnew System::Runtime::Serialization::Formatters::Binary::BinaryFormatter(); array<Byte>^ buffer = gcnew array<Byte>(1024); FileStream ^fs = gcnew FileStream(filePath, FileMode::Open); BinaryReader ^br = gcnew BinaryReader(fs); //file size unsigned long fSize = (unsigned long)fs->Length; //transfer file size + name bFSize = Encoding::GetEncoding(1251)->GetBytes(Convert::ToString(fs->Length+"|"+fileName+"|")); writerStream->Write(bFSize, 0, bFSize->Length); //status bar pBarFilesTr->Visible = true; pBarFilesTr->Minimum = 0; pBarFilesTr->Maximum = (int)fSize; pBarFilesTr->Value = 0; // Set the initial value of the ProgressBar. pBarFilesTr->Step = 1024; //bytes transfered unsigned long processed = 0; int bytes = 1024; //loop for transfer while (processed < fSize) { if ((fSize - processed) < 1024) { bytes = (int)(fSize - processed); array<Byte>^ buf = gcnew array<Byte>(bytes); br->Read(buf, 0, bytes); writerStream->Write(buf, 0, buf->Length); pBarFilesTr->PerformStep(); processed = processed + (unsigned long)bytes; break; } else { br->Read(buffer, 0, 1024); writerStream->Write(buffer, 0, buffer->Length); pBarFilesTr->PerformStep(); processed = processed + 1024; } } array<Byte>^ bufsss = gcnew array<Byte>(100); writerStream->Read(bufsss,0,bufsss->Length); statusBar1->Text = "File was sent" ; btnSend->Enabled = true; fs->Close(); br->Close(); SystemSounds::Beep->Play(); newThread->Abort(); } catch(System::Net::Sockets::SocketException ^es) { MessageBox::Show(es->ToString()); } } UPDATE: ok, i can add checking if clientsocket->Receive(size1); equal zero, but why he begin receiving data again , in the ending of receiving. UPDATE:After adding this checking problem remains. AND WIN RAR SAY TO OPENING ARCHIVE - unexpected end of file! 
    UPDATE: http://img153.imageshack.us/img153/3760/erorr.gif I think it continues receiving some bytes from the client (bytes that remain in the stream), but why, given that the cycle while (processed < fSize) exists?
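    The root cause is almost certainly framing: TCP is a byte stream, so the single Receive(size1) is not guaranteed to return exactly the "size|name|" header. Over the Internet it can return a partial header, or the header plus the first file bytes, and the leftovers are then parsed as the next "header", which is where Convert::ToUInt64 throws and why the received archive ends up truncated. A minimal C# sketch of length-prefixed framing (the original is C++/CLI, but the idea carries over unchanged):

    using System;
    using System.Net.Sockets;

    static class Framing
    {
        // Read exactly count bytes - TCP may deliver them in several pieces.
        public static byte[] ReceiveExactly(Socket socket, int count)
        {
            byte[] buffer = new byte[count];
            int received = 0;
            while (received < count)
            {
                int n = socket.Receive(buffer, received, count - received,
                                       SocketFlags.None);
                if (n == 0)
                    throw new SocketException((int)SocketError.ConnectionReset);
                received += n;
            }
            return buffer;
        }

        // Header: an 8-byte little-endian file size, written the same way by
        // the sender - no delimiter parsing, no ambiguity about where the
        // file bytes start.
        public static long ReceiveFileSize(Socket socket)
        {
            return BitConverter.ToInt64(ReceiveExactly(socket, 8), 0);
        }
    }

    The file name can be sent the same way: a fixed-size length prefix followed by exactly that many bytes, each read with ReceiveExactly.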

    Read the article

  • Logic error in Gaussian elimination

    - by iwanttoprogram
    Logic error problem with the Gaussian Elimination code...This code was from my Numerical Methods text in 1990's. The code is typed in from the book- not producing correct output... Sample Run: SOLUTION OF SIMULTANEOUS LINEAR EQUATIONS USING GAUSSIAN ELIMINATION This program uses Gaussian Elimination to solve the system Ax = B, where A is the matrix of known coefficients, B is the vector of known constants and x is the column matrix of the unknowns. Number of equations: 3 Enter elements of matrix [A] A(1,1) = 0 A(1,2) = -6 A(1,3) = 9 A(2,1) = 7 A(2,2) = 0 A(2,3) = -5 A(3,1) = 5 A(3,2) = -8 A(3,3) = 6 Enter elements of [b] vector B(1) = -3 B(2) = 3 B(3) = -4 SOLUTION OF SIMULTANEOUS LINEAR EQUATIONS The solution is x(1) = 0.000000 x(2) = -1.#IND00 x(3) = -1.#IND00 Determinant = -1.#IND00 Press any key to continue . . . The code as copied from the text... //Modified Code from C Numerical Methods Text- June 2009 #include <stdio.h> #include <math.h> #define MAXSIZE 20 //function prototype int gauss (double a[][MAXSIZE], double b[], int n, double *det); int main(void) { double a[MAXSIZE][MAXSIZE], b[MAXSIZE], det; int i, j, n, retval; printf("\n \t SOLUTION OF SIMULTANEOUS LINEAR EQUATIONS"); printf("\n \t USING GAUSSIAN ELIMINATION \n"); printf("\n This program uses Gaussian Elimination to solve the"); printf("\n system Ax = B, where A is the matrix of known"); printf("\n coefficients, B is the vector of known constants"); printf("\n and x is the column matrix of the unknowns."); //get number of equations n = 0; while(n <= 0 || n > MAXSIZE) { printf("\n Number of equations: "); scanf ("%d", &n); } //read matrix A printf("\n Enter elements of matrix [A]\n"); for (i = 0; i < n; i++) for (j = 0; j < n; j++) { printf(" A(%d,%d) = ", i + 1, j + 1); scanf("%lf", &a[i][j]); } //read {B} vector printf("\n Enter elements of [b] vector\n"); for (i = 0; i < n; i++) { printf(" B(%d) = ", i + 1); scanf("%lf", &b[i]); } //call Gauss elimination function retval = gauss(a, b, n, &det); //print results if (retval == 0) { printf("\n\t SOLUTION OF SIMULTANEOUS LINEAR EQUATIONS\n"); printf("\n\t The solution is"); for (i = 0; i < n; i++) printf("\n \t x(%d) = %lf", i + 1, b[i]); printf("\n \t Determinant = %lf \n", det); } else printf("\n \t SINGULAR MATRIX \n"); return 0; } /* Solves the system of equations [A]{x} = {B} using */ /* the Gaussian elimination method with partial pivoting. 
*/ /* Parameters: */ /* n - number of equations */ /* a[n][n] - coefficient matrix */ /* b[n] - right-hand side vector */ /* *det - determinant of [A] */ int gauss (double a[][MAXSIZE], double b[], int n, double *det) { double tol, temp, mult; int npivot, i, j, l, k, flag; //initialization *det = 1.0; tol = 1e-30; //initial tolerance value npivot = 0; //mult = 0; //forward elimination for (k = 0; k < n; k++) { //search for max coefficient in pivot row- a[k][k] pivot element for (i = k + 1; i < n; i++) { if (fabs(a[i][k]) > fabs(a[k][k])) { //interchange row with maxium element with pivot row npivot++; for (l = 0; l < n; l++) { temp = a[i][l]; a[i][l] = a[k][l]; a[k][l] = temp; } temp = b[i]; b[i] = b[k]; b[k] = temp; } } //test for singularity if (fabs(a[k][k]) < tol) { //matrix is singular- terminate flag = 1; return flag; } //compute determinant- the product of the pivot elements *det = *det * a[k][k]; //eliminate the coefficients of X(I) for (i = k; i < n; i++) { mult = a[i][k] / a[k][k]; b[i] = b[i] - b[k] * mult; //compute constants for (j = k; j < n; j++) //compute coefficients a[i][j] = a[i][j] - a[k][j] * mult; } } //adjust the sign of the determinant if(npivot % 2 == 1) *det = *det * (-1.0); //backsubstitution b[n] = b[n] / a[n][n]; for(i = n - 1; i > 1; i--) { for(j = n; j > i + 1; j--) b[i] = b[i] - a[i][j] * b[j]; b[i] = b[i] / a[i - 1][i]; } flag = 0; return flag; } The solution should be: 1.058824, 1.823529, 0.882353 with det as -102.000000 Any insight is appreciated...
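    Two index bugs explain the -1.#IND output. The elimination loop starts at i = k, so on its first pass mult == 1 and the pivot row wipes itself out; and the back-substitution reads b[n] and a[n][n], one past the end of the 0-indexed arrays, then divides by a[i-1][i] instead of the diagonal a[i][i]. A sketch of the corrected loops (the original's row-swap pivoting is omitted here for brevity but must be kept, since the sample system has a[0][0] == 0):

    #include <math.h>
    #define MAXSIZE 20

    /* corrected elimination + back-substitution; partial pivoting omitted */
    int gauss_fixed(double a[][MAXSIZE], double b[], int n)
    {
        int i, j, k;
        double mult;

        for (k = 0; k < n; k++) {
            if (fabs(a[k][k]) < 1e-30)
                return 1;                        /* singular */
            for (i = k + 1; i < n; i++) {        /* rows BELOW the pivot only */
                mult = a[i][k] / a[k][k];
                b[i] -= b[k] * mult;
                for (j = k; j < n; j++)
                    a[i][j] -= a[k][j] * mult;
            }
        }
        b[n - 1] /= a[n - 1][n - 1];             /* 0-indexed back-substitution */
        for (i = n - 2; i >= 0; i--) {
            for (j = i + 1; j < n; j++)
                b[i] -= a[i][j] * b[j];
            b[i] /= a[i][i];
        }
        return 0;
    }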

    Read the article

  • C#: Unable to open file for reading

    - by Maks
    I'm writing a program that uses FileSystemWatcher to monitor changes to a given directory, and when it recieves OnCreated or OnChanged event, it copies those created/changed files to a specified directorie(s). At first I had problems with the fact that OnChanged/OnCreated events can be sent twice (not acceptable in case it needed to process 500MB file) but I made a way around this and with what I'm REALLY STUCKED with is getting the following IOException: The process cannot access the file 'C:\Where are Photos\bookmarks (11).html' because it is being used by another process. Thus, preventing the program from copying all the files it should. So as I mentioned, when user uses this program he/she specifes monitored directory, when user copies/creates/changes file in that directory, program should get OnCreated/OnChanged event and then copy that file to few other directories. Above error happens in all casess, if user copies few files that needs to owerwrite other ones in folder being monitored or when copying bulk of several files or even sometimes when copying one file in a monitored directory. Whole program is quite big so I'm sending the most important parts. OnCreated: private void OnCreated(object source, FileSystemEventArgs e) { AddLogEntry(e.FullPath, "created", ""); // Update last access data if it's file so the same file doesn't // get processed twice because of sending another event. if (fileType(e.FullPath) == 2) { lastPath = e.FullPath; lastTime = DateTime.Now; } // serves no purpose now, it will be remove soon string fileName = GetFileName(e.FullPath); // copies file from source to few other directories Copy(e.FullPath, fileName); Console.WriteLine("OnCreated: " + e.FullPath); } OnChanged: private void OnChanged(object source, FileSystemEventArgs e) { // is it directory if (fileType(e.FullPath) == 1) return; // don't mind directory changes itself // Only if enough time has passed or if it's some other file // because two events can be generated int timeDiff = ((TimeSpan)(DateTime.Now - lastTime)).Seconds; if ((timeDiff < minSecsDiff) && (e.FullPath.Equals(lastPath))) { Console.WriteLine("-- skipped -- {0}, timediff: {1}", e.FullPath, timeDiff); return; } // Update last access data for above to work lastPath = e.FullPath; lastTime = DateTime.Now; // Only if size is changed, the rest will handle other handlers if (e.ChangeType == WatcherChangeTypes.Changed) { AddLogEntry(e.FullPath, "changed", ""); string fileName = GetFileName(e.FullPath); Copy(e.FullPath, fileName); Console.WriteLine("OnChanged: " + e.FullPath); } } fileType: private int fileType(string path) { if (Directory.Exists(path)) return 1; // directory else if (File.Exists(path)) return 2; // file else return 0; } Copy: private void Copy(string srcPath, string fileName) { foreach (string dstDirectoy in paths) { string eventType = "copied"; string error = "noerror"; string path = ""; string dirPortion = ""; // in case directory needs to be made if (srcPath.Length > fsw.Path.Length) { path = srcPath.Substring(fsw.Path.Length, srcPath.Length - fsw.Path.Length); int pos = path.LastIndexOf('\\'); if (pos != -1) dirPortion = path.Substring(0, pos); } if (fileType(srcPath) == 1) { try { Directory.CreateDirectory(dstDirectoy + path); //Directory.CreateDirectory(dstDirectoy + fileName); eventType = "created"; } catch (IOException e) { eventType = "error"; error = e.Message; } } else { try { if (!overwriteFile && File.Exists(dstDirectoy + path)) continue; // create new dir anyway even if it exists just to be sure 
Directory.CreateDirectory(dstDirectoy + dirPortion); // copy file from where event occured to all specified directories using (FileStream fsin = new FileStream(srcPath, FileMode.Open, FileAccess.Read, FileShare.Read)) { using (FileStream fsout = new FileStream(dstDirectoy + path, FileMode.Create, FileAccess.Write)) { byte[] buffer = new byte[32768]; int bytesRead = -1; while ((bytesRead = fsin.Read(buffer, 0, buffer.Length)) > 0) fsout.Write(buffer, 0, bytesRead); } } } catch (Exception e) { if ((e is IOException) && (overwriteFile == false)) { eventType = "skipped"; } else { eventType = "error"; error = e.Message; // attempt to find and kill the process locking the file. // failed, miserably System.Diagnostics.Process tool = new System.Diagnostics.Process(); tool.StartInfo.FileName = "handle.exe"; tool.StartInfo.Arguments = "\"" + srcPath + "\""; tool.StartInfo.UseShellExecute = false; tool.StartInfo.RedirectStandardOutput = true; tool.Start(); tool.WaitForExit(); string outputTool = tool.StandardOutput.ReadToEnd(); string matchPattern = @"(?<=\s+pid:\s+)\b(\d+)\b(?=\s+)"; foreach (Match match in Regex.Matches(outputTool, matchPattern)) { System.Diagnostics.Process.GetProcessById(int.Parse(match.Value)).Kill(); } Console.WriteLine("ERROR: {0}: [ {1} ]", e.Message, srcPath); } } } AddLogEntry(dstDirectoy + path, eventType, error); } } I checked everywhere in my program and whenever I use some file I use it in using block so even writing event to log (class for what I ommited since there is probably too much code already in post) wont lock the file, that is it shouldn't since all operations are using using statement block. I simply have no clue who's locking the file if not my program "copy" process from user through Windows or something else. Right now I have two possible "solutions" (I can't say they are clean solutions since they are hacks and as such not desireable). Since probably the problem is with fileType method (what else could lock the file?) I tried changing it to this, to simulate "blocking-until-ready-to-open" operation: fileType: private int fileType(string path) { FileStream fs = null; int ret = 0; bool run = true; if (Directory.Exists(path)) ret = 1; else { while (run) { try { fs = new FileStream(path, FileMode.Open); ret = 2; run = false; } catch (IOException) { } finally { if (fs != null) { fs.Close(); fs.Dispose(); } } } } return ret; } This is working as much as I could tell (test), but... it's hack, not to mention other deficients. The other "solution" I could try (I didn't test it yet) is using GC.Collect() somewhere at the end of fileType() method. Maybe even worse "solution" than previous one. Can someone pleas tell me, what on earth is locking the file, preventing it from opening and how can I fix that? What am I missing to see? Thanks in advance.
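    In all likelihood nothing in this program holds the lock: the file that raised OnCreated is still open in whatever process is copying it into the watched folder (Explorer, a download, and so on), because the event fires as soon as the file appears, not when the writer finishes. A bounded retry is the conventional fix and avoids both the busy-wait fileType() hack and GC.Collect(). A minimal sketch:

    using System;
    using System.IO;
    using System.Threading;

    static class FileReady
    {
        // Try to open the file for reading until no other writer holds it,
        // giving up after timeoutMs instead of spinning forever.
        public static FileStream WaitForFile(string path, int timeoutMs)
        {
            DateTime deadline = DateTime.UtcNow.AddMilliseconds(timeoutMs);
            while (true)
            {
                try
                {
                    return new FileStream(path, FileMode.Open,
                                          FileAccess.Read, FileShare.Read);
                }
                catch (IOException)
                {
                    if (DateTime.UtcNow >= deadline)
                        throw;                  // still locked - give up
                    Thread.Sleep(250);          // writer not done yet - retry
                }
            }
        }
    }

    Calling something like WaitForFile(srcPath, 30000) at the top of Copy() and streaming from the returned FileStream would replace the current new FileStream(srcPath, ...) open.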

    Read the article

  • Capturing and Transforming ASP.NET Output with Response.Filter

    - by Rick Strahl
    During one of my Handlers and Modules sessions at DevConnections this week one of the attendees asked a question that I didn’t have an immediate answer for. Basically he wanted to capture response output completely and then apply some filtering to the output – effectively injecting some additional content into the page AFTER the page had completely rendered. Specifically, the output should be captured from anywhere, not just a Page, and the content injected into the page. Some time ago I posted some code that allows you to capture ASP.NET Page output by overriding the Render() method, capturing the HtmlTextWriter() and reading its content, modifying the rendered data as text and then writing it back out. I’ve actually used this approach on a few occasions and it works fine for ASP.NET pages. But this obviously won’t work outside of the Page class environment and it’s not really generic – you have to create a custom page class in order to handle the output capture. [updated 11/16/2009 – updated ResponseFilterStream implementation and a few additional notes based on comments] Enter Response.Filter However, ASP.NET includes a Response.Filter which can be used – well, to filter output. Basically Response.Filter is a stream through which the OutputStream is piped back to the Web Server (indirectly). As content is written into the Response object, the filter stream receives the appropriate Stream commands like Write, Flush and Close, as well as read operations, although for a Response.Filter those are uncommonly hit. The Response.Filter can be programmatically replaced at runtime, which allows you to effectively intercept all output generation that runs through ASP.NET. A common Example: Dynamic GZip Encoding A rather common use of Response.Filter is hooking up code-based, dynamic GZip compression for requests, which is dead simple by applying a GZipStream (or DeflateStream) to Response.Filter. The following generic routines can be used very easily to detect GZip capability of the client and compress response output with a single line of code and a couple of library helper routines: WebUtils.GZipEncodePage(); which is handled with a few lines of reusable code and a couple of static helper methods: /// <summary> /// Sets up the current page or handler to use GZip through a Response.Filter /// IMPORTANT: /// You have to call this method before any output is generated!
    /// </summary> public static void GZipEncodePage() {     HttpResponse Response = HttpContext.Current.Response;     if (IsGZipSupported())     {         string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];         if (AcceptEncoding.Contains("deflate"))         {             Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,                                        System.IO.Compression.CompressionMode.Compress);             Response.AppendHeader("Content-Encoding", "deflate");         }         else        {             Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,                                       System.IO.Compression.CompressionMode.Compress);             Response.AppendHeader("Content-Encoding", "gzip");         }     }     // Allow proxy servers to cache encoded and unencoded versions separately    Response.AppendHeader("Vary", "Content-Encoding"); } /// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } GZipStream and DeflateStream are streams that are assigned to Response.Filter and by doing so apply the appropriate compression on the active Response. Response.Filter content is chunked So to implement a Response.Filter effectively requires only that you implement a custom stream and handle the Write() method to capture Response output as it’s written. At first blush this seems very simple – you capture the output in Write, transform it and write out the transformed content in one pass. And that indeed works for small amounts of content. But you see, the problem is that output is written in small buffer chunks (a little less than 16k it appears) rather than just a single Write() statement into the stream, which makes perfect sense for ASP.NET to stream data back to IIS in smaller chunks to minimize memory usage en route. Unfortunately this also makes it more difficult to implement any filtering routines since you don’t directly get access to all of the response content, which is problematic especially if those filtering routines require you to look at the ENTIRE response in order to transform or capture the output, as is needed for the solution the gentleman in my session asked for. So in order to address this a slightly different approach is required that basically captures all the Write() buffers passed into a cached stream and then makes the stream available only when it’s complete and ready to be flushed. As I was thinking about the implementation I also started thinking about the few instances when I’ve used Response.Filter implementations. Each time I had to create a new Stream subclass and create my custom functionality but in the end each implementation did the same thing – capturing output and transforming it. I thought there should be an easier way to do this by creating a re-usable Stream class that can handle stream transformations that are common to Response.Filter implementations. Creating a semi-generic Response Filter Stream Class What I ended up with is a ResponseFilterStream class that provides a handful of Events that allow you to capture and/or transform Response content.
    The class implements a subclass of Stream and then overrides Write() and Flush() to handle capturing and transformation operations. By exposing events it’s easy to hook up capture or transformation operations via single focused methods. ResponseFilterStream exposes the following events: CaptureStream, CaptureString Captures the output only and provides either a MemoryStream or String with the final page output. Capture is hooked to the Flush() operation of the stream. TransformStream, TransformString Allows you to transform the complete response output with events that receive a MemoryStream or String respectively that you can modify and then return back as a return value. The transformed output is then written back out in a single chunk to the response output stream. These events capture all output internally first, then write the entire buffer into the response. TransformWrite, TransformWriteString Allows you to transform the Response data as it is written in its original chunk size in the Stream’s Write() method. Unlike TransformStream/TransformString which operate on the complete output, these events only see the current chunk of data written. This is more efficient as there’s no caching involved, but can cause problems due to searched content splitting over multiple chunks. Using this implementation, creating a custom Response.Filter transformation becomes as simple as the following code. To hook up the Response.Filter using the MemoryStream version event: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformStream += filter_TransformStream; Response.Filter = filter; and the event handler to do the transformation: MemoryStream filter_TransformStream(MemoryStream ms) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = encoding.GetString(ms.ToArray()); output = FixPaths(output); ms = new MemoryStream(output.Length); byte[] buffer = encoding.GetBytes(output); ms.Write(buffer,0,buffer.Length); return ms; } private string FixPaths(string output) { string path = HttpContext.Current.Request.ApplicationPath; // override root path wonkiness if (path == "/") path = ""; output = output.Replace("\"~/", "\"" + path + "/").Replace("'~/", "'" + path + "/"); return output; } The idea of the event handler is that you can do whatever you want to the stream and return back a stream – either the same one that’s been modified or a brand new one – which is then sent back as the final response. The above code can be simplified even more by using the string version events, which handle the stream to string conversions for you: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; and the event handler to do the transformation calling the same FixPaths method shown above: string filter_TransformString(string output) { return FixPaths(output); } The events for capturing output and capturing and transforming chunks work in a very similar way. By using events to handle the transformations ResponseFilterStream becomes a reusable component and we don’t have to create a new stream class or subclass an existing Stream-based class. By the way, the example used here is kind of a cool trick which transforms “~/” expressions inside of the final generated HTML output – even in plain HTML, not just ASP.NET controls – and turns them into the appropriate application-relative path in the same way that ResolveUrl would do.
So you can write plain old HTML like this:

<a href="~/default.aspx">Home</a>

and have it turned into:

<a href="/myVirtual/default.aspx">Home</a>

without having to use an ASP.NET control like Hyperlink or Image, or having to constantly use:

<img src="<%= ResolveUrl("~/images/home.gif") %>" />

in MVC applications (which frankly is one of the most annoying things about MVC, especially given the path hell that extension-less and endpoint-less URLs impose). I can't take credit for this idea. While discussing the Response.Filter issues on Twitter I got a hint from Dylan Beattie, who pointed me at one of his examples which does something similar. I thought the idea was cool enough to use as an example for future demos of Response.Filter functionality in ASP.NET next time I do the Modules and Handlers talk (which was great fun BTW). How practical this is is debatable, however, since there's definitely some overhead to using a Response.Filter in general, and especially to one that caches the output and then re-writes it later. Make sure to test for performance any time you use a Response.Filter hookup and make sure it doesn't end up killing perf on you. You've been warned :-}.

How does ResponseFilterStream work?
The big win of this implementation IMHO is that it's a reusable component – so for implementation there's no new class, no subclassing – you simply attach to an event to implement an event handler method with a straightforward signature to retrieve the stream or string you're interested in. The implementation is based on a subclass of Stream, as is required in order to handle the Response.Filter requirements. What's different from other implementations I've seen in various places is that it supports capturing output as a whole to allow retrieving the full response output for capture or modification. The exceptions are the TransformWrite and TransformWriteString events, which operate only on the active chunk of data written by the Response. For captured output, the Write() method captures output into an internal MemoryStream that is cached until writing is complete. So Write() is called when ASP.NET writes to the Response stream, but the filter doesn't immediately pass the write on to the underlying response stream. The data is cached, and only when the Flush() method is called to finalize the Stream's output do we actually send the cached stream off for transformation (if the events are hooked up) and THEN finally write out the returned content in one big chunk.

Here's the implementation of ResponseFilterStream:

/// <summary>
/// A semi-generic Stream implementation for Response.Filter with
/// an event interface for handling Content transformations via
/// Stream or String.
/// <remarks>
/// Use with care for large output as this implementation copies
/// the output into a memory stream and so increases memory usage.
/// </remarks>
/// </summary>
public class ResponseFilterStream : Stream
{
    /// <summary>
    /// The original stream
    /// </summary>
    Stream _stream;

    /// <summary>
    /// Current position in the original stream
    /// </summary>
    long _position;

    /// <summary>
    /// Stream that original content is read into
    /// and then passed to TransformStream function
    /// </summary>
    MemoryStream _cacheStream = new MemoryStream(5000);

    /// <summary>
    /// Internal pointer that keeps track of the size
    /// of the cacheStream
    /// </summary>
    int _cachePointer = 0;

    /// <summary>
    ///
    /// </summary>
    /// <param name="responseStream"></param>
    public ResponseFilterStream(Stream responseStream)
    {
        _stream = responseStream;
    }

    /// <summary>
    /// Determines whether the stream is captured
    /// </summary>
    private bool IsCaptured
    {
        get
        {
            if (CaptureStream != null || CaptureString != null ||
                TransformStream != null || TransformString != null)
                return true;
            return false;
        }
    }

    /// <summary>
    /// Determines whether the Write method is outputting data immediately
    /// or delaying output until Flush() is fired.
    /// </summary>
    private bool IsOutputDelayed
    {
        get
        {
            if (TransformStream != null || TransformString != null)
                return true;
            return false;
        }
    }

    /// <summary>
    /// Event that captures Response output and makes it available
    /// as a MemoryStream instance. Output is captured but won't
    /// affect Response output.
    /// </summary>
    public event Action<MemoryStream> CaptureStream;

    /// <summary>
    /// Event that captures Response output and makes it available
    /// as a string. Output is captured but won't affect Response output.
    /// </summary>
    public event Action<string> CaptureString;

    /// <summary>
    /// Event that allows you to transform the stream as each chunk of
    /// the output is written in the Write() operation of the stream.
    /// This means that it's possible/likely that the input
    /// buffer will not contain the full response output but only
    /// one of potentially many chunks.
    ///
    /// This event is called as part of the filter stream's Write()
    /// operation.
    /// </summary>
    public event Func<byte[], byte[]> TransformWrite;

    /// <summary>
    /// Event that allows you to transform the response stream as
    /// each chunk of byte[] output is written during the stream's write
    /// operation. This means it's possible/likely that the string
    /// passed to the handler only contains a portion of the full
    /// output. Typical buffer chunks are around 16k a piece.
    ///
    /// This event is called as part of the stream's Write operation.
    /// </summary>
    public event Func<string, string> TransformWriteString;

    /// <summary>
    /// This event allows capturing and transformation of the entire
    /// output stream by caching all write operations and delaying final
    /// response output until Flush() is called on the stream.
    /// </summary>
    public event Func<MemoryStream, MemoryStream> TransformStream;

    /// <summary>
    /// Event that can be hooked up to handle Response.Filter
    /// Transformation. Passed a string that you can modify and
    /// return back as a return value. The modified content
    /// will become the final output.
    /// </summary>
    public event Func<string, string> TransformString;

    protected virtual void OnCaptureStream(MemoryStream ms)
    {
        if (CaptureStream != null)
            CaptureStream(ms);
    }

    private void OnCaptureStringInternal(MemoryStream ms)
    {
        if (CaptureString != null)
        {
            string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray());
            OnCaptureString(content);
        }
    }

    protected virtual void OnCaptureString(string output)
    {
        if (CaptureString != null)
            CaptureString(output);
    }

    protected virtual byte[] OnTransformWrite(byte[] buffer)
    {
        if (TransformWrite != null)
            return TransformWrite(buffer);
        return buffer;
    }

    private byte[] OnTransformWriteStringInternal(byte[] buffer)
    {
        Encoding encoding = HttpContext.Current.Response.ContentEncoding;
        string output = OnTransformWriteString(encoding.GetString(buffer));
        return encoding.GetBytes(output);
    }

    private string OnTransformWriteString(string value)
    {
        if (TransformWriteString != null)
            return TransformWriteString(value);
        return value;
    }

    protected virtual MemoryStream OnTransformCompleteStream(MemoryStream ms)
    {
        if (TransformStream != null)
            return TransformStream(ms);
        return ms;
    }

    /// <summary>
    /// Allows transforming of strings
    ///
    /// Note this handler is internal and not meant to be overridden
    /// as the TransformString event has to be hooked up in order
    /// for this handler to even fire, to avoid the overhead of string
    /// conversion on every pass through.
    /// </summary>
    /// <param name="responseText"></param>
    /// <returns></returns>
    private string OnTransformCompleteString(string responseText)
    {
        if (TransformString != null)
            responseText = TransformString(responseText);  // assign the result so the transformation actually applies
        return responseText;
    }

    /// <summary>
    /// Wrapper method for OnTransformString that handles
    /// stream to string and vice versa conversions
    /// </summary>
    /// <param name="ms"></param>
    /// <returns></returns>
    internal MemoryStream OnTransformCompleteStringInternal(MemoryStream ms)
    {
        if (TransformString == null)
            return ms;

        //string content = ms.GetAsString();
        string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray());

        content = TransformString(content);
        byte[] buffer = HttpContext.Current.Response.ContentEncoding.GetBytes(content);
        ms = new MemoryStream();
        ms.Write(buffer, 0, buffer.Length);
        //ms.WriteString(content);

        return ms;
    }

    /// <summary>
    ///
    /// </summary>
    public override bool CanRead
    {
        get { return true; }
    }

    public override bool CanSeek
    {
        get { return true; }
    }

    /// <summary>
    ///
    /// </summary>
    public override bool CanWrite
    {
        get { return true; }
    }

    /// <summary>
    ///
    /// </summary>
    public override long Length
    {
        get { return 0; }
    }

    /// <summary>
    ///
    /// </summary>
    public override long Position
    {
        get { return _position; }
        set { _position = value; }
    }

    /// <summary>
    ///
    /// </summary>
    /// <param name="offset"></param>
    /// <param name="direction"></param>
    /// <returns></returns>
    public override long Seek(long offset, System.IO.SeekOrigin direction)
    {
        return _stream.Seek(offset, direction);
    }

    /// <summary>
    ///
    /// </summary>
    /// <param name="length"></param>
    public override void SetLength(long length)
    {
        _stream.SetLength(length);
    }

    /// <summary>
    ///
    /// </summary>
    public override void Close()
    {
        _stream.Close();
    }

    /// <summary>
    /// Override flush by writing out the cached stream data
    /// </summary>
    public override void Flush()
    {
        if (IsCaptured && _cacheStream.Length > 0)
        {
            // Check for transform implementations
            _cacheStream = OnTransformCompleteStream(_cacheStream);
            _cacheStream = OnTransformCompleteStringInternal(_cacheStream);

            OnCaptureStream(_cacheStream);
            OnCaptureStringInternal(_cacheStream);

            // write the stream back out if output was delayed
            if (IsOutputDelayed)
                _stream.Write(_cacheStream.ToArray(), 0, (int)_cacheStream.Length);

            // Clear the cache once we've written it out
            _cacheStream.SetLength(0);
        }

        // default flush behavior
        _stream.Flush();
    }

    /// <summary>
    ///
    /// </summary>
    /// <param name="buffer"></param>
    /// <param name="offset"></param>
    /// <param name="count"></param>
    /// <returns></returns>
    public override int Read(byte[] buffer, int offset, int count)
    {
        return _stream.Read(buffer, offset, count);
    }

    /// <summary>
    /// Overridden to capture output written by ASP.NET into a cached
    /// stream that is written out later when Flush() is called.
    /// </summary>
    /// <param name="buffer"></param>
    /// <param name="offset"></param>
    /// <param name="count"></param>
    public override void Write(byte[] buffer, int offset, int count)
    {
        if (IsCaptured)
        {
            // copy to holding buffer only - we'll write out later
            _cacheStream.Write(buffer, 0, count);
            _cachePointer += count;
        }

        // just transform this buffer
        if (TransformWrite != null)
            buffer = OnTransformWrite(buffer);
        if (TransformWriteString != null)
            buffer = OnTransformWriteStringInternal(buffer);

        if (!IsOutputDelayed)
            _stream.Write(buffer, offset, buffer.Length);
    }
}

The key features are the events and corresponding OnXXX methods that handle the event hookups, and the Write() and Flush() methods of the stream implementation. All the rest of the members tend to be plain jane passthrough stream implementation code without much consequence. I do love the way Action<T> and Func<T> make it so easy to create the event signatures for the various events – sweet.

A few things to consider

Performance
Response.Filter is not great for performance in general as it adds another layer of indirection to the ASP.NET output pipeline, and this implementation in particular adds a memory hit as it basically duplicates the response output into the cached memory stream, which is necessary since you may have to look at the entire response. If you have large pages in particular this can cause potentially serious memory pressure in your server application. So be careful of wholesale adoption of this (or other) Response.Filters. Make sure to do some performance testing to ensure it's not killing your app's performance.

Response.Filter works everywhere
A few questions came up in comments and discussion as to capturing ALL output hitting the site and – yes, you can definitely do that by assigning a Response.Filter inside of a module. If you do this, however, you'll want to be very careful and decide which content you actually want to capture, especially in IIS 7, which passes ALL content – including static images/CSS etc. – through the ASP.NET pipeline. So it is important to filter only on what you're looking for – like the page extension or, maybe more effectively, the Response.ContentType.

Response.Filter Chaining
Originally I thought that filter chaining doesn't work at all due to a bug in the stream implementation code. But it's quite possible to assign multiple filters to the Response.Filter property.
So the following actually works to both compress the output and apply the transformed content:

WebUtils.GZipEncodePage();

ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
filter.TransformString += filter_TransformString;
Response.Filter = filter;

However, the following does not work, resulting in invalid content encoding errors:

ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
filter.TransformString += filter_TransformString;
Response.Filter = filter;

WebUtils.GZipEncodePage();

In other words, multiple Response filters can work together, but whether they can be chained – and in which order – depends entirely on the implementation. In this case the GZip/Deflate stream filters apparently rely on the original content length of the output and choke when the content is modified afterwards. But if the compression is attached first it works fine, as unintuitive as that may seem.

Resources
Download example code
Capture Output from ASP.NET Pages

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx

Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount one as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which has more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock, and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very powerful for uploading large files: we can upload blocks in parallel and provide a pause-resume feature.

There are many documents, articles and blog posts that describe how to upload a block blob. Most of them focus on the server side, which means: once you have received a big file, stream or binaries, how to upload them into blob storage in blocks through the .NET SDK. But the problem is, how can we upload these large files from the client side, for example a browser? This question came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transfer the file from the client (the end user's machine) to the server (the Web Role) through the browser. In this post I will demonstrate and describe what I did to upload a large file in chunks at high speed, and save it as blocks into Windows Azure Blob Storage.

Traditional Upload, Works with Limitations
The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button.

1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" }))
2: {
3:     <input type="file" name="file" />
4:     <input type="submit" value="upload" />
5: }

And then in the backend controller, we retrieve the whole content of this file and upload it into the blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit them. This code has been well blogged in the community.
1: [HttpPost]
2: public ActionResult About(HttpPostedFileBase file)
3: {
4:     var container = _client.GetContainerReference("test");
5:     container.CreateIfNotExists();
6:     var blob = container.GetBlockBlobReference(file.FileName);
7:     var blockDataList = new Dictionary<string, byte[]>();
8:     using (var stream = file.InputStream)
9:     {
10:         var blockSizeInKB = 1024;
11:         var offset = 0;
12:         var index = 0;
13:         while (offset < stream.Length)
14:         {
15:             var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset);
16:             var blockData = new byte[readLength];
17:             offset += stream.Read(blockData, 0, readLength);
18:             blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData);
19: 
20:             index++;
21:         }
22:     }
23: 
24:     Parallel.ForEach(blockDataList, (bi) =>
25:     {
26:         blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null);
27:     });
28:     blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray());
29: 
30:     return RedirectToAction("About");
31: }

This works perfectly if we select an image, a piece of music or a small video to upload. But if I select a large file, let's say a 6GB HD movie, after uploading for a few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a request length limitation, and the maximum request length is defined in the web.config file. It's a number less than about 4GB. So if we want to upload a really big file, we cannot simply implement it this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if it exceeds the timeout period. From my tests the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitations mentioned above, the simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and pause-resume. So we need to find a better way to upload large files from the client to the server.

Upload in Chunks through HTML5 and JavaScript
In order to break the limitations mentioned above we will try to upload the large file in chunks. This gives us some benefits, such as:
- No request size limitation: Since we upload in chunks, we can define the request size for each chunk regardless of how big the entire file is.
- No timeout problem: The size of the chunks is controlled by us, which means we should be able to make sure the request for each chunk upload will not exceed the timeout period of either ASP.NET or the Windows Azure load balancer.

It was a big challenge to upload a big file in chunks until we had HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.

In HTML5, the File interface has been improved with a new method called "slice". It can be used to read part of the file by specifying the start byte index and the end byte index. For example, if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512th byte to the 768th byte, and return a new object of an interface called "Blob", which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob are very useful for implementing the chunked upload.
We will use the File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks of the size we want. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.

Assume we have a web page as below. The user can select a file, an input box specifies the block size in KB, and a button starts the upload.

1: <div>
2:     <input type="file" id="upload_files" name="files[]" /><br />
3:     Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br />
4:     <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" />
5: </div>

Then we can have the JavaScript function upload the file in chunks when the user clicks the button.

1: <script type="text/javascript">
2:     $(function () {
3:         $("#upload_button_blob").click(function () {
4:         });
5:     });
6: </script>

Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke File, Blob and FormData from the "window" object. If any of them is "undefined" the condition result will be "false", which means your browser doesn't support these premium features and it's time for you to get your browser updated. FormData is another new feature we are going to use later. It can generate a temporary form for us. We will use this interface to create a form with the chunk and associated metadata when we invoke the service through ajax.

1: $("#upload_button_blob").click(function () {
2:     // assert the browser supports html5
3:     if (window.File && window.Blob && window.FormData) {
4:         alert("Your browser is awesome, let's rock!");
5:     }
6:     else {
7:         alert("Oh man plz update to a modern browser before trying this cool stuff out.");
8:         return;
9:     }
10: });

Each browser supports these interfaces through its own implementation, and currently Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we work on the files the user selected, one by one, since in HTML5 the user can select multiple files in one file input box.

1: var files = $("#upload_files")[0].files;
2: for (var i = 0; i < files.length; i++) {
3:     var file = files[i];
4:     var fileSize = file.size;
5:     var fileName = file.name;
6: }

Next, we calculate the start index and end index for each chunk based on the size the user specified in the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks, since we need to specify the target blob name and the block index. At the same time we store the list of all indexes in another variable, which will be used to commit the blocks into the blob in Azure Storage once all chunks have been uploaded successfully.

1: $("#upload_button_blob").click(function () {
2:     // assert the browser supports html5
3:     ... ...
4:     // start to upload each file in chunks
5:     var files = $("#upload_files")[0].files;
6:     for (var i = 0; i < files.length; i++) {
7:         var file = files[i];
8:         var fileSize = file.size;
9:         var fileName = file.name;
10: 
11:         // calculate the start and end byte index for each block (chunk)
12:         // with the index, file name and index list for future use
13:         var blockSizeInKB = $("#block_size").val();
14:         var blockSize = blockSizeInKB * 1024;
15:         var blocks = [];
16:         var offset = 0;
17:         var index = 0;
18:         var list = "";
19:         while (offset < fileSize) {
20:             var start = offset;
21:             var end = Math.min(offset + blockSize, fileSize);
22: 
23:             blocks.push({
24:                 name: fileName,
25:                 index: index,
26:                 start: start,
27:                 end: end
28:             });
29:             list += index + ",";
30: 
31:             offset = end;
32:             index++;
33:         }
34:     }
35: });

Now we have all the chunks' information ready. The next step is to upload them one by one to the server side; when the server receives a chunk it will upload it as a block into Blob Storage, and finally commit them all with the index list through BlockBlobClient.PutBlockList. But all these invocations are ajax calls, which means they are asynchronous. So we need to introduce a new JavaScript library to help us coordinate the asynchronous operations, named "async.js". You can download this JavaScript library here, and you can find the documentation here. I will not explain this library too much in this post. We will put all procedures we want to execute into a function array, and pass it into the proper function defined in async.js to let it control the execution sequence, in series or in parallel. Hence we will define an array and put the functions for chunk upload into this array.

1: $("#upload_button_blob").click(function () {
2:     // assert the browser supports html5
3:     ... ...
4: 
5:     // start to upload each file in chunks
6:     var files = $("#upload_files")[0].files;
7:     for (var i = 0; i < files.length; i++) {
8:         var file = files[i];
9:         var fileSize = file.size;
10:         var fileName = file.name;
11:         // calculate the start and end byte index for each block (chunk)
12:         // with the index, file name and index list for future use
13:         ... ...
14: 
15:         // define the function array and push all chunk upload operations into this array
16:         blocks.forEach(function (block) {
17:             putBlocks.push(function (callback) {
18:             });
19:         });
20:     }
21: });

As you can see, I used the File.slice method to read each chunk based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then I post this form to the backend server through jQuery.ajax. This is the key part of our solution.

1: $("#upload_button_blob").click(function () {
2:     // assert the browser supports html5
3:     ... ...
4:     // start to upload each file in chunks
5:     var files = $("#upload_files")[0].files;
6:     for (var i = 0; i < files.length; i++) {
7:         var file = files[i];
8:         var fileSize = file.size;
9:         var fileName = file.name;
10:         // calculate the start and end byte index for each block (chunk)
11:         // with the index, file name and index list for future use
12:         ... ...
13:         // define the function array and push all chunk upload operations into this array
14:         blocks.forEach(function (block) {
15:             putBlocks.push(function (callback) {
16:                 // load the blob based on the start and end index for each chunk
17:                 var blob = file.slice(block.start, block.end);
18:                 // put the file name, index and blob into a temporary form
19:                 var fd = new FormData();
20:                 fd.append("name", block.name);
21:                 fd.append("index", block.index);
22:                 fd.append("file", blob);
23:                 // post the form to the backend service (asp.net mvc controller action)
24:                 $.ajax({
25:                     url: "/Home/UploadInFormData",
26:                     data: fd,
27:                     processData: false,
28:                     contentType: "multipart/form-data",
29:                     type: "POST",
30:                     success: function (result) {
31:                         if (!result.success) {
32:                             alert(result.error);
33:                         }
34:                         callback(null, block.index);
35:                     }
36:                 });
37:             });
38:         });
39:     }
40: });

Then we invoke these functions one by one using async.js. Once all functions have been executed successfully, we invoke another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage.

1: $("#upload_button_blob").click(function () {
2:     // assert the browser supports html5
3:     ... ...
4:     // start to upload each file in chunks
5:     var files = $("#upload_files")[0].files;
6:     for (var i = 0; i < files.length; i++) {
7:         var file = files[i];
8:         var fileSize = file.size;
9:         var fileName = file.name;
10:         // calculate the start and end byte index for each block (chunk)
11:         // with the index, file name and index list for future use
12:         ... ...
13:         // define the function array and push all chunk upload operations into this array
14:         ... ...
15:         // invoke the functions one by one
16:         // then invoke the commit ajax call to put blocks into the blob in azure storage
17:         async.series(putBlocks, function (error, result) {
18:             var data = {
19:                 name: fileName,
20:                 list: list
21:             };
22:             $.post("/Home/Commit", data, function (result) {
23:                 if (!result.success) {
24:                     alert(result.error);
25:                 }
26:                 else {
27:                     alert("done!");
28:                 }
29:             });
30:         });
31:     }
32: });

That's all on the client side. The outline of our logic is:
- Calculate the start and end byte index for each chunk based on the block size.
- Define the functions that read the chunks from the file and upload the content to the backend service through ajax.
- Execute the functions defined in the previous step with "async.js".
- Finally, commit the chunks by invoking the backend service to put them into Windows Azure Storage.

Save Chunks as Blocks into Blob Storage
Above we finished the client side JavaScript code. It uploads the file in chunks to the backend service, which we are going to implement in this step. We will use ASP.NET MVC as our backend service; it will receive the chunks, upload them into Windows Azure Blob Storage as blocks, then finally commit them as one blob. Since on the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Home controller that only accepts HTTP POST requests.

1: [HttpPost]
2: public JsonResult UploadInFormData()
3: {
4:     var error = string.Empty;
5:     try
6:     {
7:     }
8:     catch (Exception e)
9:     {
10:         error = e.ToString();
11:     }
12: 
13:     return new JsonResult()
14:     {
15:         Data = new
16:         {
17:             success = string.IsNullOrWhiteSpace(error),
18:             error = error
19:         }
20:     };
21: }

Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side.
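One detail worth calling out here: blob storage requires every block ID within a blob to be a Base64 string of the same length, which is why the index is always widened to a fixed four-byte integer before encoding. A hedged helper sketch of that convention (the class and method names are mine, not from the original code):

using System;

public static class BlockIdHelper
{
    // All block IDs in a blob must decode to the same byte length,
    // so the index is always encoded through a fixed 4-byte int.
    public static string ToBlockId(int index)
    {
        return Convert.ToBase64String(BitConverter.GetBytes(index));
    }
}

You'll see this exact Convert.ToBase64String(BitConverter.GetBytes(index)) expression in both of the actions below.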
And then I used the Windows Azure SDK to create a blob container (in this case we will use the container named "test") and create a blob reference with the blob name (same as the file name). Then I uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with it, so that finally we can put all blocks together as one blob by specifying their block ID list.

1: [HttpPost]
2: public JsonResult UploadInFormData()
3: {
4:     var error = string.Empty;
5:     try
6:     {
7:         var name = Request.Form["name"];
8:         var index = int.Parse(Request.Form["index"]);
9:         var file = Request.Files[0];
10:         var id = Convert.ToBase64String(BitConverter.GetBytes(index));
11: 
12:         var container = _client.GetContainerReference("test");
13:         container.CreateIfNotExists();
14:         var blob = container.GetBlockBlobReference(name);
15:         blob.PutBlock(id, file.InputStream, null);
16:     }
17:     catch (Exception e)
18:     {
19:         error = e.ToString();
20:     }
21: 
22:     return new JsonResult()
23:     {
24:         Data = new
25:         {
26:             success = string.IsNullOrWhiteSpace(error),
27:             error = error
28:         }
29:     };
30: }

Next, I created another action to commit the blocks into the blob once all chunks have been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunk ID list, which is the block ID list, from the Request.Form in a string format, split it into a list, then invoked the BlockBlob.PutBlockList method. After that our blob will show up in the container, ready to be downloaded.

1: [HttpPost]
2: public JsonResult Commit()
3: {
4:     var error = string.Empty;
5:     try
6:     {
7:         var name = Request.Form["name"];
8:         var list = Request.Form["list"];
9:         var ids = list
10:             .Split(',')
11:             .Where(id => !string.IsNullOrWhiteSpace(id))
12:             .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
13:             .ToArray();
14: 
15:         var container = _client.GetContainerReference("test");
16:         container.CreateIfNotExists();
17:         var blob = container.GetBlockBlobReference(name);
18:         blob.PutBlockList(ids);
19:     }
20:     catch (Exception e)
21:     {
22:         error = e.ToString();
23:     }
24: 
25:     return new JsonResult()
26:     {
27:         Data = new
28:         {
29:             success = string.IsNullOrWhiteSpace(error),
30:             error = error
31:         }
32:     };
33: }

Now we have finished all the code we need. The whole process of uploading would look like the diagram below. Below is the full client side JavaScript code.
1: <script type="text/javascript" src="~/Scripts/async.js"></script>
2: <script type="text/javascript">
3:     $(function () {
4:         $("#upload_button_blob").click(function () {
5:             // assert the browser supports html5
6:             if (window.File && window.Blob && window.FormData) {
7:                 alert("Your browser is awesome, let's rock!");
8:             }
9:             else {
10:                 alert("Oh man plz update to a modern browser before trying this cool stuff out.");
11:                 return;
12:             }
13: 
14:             // start to upload each file in chunks
15:             var files = $("#upload_files")[0].files;
16:             for (var i = 0; i < files.length; i++) {
17:                 var file = files[i];
18:                 var fileSize = file.size;
19:                 var fileName = file.name;
20: 
21:                 // calculate the start and end byte index for each block (chunk)
22:                 // with the index, file name and index list for future use
23:                 var blockSizeInKB = $("#block_size").val();
24:                 var blockSize = blockSizeInKB * 1024;
25:                 var blocks = [];
26:                 var offset = 0;
27:                 var index = 0;
28:                 var list = "";
29:                 while (offset < fileSize) {
30:                     var start = offset;
31:                     var end = Math.min(offset + blockSize, fileSize);
32: 
33:                     blocks.push({
34:                         name: fileName,
35:                         index: index,
36:                         start: start,
37:                         end: end
38:                     });
39:                     list += index + ",";
40: 
41:                     offset = end;
42:                     index++;
43:                 }
44: 
45:                 // define the function array and push all chunk upload operations into this array
46:                 var putBlocks = [];
47:                 blocks.forEach(function (block) {
48:                     putBlocks.push(function (callback) {
49:                         // load the blob based on the start and end index for each chunk
50:                         var blob = file.slice(block.start, block.end);
51:                         // put the file name, index and blob into a temporary form
52:                         var fd = new FormData();
53:                         fd.append("name", block.name);
54:                         fd.append("index", block.index);
55:                         fd.append("file", blob);
56:                         // post the form to the backend service (asp.net mvc controller action)
57:                         $.ajax({
58:                             url: "/Home/UploadInFormData",
59:                             data: fd,
60:                             processData: false,
61:                             contentType: "multipart/form-data",
62:                             type: "POST",
63:                             success: function (result) {
64:                                 if (!result.success) {
65:                                     alert(result.error);
66:                                 }
67:                                 callback(null, block.index);
68:                             }
69:                         });
70:                     });
71:                 });
72: 
73:                 // invoke the functions one by one
74:                 // then invoke the commit ajax call to put blocks into the blob in azure storage
75:                 async.series(putBlocks, function (error, result) {
76:                     var data = {
77:                         name: fileName,
78:                         list: list
79:                     };
80:                     $.post("/Home/Commit", data, function (result) {
81:                         if (!result.success) {
82:                             alert(result.error);
83:                         }
84:                         else {
85:                             alert("done!");
86:                         }
87:                     });
88:                 });
89:             }
90:         });
91:     });
92: </script>

And below is the full ASP.NET MVC controller code.
1: public class HomeController : Controller
2: {
3:     private CloudStorageAccount _account;
4:     private CloudBlobClient _client;
5: 
6:     public HomeController()
7:         : base()
8:     {
9:         _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString"));
10:         _client = _account.CreateCloudBlobClient();
11:     }
12: 
13:     public ActionResult Index()
14:     {
15:         ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application.";
16: 
17:         return View();
18:     }
19: 
20:     [HttpPost]
21:     public JsonResult UploadInFormData()
22:     {
23:         var error = string.Empty;
24:         try
25:         {
26:             var name = Request.Form["name"];
27:             var index = int.Parse(Request.Form["index"]);
28:             var file = Request.Files[0];
29:             var id = Convert.ToBase64String(BitConverter.GetBytes(index));
30: 
31:             var container = _client.GetContainerReference("test");
32:             container.CreateIfNotExists();
33:             var blob = container.GetBlockBlobReference(name);
34:             blob.PutBlock(id, file.InputStream, null);
35:         }
36:         catch (Exception e)
37:         {
38:             error = e.ToString();
39:         }
40: 
41:         return new JsonResult()
42:         {
43:             Data = new
44:             {
45:                 success = string.IsNullOrWhiteSpace(error),
46:                 error = error
47:             }
48:         };
49:     }
50: 
51:     [HttpPost]
52:     public JsonResult Commit()
53:     {
54:         var error = string.Empty;
55:         try
56:         {
57:             var name = Request.Form["name"];
58:             var list = Request.Form["list"];
59:             var ids = list
60:                 .Split(',')
61:                 .Where(id => !string.IsNullOrWhiteSpace(id))
62:                 .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
63:                 .ToArray();
64: 
65:             var container = _client.GetContainerReference("test");
66:             container.CreateIfNotExists();
67:             var blob = container.GetBlockBlobReference(name);
68:             blob.PutBlockList(ids);
69:         }
70:         catch (Exception e)
71:         {
72:             error = e.ToString();
73:         }
74: 
75:         return new JsonResult()
76:         {
77:             Data = new
78:             {
79:                 success = string.IsNullOrWhiteSpace(error),
80:                 error = error
81:             }
82:         };
83:     }
84: }

And if we select a file from the browser we will see our application upload chunks of the size we specified to the server through ajax calls in the background, and then commit all chunks into one blob. Then we can find the blob in our Windows Azure Blob Storage.

Optimized by Parallel Upload
In the previous example we just uploaded our file in chunks. This solved the ASP.NET request content size limitation as well as the Windows Azure load balancer timeout problem. But it might introduce a performance problem, since we uploaded the chunks in sequence. In order to improve the upload performance we can modify our client side code a bit to make the upload operations run in parallel. The good news is that the "async.js" library provides a parallel execution function. If you remember, the code we used to invoke the service to upload chunks utilized "async.series", which means all functions are executed in sequence. Now we will change this code to "async.parallel", which invokes all functions in parallel.

1: $("#upload_button_blob").click(function () {
2:     // assert the browser supports html5
3:     ... ...
4:     // start to upload each file in chunks
5:     var files = $("#upload_files")[0].files;
6:     for (var i = 0; i < files.length; i++) {
7:         var file = files[i];
8:         var fileSize = file.size;
9:         var fileName = file.name;
10:         // calculate the start and end byte index for each block (chunk)
11:         // with the index, file name and index list for future use
12:         ... ...
13:         // define the function array and push all chunk upload operations into this array
14:         ... ...
15:         // invoke the functions in parallel
16:         // then invoke the commit ajax call to put blocks into the blob in azure storage
17:         async.parallel(putBlocks, function (error, result) {
18:             var data = {
19:                 name: fileName,
20:                 list: list
21:             };
22:             $.post("/Home/Commit", data, function (result) {
23:                 if (!result.success) {
24:                     alert(result.error);
25:                 }
26:                 else {
27:                     alert("done!");
28:                 }
29:             });
30:         });
31:     }
32: });

In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file is not very large and the chunk size is not very small. But for a large file this might introduce another problem: too many ajax calls are sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limitation. The code below sets the concurrency limitation to 4, which means at most only 4 ajax calls can be in flight at the same time.

1: $("#upload_button_blob").click(function () {
2:     // assert the browser supports html5
3:     ... ...
4:     // start to upload each file in chunks
5:     var files = $("#upload_files")[0].files;
6:     for (var i = 0; i < files.length; i++) {
7:         var file = files[i];
8:         var fileSize = file.size;
9:         var fileName = file.name;
10:         // calculate the start and end byte index for each block (chunk)
11:         // with the index, file name and index list for future use
12:         ... ...
13:         // define the function array and push all chunk upload operations into this array
14:         ... ...
15:         // invoke the functions in parallel with a concurrency limit of 4
16:         // then invoke the commit ajax call to put blocks into the blob in azure storage
17:         async.parallelLimit(putBlocks, 4, function (error, result) {
18:             var data = {
19:                 name: fileName,
20:                 list: list
21:             };
22:             $.post("/Home/Commit", data, function (result) {
23:                 if (!result.success) {
24:                     alert(result.error);
25:                 }
26:                 else {
27:                     alert("done!");
28:                 }
29:             });
30:         });
31:     }
32: });

Summary
In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage as blocks. We focused on the frontend side and leveraged three new features introduced in HTML5, which are:
- File.slice: Reads part of the file by specifying the start and end byte index.
- Blob: A file-like interface which contains the part of the file content.
- FormData: A temporary form element through which we can pass the chunk along with some metadata to the backend service.

Then we discussed the performance considerations of chunked uploading. Sequential upload cannot provide maximum upload speed, but unlimited parallel upload might crash the browser and the server if there are too many chunks. So we finally came up with the solution of uploading chunks in parallel with a concurrency limitation. We also demonstrated how to utilize the "async.js" JavaScript library to help us control the asynchronous calls and the parallelism limit.

Regarding the chunk size and the parallelism limit, there is no single "best" value. You need to test various combinations and find out the best one for your particular scenario. It depends on the local bandwidth, the client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores. I was using the Microsoft corporate network. The web site was hosted on the Windows Azure China North data center (in Beijing) with one small web role (1.7GHz 1 core CPU, 1.75GB memory with 100Mbps bandwidth).
The test cases were:
- Chunk size: 512KB, 1MB, 2MB, 4MB.
- Upload mode: sequential, parallel (unlimited), parallel with limit (4 threads, 8 threads).
- Chunk format: base64 string, binaries.
- Target file: 100MB.
- Each case was tested 3 times.

Below is the test result chart. Some thoughts, but not guidance or best practice:
- Parallel gets better performance than sequential.
- No significant performance improvement between parallel with 4 threads and with 8 threads.
- Transferring binaries provides better performance than base64.
- In all cases, a chunk size of 1MB - 2MB gets better performance.

Hope this helps,
Shaun

All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5: Part 3 – Table per Concrete Type (TPC) and Choosing Strategy Guidelines

    - by mortezam
This is the third (and last) post in a series that explains different approaches to map an inheritance hierarchy with EF Code First. I've described these strategies in previous posts:
Part 1 – Table per Hierarchy (TPH)
Part 2 – Table per Type (TPT)
In today's blog post I am going to discuss Table per Concrete Type (TPC), which completes the inheritance mapping strategies supported by EF Code First. At the end of this post I will provide some guidelines to choose an inheritance strategy, mainly based on what we've learned in this series.

TPC and Entity Framework in the Past
Table per Concrete Type is in some ways the simplest approach suggested, yet using TPC with EF is one of those concepts that has not been covered very well so far, and in some resources I've even seen it discouraged. The reason for that is just because the Entity Data Model Designer in VS2010 doesn't support TPC (even though the EF runtime does). That basically means if you are following EF's Database-First or Model-First approaches, then configuring TPC requires manually writing XML in the EDMX file, which is not considered to be a fun practice. Well, no more. You'll see that with Code First, creating TPC is perfectly possible with the fluent API just like the other strategies, and you don't need to avoid TPC due to the lack of designer support as you would probably do in other EF approaches.

Table per Concrete Type (TPC)
In Table per Concrete Type (aka Table per Concrete Class) we use exactly one table for each (nonabstract) class. All properties of a class, including inherited properties, can be mapped to columns of this table, as shown in the following figure. As you can see, the SQL schema is not aware of the inheritance; effectively, we've mapped two unrelated tables to a more expressive class structure. If the base class were concrete, then an additional table would be needed to hold instances of that class. I have to emphasize that there is no relationship between the database tables, except for the fact that they share some similar columns.

TPC Implementation in Code First
Just like the TPT implementation, we need to specify a separate table for each of the subclasses. We also need to tell Code First that we want all of the inherited properties to be mapped as part of this table. In CTP5, there is a new helper method on the EntityMappingConfiguration class called MapInheritedProperties that does exactly this for us.
Here is the complete object model as well as the fluent API to create a TPC mapping:

public abstract class BillingDetail
{
    public int BillingDetailId { get; set; }
    public string Owner { get; set; }
    public string Number { get; set; }
}

public class BankAccount : BillingDetail
{
    public string BankName { get; set; }
    public string Swift { get; set; }
}

public class CreditCard : BillingDetail
{
    public int CardType { get; set; }
    public string ExpiryMonth { get; set; }
    public string ExpiryYear { get; set; }
}

public class InheritanceMappingContext : DbContext
{
    public DbSet<BillingDetail> BillingDetails { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<BankAccount>().Map(m =>
        {
            m.MapInheritedProperties();
            m.ToTable("BankAccounts");
        });
        modelBuilder.Entity<CreditCard>().Map(m =>
        {
            m.MapInheritedProperties();
            m.ToTable("CreditCards");
        });
    }
}

The Importance of the EntityMappingConfiguration Class
As a side note, it is worth mentioning that the EntityMappingConfiguration class turns out to be a key type for inheritance mapping in Code First. Here is a snapshot of this class:

namespace System.Data.Entity.ModelConfiguration.Configuration.Mapping
{
    public class EntityMappingConfiguration<TEntityType> where TEntityType : class
    {
        public ValueConditionConfiguration Requires(string discriminator);
        public void ToTable(string tableName);
        public void MapInheritedProperties();
    }
}

As you have seen so far, we used its Requires method to customize TPH. We also used its ToTable method to create a TPT, and now we are using its MapInheritedProperties along with the ToTable method to create our TPC mapping.

TPC Configuration is Not Done Yet!
We are not quite done with our TPC configuration and there is more to this story, even though the fluent API we saw perfectly created a TPC mapping for us in the database. To see why, let's start working with our object model. For example, the following code creates two new objects of BankAccount and CreditCard types and tries to add them to the database:

using (var context = new InheritanceMappingContext())
{
    BankAccount bankAccount = new BankAccount();
    CreditCard creditCard = new CreditCard() { CardType = 1 };

    context.BillingDetails.Add(bankAccount);
    context.BillingDetails.Add(creditCard);
    context.SaveChanges();
}

Running this code throws an InvalidOperationException with this message: The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: AcceptChanges cannot continue because the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are unique before calling AcceptChanges.

The reason we got this exception is that DbContext.SaveChanges() internally invokes the SaveChanges method of its internal ObjectContext. ObjectContext's SaveChanges method in turn by default calls AcceptAllChanges after it has performed the database modifications. The AcceptAllChanges method merely iterates over all entries in the ObjectStateManager and invokes AcceptChanges on each of them.
Since the entities are in the Added state, the AcceptChanges method replaces their temporary EntityKey with a regular EntityKey based on the primary key values (i.e. BillingDetailId) that come back from the database. That's where the problem occurs: both entities have been assigned the same primary key value by the database (i.e. both have BillingDetailId = 1), and the ObjectStateManager cannot track objects of the same type (i.e. BillingDetail) with the same EntityKey value, hence it throws. If you take a closer look at the TPC's SQL schema above, you'll see why the database generated the same values for the primary keys: the BillingDetailId column in both the BankAccounts and CreditCards tables has been marked as identity.

How to Solve the Identity Problem in TPC
As you saw, using SQL Server's int identity columns doesn't work very well together with TPC, since there will be duplicate entity keys when inserting into subclass tables that all have the same identity seed. Therefore, to solve this, either a spread seed (where each table has its own initial seed value) will be needed, or a mechanism other than SQL Server's int identity should be used. Some other RDBMSes have mechanisms allowing a sequence (identity) to be shared by multiple tables, and something similar can be achieved with GUID keys in SQL Server. While using GUID keys, or int identity keys with different starting seeds, would solve the problem, yet another solution is to completely switch off identity on the primary key property. As a result, we need to take on the responsibility of providing unique keys when inserting records into the database. We will go with this solution since it works regardless of which database engine is used.

Switching Off Identity in Code First
We can switch off identity simply by placing the DatabaseGenerated attribute on the primary key property and passing DatabaseGenerationOption.None to its constructor. The DatabaseGenerated attribute is a new data annotation which has been added to the System.ComponentModel.DataAnnotations namespace in CTP5:

public abstract class BillingDetail
{
    [DatabaseGenerated(DatabaseGenerationOption.None)]
    public int BillingDetailId { get; set; }
    public string Owner { get; set; }
    public string Number { get; set; }
}

As always, we can achieve the same result by using the fluent API, if you prefer that:

modelBuilder.Entity<BillingDetail>()
            .Property(p => p.BillingDetailId)
            .HasDatabaseGenerationOption(DatabaseGenerationOption.None);

Working With The Object Model
Our TPC mapping is ready and we can try adding new records to the database. But, like I said, now we need to take care of providing unique keys when creating new objects:

using (var context = new InheritanceMappingContext())
{
    BankAccount bankAccount = new BankAccount()
    {
        BillingDetailId = 1
    };
    CreditCard creditCard = new CreditCard()
    {
        BillingDetailId = 2,
        CardType = 1
    };

    context.BillingDetails.Add(bankAccount);
    context.BillingDetails.Add(creditCard);
    context.SaveChanges();
}

Polymorphic Associations with TPC are Problematic
The main problem with this approach is that it doesn't support polymorphic associations very well.
After all, in the database, associations are represented as foreign key relationships, and in TPC the subclasses are all mapped to different tables, so a polymorphic association to their base class (the abstract BillingDetail in our example) cannot be represented as a simple foreign key relationship. For example, consider the domain model we introduced here, where User has a polymorphic association with BillingDetail. This would be problematic in our TPC schema, because if User has a many-to-one relationship with BillingDetail, the Users table would need a single foreign key column, which would have to refer to both concrete subclass tables. This isn't possible with regular foreign key constraints.

Schema Evolution with TPC is Complex
A further conceptual problem with this mapping strategy is that several different columns, of different tables, share exactly the same semantics. This makes schema evolution more complex. For example, a change to a base class property results in changes to multiple columns. It also makes it much more difficult to implement database integrity constraints that apply to all subclasses.

Generated SQL
Let's examine the SQL output for polymorphic queries in a TPC mapping. For example, consider this polymorphic query for all BillingDetails and the resulting SQL statements that are executed in the database:

var query = from b in context.BillingDetails select b;

Just like the SQL query generated by TPT mapping, the CASE statements that you see at the beginning of the query are merely there to ensure that columns that are irrelevant for a particular row have NULL values in the returned flattened table (e.g. BankName for a row that represents a CreditCard type).

TPC's SQL Queries are Union Based
As you can see in the above screenshot, the first SELECT uses a FROM-clause subquery (which is selected with a red rectangle) to retrieve all instances of BillingDetails from all concrete class tables. The tables are combined with a UNION operator, and a literal (in this case, 0 and 1) is inserted into the intermediate result (look at the lines highlighted in yellow). EF reads this to instantiate the correct class given the data from a particular row. A union requires that the queries that are combined project over the same columns; hence, EF has to pad and fill up nonexistent columns with NULL. This query will really perform well since here we can let the database optimizer find the best execution plan to combine rows from several tables. There are also no joins involved, so it performs better than the SQL queries generated by TPT, where a join is required between the base and subclass tables.

Choosing Strategy Guidelines
Before we get into this discussion, I want to emphasize that no single "best strategy for all scenarios" exists. As you saw, each of the approaches has its own advantages and drawbacks. Here are some rules of thumb to identify the best strategy in a particular scenario: If you don't require polymorphic associations or queries, lean toward TPC – in other words, if you never or rarely query for BillingDetails and you have no class that has an association to the BillingDetail base class. I recommend TPC (only) for the top level of your class hierarchy, where polymorphism isn't usually required, and when modification of the base class in the future is unlikely. If you do require polymorphic associations or queries, and subclasses declare relatively few properties (particularly if the main difference between subclasses is in their behavior), lean toward TPH.
Your goal is to minimize the number of nullable columns and to convince yourself (and your DBA) that a denormalized schema won't create problems in the long run. If you do require polymorphic associations or queries, and subclasses declare many properties (subclasses differ mainly by the data they hold), lean toward TPT. Or, depending on the width and depth of your inheritance hierarchy and the possible cost of joins versus unions, use TPC. By default, choose TPH only for simple problems. For more complex cases (or when you're overruled by a data modeler insisting on the importance of nullability constraints and normalization), you should consider the TPT strategy. But at that point, ask yourself whether it may not be better to remodel inheritance as delegation in the object model (delegation is a way of making composition as powerful for reuse as inheritance). Complex inheritance is often best avoided for all sorts of reasons unrelated to persistence or ORM. EF acts as a buffer between the domain and relational models, but that doesn't mean you can ignore persistence concerns when designing your classes.

Summary
In this series, we focused on one of the main structural aspects of the object/relational paradigm mismatch, which is inheritance, and discussed how EF solves this problem as an ORM solution. We learned about the three well-known inheritance mapping strategies and their implementations in EF Code First. Hopefully it gives you a better insight into the mapping of inheritance hierarchies as well as choosing the best strategy for your particular scenario. Happy New Year and Happy Code-Firsting!

References
ADO.NET team blog
Java Persistence with Hibernate book

    Read the article

  • Read XML Files using LINQ to XML and Extension Methods

    - by psheriff
In previous blog posts I have discussed how to use XML files to store data in your applications. I showed you how to read those XML files from your project and how to get XML from a WCF service. One of the problems with reading XML files is when elements or attributes are missing. If you try to read that missing data, a null value is returned. This can cause a problem if you are trying to load that data into an object and a null is read. This blog post will show you how to create extension methods to detect null values and return valid values to load into your object.

The XML Data
An XML data file called Product.xml is located in the \Xml folder of the Silverlight sample project for this blog post. This XML file contains several rows of product data that will be used in each of the samples for this post. Each row has 4 attributes, namely ProductId, ProductName, IntroductionDate and Price.

<Products>
  <Product ProductId="1"
           ProductName="Haystack Code Generator for .NET"
           IntroductionDate="07/01/2010" Price="799" />
  <Product ProductId="2"
           ProductName="ASP.Net Jumpstart Samples"
           IntroductionDate="05/24/2005" Price="0" />
  ...
</Products>

The Product Class
Just as you create an entity class to map each column in a table to a property in a class, you should do the same for an XML file too. In this case you will create a Product class with a property for each of the attributes in each element of product data. The following code listing shows the Product class.

public class Product : CommonBase
{
  public const string XmlFile = @"Xml/Product.xml";

  private string _ProductName;
  private int _ProductId;
  private DateTime _IntroductionDate;
  private decimal _Price;

  public string ProductName
  {
    get { return _ProductName; }
    set
    {
      if (_ProductName != value)
      {
        _ProductName = value;
        RaisePropertyChanged("ProductName");
      }
    }
  }

  public int ProductId
  {
    get { return _ProductId; }
    set
    {
      if (_ProductId != value)
      {
        _ProductId = value;
        RaisePropertyChanged("ProductId");
      }
    }
  }

  public DateTime IntroductionDate
  {
    get { return _IntroductionDate; }
    set
    {
      if (_IntroductionDate != value)
      {
        _IntroductionDate = value;
        RaisePropertyChanged("IntroductionDate");
      }
    }
  }

  public decimal Price
  {
    get { return _Price; }
    set
    {
      if (_Price != value)
      {
        _Price = value;
        RaisePropertyChanged("Price");
      }
    }
  }
}

NOTE: The CommonBase class that the Product class inherits from simply implements the INotifyPropertyChanged interface in order to inform your XAML UI of any property changes. You can see this class in the sample you download for this blog post.

Reading Data
When using LINQ to XML you call the Load method of the XElement class to load the XML file. Once the XML file has been loaded, you write a LINQ query to iterate over the "Product" descendants in the XML file. The "select" portion of the LINQ query creates a new Product object for each row in the XML file. You retrieve each attribute by passing its name to the Attribute() method; Attribute() returns null when the attribute is missing, otherwise its Value property returns the string value of the attribute. The Convert class is used to convert the value retrieved into the appropriate data type required by the Product class.
private void LoadProducts()
{
  XElement xElem = null;

  try
  {
    xElem = XElement.Load(Product.XmlFile);

    // The following will NOT work if you have missing attributes
    var products =
        from elem in xElem.Descendants("Product")
        orderby elem.Attribute("ProductName").Value
        select new Product
        {
          ProductId = Convert.ToInt32(elem.Attribute("ProductId").Value),
          ProductName = Convert.ToString(elem.Attribute("ProductName").Value),
          IntroductionDate = Convert.ToDateTime(elem.Attribute("IntroductionDate").Value),
          Price = Convert.ToDecimal(elem.Attribute("Price").Value)
        };

    lstData.DataContext = products;
  }
  catch (Exception ex)
  {
    MessageBox.Show(ex.Message);
  }
}

This is where the problem comes in. If you have any missing attributes in any of the rows in the XML file, or if the data in the ProductId or IntroductionDate attribute is not of the appropriate type, then this code will fail! The reason? There is no built-in check to ensure that the correct type of data is contained in the XML file. This is where extension methods can come in really handy.

Using Extension Methods
Instead of using the Convert class to perform type conversions as you just saw, create a set of extension methods attached to the XAttribute class. These extension methods will perform null checking and ensure that a valid value is passed back instead of an exception being thrown if there is invalid data in your XML file.

private void LoadProducts()
{
  var xElem = XElement.Load(Product.XmlFile);

  var products =
      from elem in xElem.Descendants("Product")
      orderby elem.Attribute("ProductName").Value
      select new Product
      {
        ProductId = elem.Attribute("ProductId").GetAsInteger(),
        ProductName = elem.Attribute("ProductName").GetAsString(),
        IntroductionDate = elem.Attribute("IntroductionDate").GetAsDateTime(),
        Price = elem.Attribute("Price").GetAsDecimal()
      };

  lstData.DataContext = products;
}

Writing Extension Methods
To create an extension method you create a class with any name you like. The code listing below shows a class named XmlExtensionMethods. This listing shows just a couple of the available methods, namely GetAsString and GetAsInteger. These methods are just like any other methods you would write, except that you prefix the parameter type with the keyword "this". This lets the compiler know that it should add this method to the class specified in the parameter.

public static class XmlExtensionMethods
{
  public static string GetAsString(this XAttribute attr)
  {
    string ret = string.Empty;

    if (attr != null && !string.IsNullOrEmpty(attr.Value))
    {
      ret = attr.Value;
    }

    return ret;
  }

  public static int GetAsInteger(this XAttribute attr)
  {
    int ret = 0;
    int value = 0;

    if (attr != null && !string.IsNullOrEmpty(attr.Value))
    {
      if (int.TryParse(attr.Value, out value))
        ret = value;
    }

    return ret;
  }

  ...
}

Each of the methods in the XmlExtensionMethods class should inspect the XAttribute to ensure it is not null and that the value in the attribute is not null. If the value is null, a default value is returned, such as an empty string or a 0 for a numeric value.
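The listing above elides the remaining members. As a sketch of how the same TryParse pattern extends to the other two methods called by LoadProducts, the following are plausible implementations that would slot into XmlExtensionMethods; they follow the article's pattern but are not the author's original code:

public static DateTime GetAsDateTime(this XAttribute attr)
{
  // Falls back to DateTime.MinValue when the attribute is missing or invalid
  DateTime ret = DateTime.MinValue;
  DateTime value;

  if (attr != null && !string.IsNullOrEmpty(attr.Value))
  {
    if (DateTime.TryParse(attr.Value, out value))
      ret = value;
  }

  return ret;
}

public static decimal GetAsDecimal(this XAttribute attr)
{
  // Falls back to 0 when the attribute is missing or invalid
  decimal ret = 0m;
  decimal value;

  if (attr != null && !string.IsNullOrEmpty(attr.Value))
  {
    if (decimal.TryParse(attr.Value, out value))
      ret = value;
  }

  return ret;
}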
Summary
Extension methods are a great way to simplify your code and provide protection to ensure problems do not occur when reading data. You will probably want to create more extension methods to handle XElement objects as well, for when you use element-based XML. Feel free to extend these extension methods to accept a parameter which would be the default value if a null value is detected, or any other parameters you wish.

NOTE: You can download the complete sample code at my website, http://www.pdsa.com/downloads. Choose "Tips & Tricks", then "Read XML Files using LINQ to XML and Extension Methods" from the drop-down.

Good Luck with your Coding,
Paul D. Sheriff

    Read the article

  • Minecraft Style Chunk building problem

    - by David Torrey
I'm having some problems with speed in my chunk engine. I timed it out, and in its current state it takes a total of ~5 seconds per chunk to fill each face's list. I have a check to see if each face of a block is visible, and if a face is not visible it skips it and moves on. I'm using a dictionary (unordered map) because it makes sense memory-wise to just not have an entry if there is no block. I've tracked my problem down to testing whether there is an entry, and accessing an entry if it does exist. If I remove the tests that check for an entry for an adjacent block, or whether the block type itself is see-through, it runs within about 2-4 milliseconds. So here's my question: is there a faster way to check for an entry in a dictionary than .ContainsKey()? As an aside, I tried TryGetValue() and it doesn't really help with the speed that much. If I remove the ContainsKey() and keep the test where it does the IsSeeThrough for each block, it halves the time, but it's still about 2-3 seconds. It only drops to 2-4 ms if I remove BOTH checks. Here is my code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Runtime.InteropServices;
using OpenTK;
using OpenTK.Graphics.OpenGL;
using System.Drawing;

namespace Anabelle_Lee
{
    public enum BlockEnum
    {
        air = 0,
        dirt = 1,
    }

    [StructLayout(LayoutKind.Sequential, Pack = 1)]
    public struct Coordinates<T1>
    {
        public T1 x;
        public T1 y;
        public T1 z;

        public override string ToString()
        {
            return "(" + x + "," + y + "," + z + ") : " + typeof(T1);
        }
    }

    public struct Sides<T1>
    {
        public T1 left;
        public T1 right;
        public T1 top;
        public T1 bottom;
        public T1 front;
        public T1 back;
    }

    public class Block
    {
        public int blockType;

        public bool SeeThrough()
        {
            switch (blockType)
            {
                case 0:
                    return true;
            }
            return false;
        }

        public override string ToString()
        {
            return ((BlockEnum)(blockType)).ToString();
        }
    }

    class Chunk
    {
        private Dictionary<Coordinates<byte>, Block> mChunkData; // stores the block data
        private Sides<List<Coordinates<byte>>> mVBOVertexBuffer;
        private Sides<int> mVBOHandle;
        //private bool mIsChanged;
        private const byte mCHUNKSIZE = 16;

        public Chunk()
        {
        }

        public void InitializeChunk()
        {
            // create VBO references
#if DEBUG
            Console.WriteLine("Initializing Chunk");
#endif
            mChunkData = new Dictionary<Coordinates<byte>, Block>();
            //mIsChanged = true;
            GL.GenBuffers(1, out mVBOHandle.left);
            GL.GenBuffers(1, out mVBOHandle.right);
            GL.GenBuffers(1, out mVBOHandle.top);
            GL.GenBuffers(1, out mVBOHandle.bottom);
            GL.GenBuffers(1, out mVBOHandle.front);
            GL.GenBuffers(1, out mVBOHandle.back);

            // make a new list of vertexes for each face
            mVBOVertexBuffer.top = new List<Coordinates<byte>>();
            mVBOVertexBuffer.bottom = new List<Coordinates<byte>>();
            mVBOVertexBuffer.left = new List<Coordinates<byte>>();
            mVBOVertexBuffer.right = new List<Coordinates<byte>>();
            mVBOVertexBuffer.front = new List<Coordinates<byte>>();
            mVBOVertexBuffer.back = new List<Coordinates<byte>>();
#if DEBUG
            Console.WriteLine("Chunk Initialized");
#endif
        }

        public void GenerateChunk()
        {
#if DEBUG
            Console.WriteLine("Generating Chunk");
#endif
            // One shared Random for the whole chunk; re-creating it inside the
            // loop would reseed it from the clock each iteration. Next's upper
            // bound is exclusive, so Next(0, 2) yields air (0) or dirt (1).
            Random blockLoc = new Random();
            for (byte i = 0; i < mCHUNKSIZE; i++)
            {
                for (byte j = 0; j < mCHUNKSIZE; j++)
                {
                    for (byte k = 0; k < mCHUNKSIZE; k++)
                    {
                        Coordinates<byte> randChunk = new Coordinates<byte> { x = i, y = j, z = k };
                        mChunkData.Add(randChunk, new Block());
                        mChunkData[randChunk].blockType = blockLoc.Next(0, 2);
                    }
                }
            }
#if DEBUG
            Console.WriteLine("Chunk Generated");
#endif
        }

        public void DeleteChunk()
        {
            // delete VBO references
#if DEBUG
            Console.WriteLine("Deleting Chunk");
#endif
            GL.DeleteBuffers(1, ref mVBOHandle.left);
            GL.DeleteBuffers(1, ref mVBOHandle.right);
            GL.DeleteBuffers(1, ref mVBOHandle.top);
            GL.DeleteBuffers(1, ref mVBOHandle.bottom);
            GL.DeleteBuffers(1, ref mVBOHandle.front);
            GL.DeleteBuffers(1, ref mVBOHandle.back);

            // clear all vertex buffers
            ClearPolyLists();
#if DEBUG
            Console.WriteLine("Chunk Deleted");
#endif
        }

        public void UpdateChunk()
        {
#if DEBUG
            Console.WriteLine("Updating Chunk");
#endif
            ClearPolyLists();

            // for every entry in the mChunkData map, emit each face that is
            // either on the chunk boundary or not hidden by a solid neighbour
            foreach (KeyValuePair<Coordinates<byte>, Block> feBlockData in mChunkData)
            {
                Coordinates<byte> checkBlock = new Coordinates<byte>
                {
                    x = feBlockData.Key.x,
                    y = feBlockData.Key.y,
                    z = feBlockData.Key.z
                };

                // left face (x - 1)
                if (checkBlock.x == 0 || !IsVisible(checkBlock.x - 1, checkBlock.y, checkBlock.z))
                    AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.left);

                // right face (x + 1)
                if (checkBlock.x == mCHUNKSIZE - 1 || !IsVisible(checkBlock.x + 1, checkBlock.y, checkBlock.z))
                    AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.right);

                // bottom face (y - 1)
                if (checkBlock.y == 0 || !IsVisible(checkBlock.x, checkBlock.y - 1, checkBlock.z))
                    AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.bottom);

                // top face (y + 1)
                if (checkBlock.y == mCHUNKSIZE - 1 || !IsVisible(checkBlock.x, checkBlock.y + 1, checkBlock.z))
                    AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.top);

                // back face (z - 1)
                if (checkBlock.z == 0 || !IsVisible(checkBlock.x, checkBlock.y, checkBlock.z - 1))
                    AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.back);

                // front face (z + 1)
                if (checkBlock.z == mCHUNKSIZE - 1 || !IsVisible(checkBlock.x, checkBlock.y, checkBlock.z + 1))
                    AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.front);
            }

            BuildBuffers();
#if DEBUG
            Console.WriteLine("Chunk Updated");
#endif
        }

        public void RenderChunk()
        {
        }

        public void LoadChunk()
        {
#if DEBUG
            Console.WriteLine("Loading Chunk");
            Console.WriteLine("Chunk Loaded");
#endif
        }

        public void SaveChunk()
        {
#if DEBUG
            Console.WriteLine("Saving Chunk");
            Console.WriteLine("Chunk Saved");
#endif
        }

        private bool IsVisible(int pX, int pY, int pZ)
        {
            Block testBlock;
            Coordinates<byte> checkBlock = new Coordinates<byte>
            {
                x = Convert.ToByte(pX),
                y = Convert.ToByte(pY),
                z = Convert.ToByte(pZ)
            };

            if (mChunkData.TryGetValue(checkBlock, out testBlock)) // if data exists
            {
                if (!testBlock.SeeThrough()) // a solid block hides the adjoining face
                {
                    return true;
                }
            }
            return false; // empty or see-through neighbour: the face must be drawn
        }

        private void AddPoly(byte pX, byte pY, byte pZ, int BufferSide)
        {
            GL.BindBuffer(BufferTarget.ArrayBuffer, BufferSide);

            // Each handle fills its own vertex list (two triangles per face)
            if (BufferSide == mVBOHandle.front)
            {
                // front face
                mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY + 1), z = (byte)(pZ + 1) });
                mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY),     z = (byte)(pZ + 1) });
                mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY),     z = (byte)(pZ + 1) });
                mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY),     z = (byte)(pZ + 1) });
                mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) });
                mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY + 1), z = (byte)(pZ + 1) });
            }
            else if (BufferSide == mVBOHandle.back)
            {
                // back face
                mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ) });
                mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY),     z = (byte)(pZ) });
                mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY),     z = (byte)(pZ) });
                mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY),     z = (byte)(pZ) });
                mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY + 1), z = (byte)(pZ) });
                mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ) });
            }
            else if (BufferSide == mVBOHandle.left)
            {
                // left face
                mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY + 1), z = (byte)(pZ) });
                mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY),     z = (byte)(pZ) });
                mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY),     z = (byte)(pZ + 1) });
                mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY),     z = (byte)(pZ + 1) });
                mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY + 1), z = (byte)(pZ + 1) });
                mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY + 1), z = (byte)(pZ) });
            }
            else if (BufferSide == mVBOHandle.right)
            {
                // right face
                mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) });
                mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY),     z = (byte)(pZ + 1) });
                mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY),     z = (byte)(pZ) });
                mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY),     z = (byte)(pZ) });
                mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ) });
                mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) });
            }
            else if (BufferSide == mVBOHandle.top)
            {
                // top face
                mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY + 1), z = (byte)(pZ) });
                mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY + 1), z = (byte)(pZ + 1) });
                mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) });
                mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) });
                mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ) });
                mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY + 1), z = (byte)(pZ) });
            }
            else if (BufferSide == mVBOHandle.bottom)
            {
                // bottom face
                mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY), z = (byte)(pZ + 1) });
                mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY), z = (byte)(pZ) });
                mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY), z = (byte)(pZ) });
                mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY), z = (byte)(pZ) });
                mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY), z = (byte)(pZ + 1) });
                mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX),     y = (byte)(pY), z = (byte)(pZ + 1) });
            }
        }

        private void BuildBuffers()
        {
#if DEBUG
            Console.WriteLine("Building Chunk Buffers");
#endif
            GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.front);
            GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.front.Count), mVBOVertexBuffer.front.ToArray(), BufferUsageHint.StaticDraw);
            GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.back);
            GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.back.Count), mVBOVertexBuffer.back.ToArray(), BufferUsageHint.StaticDraw);
            GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.left);
            GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.left.Count), mVBOVertexBuffer.left.ToArray(), BufferUsageHint.StaticDraw);
            GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.right);
            GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.right.Count), mVBOVertexBuffer.right.ToArray(), BufferUsageHint.StaticDraw);
            GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.top);
            GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.top.Count), mVBOVertexBuffer.top.ToArray(), BufferUsageHint.StaticDraw);
            GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.bottom);
            GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.bottom.Count), mVBOVertexBuffer.bottom.ToArray(), BufferUsageHint.StaticDraw);
            GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
#if DEBUG
            Console.WriteLine("Chunk Buffers Built");
#endif
        }

        private void ClearPolyLists()
        {
#if DEBUG
            Console.WriteLine("Clearing Polygon Lists");
#endif
            mVBOVertexBuffer.top.Clear();
            mVBOVertexBuffer.bottom.Clear();
            mVBOVertexBuffer.left.Clear();
            mVBOVertexBuffer.right.Clear();
            mVBOVertexBuffer.front.Clear();
            mVBOVertexBuffer.back.Clear();
#if DEBUG
            Console.WriteLine("Polygon Lists Cleared");
#endif
        }
    } // END CLASS
} // END NAMESPACE
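Since the question hinges on the cost of the dictionary lookup, here is a minimal sketch — an alternative for comparison, not code from the post — of flat-array storage for the fixed 16×16×16 chunk. Direct indexing avoids hashing the Coordinates<byte> key on every neighbour test (a struct key without a custom GetHashCode can hash slowly), at the cost of one array slot per cell:

    // Sketch: array-backed chunk storage. A null slot plays the role of a
    // missing dictionary entry, so "no block" still costs no Block object.
    class ArrayChunk
    {
        private const int CHUNKSIZE = 16;
        private readonly Block[] mBlocks = new Block[CHUNKSIZE * CHUNKSIZE * CHUNKSIZE];

        private static int Index(int x, int y, int z)
        {
            // flatten (x, y, z) into a single array offset
            return (x * CHUNKSIZE + y) * CHUNKSIZE + z;
        }

        public Block GetBlock(int x, int y, int z)
        {
            return mBlocks[Index(x, y, z)]; // null means "no block here"
        }

        public void SetBlock(int x, int y, int z, Block block)
        {
            mBlocks[Index(x, y, z)] = block;
        }

        // Equivalent of the dictionary-based neighbour test
        public bool IsSolid(int x, int y, int z)
        {
            Block b = mBlocks[Index(x, y, z)];
            return b != null && !b.SeeThrough();
        }
    }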

    Read the article

  • Windows Azure Service Bus Splitter and Aggregator

    - by Alan Smith
This article will cover basic implementations of the Splitter and Aggregator patterns using the Windows Azure Service Bus. The content will be included in the next release of the "Windows Azure Service Bus Developer Guide", along with some other patterns I am working on. I've taken the pattern descriptions from the book "Enterprise Integration Patterns" by Gregor Hohpe. I bought a copy of the book in 2004, and recently dusted it off when I started to look at implementing the patterns on the Windows Azure Service Bus. Gregor also presented a session in 2011, "Enterprise Integration Patterns: Past, Present and Future", which is well worth a look. I'll be covering more patterns in the coming weeks; I'm currently working on Wire-Tap and Scatter-Gather. There will no doubt be a section on implementing these patterns in my "SOA, Connectivity and Integration using the Windows Azure Service Bus" course.

There are a number of scenarios where a message needs to be divided into a number of sub messages, and also where a number of sub messages need to be combined to form one message. The splitter and aggregator patterns provide a definition of how this can be achieved. This section will focus on the implementation of basic splitter and aggregator patterns using the Windows Azure Service Bus direct programming model. In BizTalk Server, receive pipelines are typically used to implement the splitter pattern, with sequential convoy orchestrations often used to aggregate messages. In the current release of the Service Bus, there is no functionality in the direct programming model that implements these patterns, so it is up to the developer to implement them in the applications that send and receive messages.

Splitter
A message splitter takes a message and splits it into a number of sub messages. As there are different scenarios for how a message can be split into sub messages, message splitters are implemented using different algorithms. The Enterprise Integration Patterns book describes the splitter pattern as follows:

How can we process a message if it contains multiple elements, each of which may have to be processed in a different way? Use a Splitter to break out the composite message into a series of individual messages, each containing data related to one item.

The Enterprise Integration Patterns website provides a description of the Splitter pattern here. In some scenarios a batch message could be split into the sub messages that are contained in the batch. The splitting of a message could be based on the message type of the sub message, or on the trading partner that the sub message is to be sent to.

Aggregator
An aggregator takes a stream of related messages and combines them together to form one message. The Enterprise Integration Patterns book describes the aggregator pattern as follows:

How do we combine the results of individual, but related messages so that they can be processed as a whole? Use a stateful filter, an Aggregator, to collect and store individual messages until a complete set of related messages has been received. Then, the Aggregator publishes a single message distilled from the individual messages.

The Enterprise Integration Patterns website provides a description of the Aggregator pattern here. A common example of the need for an aggregator is in scenarios where a stream of messages needs to be combined into a daily batch to be sent to a legacy line-of-business application.
The BizTalk Server EDI functionality provides support for batching messages in this way using a sequential convoy orchestration.

Scenario
The scenario for this implementation of the splitter and aggregator patterns is the sending and receiving of large messages using a Service Bus queue. In the current release, the Windows Azure Service Bus supports a maximum message size of 256 KB, with a maximum header size of 64 KB. This leaves a safe maximum body size of 192 KB. The BrokeredMessage class will support messages larger than 256 KB; in fact the Size property is of type long, implying that very large messages may be supported at some point in the future. The 256 KB size restriction is set in the service bus components that are deployed in the Windows Azure data centers. One way of working around this size restriction is to split large messages into a sequence of smaller sub messages in the sending application, send them via a queue, and then reassemble them in the receiving application. This scenario will be used to demonstrate the pattern implementations.

Implementation
The splitter and aggregator will be used to provide functionality to send and receive large messages over the Windows Azure Service Bus. In order to make the implementations generic and reusable, they will be implemented as a class library. The splitter will be implemented in the LargeMessageSender class and the aggregator in the LargeMessageReceiver class. A class diagram showing the two classes is shown below.

Implementing the Splitter
The splitter will take a large brokered message and split it into a sequence of smaller sub messages that can be transmitted over the service bus messaging entities. The LargeMessageSender class provides a Send method that takes a large brokered message as a parameter. The implementation of the class is shown below; console output has been added to provide details of the splitting operation.

public class LargeMessageSender
{
    private static int SubMessageBodySize = 192 * 1024;
    private QueueClient m_QueueClient;

    public LargeMessageSender(QueueClient queueClient)
    {
        m_QueueClient = queueClient;
    }

    public void Send(BrokeredMessage message)
    {
        // Calculate the number of sub messages required.
        long messageBodySize = message.Size;
        int nrSubMessages = (int)(messageBodySize / SubMessageBodySize);
        if (messageBodySize % SubMessageBodySize != 0)
        {
            nrSubMessages++;
        }

        // Create a unique session Id.
        string sessionId = Guid.NewGuid().ToString();
        Console.WriteLine("Message session Id: " + sessionId);
        Console.Write("Sending {0} sub-messages", nrSubMessages);

        Stream bodyStream = message.GetBody<Stream>();
        for (int streamOffset = 0; streamOffset < messageBodySize;
            streamOffset += SubMessageBodySize)
        {
            // Get the stream chunk from the large message
            long arraySize = (messageBodySize - streamOffset) > SubMessageBodySize
                ? SubMessageBodySize : messageBodySize - streamOffset;
            byte[] subMessageBytes = new byte[arraySize];
            int result = bodyStream.Read(subMessageBytes, 0, (int)arraySize);
            MemoryStream subMessageStream = new MemoryStream(subMessageBytes);

            // Create a new message
            BrokeredMessage subMessage = new BrokeredMessage(subMessageStream, true);
            subMessage.SessionId = sessionId;

            // Send the message
            m_QueueClient.Send(subMessage);
            Console.Write(".");
        }
        Console.WriteLine("Done!");
    }
}

The LargeMessageSender class is initialized with a QueueClient that is created by the sending application. When the large message is sent, the number of sub messages is calculated based on the size of the body of the large message. A unique session Id is created to allow the sub messages to be sent as a message session; this session Id will be used for correlation in the aggregator. A for loop is then used to create the sequence of sub messages by reading chunks of data from the stream of the large message. The sub messages are then sent to the queue using the QueueClient. As sessions are used to correlate the messages, the queue used for message exchange must be created with the RequiresSession property set to true.

Implementing the Aggregator
The aggregator will receive the sub messages in the message session that was created by the splitter and combine them to form a single, large message. The aggregator is implemented in the LargeMessageReceiver class, with a Receive method that returns a BrokeredMessage. The implementation of the class is shown below; console output has been added to provide details of the aggregation operation.

public class LargeMessageReceiver
{
    private QueueClient m_QueueClient;

    public LargeMessageReceiver(QueueClient queueClient)
    {
        m_QueueClient = queueClient;
    }

    public BrokeredMessage Receive()
    {
        // Create a memory stream to store the large message body.
        MemoryStream largeMessageStream = new MemoryStream();

        // Accept a message session from the queue.
        MessageSession session = m_QueueClient.AcceptMessageSession();
        Console.WriteLine("Message session Id: " + session.SessionId);
        Console.Write("Receiving sub messages");

        while (true)
        {
            // Receive a sub message
            BrokeredMessage subMessage = session.Receive(TimeSpan.FromSeconds(5));

            if (subMessage != null)
            {
                // Copy the sub message body to the large message stream.
                Stream subMessageStream = subMessage.GetBody<Stream>();
                subMessageStream.CopyTo(largeMessageStream);

                // Mark the message as complete.
                subMessage.Complete();
                Console.Write(".");
            }
            else
            {
                // The last message in the sequence is our completeness criteria.
                Console.WriteLine("Done!");
                break;
            }
        }

        // Create an aggregated message from the large message stream.
        BrokeredMessage largeMessage = new BrokeredMessage(largeMessageStream, true);
        return largeMessage;
    }
}

The LargeMessageReceiver is initialized with a QueueClient that is created by the receiving application. The Receive method creates a memory stream that will be used to aggregate the large message body.
The AcceptMessageSession method on the QueueClient is then called, which will wait for the first message in a message session to become available on the queue. As AcceptMessageSession can throw a timeout exception if no message is available on the queue after 60 seconds, a real-world implementation should handle this accordingly. Once the message session has been accepted, the sub messages in the session are received and their message body streams are copied to the memory stream. Once all the messages have been received, the memory stream is used to create a large message that is then returned to the receiving application.

Testing the Implementation
The splitter and aggregator are tested by creating a message sender and a message receiver application. The payload for the large message will be one of the webcast video files from http://www.cloudcasts.net/; the file size is 9,697 KB, well over the 256 KB threshold imposed by the Service Bus. As the splitter and aggregator are implemented in a separate class library, the code used in the sender and receiver console applications is fairly basic. The implementation of the main method of the sending application is shown below.

static void Main(string[] args)
{
    // Create a token provider with the relevant credentials.
    TokenProvider credentials =
        TokenProvider.CreateSharedSecretTokenProvider
        (AccountDetails.Name, AccountDetails.Key);

    // Create a URI for the service bus.
    Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri
        ("sb", AccountDetails.Namespace, string.Empty);

    // Create the MessagingFactory
    MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);

    // Use the MessagingFactory to create a queue client
    QueueClient queueClient = factory.CreateQueueClient(AccountDetails.QueueName);

    // Open the input file.
    FileStream fileStream = new FileStream(AccountDetails.TestFile, FileMode.Open);

    // Create a BrokeredMessage for the file.
    BrokeredMessage largeMessage = new BrokeredMessage(fileStream, true);

    Console.WriteLine("Sending: " + AccountDetails.TestFile);
    Console.WriteLine("Message body size: " + largeMessage.Size);
    Console.WriteLine();

    // Send the message with a LargeMessageSender
    LargeMessageSender sender = new LargeMessageSender(queueClient);
    sender.Send(largeMessage);

    // Close the messaging factory.
    factory.Close();
}

The implementation of the main method of the receiving application is shown below.

static void Main(string[] args)
{
    // Create a token provider with the relevant credentials.
    TokenProvider credentials =
        TokenProvider.CreateSharedSecretTokenProvider
        (AccountDetails.Name, AccountDetails.Key);

    // Create a URI for the service bus.
    Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri
        ("sb", AccountDetails.Namespace, string.Empty);

    // Create the MessagingFactory
    MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);

    // Use the MessagingFactory to create a queue client
    QueueClient queueClient = factory.CreateQueueClient(AccountDetails.QueueName);

    // Create a LargeMessageReceiver and receive the message.
    LargeMessageReceiver receiver = new LargeMessageReceiver(queueClient);
    BrokeredMessage largeMessage = receiver.Receive();

    Console.WriteLine("Received message");
    Console.WriteLine("Message body size: " + largeMessage.Size);

    string testFile = AccountDetails.TestFile.Replace(@"\In\", @"\Out\");
    Console.WriteLine("Saving file: " + testFile);

    // Save the message body as a file.
    Stream largeMessageStream = largeMessage.GetBody<Stream>();
    largeMessageStream.Seek(0, SeekOrigin.Begin);
    FileStream fileOut = new FileStream(testFile, FileMode.Create);
    largeMessageStream.CopyTo(fileOut);
    fileOut.Close();

    Console.WriteLine("Done!");
}

In order to test the application, the sending application is executed, which will use the LargeMessageSender class to split the message and place it on the queue. The sender console reports that the body size of the large message was 9,929,365 bytes and that the message was sent as a sequence of 51 sub messages (9,929,365 bytes divided into 196,608-byte chunks is just over 50, so 51 sub messages are required). When the receiving application is executed, the receiver console shows that the aggregator has received the 51 messages from the message sequence that was created in the sending application. The messages have been aggregated to form a message with a body of 9,929,365 bytes, which is the same as the original large message. The message body is then saved as a file.

Improvements to the Implementation
The splitter and aggregator patterns in this implementation were created in order to show the usage of the patterns in a demo, which they do quite well. When implementing these patterns in a real-world scenario there are a number of improvements that could be made to the design.

Copying Message Header Properties
When sending a large message using these classes, it would be great if the message header properties of the received message were copied from the message that was sent. The sending application may well add information to the message context that will be required in the receiving application. When the sub messages are created in the splitter, the header properties in the first sub message could be set to the values in the original large message. The aggregator could then use the values from this first sub message to set the properties in the message header of the large message during the aggregation process; a sketch of this idea appears at the end of this article.

Using Asynchronous Methods
The current implementation uses the synchronous send and receive methods of the QueueClient class. It would be much more performant to use the asynchronous methods; however, doing so may well affect the sequence in which the sub messages are enqueued, which would require the implementation of a resequencer in the aggregator to restore the correct message sequence.

Handling Exceptions
In order to keep the code readable, no exception handling was added to the implementations. In a real-world scenario exceptions should be handled accordingly.
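As a closing illustration of the header-copying improvement described above, here is a minimal sketch — not part of the original sample — built on the standard BrokeredMessage.Properties dictionary:

// Sketch: copy the user-defined header properties from one message to
// another. The splitter could call this when creating the first sub
// message, and the aggregator could call it to stamp the reassembled
// large message with the same properties.
static void CopyHeaderProperties(BrokeredMessage source, BrokeredMessage target)
{
    foreach (KeyValuePair<string, object> property in source.Properties)
    {
        target.Properties[property.Key] = property.Value;
    }
}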

    Read the article

< Previous Page | 266 267 268 269 270 271 272 273 274 275 276 277  | Next Page >