Search Results

Search found 17859 results on 715 pages for 'static arrays'.

  • Thread.Interrupt Is Evil

    - by Alois Kraus
    Recently I found an interesting issue with Thread.Interrupt during application shutdown. An application was crashing about once a week and we had no real clue what the issue was. Since it did not happen very often it was left as is until we got some memory dumps during the crash. A memory dump usually means WinDbg, which I really like to use (I know I am one of its very few fans). After a quick analysis I found that the main thread had already exited and the thread with the crash was stuck in a Monitor.Wait. Strange indeed. Running the application a few thousand times under the debugger would probably not have shown me the reason, so I decided to do what I call constructive debugging: I created a simple console application project and tried to simulate the exact circumstances of the crash from the information I had via the memory dump and source code reading. The thread that was crashing was actually MS code from an old version of the Microsoft Caching Application Block. From reading the code I could conclude that the main thread called the Dispose method on the CacheManager class, which called Thread.Interrupt on the cache scavenger thread that was just waiting for work to do. My first version of the repro looked like this:

    static void Main(string[] args)
    {
        Thread t = new Thread(ThreadFunc) { IsBackground = true, Name = "Test Thread" };
        t.Start();
        Console.WriteLine("Interrupt Thread");
        t.Interrupt();
    }

    static void ThreadFunc()
    {
        while (true)
        {
            object value = Dequeue(); // block until unblocked or awakened via ThreadInterruptedException
        }
    }

    static object WaitObject = new object();

    static object Dequeue()
    {
        object lret = "got value";
        try
        {
            lock (WaitObject)
            {
            }
        }
        catch (ThreadInterruptedException)
        {
            Console.WriteLine("Got ThreadInterruptException");
            lret = null;
        }
        return lret;
    }

    I start a background thread, call Thread.Interrupt on it and then let the application terminate right away. The thread in the meantime does plenty of Monitor.Enter/Leave calls to simulate work on it. This first version did not crash, so I needed to dig deeper. From the memory dump I knew that the finalizer thread was running just some critical finalizers which were closing file handles. OK, let's add some long-running finalizers to the sample:

    class FinalizableObject : CriticalFinalizerObject
    {
        ~FinalizableObject()
        {
            Console.WriteLine("Hi we are waiting to finalize now and block the finalizer thread for 5s.");
            Thread.Sleep(5000);
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            FinalizableObject fin = new FinalizableObject();
            Thread t = new Thread(ThreadFunc) { IsBackground = true, Name = "Test Thread" };
            t.Start();
            Console.WriteLine("Interrupt Thread");
            t.Interrupt();
            GC.KeepAlive(fin); // prevent finalizing it too early
            // After leaving Main the other thread is woken up via Thread.Abort
            // while we are finalizing. This causes a stack overflow in the CLR ThreadAbortException handling at this time.
        }
    }

    With this changed Main method and a blocking critical finalizer I got my crash just like the real application. The funny thing is that this is actually a CLR bug. When the main method is left, the CLR suspends all threads except the finalizer thread and declares all objects as garbage. After the normal finalizers have been called, the critical finalizers are executed to e.g. free OS handles (usually). Remember that I called Thread.Interrupt as one of the last methods in the Main method.
The Interrupt method is actually asynchronous: it wakes a thread up and throws a ThreadInterruptedException only once, unlike Thread.Abort, which rethrows the exception when an exception handling clause is left. It seems the CLR does not expect a frozen thread to wake up again while the critical finalizers are executed. While trying to raise a ThreadInterruptedException the CLR goes down with a stack overflow. Oops, not so nice. Why nobody has noticed this for years is my next question. As it turned out, this error only happens on the CLR for .NET 4.0 (x86 and x64). It does not show up in earlier or later versions of the CLR. I have reported this issue on Connect here, but so far it has not been confirmed as a CLR bug. But I would be surprised if my console application were to blame for a stack overflow in my test thread in a Monitor.Wait call. What is the moral of this story? Thread.Abort is evil, but Thread.Interrupt is too. It is so evil that even the CLR of .NET 4.0 contains a race condition during CLR shutdown. When the CLR gurus can get it wrong, the chances are high that you will get it wrong too when you use these constructs. If you do not believe me, see what Patrick Smacchia blogs about Thread.Abort and List.Sort. Not only the CLR creators can get it wrong; the BCL writers sometimes have a hard time with correct exception handling as well. If you tell me that you use Thread.Abort frequently and have never had problems with it, I suspect that you have not looked deeply enough into your application to find such sporadic errors.
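    As an aside, a common alternative to waking a blocked worker with Thread.Interrupt is to make the shutdown cooperative. The following is only a minimal sketch of that idea - it is not from the original post and the class name is made up: the worker waits on an event with a timeout, and Dispose signals the event and joins the thread, so nothing has to be interrupted during shutdown.

    using System;
    using System.Threading;

    sealed class Scavenger : IDisposable
    {
        readonly ManualResetEvent _stop = new ManualResetEvent(false);
        readonly Thread _worker;

        public Scavenger()
        {
            _worker = new Thread(Run) { IsBackground = true, Name = "Scavenger" };
            _worker.Start();
        }

        void Run()
        {
            // Wake up once per second to do work, or return as soon as _stop is signaled.
            while (!_stop.WaitOne(TimeSpan.FromSeconds(1)))
            {
                Console.WriteLine("scavenging...");
            }
        }

        public void Dispose()
        {
            _stop.Set();    // ask the worker to finish its loop
            _worker.Join(); // deterministic: the thread is gone before Dispose returns
            _stop.Close();
        }
    }

    Disposing the instance before Main returns means no thread has to be woken up while the CLR is already shutting down.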

    Read the article

  • Inside BackgroundWorker

    - by João Angelo
    The BackgroundWorker is a reusable component that can be used in different contexts, but sometimes with unexpected results. If you are like me, you have mostly used background workers while doing Windows Forms development due to the flexibility they offer for running a background task. They support cancellation and give events that signal progress updates and task completion. When used in Windows Forms, these events (ProgressChanged and RunWorkerCompleted) get executed back on the UI thread where you can freely access your form controls. However, the logic of the progress changed and worker completed events being invoked in the thread that started the background worker is not something you get directly from the BackgroundWorker, but instead from the fact that you are running in the context of Windows Forms. Take the following example that illustrates the use of a worker in three different scenarios: – Console Application or Windows Service; – Windows Forms; – WPF. using System; using System.ComponentModel; using System.Threading; using System.Windows.Forms; using System.Windows.Threading; class Program { static AutoResetEvent Synch = new AutoResetEvent(false); static void Main() { var bw1 = new BackgroundWorker(); var bw2 = new BackgroundWorker(); var bw3 = new BackgroundWorker(); Console.WriteLine("DEFAULT"); var unspecializedThread = new Thread(() => { OutputCaller(1); SynchronizationContext.SetSynchronizationContext( new SynchronizationContext()); bw1.DoWork += (sender, e) => OutputWork(1); bw1.RunWorkerCompleted += (sender, e) => OutputCompleted(1); // Uses default SynchronizationContext bw1.RunWorkerAsync(); }); unspecializedThread.IsBackground = true; unspecializedThread.Start(); Synch.WaitOne(); Console.WriteLine(); Console.WriteLine("WINDOWS FORMS"); var windowsFormsThread = new Thread(() => { OutputCaller(2); SynchronizationContext.SetSynchronizationContext( new WindowsFormsSynchronizationContext()); bw2.DoWork += (sender, e) => OutputWork(2); bw2.RunWorkerCompleted += (sender, e) => OutputCompleted(2); // Uses WindowsFormsSynchronizationContext bw2.RunWorkerAsync(); Application.Run(); }); windowsFormsThread.IsBackground = true; windowsFormsThread.SetApartmentState(ApartmentState.STA); windowsFormsThread.Start(); Synch.WaitOne(); Console.WriteLine(); Console.WriteLine("WPF"); var wpfThread = new Thread(() => { OutputCaller(3); SynchronizationContext.SetSynchronizationContext( new DispatcherSynchronizationContext()); bw3.DoWork += (sender, e) => OutputWork(3); bw3.RunWorkerCompleted += (sender, e) => OutputCompleted(3); // Uses DispatcherSynchronizationContext bw3.RunWorkerAsync(); Dispatcher.Run(); }); wpfThread.IsBackground = true; wpfThread.SetApartmentState(ApartmentState.STA); wpfThread.Start(); Synch.WaitOne(); } static void OutputCaller(int workerId) { Console.WriteLine( "bw{0}.{1} | Thread: {2} | IsThreadPool: {3}", workerId, "RunWorkerAsync".PadRight(18), Thread.CurrentThread.ManagedThreadId, Thread.CurrentThread.IsThreadPoolThread); } static void OutputWork(int workerId) { Console.WriteLine( "bw{0}.{1} | Thread: {2} | IsThreadPool: {3}", workerId, "DoWork".PadRight(18), Thread.CurrentThread.ManagedThreadId, Thread.CurrentThread.IsThreadPoolThread); } static void OutputCompleted(int workerId) { Console.WriteLine( "bw{0}.{1} | Thread: {2} | IsThreadPool: {3}", workerId, "RunWorkerCompleted".PadRight(18), Thread.CurrentThread.ManagedThreadId, Thread.CurrentThread.IsThreadPoolThread); Synch.Set(); } } Output: //DEFAULT //bw1.RunWorkerAsync | Thread: 3 | IsThreadPool: 
False
//bw1.DoWork | Thread: 4 | IsThreadPool: True
//bw1.RunWorkerCompleted | Thread: 5 | IsThreadPool: True
//WINDOWS FORMS
//bw2.RunWorkerAsync | Thread: 6 | IsThreadPool: False
//bw2.DoWork | Thread: 5 | IsThreadPool: True
//bw2.RunWorkerCompleted | Thread: 6 | IsThreadPool: False
//WPF
//bw3.RunWorkerAsync | Thread: 7 | IsThreadPool: False
//bw3.DoWork | Thread: 5 | IsThreadPool: True
//bw3.RunWorkerCompleted | Thread: 7 | IsThreadPool: False

As you can see, the output of the first scenario differs from the other two. While in Windows Forms and WPF the worker completed event runs on the thread that called RunWorkerAsync, in the first scenario the same event runs on whatever thread the thread pool makes available. Another situation where you get the first behavior, even in Windows Forms or WPF, is if you chain the creation of background workers, that is, you create a second worker in the DoWork event handler of an already running worker. Since DoWork executes on a thread-pool thread, the second worker will use the default synchronization context and its completed event will not run on the UI thread. A minimal sketch of one way around this is shown below.
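    The sketch below is not from the original post - the method and variable names are illustrative - and it assumes it is called on the UI thread so that SynchronizationContext.Current is the Windows Forms (or WPF) context. The nested worker's completion is posted back to the captured context explicitly instead of relying on the default context of the thread-pool thread running DoWork.

    using System;
    using System.ComponentModel;
    using System.Threading;

    static class NestedWorkerSketch
    {
        // Call this from the UI thread (e.g. a button click handler).
        public static void StartNestedWorkers()
        {
            // Capture the UI context while we are still on the UI thread.
            SynchronizationContext uiContext = SynchronizationContext.Current;

            var outer = new BackgroundWorker();
            outer.DoWork += (s, e) =>
            {
                // We are now on a thread-pool thread, so a worker created here
                // would complete on the default SynchronizationContext.
                var inner = new BackgroundWorker();
                inner.DoWork += (s2, e2) => Thread.Sleep(500); // simulated background work
                inner.RunWorkerCompleted += (s2, e2) =>
                    uiContext.Post(_ =>
                    {
                        // Safe to touch UI controls inside this callback.
                    }, null);
                inner.RunWorkerAsync();
            };
            outer.RunWorkerAsync();
        }
    }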

    Read the article

  • splitting code in java - don't know what's wrong [closed]

    - by ???? ?????
    I'm writing a code to split a file into many files with a size specified in the code, and then it will join these parts later. The problem is with the joining code, it doesn't work and I can't figure what is wrong! This is my code: import java.io.*; import java.util.*; public class StupidSplit { static final int Chunk_Size = 10; static int size =0; public static void main(String[] args) throws IOException { String file = "b.txt"; int chunks = DivideFile(file); System.out.print((new File(file)).delete()); System.out.print(JoinFile(file, chunks)); } static boolean JoinFile(String fname, int nChunks) { /* * Joins the chunks together. Chunks have been divided using DivideFile * function so the last part of filename will ".partxxxx" Checks if all * parts are together by matching number of chunks found against * "nChunks", then joins the file otherwise throws an error. */ boolean successful = false; File currentDirectory = new File(System.getProperty("user.dir")); // File[] fileList = currentDirectory.listFiles(); /* populate only the files having extension like "partxxxx" */ List<File> lst = new ArrayList<File>(); // Arrays.sort(fileList); for (File file : fileList) { if (file.isFile()) { String fnm = file.getName(); int lastDot = fnm.lastIndexOf('.'); // add to list which match the name given by "fname" and have //"partxxxx" as extension" if (fnm.substring(0, lastDot).equalsIgnoreCase(fname) && (fnm.substring(lastDot + 1)).substring(0, 4).equals("part")) { lst.add(file); } } } /* * sort the list - it will be sorted by extension only because we have * ensured that list only contains those files that have "fname" and * "part" */ File[] files = (File[]) lst.toArray(new File[0]); Arrays.sort(files); System.out.println("size ="+files.length); System.out.println("hello"); /* Ensure that number of chunks match the length of array */ if (files.length == nChunks-1) { File ofile = new File(fname); FileOutputStream fos; FileInputStream fis; byte[] fileBytes; int bytesRead = 0; try { fos = new FileOutputStream(ofile,true); for (File file : files) { fis = new FileInputStream(file); fileBytes = new byte[(int) file.length()]; bytesRead = fis.read(fileBytes, 0, (int) file.length()); assert(bytesRead == fileBytes.length); assert(bytesRead == (int) file.length()); fos.write(fileBytes); fos.flush(); fileBytes = null; fis.close(); fis = null; } fos.close(); fos = null; } catch (FileNotFoundException fnfe) { System.out.println("Could not find file"); successful = false; return successful; } catch (IOException ioe) { System.out.println("Cannot write to disk"); successful = false; return successful; } /* ensure size of file matches the size given by server */ successful = (ofile.length() == StupidSplit.size) ? 
true : false; } else { successful = false; } return successful; } static int DivideFile(String fname) { File ifile = new File(fname); FileInputStream fis; String newName; FileOutputStream chunk; //int fileSize = (int) ifile.length(); double fileSize = (double) ifile.length(); //int nChunks = 0, read = 0, readLength = Chunk_Size; int nChunks = 0, read = 0, readLength = Chunk_Size; byte[] byteChunk; try { fis = new FileInputStream(ifile); StupidSplit.size = (int)ifile.length(); while (fileSize > 0) { if (fileSize <= Chunk_Size) { readLength = (int) fileSize; } byteChunk = new byte[readLength]; read = fis.read(byteChunk, 0, readLength); fileSize -= read; assert(read==byteChunk.length); nChunks++; //newName = fname + ".part" + Integer.toString(nChunks - 1); newName = String.format("%s.part%09d", fname, nChunks - 1); chunk = new FileOutputStream(new File(newName)); chunk.write(byteChunk); chunk.flush(); chunk.close(); byteChunk = null; chunk = null; } fis.close(); System.out.println(nChunks); // fis = null; } catch (FileNotFoundException fnfe) { System.out.println("Could not find the given file"); System.exit(-1); } catch (IOException ioe) { System.out .println("Error while creating file chunks. Exiting program"); System.exit(-1); }System.out.println(nChunks); return nChunks; } } }

    Read the article

  • Project Euler #15

    - by Aistina
    Hey everyone, Last night I was trying to solve challenge #15 from Project Euler: Starting in the top left corner of a 2×2 grid, there are 6 routes (without backtracking) to the bottom right corner. How many routes are there through a 20×20 grid? I figured this shouldn't be so hard, so I wrote a basic recursive function: const int gridSize = 20; // call with progress(0, 0) static int progress(int x, int y) { int i = 0; if (x < gridSize) i += progress(x + 1, y); if (y < gridSize) i += progress(x, y + 1); if (x == gridSize && y == gridSize) return 1; return i; } I verified that it worked for a smaller grids such as 2×2 or 3×3, and then set it to run for a 20×20 grid. Imagine my surprise when, 5 hours later, the program was still happily crunching the numbers, and only about 80% done (based on examining its current position/route in the grid). Clearly I'm going about this the wrong way. How would you solve this problem? I'm thinking it should be solved using an equation rather than a method like mine, but that's unfortunately not a strong side of mine. Update: I now have a working version. Basically it caches results obtained before when a n×m block still remains to be traversed. Here is the code along with some comments: // the size of our grid static int gridSize = 20; // the amount of paths available for a "NxM" block, e.g. "2x2" => 4 static Dictionary<string, long> pathsByBlock = new Dictionary<string, long>(); // calculate the surface of the block to the finish line static long calcsurface(long x, long y) { return (gridSize - x) * (gridSize - y); } // call using progress (0, 0) static long progress(long x, long y) { // first calculate the surface of the block remaining long surface = calcsurface(x, y); long i = 0; // zero surface means only 1 path remains // (we either go only right, or only down) if (surface == 0) return 1; // create a textual representation of the remaining // block, for use in the dictionary string block = (gridSize - x) + "x" + (gridSize - y); // if a same block has not been processed before if (!pathsByBlock.ContainsKey(block)) { // calculate it in the right direction if (x < gridSize) i += progress(x + 1, y); // and in the down direction if (y < gridSize) i += progress(x, y + 1); // and cache the result! 
pathsByBlock[block] = i; } // self-explanatory :) return pathsByBlock[block]; } Calling it 20 times, for grids with size 1×1 through 20×20 produces the following output: There are 2 paths in a 1 sized grid 0,0110006 seconds There are 6 paths in a 2 sized grid 0,0030002 seconds There are 20 paths in a 3 sized grid 0 seconds There are 70 paths in a 4 sized grid 0 seconds There are 252 paths in a 5 sized grid 0 seconds There are 924 paths in a 6 sized grid 0 seconds There are 3432 paths in a 7 sized grid 0 seconds There are 12870 paths in a 8 sized grid 0,001 seconds There are 48620 paths in a 9 sized grid 0,0010001 seconds There are 184756 paths in a 10 sized grid 0,001 seconds There are 705432 paths in a 11 sized grid 0 seconds There are 2704156 paths in a 12 sized grid 0 seconds There are 10400600 paths in a 13 sized grid 0,001 seconds There are 40116600 paths in a 14 sized grid 0 seconds There are 155117520 paths in a 15 sized grid 0 seconds There are 601080390 paths in a 16 sized grid 0,0010001 seconds There are 2333606220 paths in a 17 sized grid 0,001 seconds There are 9075135300 paths in a 18 sized grid 0,001 seconds There are 35345263800 paths in a 19 sized grid 0,001 seconds There are 137846528820 paths in a 20 sized grid 0,0010001 seconds 0,0390022 seconds in total I'm accepting danben's answer, because his helped me find this solution the most. But upvotes also to Tim Goodman and Agos :) Bonus update: After reading Eric Lippert's answer, I took another look and rewrote it somewhat. The basic idea is still the same but the caching part has been taken out and put in a separate function, like in Eric's example. The result is some much more elegant looking code. // the size of our grid const int gridSize = 20; // magic. static Func<A1, A2, R> Memoize<A1, A2, R>(this Func<A1, A2, R> f) { // Return a function which is f with caching. var dictionary = new Dictionary<string, R>(); return (A1 a1, A2 a2) => { R r; string key = a1 + "x" + a2; if (!dictionary.TryGetValue(key, out r)) { // not in cache yet r = f(a1, a2); dictionary.Add(key, r); } return r; }; } // calculate the surface of the block to the finish line static long calcsurface(long x, long y) { return (gridSize - x) * (gridSize - y); } // call using progress (0, 0) static Func<long, long, long> progress = ((Func<long, long, long>)((long x, long y) => { // first calculate the surface of the block remaining long surface = calcsurface(x, y); long i = 0; // zero surface means only 1 path remains // (we either go only right, or only down) if (surface == 0) return 1; // calculate it in the right direction if (x < gridSize) i += progress(x + 1, y); // and in the down direction if (y < gridSize) i += progress(x, y + 1); // self-explanatory :) return i; })).Memoize(); By the way, I couldn't think of a better way to use the two arguments as a key for the dictionary. I googled around a bit, and it seems this is a common solution. Oh well.
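    Since the question ends up looking for an equation rather than a search, it is worth noting the closed form: the number of routes through an n×n grid is the central binomial coefficient C(2n, n), because a route is just a choice of which n of the 2n moves go right (the rest go down). The snippet below is a small sketch of that formula - it is not from the original post - and for n = 20 it prints 137846528820, matching the memoized version above.

    using System;

    static class GridRoutes
    {
        // Routes through an n x n grid = C(2n, n).
        static long Routes(int n)
        {
            long result = 1;
            for (int i = 1; i <= n; i++)
            {
                // Multiply before dividing; each intermediate value is C(n + i, i),
                // which is always a whole number, so integer division is exact.
                result = result * (n + i) / i;
            }
            return result;
        }

        static void Main()
        {
            Console.WriteLine(Routes(20)); // 137846528820
        }
    }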

    Read the article

  • How to display a JSON error message?

    - by Tiny Giant Studios
    I'm currently developing a tumblr theme and have built a jQuery JSON thingamabob that uses the Tumblr API to do the following: The user would click on the "post type" link (e.g. Video Posts), at which stage jQuery would use JSON to grab all the posts that's related to that type and then dynamically display them in a designated area. Now everything works absolutely peachy, except that with Tumblr being Tumblr and their servers taking a knock every now and then, the Tumblr API thingy is sometimes offline. Now I can't foresee when this function will be down, which is why I want to display some generic error message if JSON (for whatever reason) was unable to load the post. You'll see I've already written some code to show an error message when jQuery can't find any posts related to that post type BUT it doesn't cover any server errors. Note: I sometimes get this error: Failed to load resource: the server responded with a status of 503 (Service Temporarily Unavailable) It is for this 503 Error message that I need to write some code, but I'm slightly clueless :) Here's the jQuery JSON code: $('ul.right li').find('a').click(function() { var postType = this.className; var count = 0; byCategory(postType); return false; function byCategory(postType, callback) { $.getJSON('{URL}/api/read/json?type=' + postType + '&callback=?', function(data) { var article = []; $.each(data.posts, function(i, item) { // i = index // item = data for a particular post switch(item.type) { case 'photo': article[i] = '<div class="post_wrap"><div class="photo" style="padding-bottom:5px;">' + '<a href="' + item.url + '" title="{Title}" class="type_icon"><img src="http://static.tumblr.com/ewjv7ap/XSTldh6ds/photo_icon.png" alt="type_icon"/></a>' + '<a href="' + item.url + '" title="{Title}"><img src="' + item['photo-url-500'] + '"alt="image" /></a></div></div>'; count = 1; break; case 'video': article[i] = '<div class="post_wrap"><div class="video" style="padding-bottom:5px;">' + '<a href="' + item.url + '" title="{Title}" class="type_icon">' + '<img src="http://static.tumblr.com/ewjv7ap/nuSldhclv/video_icon.png" alt="type_icon"/></a>' + '<span style="margin: auto;">' + item['video-player'] + '</span>' + '</div></div>'; count = 1; break; case 'audio': if (use_IE == true) { article[i] = '<div class="post_wrap"><div class="regular">' + '<a href="' + item.url + '" title="{Title}" class="type_icon"><img src="http://static.tumblr.com/ewjv7ap/R50ldh5uj/audio_icon.png" alt="type_icon"/></a>' + '<h3><a href="' + item.url + '">' + item['id3-artist'] +' - ' + item['id3-title'] + '</a></h3>' + '</div></div>'; } else { article[i] = '<div class="post_wrap"><div class="regular">' + '<a href="' + item.url + '" title="{Title}" class="type_icon"><img src="http://static.tumblr.com/ewjv7ap/R50ldh5uj/audio_icon.png" alt="type_icon"/></a>' + '<h3><a href="' + item.url + '">' + item['id3-artist'] +' - ' + item['id3-title'] + '</a></h3><div class="player">' + item['audio-player'] + '</div>' + '</div></div>'; }; count = 1; break; case 'regular': article[i] = '<div class="post_wrap"><div class="regular">' + '<a href="' + item.url + '" title="{Title}" class="type_icon"><img src="http://static.tumblr.com/ewjv7ap/dwxldhck1/regular_icon.png" alt="type_icon"/></a><h3><a href="' + item.url + '">' + item['regular-title'] + '</a></h3><div class="description_container">' + item['regular-body'] + '</div></div></div>'; count = 1; break; case 'quote': article[i] = '<div class="post_wrap"><div class="quote">' + '<a href="' + item.url + '" title="{Title}" 
class="type_icon"><img src="http://static.tumblr.com/ewjv7ap/loEldhcpr/quote_icon.png" alt="type_icon"/></a><blockquote><h3><a href="' + item.url + '" title="{Title}">' + item['quote-text'] + '</a></h3></blockquote><cite>- ' + item['quote-source'] + '</cite></div></div>'; count = 1; break; case 'conversation': article[i] = '<div class="post_wrap"><div class="chat">' + '<a href="' + item.url + '" title="{Title}" class="type_icon"><img src="http://static.tumblr.com/ewjv7ap/MVuldhcth/conversation_icon.png" alt="type_icon"/></a><h3><a href="' + item.url + '">' + item['conversation-title'] + '</a></h3></div></div>'; count = 1; break; case 'link': article[i] = '<div class="post_wrap"><div class="link">' + '<a href="' + item.url + '" title="{Title}" class="type_icon"><img src="http://static.tumblr.com/ewjv7ap/EQGldhc30/link_icon.png" alt="type_icon"/></a><h3><a href="' + item['link-url'] + '" target="_blank">' + item['link-text'] + '</a></h3></div></div>'; count = 1; break; default: alert('No Entries Found.'); }; }) // end each if (!(count == 0)) { $('#content_right') .hide('fast') .html('<div class="first_div"><span class="left_corner"></span><span class="right_corner"></span><h2>Displaying ' + postType + ' Posts Only</h2></div>' + article.join('')) .slideDown('fast') } else { $('#content_right') .hide('fast') .html('<div class="first_div"><span class="left_corner"></span><span class="right_corner"></span><h2>Hmmm, currently there are no ' + postType + ' posts to display</h2></div>') .slideDown('fast') } // end getJSON }); // end byCategory } }); If you'd like to see the demo in action, check out Elegantem but do note that everything might work absolutely fine for you (or not), depending on Tumblr's temperament.

    Read the article

  • Maze not generating properly. Out of bounds exception. Need a quick fix

    - by Dan Joseph Porcioncula
    My maze generator seems to have a problem. I am trying to generate something like the maze from http://mazeworks.com/mazegen/mazetut/index.htm . My program displays this http://a1.sphotos.ak.fbcdn.net/hphotos-ak-snc7/s320x320/374060_426350204045347_100000111130260_1880768_1572427285_n.jpg and the error Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: -1 at Grid.genRand(Grid.java:73) at Grid.main(Grid.java:35) How do I fix my generator program? import java.awt.*; import java.awt.Color; import java.awt.Component; import java.awt.Graphics; import javax.swing.*; import java.util.ArrayList; public class Grid extends Canvas { Cell[][] maze; int size; int pathSize; double width, height; ArrayList<int[]> coordinates = new ArrayList<int[]>(); public Grid(int size, int h, int w) { this.size = size; maze = new Cell[size][size]; for(int i = 0; i<size; i++){ for(int a =0; a<size; a++){ maze[i][a] = new Cell(); } } setPreferredSize(new Dimension(h, w)); } public static void main(String[] args) { JFrame y = new JFrame(); y.setLayout(new BorderLayout()); Grid f = new Grid(25, 400, 400); y.add(f, BorderLayout.CENTER); y.setSize(450, 450); y.setVisible(true); y.setDefaultCloseOperation(y.EXIT_ON_CLOSE); f.genRand(); f.repaint(); } public void push(int[] xy) { coordinates.add(xy); int i = coordinates.size(); coordinates.ensureCapacity(i++); } public int[] pop() { int[] x = coordinates.get((coordinates.size())-1); coordinates.remove((coordinates.size())-1); return x; } public int[] top() { return coordinates.get((coordinates.size())-1); } public void genRand(){ // create a CellStack (LIFO) to hold a list of cell locations [x] // set TotalCells = number of cells in grid int TotalCells = size*size; // choose a cell at random and call it CurrentCell int m = randomInt(size); int n = randomInt(size); Cell curCel = maze[m][n]; // set VisitedCells = 1 int visCel = 1,d=0; int[] q; int h,o = 0,p = 0; // while VisitedCells < TotalCells while( visCel < TotalCells){ // find all neighbors of CurrentCell with all walls intact if(maze[m-1][n].countWalls() == 4){d++;} if(maze[m+1][n].countWalls() == 4){d++;} if(maze[m][n-1].countWalls() == 4){d++;} if(maze[m][n+1].countWalls() == 4){d++;} // if one or more found if(d!=0){ Point[] ls = new Point[4]; ls[0] = new Point(m-1,n); ls[1] = new Point(m+1,n); ls[2] = new Point(m,n-1); ls[3] = new Point(m,n+1); // knock down the wall between it and CurrentCell h = randomInt(3); switch(h){ case 0: o = (int)(ls[0].getX()); p = (int)(ls[0].getY()); curCel.destroyWall(2); maze[o][p].destroyWall(1); break; case 1: o = (int)(ls[1].getX()); p = (int)(ls[1].getY()); curCel.destroyWall(1); maze[o][p].destroyWall(2); break; case 2: o = (int)(ls[2].getX()); p = (int)(ls[2].getY()); curCel.destroyWall(3); maze[o][p].destroyWall(0); break; case 3: o = (int)(ls[3].getX()); p = (int)(ls[3].getY()); curCel.destroyWall(0); maze[o][p].destroyWall(3); break; } // push CurrentCell location on the CellStack push(new int[] {m,n}); // make the new cell CurrentCell m = o; n = p; curCel = maze[m][n]; // add 1 to VisitedCells visCel++; } // else else{ // pop the most recent cell entry off the CellStack q = pop(); m = q[0]; n = q[1]; curCel = maze[m][n]; // make it CurrentCell // endIf } // endWhile } } public int randomInt(int s) { return (int)(s* Math.random());} public void paint(Graphics g) { int k, j; width = getSize().width; height = getSize().height; double htOfRow = height / (size); double wdOfRow = width / (size); //checks verticals - destroys east border of cell for (k = 0; k < 
size; k++) { for (j = 0; j < size; j++) { if(maze[k][j].checkWall(2)){ g.drawLine((int) (k * wdOfRow), (int) (j * htOfRow), (int) (k * wdOfRow), (int) ((j+1) * htOfRow)); }} } //checks horizontal - destroys north border of cell for (k = 0; k < size; k++) { for (j = 0; j < size; j++) { if(maze[k][j].checkWall(3)){ g.drawLine((int) (k * wdOfRow), (int) (j * htOfRow), (int) ((k+1) * wdOfRow), (int) (j * htOfRow)); }} } } } class Cell { private final static int NORTH = 0; private final static int EAST = 1; private final static int WEST = 2; private final static int SOUTH = 3; private final static int NO = 4; private final static int START = 1; private final static int END = 2; boolean[] wall = new boolean[4]; boolean[] border = new boolean[4]; boolean[] backtrack = new boolean[4]; boolean[] solution = new boolean[4]; private boolean isVisited = false; private int Key = 0; public Cell(){ for(int i=0;i<4;i++){wall[i] = true;} } public int countWalls(){ int i, k =0; for(i=0; i<4; i++) { if (wall[i] == true) {k++;} } return k;} public boolean checkWall(int x){ switch(x){ case 0: return wall[0]; case 1: return wall[1]; case 2: return wall[2]; case 3: return wall[3]; } return true; } public void destroyWall(int x){ switch(x){ case 0: wall[0] = false; break; case 1: wall[1] = false; break; case 2: wall[2] = false; break; case 3: wall[3] = false; break; } } public void setStart(int i){Key = i;} public int getKey(){return Key;} public boolean checkVisit(){return isVisited;} public void visitCell(){isVisited = true;} }

    Read the article

  • cannot access a site from Mac OSX Lion but can from other machines on network?

    - by house9
    SOLVED: The issue is with the hamachi client, hamachi is hi-jacking all of the 5.0.0.0/8 address block http://en.wikipedia.org/wiki/Hamachi_(software)#Criticism http://b.logme.in/2012/11/07/changes-to-hamachi-on-november-19th/ The fix on Mac LogMeIn Hamachi Preferences Settings Advanced Peer Connections IP protocol mode IPv6 only (default is both) If you can only connect to some of your network over IPv4 this 'fix' will NOT work for you ----- A few weeks ago I started using a service - https://semaphoreapp.com I think they made DNS changes a week ago and ever since I cannot access the site from my Mac OSX Lion (10.7.4) machine (my main development machine) but I can access the site from other machines on my network ipad windows machine MacMini (10.6.8) After some google searching I tried both of these dscacheutil -flushcache sudo killall -HUP mDNSResponder but no go, I've contacted semaphoreapp as well, but nothing so far - also of interest, one of my colleagues has the exact same problem, cannot access via Mac OSX Lion but can via windows machine, we work remotely and are not on the same ISP some additional info Lion (10.7.4) cannot access site host semaphoreapp.com semaphoreapp.com has address 5.9.53.16 ping semaphoreapp.com PING semaphoreapp.com (5.9.53.16): 56 data bytes Request timeout for icmp_seq 0 Request timeout for icmp_seq 1 Request timeout for icmp_seq 2 Request timeout for icmp_seq 3 ping: sendto: No route to host Request timeout for icmp_seq 4 ping: sendto: Host is down Request timeout for icmp_seq 5 ping: sendto: Host is down Request timeout for icmp_seq 6 ping: sendto: Host is down Request timeout for icmp_seq 7 .... traceroute semaphoreapp.com traceroute to semaphoreapp.com (5.9.53.16), 64 hops max, 52 byte packets 1 * * * 2 * * * traceroute: sendto: No route to host 3 traceroute: wrote semaphoreapp.com 52 chars, ret=-1 *traceroute: sendto: Host is down traceroute: wrote semaphoreapp.com 52 chars, ret=-1 .... and MacMini (10.6.8) can access it host semaphoreapp.com semaphoreapp.com has address 5.9.53.16 ping semaphoreapp.com PING semaphoreapp.com (5.9.53.16): 56 data bytes 64 bytes from 5.9.53.16: icmp_seq=0 ttl=44 time=191.458 ms 64 bytes from 5.9.53.16: icmp_seq=1 ttl=44 time=202.923 ms 64 bytes from 5.9.53.16: icmp_seq=2 ttl=44 time=180.746 ms 64 bytes from 5.9.53.16: icmp_seq=3 ttl=44 time=200.616 ms 64 bytes from 5.9.53.16: icmp_seq=4 ttl=44 time=178.818 ms .... traceroute semaphoreapp.com traceroute to semaphoreapp.com (5.9.53.16), 64 hops max, 52 byte packets 1 192.168.0.1 (192.168.0.1) 1.677 ms 1.446 ms 1.445 ms 2 * LOCAL ISP 11.957 ms * 3 etc... 10.704 ms 14.183 ms 9.341 ms 4 etc... 32.641 ms 12.147 ms 10.850 ms 5 etc.... 
44.205 ms 54.563 ms 36.243 ms 6 vlan139.car1.seattle1.level3.net (4.53.145.165) 50.136 ms 45.873 ms 30.396 ms 7 ae-32-52.ebr2.seattle1.level3.net (4.69.147.182) 31.926 ms 40.507 ms 49.993 ms 8 ae-2-2.ebr2.denver1.level3.net (4.69.132.54) 78.129 ms 59.674 ms 49.905 ms 9 ae-3-3.ebr1.chicago2.level3.net (4.69.132.62) 99.019 ms 82.008 ms 76.074 ms 10 ae-1-100.ebr2.chicago2.level3.net (4.69.132.114) 96.185 ms 75.658 ms 75.662 ms 11 ae-6-6.ebr2.washington12.level3.net (4.69.148.145) 104.322 ms 105.563 ms 118.480 ms 12 ae-5-5.ebr2.washington1.level3.net (4.69.143.221) 93.646 ms 99.423 ms 96.067 ms 13 ae-41-41.ebr2.paris1.level3.net (4.69.137.49) 177.744 ms ae-44-44.ebr2.paris1.level3.net (4.69.137.61) 199.363 ms 198.405 ms 14 ae-47-47.ebr1.frankfurt1.level3.net (4.69.143.141) 176.876 ms ae-45-45.ebr1.frankfurt1.level3.net (4.69.143.133) 170.994 ms ae-46-46.ebr1.frankfurt1.level3.net (4.69.143.137) 177.308 ms 15 ae-61-61.csw1.frankfurt1.level3.net (4.69.140.2) 176.769 ms ae-91-91.csw4.frankfurt1.level3.net (4.69.140.14) 178.676 ms 173.644 ms 16 ae-2-70.edge7.frankfurt1.level3.net (4.69.154.75) 180.407 ms ae-3-80.edge7.frankfurt1.level3.net (4.69.154.139) 174.861 ms 176.578 ms 17 as33891-net.edge7.frankfurt1.level3.net (195.16.162.94) 175.448 ms 185.658 ms 177.081 ms 18 hos-bb1.juniper4.rz16.hetzner.de (213.239.240.202) 188.700 ms 190.332 ms 188.196 ms 19 hos-tr4.ex3k14.rz16.hetzner.de (213.239.233.98) 199.632 ms hos-tr3.ex3k14.rz16.hetzner.de (213.239.233.66) 185.938 ms hos-tr2.ex3k14.rz16.hetzner.de (213.239.230.34) 182.378 ms 20 * * * 21 * * * 22 * * * any ideas? EDIT: adding tcpdump MacMini (which can connect) while running - ping semaphoreapp.com sudo tcpdump -v -i en0 dst semaphoreapp.com Password: tcpdump: listening on en0, link-type EN10MB (Ethernet), capture size 65535 bytes 17:33:03.337165 IP (tos 0x0, ttl 64, id 20153, offset 0, flags [none], proto ICMP (1), length 84, bad cksum 0 (->3129)!) 192.168.0.6 > static.16.53.9.5.clients.your-server.de: ICMP echo request, id 61918, seq 0, length 64 17:33:04.337279 IP (tos 0x0, ttl 64, id 26049, offset 0, flags [none], proto ICMP (1), length 84, bad cksum 0 (->1a21)!) 192.168.0.6 > static.16.53.9.5.clients.your-server.de: ICMP echo request, id 61918, seq 1, length 64 17:33:05.337425 IP (tos 0x0, ttl 64, id 47854, offset 0, flags [none], proto ICMP (1), length 84, bad cksum 0 (->c4f3)!) 192.168.0.6 > static.16.53.9.5.clients.your-server.de: ICMP echo request, id 61918, seq 2, length 64 17:33:06.337548 IP (tos 0x0, ttl 64, id 24772, offset 0, flags [none], proto ICMP (1), length 84, bad cksum 0 (->1f1e)!) 192.168.0.6 > static.16.53.9.5.clients.your-server.de: ICMP echo request, id 61918, seq 3, length 64 17:33:07.337670 IP (tos 0x0, ttl 64, id 8171, offset 0, flags [none], proto ICMP (1), length 84, bad cksum 0 (->5ff7)!) 192.168.0.6 > static.16.53.9.5.clients.your-server.de: ICMP echo request, id 61918, seq 4, length 64 17:33:08.337816 IP (tos 0x0, ttl 64, id 35810, offset 0, flags [none], proto ICMP (1), length 84, bad cksum 0 (->f3ff)!) 192.168.0.6 > static.16.53.9.5.clients.your-server.de: ICMP echo request, id 61918, seq 5, length 64 17:33:09.337948 IP (tos 0x0, ttl 64, id 31120, offset 0, flags [none], proto ICMP (1), length 84, bad cksum 0 (->652)!) 
192.168.0.6 > static.16.53.9.5.clients.your-server.de: ICMP echo request, id 61918, seq 6, length 64 ^C 7 packets captured 1047 packets received by filter 0 packets dropped by kernel OSX Lion (cannot connect) while running - ping semaphoreapp.com # wireless ~ $ sudo tcpdump -v -i en1 dst semaphoreapp.com Password: tcpdump: listening on en1, link-type EN10MB (Ethernet), capture size 65535 bytes ^C 0 packets captured 262 packets received by filter 0 packets dropped by kernel and # wired ~ $ sudo tcpdump -v -i en0 dst semaphoreapp.com tcpdump: listening on en0, link-type EN10MB (Ethernet), capture size 65535 bytes ^C 0 packets captured 219 packets received by filter 0 packets dropped by kernel above output after Request timeout for icmp_seq 25 or 30 times from ping. I don't know much about tcpdump, but to me it doesn't seem like the ping requests are leaving my machine?

    Read the article

  • Having trouble binding a ksoap object to an ArrayList in Android

    - by Maskau
    I'm working on an app that calls a web service, then the webservice returns an array list. My problem is I am having trouble getting the data into the ArrayList and then displaying in a ListView. Any ideas what I am doing wrong? I know for a fact the web service returns an ArrayList. Everything seems to be working fine, just no data in the ListView or the ArrayList.....Thanks in advance! EDIT: So I added more code to the catch block of run() and now it's returning "org.ksoap2.serialization.SoapObject".....no more no less....and I am even more confused now... package com.maskau; import java.util.ArrayList; import org.ksoap2.SoapEnvelope; import org.ksoap2.serialization.PropertyInfo; import org.ksoap2.serialization.SoapObject; import org.ksoap2.serialization.SoapSerializationEnvelope; import org.ksoap2.transport.AndroidHttpTransport; import android.app.*; import android.os.*; import android.widget.ArrayAdapter; import android.widget.Button; import android.widget.EditText; import android.widget.ListView; import android.widget.TextView; import android.view.View; import android.view.View.OnClickListener; public class Home extends Activity implements Runnable{ /** Called when the activity is first created. */ public static final String SOAP_ACTION = "http://bb.mcrcog.com/GetArtist"; public static final String METHOD_NAME = "GetArtist"; public static final String NAMESPACE = "http://bb.mcrcog.com"; public static final String URL = "http://bb.mcrcog.com/karaoke/service.asmx"; String wt; public static ProgressDialog pd; TextView text1; ListView lv; static EditText myEditText; static Button but; private ArrayList<String> Artist_Result = new ArrayList<String>(); @Override public void onCreate(Bundle icicle) { super.onCreate(icicle); setContentView(R.layout.main); myEditText = (EditText)findViewById(R.id.myEditText); text1 = (TextView)findViewById(R.id.text1); lv = (ListView)findViewById(R.id.lv); but = (Button)findViewById(R.id.but); but.setOnClickListener(new OnClickListener() { @Override public void onClick(View v) { wt = ("Searching for " + myEditText.getText().toString()); text1.setText(""); pd = ProgressDialog.show(Home.this, "Working...", wt , true, false); Thread thread = new Thread(Home.this); thread.start(); } } ); } public void run() { try { SoapObject request = new SoapObject(NAMESPACE, METHOD_NAME); PropertyInfo pi = new PropertyInfo(); pi.setName("ArtistQuery"); pi.setValue(Home.myEditText.getText().toString()); request.addProperty(pi); SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER11); envelope.dotNet = true; envelope.setOutputSoapObject(request); AndroidHttpTransport at = new AndroidHttpTransport(URL); at.call(SOAP_ACTION, envelope); java.util.Vector<Object> rs = (java.util.Vector<Object>)envelope.getResponse(); if (rs != null) { for (Object cs : rs) { Artist_Result.add(cs.toString()); } } } catch (Exception e) { // Added this line, throws "org.ksoap2.serialization.SoapObject" when run Artist_Result.add(e.getMessage()); } handler.sendEmptyMessage(0); } private Handler handler = new Handler() { @Override public void handleMessage(Message msg) { ArrayAdapter<String> aa; aa = new ArrayAdapter<String>(Home.this, android.R.layout.simple_list_item_1, Artist_Result); lv.setAdapter(aa); try { if (Artist_Result.isEmpty()) { text1.setText("No Results"); } else { text1.setText("Complete"); myEditText.setText("Search Artist"); } } catch(Exception e) { text1.setText(e.getMessage()); } aa.notifyDataSetChanged(); pd.dismiss(); } }; }

    Read the article

  • Searching for Windows User SID's in C#

    - by Ubiquitous Che
    Context first - the issues I'm trying to resolve are below. One of our clients has asked us to quote how long it would take to improve one of our applications. This application currently provides basic user authentication in the form of username/password combinations. The client would like their employees to be able to log in using the details of whatever Windows user account is currently logged in at the time the application is run. It's not a deal-breaker if I tell them no - but the client might be willing to pay the cost of developing this feature, so it's worth looking into. Based on my hunting around, it seems that storing the user login details against Domain\Username will be problematic if those details are changed, whereas Windows user SIDs aren't supposed to change at all. I've got the impression that it would be best to record Windows users by SID - feel free to relieve me of that notion if I'm wrong. I've been having a fiddle with some Windows API calls. From within C#, grabbing the current user's SID is easy enough. I can already take any user's SID and process it using LookupAccountSid to get the username and domain for display purposes. For the interested, my code for this is at the end of this post. That's just the tip of the iceberg, however. The two issues below are completely outside my experience. Not only do I not know how to implement them - I don't even know how to find out how to implement them, or what the pitfalls are on various systems. Any help getting myself aimed in the right direction would be very much appreciated.

    Issue 1) Getting hold of the local user at runtime is meaningless if that user hasn't been granted access to the application. We will need to add a new section to our application's 'administrator console' for adding Windows users (or groups) and assigning within-app permissions to those users. Something like an 'Add Windows User Login' button that raises a pop-up window allowing the user to search for available Windows user accounts on the network (not just the local machine) to be added to the list of available application logins. If there's already a component in .NET or Windows that I can shanghai into doing this for me, it would make me a very happy man.

    Issue 2) I also want to know how to take a given Windows user SID and check it against a given Windows user group (probably taken from a database). I'm not sure how to get started with this one either, though I expect it to be easier than the issue above; a minimal sketch of one possible approach is shown after the code below.
For the Interested [STAThread] static void Main(string[] args) { MessageBox.Show(WindowsUserManager.GetAccountNameFromSID(WindowsIdentity.GetCurrent().User.Value)); MessageBox.Show(WindowsUserManager.GetAccountNameFromSID("S-1-5-21-57989841-842925246-1957994488-1003")); } public static class WindowsUserManager { public static string GetAccountNameFromSID(string SID) { try { StringBuilder name = new StringBuilder(); uint cchName = (uint)name.Capacity; StringBuilder referencedDomainName = new StringBuilder(); uint cchReferencedDomainName = (uint)referencedDomainName.Capacity; WindowsUserManager.SID_NAME_USE sidUse; int err = (int)ESystemError.ERROR_SUCCESS; if (!WindowsUserManager.LookupAccountSid(null, SID, name, ref cchName, referencedDomainName, ref cchReferencedDomainName, out sidUse)) { err = Marshal.GetLastWin32Error(); if (err == (int)ESystemError.ERROR_INSUFFICIENT_BUFFER) { name.EnsureCapacity((int)cchName); referencedDomainName.EnsureCapacity((int)cchReferencedDomainName); err = WindowsUserManager.LookupAccountSid(null, SID, name, ref cchName, referencedDomainName, ref cchReferencedDomainName, out sidUse) ? (int)ESystemError.ERROR_SUCCESS : Marshal.GetLastWin32Error(); } } if (err != (int)ESystemError.ERROR_SUCCESS) throw new ApplicationException(String.Format("Could not retrieve acount name from SID. {0}", SystemExceptionManager.GetDescription(err))); return String.Format(@"{0}\{1}", referencedDomainName.ToString(), name.ToString()); } catch (Exception ex) { if (ex is ApplicationException) throw ex; throw new ApplicationException("Could not retrieve acount name from SID", ex); } } private enum SID_NAME_USE { SidTypeUser = 1, SidTypeGroup, SidTypeDomain, SidTypeAlias, SidTypeWellKnownGroup, SidTypeDeletedAccount, SidTypeInvalid, SidTypeUnknown, SidTypeComputer } [DllImport("advapi32.dll", EntryPoint = "GetLengthSid", CharSet = CharSet.Auto)] private static extern int GetLengthSid(IntPtr pSID); [DllImport("advapi32.dll", SetLastError = true)] private static extern bool ConvertStringSidToSid( string StringSid, out IntPtr ptrSid); [DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true)] private static extern bool LookupAccountSid( string lpSystemName, [MarshalAs(UnmanagedType.LPArray)] byte[] Sid, StringBuilder lpName, ref uint cchName, StringBuilder ReferencedDomainName, ref uint cchReferencedDomainName, out SID_NAME_USE peUse); private static bool LookupAccountSid( string lpSystemName, string stringSid, StringBuilder lpName, ref uint cchName, StringBuilder ReferencedDomainName, ref uint cchReferencedDomainName, out SID_NAME_USE peUse) { byte[] SID = null; IntPtr SID_ptr = IntPtr.Zero; try { WindowsUserManager.ConvertStringSidToSid(stringSid, out SID_ptr); int err = SID_ptr == IntPtr.Zero ? Marshal.GetLastWin32Error() : (int)ESystemError.ERROR_SUCCESS; if (SID_ptr == IntPtr.Zero || err != (int)ESystemError.ERROR_SUCCESS) throw new ApplicationException(String.Format("'{0}' could not be converted to a SID byte array. {1}", stringSid, SystemExceptionManager.GetDescription(err))); int size = (int)GetLengthSid(SID_ptr); SID = new byte[size]; Marshal.Copy(SID_ptr, SID, 0, size); } catch (Exception ex) { if (ex is ApplicationException) throw ex; throw new ApplicationException(String.Format("'{0}' could not be converted to a SID byte array. {1}.", stringSid, ex.Message), ex); } finally { // Always want to release the SID_ptr (if it exists) to avoid memory leaks. 
if (SID_ptr != IntPtr.Zero) Marshal.FreeHGlobal(SID_ptr); } return WindowsUserManager.LookupAccountSid(lpSystemName, SID, lpName, ref cchName, ReferencedDomainName, ref cchReferencedDomainName, out peUse); } }
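    Regarding issue 2, here is a minimal sketch of one possible approach; it is not part of the original question. It uses System.DirectoryServices.AccountManagement (available from .NET 3.5), and the method name, the group-name parameter and the use of ContextType.Domain are assumptions for illustration only.

    using System;
    using System.DirectoryServices.AccountManagement; // reference: System.DirectoryServices.AccountManagement.dll

    static class GroupMembershipSketch
    {
        // Returns true when the user identified by userSid is a direct member of groupName.
        // ContextType.Domain assumes a domain environment; use ContextType.Machine for local accounts.
        public static bool IsSidInGroup(string userSid, string groupName)
        {
            using (var context = new PrincipalContext(ContextType.Domain))
            using (var user = UserPrincipal.FindByIdentity(context, IdentityType.Sid, userSid))
            using (var group = GroupPrincipal.FindByIdentity(context, groupName))
            {
                if (user == null || group == null)
                    return false; // unknown SID or group name

                // IsMemberOf checks direct membership; GetAuthorizationGroups()
                // would also cover nested groups at a higher cost.
                return user.IsMemberOf(group);
            }
        }
    }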

    Read the article

  • std::basic_stringstream<unsigned char> won't compile with MSVC 10

    - by Michael J
    I'm trying to get UTF-8 chars to co-exist with ANSI 8-bit chars. My strategy has been to represent utf-8 chars as unsigned char so that appropriate overloads of functions can be used for the two character types. e.g. namespace MyStuff { typedef uchar utf8_t; typedef std::basic_string<utf8_t> U8string; } void SomeFunc(std::string &s); void SomeFunc(std::wstring &s); void SomeFunc(MyStuff::U8string &s); This all works pretty well until I try to use a stringstream. std::basic_ostringstream<MyStuff::utf8_t> ostr; ostr << 1; MSVC Visual C++ Express V10 won't compile this: c:\program files\microsoft visual studio 10.0\vc\include\xlocmon(213): warning C4273: 'id' : inconsistent dll linkage c:\program files\microsoft visual studio 10.0\vc\include\xlocnum(65) : see previous definition of 'public: static std::locale::id std::numpunct<unsigned char>::id' c:\program files\microsoft visual studio 10.0\vc\include\xlocnum(65) : while compiling class template static data member 'std::locale::id std::numpunct<_Elem>::id' with [ _Elem=Tk::utf8_t ] c:\program files\microsoft visual studio 10.0\vc\include\xlocnum(1149) : see reference to function template instantiation 'const _Facet &std::use_facet<std::numpunct<_Elem>>(const std::locale &)' being compiled with [ _Facet=std::numpunct<Tk::utf8_t>, _Elem=Tk::utf8_t ] c:\program files\microsoft visual studio 10.0\vc\include\xlocnum(1143) : while compiling class template member function 'std::ostreambuf_iterator<_Elem,_Traits> std::num_put<_Elem,_OutIt>:: do_put(_OutIt,std::ios_base &,_Elem,std::_Bool) const' with [ _Elem=Tk::utf8_t, _Traits=std::char_traits<Tk::utf8_t>, _OutIt=std::ostreambuf_iterator<Tk::utf8_t,std::char_traits<Tk::utf8_t>> ] c:\program files\microsoft visual studio 10.0\vc\include\ostream(295) : see reference to class template instantiation 'std::num_put<_Elem,_OutIt>' being compiled with [ _Elem=Tk::utf8_t, _OutIt=std::ostreambuf_iterator<Tk::utf8_t,std::char_traits<Tk::utf8_t>> ] c:\program files\microsoft visual studio 10.0\vc\include\ostream(281) : while compiling class template member function 'std::basic_ostream<_Elem,_Traits> & std::basic_ostream<_Elem,_Traits>::operator <<(int)' with [ _Elem=Tk::utf8_t, _Traits=std::char_traits<Tk::utf8_t> ] c:\program files\microsoft visual studio 10.0\vc\include\sstream(526) : see reference to class template instantiation 'std::basic_ostream<_Elem,_Traits>' being compiled with [ _Elem=Tk::utf8_t, _Traits=std::char_traits<Tk::utf8_t> ] c:\users\michael\dvl\tmp\console\console.cpp(23) : see reference to class template instantiation 'std::basic_ostringstream<_Elem,_Traits,_Alloc>' being compiled with [ _Elem=Tk::utf8_t, _Traits=std::char_traits<Tk::utf8_t>, _Alloc=std::allocator<uchar> ] . c:\program files\microsoft visual studio 10.0\vc\include\xlocmon(213): error C2491: 'std::numpunct<_Elem>::id' : definition of dllimport static data member not allowed with [ _Elem=Tk::utf8_t ] Any ideas? ** Edited 19 June 2012 ** OK, I've gotten closer to understanding this, but not how to solve it. As we all know, static class variables get defined twice: once in the class definition and once outside the class definition which establishes storage space. e.g. // in .h file class CFoo { // ... static int x; }; // in .cpp file int CFoo::x = 42; Now in the VC10 headers we get something like this: template<class _Elem> class numpunct : public locale::facet { // ... _CRTIMP2_PURE static locale::id id; // ... 
} When the header is included in an application, _CRTIMP2_PURE is defined as __declspec(dllimport), which means that the variable is imported from a dll. Now the header also contains the following template<class _Elem> locale::id numpunct<_Elem>::id; Note the absence of the __declspec(dllimport) qualifier. i.e. The class declaration says that the static linkage of the id variable is in the dll, but for the general case, it gets declared outside the dll. For the known cases, there are specialisations. template locale::id numpunct<char>::id; template locale::id numpunct<wchar_t>::id; These are protected by #ifs so that they are only included when building the DLL. They are excluded otherwise. i.e. the char and wchar_t versions of numpunct ARE inside the dll So we have the class definition saying that id's storage is in the DLL, but that is only true for the char and wchar_t specialisations, meaning that my unsigned char version is doomed. :-( The only way forward that I can think of is to create my own specialisation: basically copying it from the header file and fixing it. This raises many issues. Anybody have a better idea?

    Read the article

  • Injection with google guice does not work anymore after obfuscation with proguard

    - by sme
    Has anyone ever tried to combine the use of google guice with obfuscation (in particular proguard)? The obfuscated version of my code does not work with google guice as guice complains about missing type parameters. This information seems to be erased by the transformation step that proguard does, even when the relevant classes are excluded from the obfuscation. The stack trace looks like this: com.google.inject.CreationException: Guice creation errors: 1) Cannot inject a Provider that has no type parameter while locating com.google.inject.Provider for parameter 0 at de.repower.lvs.client.admin.user.administration.AdminUserCommonPanel.setPasswordPanelProvider(SourceFile:499) at de.repower.lvs.client.admin.user.administration.AdminUserCommonPanel.setPasswordPanelProvider(SourceFile:499) while locating de.repower.lvs.client.admin.user.administration.AdminUserCommonPanel for parameter 0 at de.repower.lvs.client.admin.user.administration.b.k.setParentPanel(SourceFile:65) at de.repower.lvs.client.admin.user.administration.b.k.setParentPanel(SourceFile:65) at de.repower.lvs.client.admin.user.administration.o.a(SourceFile:38) 2) Cannot inject a Provider that has no type parameter while locating com.google.inject.Provider for parameter 0 at de.repower.lvs.client.admin.user.administration.AdminUserCommonPanel.setWindTurbineAccessGroupProvider(SourceFile:509) at de.repower.lvs.client.admin.user.administration.AdminUserCommonPanel.setWindTurbineAccessGroupProvider(SourceFile:509) while locating de.repower.lvs.client.admin.user.administration.AdminUserCommonPanel for parameter 0 at de.repower.lvs.client.admin.user.administration.b.k.setParentPanel(SourceFile:65) at de.repower.lvs.client.admin.user.administration.b.k.setParentPanel(SourceFile:65) at de.repower.lvs.client.admin.user.administration.o.a(SourceFile:38) 2 errors at com.google.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:354) at com.google.inject.InjectorBuilder.initializeStatically(InjectorBuilder.java:152) at com.google.inject.InjectorBuilder.build(InjectorBuilder.java:105) at com.google.inject.Guice.createInjector(Guice.java:92) at com.google.inject.Guice.createInjector(Guice.java:69) at com.google.inject.Guice.createInjector(Guice.java:59) I tried to create a small example (without using guice) that seems to reproduce the problem: package de.repower.common; import java.lang.reflect.Method; import java.lang.reflect.ParameterizedType; import java.lang.reflect.Type; class SomeClass<S> { } public class ParameterizedTypeTest { public void someMethod(SomeClass<Integer> param) { System.out.println("value: " + param); System.setProperty("my.dummmy.property", "hallo"); } private static void checkParameterizedMethod(ParameterizedTypeTest testObject) { System.out.println("checking parameterized method ..."); Method[] methods = testObject.getClass().getMethods(); for (Method method : methods) { if (method.getName().equals("someMethod")) { System.out.println("Found method " + method.getName()); Type[] types = method.getGenericParameterTypes(); Type parameterType = types[0]; if (parameterType instanceof ParameterizedType) { Type parameterizedType = ((ParameterizedType) parameterType).getActualTypeArguments()[0]; System.out.println("Parameter: " + parameterizedType); System.out.println("Class: " + ((Class) parameterizedType).getName()); } else { System.out.println("Failed: type ist not instance of ParameterizedType"); } } } } public static void main(String[] args) { System.out.println("Starting ..."); try { 
ParameterizedTypeTest someInstance = new ParameterizedTypeTest(); checkParameterizedMethod(someInstance); } catch (SecurityException e) { e.printStackTrace(); } } }

    If you run this code unobfuscated, the output looks like this:

    Starting ...
    checking parameterized method ...
    Found method someMethod
    Parameter: class java.lang.Integer
    Class: java.lang.Integer

    But running the version obfuscated with ProGuard yields:

    Starting ...
    checking parameterized method ...
    Found method someMethod
    Failed: type ist not instance of ParameterizedType

    These are the options I used for obfuscation:

    -injars classes_eclipse\methodTest.jar
    -outjars classes_eclipse\methodTestObfuscated.jar
    -libraryjars 'C:\Program Files\Java\jre6\lib\rt.jar'
    -dontskipnonpubliclibraryclasses
    -dontskipnonpubliclibraryclassmembers
    -dontshrink
    -printusage classes_eclipse\shrink.txt
    -dontoptimize
    -dontpreverify
    -verbose
    -keep class **.ParameterizedTypeTest.class { <fields>; <methods>; }
    -keep class ** { <fields>; <methods>; }
    # Keep - Applications. Keep all application classes, along with their 'main' methods.
    -keepclasseswithmembers public class * { public static void main(java.lang.String[]); }
    # Also keep - Enumerations. Keep the special static methods that are required in enumeration classes.
    -keepclassmembers enum * { public static **[] values(); public static ** valueOf(java.lang.String); }
    # Also keep - Database drivers. Keep all implementations of java.sql.Driver.
    -keep class * extends java.sql.Driver
    # Also keep - Swing UI L&F. Keep all extensions of javax.swing.plaf.ComponentUI, along with the special 'createUI' method.
    -keep class * extends javax.swing.plaf.ComponentUI { public static javax.swing.plaf.ComponentUI createUI(javax.swing.JComponent); }
    # Keep names - Native method names. Keep all native class/method names.
    -keepclasseswithmembers,allowshrinking class * { native <methods>; }
    # Keep names - _class method names. Keep all .class method names. This may be useful for libraries that will be obfuscated again with different obfuscators.
    -keepclassmembers,allowshrinking class * { java.lang.Class class$(java.lang.String); java.lang.Class class$(java.lang.String,boolean); }

    Does anyone have an idea of how to solve this (apart from the obvious workaround to put the relevant files into a separate jar and not obfuscate it)? Best regards, Stefan

    Read the article

  • click buttons error

    - by sara
    I will retrieve student information (id -number- name) from a database (MySQL) as a list view, each student have 2 buttons (delete - alert ) and radio buttons Every thing is ok, but how can I make an onClickListener, for example for the delete button because I try lots of examples, I heard that I can use (custom list or get view or direct onClickListener as in my code (but it is not working ) or Simple Cursor Adapter) I do not know what to use, I looked around for examples that can help me, but in my case but I did not find any so I hope this be reference for anyone have the same problem. this is my code which I use direct onClick with Simple Adapter public class ManageSection extends ListActivity { //ProgresogressDialog pDialog; private ProgressDialog pDialog; // Creating JSON Parser object // Creating JSON Parser object JSONParser jParser = new JSONParser(); //class boolean x =true; Button delete; ArrayList<HashMap<String, String>> studentList; //url to get all products list private static String url_all_student = "http://10.0.2.2/SmsPhp/view_student_info.php"; String cl; // JSON Node names private static final String TAG_SUCCESS = "success"; private static final String TAG_student = "student"; private static final String TAG_StudentID = "StudentID"; private static final String TAG_StudentNo = "StudentNo"; private static final String TAG_FullName = "FullName"; private static final String TAG_Avatar="Avatar"; HashMap<String, String> selected_student; // course JSONArray JSONArray student = null; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.manage_section); studentList = new ArrayList<HashMap<String, String>>(); ListView list1 = getListView(); list1.setAdapter(getListAdapter()); list1.setOnItemClickListener(new OnItemClickListener() { @Override public void onItemClick(AdapterView<?> adapterView, View view, int pos, long l) { selected_student =(HashMap<String, String>) studentList.get(pos); //member of your activity. delete =(Button)view.findViewById(R.id.DeleteStudent); cl=selected_student.get(TAG_StudentID); Toast.makeText(getBaseContext(),cl,Toast.LENGTH_LONG).show(); delete.setOnClickListener(new View.OnClickListener() { public void onClick(View v) { Log.d("id: ",cl); Toast.makeText(getBaseContext(),cl,Toast.LENGTH_LONG).show(); } }); } }); new LoadAllstudent().execute(); } /** * Background Async Task to Load all student by making HTTP Request * */ class LoadAllstudent extends AsyncTask<String, String, String> { /** * Before starting background thread Show Progress Dialog * */ @Override protected void onPreExecute() { super.onPreExecute(); pDialog = new ProgressDialog(ManageSection.this); pDialog.setMessage("Loading student. Please wait..."); pDialog.setIndeterminate(false); } /** * getting All student from u r l * */ @Override protected String doInBackground(String... 
args) { // Building Parameters List<NameValuePair> params = new ArrayList<NameValuePair>(); // getting JSON string from URL JSONObject json = jParser.makeHttpRequest(url_all_student, "GET", params); // Check your log cat for JSON response Log.d("All student : ", json.toString()); try { // Checking for SUCCESS TAG int success = json.getInt(TAG_SUCCESS); if (success == 1) { // student found // Getting Array of course student = json.getJSONArray(TAG_student); // looping through All courses for (int i = 0; i < student.length(); i++)//course JSONArray { JSONObject c = student.getJSONObject(i); // read first // Storing each json item in variable String StudentID = c.getString(TAG_StudentID); String StudentNo = c.getString(TAG_StudentNo); String FullName = c.getString(TAG_FullName); // String Avatar = c.getString(TAG_Avatar); // creating new HashMap HashMap<String, String> map = new HashMap<String, String>(); // adding each child node to HashMap key => value map.put(TAG_StudentID, StudentID); map.put(TAG_StudentNo, StudentNo); map.put(TAG_FullName, FullName); // adding HashList to ArrayList studentList.add(map); } } else { x=false; } } catch (JSONException e) { e.printStackTrace(); } return null; } /** * After completing background task Dismiss the progress dialog * **/ protected void onPostExecute(String file_url) { // dismiss the dialog after getting all products pDialog.dismiss(); if (x==false) Toast.makeText(getBaseContext(),"no student" ,Toast.LENGTH_LONG).show(); ListAdapter adapter = new SimpleAdapter( ManageSection.this, studentList, R.layout.list_student, new String[] { TAG_StudentID, TAG_StudentNo,TAG_FullName}, new int[] { R.id.StudentID, R.id.StudentNo,R.id.FullName}); setListAdapter(adapter); // Updating parsed JSON data into ListView } } } So what do you think, why doesn't the delete button work? There is no error in my log cat. What is the alternative way ?.. what should I do ?
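One approach that usually works for per-row buttons (a sketch, not the original code: it reuses the layout and column names from the question, and the literal keys match the TAG_ constants above) is to subclass the adapter and attach the listener inside getView(), so every row's delete button knows its own position:

import java.util.ArrayList;
import java.util.HashMap;
import android.content.Context;
import android.view.View;
import android.view.ViewGroup;
import android.widget.Button;
import android.widget.SimpleAdapter;
import android.widget.Toast;

public class StudentAdapter extends SimpleAdapter {

    private final ArrayList<HashMap<String, String>> data;

    public StudentAdapter(Context context, ArrayList<HashMap<String, String>> data) {
        super(context, data, R.layout.list_student,
                new String[] { "StudentID", "StudentNo", "FullName" },
                new int[] { R.id.StudentID, R.id.StudentNo, R.id.FullName });
        this.data = data;
    }

    @Override
    public View getView(final int position, View convertView, ViewGroup parent) {
        View row = super.getView(position, convertView, parent);
        Button delete = (Button) row.findViewById(R.id.DeleteStudent);
        delete.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                String id = data.get(position).get("StudentID");
                // call your delete PHP script / remove the row here, then notifyDataSetChanged()
                Toast.makeText(v.getContext(), "delete " + id, Toast.LENGTH_LONG).show();
            }
        });
        return row;
    }
}

In onPostExecute() you would then call setListAdapter(new StudentAdapter(ManageSection.this, studentList)); instead of building the SimpleAdapter inline. If the row itself stops responding to clicks afterwards, marking the buttons android:focusable="false" in list_student.xml usually fixes that.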

    Read the article

  • How to give position zero of spinner a prompt value?

    - by Eugene H
    The database is then transferring the data to a spinner which I want to leave position 0 blank so I can add a item to the spinner with no value making it look like a prompt. I have been going at it all day. FAil after Fail MainActivity public class MainActivity extends Activity { Button AddBtn; EditText et; EditText cal; Spinner spn; SQLController SQLcon; ProgressDialog PD; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); AddBtn = (Button) findViewById(R.id.addbtn_id); et = (EditText) findViewById(R.id.et_id); cal = (EditText) findViewById(R.id.et_cal); spn = (Spinner) findViewById(R.id.spinner_id); spn.setOnItemSelectedListener(new OnItemSelectedListenerWrapper( new OnItemSelectedListener() { @Override public void onItemSelected(AdapterView<?> parent, View view, int pos, long id) { SQLcon.open(); Cursor c = SQLcon.readData(); if (c.moveToPosition(pos)) { String name = c.getString(c .getColumnIndex(DBhelper.MEMBER_NAME)); String calories = c.getString(c .getColumnIndex(DBhelper.KEY_CALORIES)); et.setText(name); cal.setText(calories); } SQLcon.close(); // closing database } @Override public void onNothingSelected(AdapterView<?> parent) { // TODO Auto-generated method stub } })); SQLcon = new SQLController(this); // opening database SQLcon.open(); loadtospinner(); AddBtn.setOnClickListener(new OnClickListener() { @Override public void onClick(View v) { new MyAsync().execute(); } }); } public void loadtospinner() { ArrayList<String> al = new ArrayList<String>(); Cursor c = SQLcon.readData(); c.moveToFirst(); while (!c.isAfterLast()) { String name = c.getString(c.getColumnIndex(DBhelper.MEMBER_NAME)); String calories = c.getString(c .getColumnIndex(DBhelper.KEY_CALORIES)); al.add(name + ", Calories: " + calories); c.moveToNext(); } ArrayAdapter<String> aa1 = new ArrayAdapter<String>( getApplicationContext(), android.R.layout.simple_spinner_item, al); spn.setAdapter(aa1); // closing database SQLcon.close(); } private class MyAsync extends AsyncTask<Void, Void, Void> { @Override protected void onPreExecute() { super.onPreExecute(); PD = new ProgressDialog(MainActivity.this); PD.setTitle("Please Wait.."); PD.setMessage("Loading..."); PD.setCancelable(false); PD.show(); } @Override protected Void doInBackground(Void... 
params) { String name = et.getText().toString(); String calories = cal.getText().toString(); // opening database SQLcon.open(); // insert data into table SQLcon.insertData(name, calories); return null; } @Override protected void onPostExecute(Void result) { super.onPostExecute(result); loadtospinner(); PD.dismiss(); } } } DataBase public class SQLController { private DBhelper dbhelper; private Context ourcontext; private SQLiteDatabase database; public SQLController(Context c) { ourcontext = c; } public SQLController open() throws SQLException { dbhelper = new DBhelper(ourcontext); database = dbhelper.getWritableDatabase(); return this; } public void close() { dbhelper.close(); } public void insertData(String name, String calories) { ContentValues cv = new ContentValues(); cv.put(DBhelper.MEMBER_NAME, name); cv.put(DBhelper.KEY_CALORIES, calories); database.insert(DBhelper.TABLE_MEMBER, null, cv); } public Cursor readData() { String[] allColumns = new String[] { DBhelper.MEMBER_ID, DBhelper.MEMBER_NAME, DBhelper.KEY_CALORIES }; Cursor c = database.query(DBhelper.TABLE_MEMBER, allColumns, null, null, null, null, null); if (c != null) { c.moveToFirst(); } return c; } } Helper public class DBhelper extends SQLiteOpenHelper { // TABLE INFORMATTION public static final String TABLE_MEMBER = "member"; public static final String MEMBER_ID = "_id"; public static final String MEMBER_NAME = "name"; public static final String KEY_CALORIES = "calories"; // DATABASE INFORMATION static final String DB_NAME = "MEMBER.DB"; static final int DB_VERSION = 2; // TABLE CREATION STATEMENT private static final String CREATE_TABLE = "create table " + TABLE_MEMBER + "(" + MEMBER_ID + " INTEGER PRIMARY KEY AUTOINCREMENT, " + MEMBER_NAME + " TEXT NOT NULL," + KEY_CALORIES + " INT NOT NULL);"; public DBhelper(Context context) { super(context, DB_NAME, null, DB_VERSION); } @Override public void onCreate(SQLiteDatabase db) { db.execSQL(CREATE_TABLE); } @Override public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { // TODO Auto-generated method stub db.execSQL("DROP TABLE IF EXISTS " + TABLE_MEMBER); onCreate(db); } }
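One simple way to get a prompt-like row at position 0 (a sketch based on the loadtospinner() method above; everything else stays the same) is to put a placeholder string at index 0 of the list before it goes into the adapter, and then treat position 0 as "no selection" in the listener, remembering that every real row is now shifted by one:

public void loadtospinner() {
    ArrayList<String> al = new ArrayList<String>();
    al.add("Select a member...");   // position 0: placeholder acting as the prompt (use "" for a blank row)

    Cursor c = SQLcon.readData();
    c.moveToFirst();
    while (!c.isAfterLast()) {
        String name = c.getString(c.getColumnIndex(DBhelper.MEMBER_NAME));
        String calories = c.getString(c.getColumnIndex(DBhelper.KEY_CALORIES));
        al.add(name + ", Calories: " + calories);
        c.moveToNext();
    }

    ArrayAdapter<String> aa1 = new ArrayAdapter<String>(
            getApplicationContext(), android.R.layout.simple_spinner_item, al);
    spn.setAdapter(aa1);
    SQLcon.close();
}

// and in onItemSelected():
public void onItemSelected(AdapterView<?> parent, View view, int pos, long id) {
    if (pos == 0) {                  // the prompt row: clear the fields and do nothing else
        et.setText("");
        cal.setText("");
        return;
    }
    SQLcon.open();
    Cursor c = SQLcon.readData();
    if (c.moveToPosition(pos - 1)) { // -1 because of the extra prompt entry at index 0
        et.setText(c.getString(c.getColumnIndex(DBhelper.MEMBER_NAME)));
        cal.setText(c.getString(c.getColumnIndex(DBhelper.KEY_CALORIES)));
    }
    SQLcon.close();
}

The only thing to keep in mind is the off-by-one: the cursor row for spinner position N is N - 1 once the prompt entry is in place.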

    Read the article

  • Thread Synchronization and Synchronization Primitives

When considering synchronization in an application, the decision truly depends on what the application and its worker threads are going to do. I would use synchronization if two or more threads could possibly manipulate the same instance of an object at the same time. An example of this in C# can be demonstrated through the use of storing data in a static object. A static object is initialized once per application and the data within the object can be accessed by all threads. I would use the synchronization primitives to prevent any data from being manipulated by multiple threads simultaneously. This would reduce any data corruption from occurring within the object. On the other hand, if all the threads used non-static objects and were independent of the other tasks, there would be no need to use synchronization. The synchronization primitives in C# fall into four groups: Basic Blocking, Locking, Signaling, and Non-Blocking Synchronization Constructs. The Basic Blocking methods include Sleep, Join, and Task.Wait. These methods force threads to wait until other threads have completed. In addition, these methods can also force a thread to wait a set amount of time before continuing to work. The Locking primitive prevents a thread from entering a critical section of code while another thread is in the same critical section. If another thread attempts to enter a locked code block, it will wait until the lock is released. The Signaling primitive allows a thread to temporarily pause work until receiving a notification from another thread that it is OK to continue working. The Signaling primitive removes the need for polling. The Non-Blocking Synchronization Constructs protect access to a common field by calling upon processor primitives.
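As a rough illustration of those four groups in one place (a minimal sketch, not from the original post), the snippet below blocks with Sleep/Wait, locks a critical section, signals with a ManualResetEventSlim, and does a lock-free update with Interlocked:

using System;
using System.Threading;
using System.Threading.Tasks;

class SyncPrimitivesDemo
{
    private static readonly object _gate = new object();                               // Locking
    private static readonly ManualResetEventSlim _signal = new ManualResetEventSlim(); // Signaling
    private static int _counter;                                                       // shared state

    static void Main()
    {
        Task worker = Task.Run(() =>
        {
            _signal.Wait();                      // Signaling: pause until another thread says go
            lock (_gate)                         // Locking: one thread at a time in the critical section
            {
                _counter++;
            }
            Interlocked.Increment(ref _counter); // Non-blocking: atomic update without a lock
        });

        Thread.Sleep(100);                       // Basic blocking: this thread waits a fixed time
        _signal.Set();                           // wake the worker
        worker.Wait();                           // Basic blocking: wait for the worker to finish

        Console.WriteLine(_counter);             // prints 2
    }
}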

    Read the article

  • Revisiting ANTS Performance Profiler 7.4

    - by James Michael Hare
    Last year, I did a small review on the ANTS Performance Profiler 6.3, now that it’s a year later and a major version number higher, I thought I’d revisit the review and revise my last post. This post will take the same examples as the original post and update them to show what’s new in version 7.4 of the profiler. Background A performance profiler’s main job is to keep track of how much time is typically spent in each unit of code. This helps when we have a program that is not running at the performance we expect, and we want to know where the program is experiencing issues. There are many profilers out there of varying capabilities. Red Gate’s typically seem to be the very easy to “jump in” and get started with very little training required. So let’s dig into the Performance Profiler. I’ve constructed a very crude program with some obvious inefficiencies. It’s a simple program that generates random order numbers (or really could be any unique identifier), adds it to a list, sorts the list, then finds the max and min number in the list. Ignore the fact it’s very contrived and obviously inefficient, we just want to use it as an example to show off the tool: 1: // our test program 2: public static class Program 3: { 4: // the number of iterations to perform 5: private static int _iterations = 1000000; 6: 7: // The main method that controls it all 8: public static void Main() 9: { 10: var list = new List<string>(); 11: 12: for (int i = 0; i < _iterations; i++) 13: { 14: var x = GetNextId(); 15: 16: AddToList(list, x); 17: 18: var highLow = GetHighLow(list); 19: 20: if ((i % 1000) == 0) 21: { 22: Console.WriteLine("{0} - High: {1}, Low: {2}", i, highLow.Item1, highLow.Item2); 23: Console.Out.Flush(); 24: } 25: } 26: } 27: 28: // gets the next order id to process (random for us) 29: public static string GetNextId() 30: { 31: var random = new Random(); 32: var num = random.Next(1000000, 9999999); 33: return num.ToString(); 34: } 35: 36: // add it to our list - very inefficiently! 37: public static void AddToList(List<string> list, string item) 38: { 39: list.Add(item); 40: list.Sort(); 41: } 42: 43: // get high and low of order id range - very inefficiently! 44: public static Tuple<int,int> GetHighLow(List<string> list) 45: { 46: return Tuple.Create(list.Max(s => Convert.ToInt32(s)), list.Min(s => Convert.ToInt32(s))); 47: } 48: } So let’s run it through the profiler and see what happens! Visual Studio Integration First, let’s look at how the ANTS profilers integrate with Visual Studio’s menu system. Once you install the ANTS profilers, you will get an ANTS menu item with several options: Notice that you can either Profile Performance or Launch ANTS Performance Profiler. These sound similar but achieve two slightly different actions: Profile Performance: this immediately launches the profiler with all defaults selected to profile the active project in Visual Studio. Launch ANTS Performance Profiler: this launches the profiler much the same way as starting it from the Start Menu. The profiler will pre-populate the application and path information, but allow you to change the settings before beginning the profile run. So really, the main difference is that Profile Performance immediately begins profiling with the default selections, where Launch ANTS Performance Profiler allows you to change the defaults and attach to an already-running application. Let’s Fire it Up! 
So when you fire up ANTS either via Start Menu or Launch ANTS Performance Profiler menu in Visual Studio, you are presented with a very simple dialog to get you started: Notice you can choose from many different options for application type. You can profile executables, services, web applications, or just attach to a running process. In fact, in version 7.4 we see two new options added: ASP.NET Web Application (IIS Express) SharePoint web application (IIS) So this gives us an additional way to profile ASP.NET applications and the ability to profile SharePoint applications as well. You can also choose your level of detail in the Profiling Mode drop down. If you choose Line-Level and method-level timings detail, you will get a lot more detail on the method durations, but this will also slow down profiling somewhat. If you really need the profiler to be as unintrusive as possible, you can change it to Sample method-level timings. This is performing very light profiling, where basically the profiler collects timings of a method by examining the call-stack at given intervals. Which method you choose depends a lot on how much detail you need to find the issue and how sensitive your program issues are to timing. So for our example, let’s just go with the line and method timing detail. So, we check that all the options are correct (if you launch from VS2010, the executable and path are filled in already), and fire it up by clicking the [Start Profiling] button. Profiling the Application Once you start profiling the application, you will see a real-time graph of CPU usage that will indicate how much your application is using the CPU(s) on your system. During this time, you can select segments of the graph and bookmark them, giving them mnemonic names. This can be useful if you want to compare performance in one part of the run to another part of the run. Notice that once you select a block, it will give you the call tree breakdown for that selection only, and the relative performance of those calls. Once you feel you have collected enough information, you can click [Stop Profiling] to stop the application run and information collection and begin a more thorough analysis. Analyzing Method Timings So now that we’ve halted the run, we can look around the GUI and see what we can see. By default, the times are shown in terms of percentage of time of the total run of the application, though you can change it in the View menu item to milliseconds, ticks, or seconds as well. This won’t affect the percentages of methods, it only affects what units the times are shown. Notice also that the major hotspot seems to be in a method without source, ANTS Profiler will filter these out by default, but you can right-click on the line and remove the filter to see more detail. This proves especially handy when a bottleneck is due to a method in the BCL. So now that we’ve removed the filter, we see a bit more detail: In addition, ANTS Performance Profiler gives you the ability to decompile the methods without source so that you can dive even deeper, though typically this isn’t necessary for our purposes. When looking at timings, there are generally two types of timings for each method call: Time: This is the time spent ONLY in this method, not including calls this method makes to other methods. Time With Children: This is the total of time spent in both this method AND including calls this method makes to other methods. 
In other words, the Time tells you how much work is being done exclusively in this method, and the Time With Children tells you how much work is being done inclusively in this method and everything it calls. You can also choose to display the methods in a tree or in a grid. The tree view is the default and it shows the method calls arranged in terms of the tree representing all method calls and the parent method that called them, etc. This is useful for when you find a hot-spot method, you can see who is calling it to determine if the problem is the method itself, or if it is being called too many times. The grid method represents each method only once with its totals and is useful for quickly seeing what method is the trouble spot. In addition, you can choose to display Methods with source which are generally the methods you wrote (as opposed to native or BCL code), or Any Method which shows not only your methods, but also native calls, JIT overhead, synchronization waits, etc. So these are just two ways of viewing the same data, and you’re free to choose the organization that best suits what information you are after. Analyzing Method Source If we look at the timings above, we see that our AddToList() method (and in particular, it’s call to the List<T>.Sort() method in the BCL) is the hot-spot in this analysis. If ANTS sees a method that is consuming the most time, it will flag it as a hot-spot to help call out potential areas of concern. This doesn’t mean the other statistics aren’t meaningful, but that the hot-spot is most likely going to be your biggest bang-for-the-buck to concentrate on. So let’s select the AddToList() method, and see what it shows in the source window below: Notice the source breakout in the bottom pane when you select a method (from either tree or grid view). This shows you the timings in this method per line of code. This gives you a major indicator of where the trouble-spot in this method is. So in this case, we see that performing a Sort() on the List<T> after every Add() is killing our performance! Of course, this was a very contrived, duh moment, but you’d be surprised how many performance issues become duh moments. Note that this one line is taking up 86% of the execution time of this application! If we eliminate this bottleneck, we should see drastic improvement in the performance. So to fix this, if we still wanted to maintain the List<T> we’d have many options, including: delay Sort() until after all Add() methods, using a SortedSet, SortedList, or SortedDictionary depending on which is most appropriate, or forgoing the sorting all together and using a Dictionary. Rinse, Repeat! So let’s just change all instances of List<string> to SortedSet<string> and run this again through the profiler: Now we see the AddToList() method is no longer our hot-spot, but now the Max() and Min() calls are! This is good because we’ve eliminated one hot-spot and now we can try to correct this one as well. As before, we can then optimize this part of the code (possibly by taking advantage of the fact the list is now sorted and returning the first and last elements). We can then rinse and repeat this process until we have eliminated as many bottlenecks as possible. Calls by Web Request Another feature that was added recently is the ability to view .NET methods grouped by the HTTP requests that caused them to run. This can be helpful in determining which pages, web services, etc. are causing hot spots in your web applications. 
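To tie off the "Rinse, Repeat!" step described above: once the data lives in a SortedSet&lt;string&gt;, the Max()/Min() hot-spot can also be removed by asking the set for its ends instead of scanning it with LINQ. The following is my own rough guess at what that next iteration might look like, not code from the article:

using System;
using System.Collections.Generic;

// orders the 7-digit id strings numerically so Min/Max line up with the numeric range
class NumericStringComparer : IComparer<string>
{
    public int Compare(string x, string y)
    {
        return Convert.ToInt32(x).CompareTo(Convert.ToInt32(y));
    }
}

// var orders = new SortedSet<string>(new NumericStringComparer());

// add without sorting - the set keeps itself ordered
public static void AddToList(SortedSet<string> set, string item)
{
    set.Add(item);
}

// high/low without scanning the whole collection
public static Tuple<int, int> GetHighLow(SortedSet<string> set)
{
    return Tuple.Create(Convert.ToInt32(set.Max), Convert.ToInt32(set.Min));
}

After a change like this you would run the profiler once more to confirm the hot-spot has actually moved, exactly as the article does after the first fix.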
Summary If you like the other ANTS tools, you’ll like the ANTS Performance Profiler as well. It is extremely easy to use with very little product knowledge required to get up and running. There are profilers built into the higher product lines of Visual Studio, of course, which are also powerful and easy to use. But for quickly jumping in and finding hot spots rapidly, Red Gate’s Performance Profiler 7.4 is an excellent choice.

    Read the article

  • STUN-server using AWS

    - by Yrlec
We are about to hire some consultants to help us set up an AWS-based server environment that will enable us to handle NAT Traversal for our P2P application. One important part of the NAT Traversal infrastructure is the STUN-server (http://en.wikipedia.org/wiki/STUN). They just told us that in order for the STUN-server to work you must have two public static IP-addresses pointing to the same server. To be more specific, they said this: It appears that you need 2 static IPs for each server for the STUN to work. Please note, these IPs have to be put into the configuration file, therefore, each time you restart the instance you have to make sure you either use the same IPs or you have to update configuration. As you plan to use AWS for your installation, please confirm that you can have 2 static IP for each server. Our question is if this is possible using AWS and if so, how? If not, do you know any other way to set up a STUN-server using AWS?
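For what it's worth, this appears to be possible on AWS inside a VPC: an instance can carry a second elastic network interface (ENI), and each interface can have its own Elastic IP, which survives restarts as long as it stays associated. A rough sketch with the current AWS CLI (all IDs below are placeholders):

# create and attach a second network interface to the instance
aws ec2 create-network-interface --subnet-id subnet-xxxxxxxx --description "second IP for STUN"
aws ec2 attach-network-interface --network-interface-id eni-xxxxxxxx --instance-id i-xxxxxxxx --device-index 1

# allocate an Elastic IP and bind it to the new interface (repeat for the first interface if needed)
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --allocation-id eipalloc-xxxxxxxx --network-interface-id eni-xxxxxxxx

One caveat to pass on to the consultants: the instance itself only ever sees the private addresses of the two interfaces (the Elastic IPs are 1:1 NAT), so the STUN configuration file would list the two public Elastic IPs while the daemon binds to the two private ones; whether their STUN implementation is happy behind that kind of NAT is worth confirming before committing to the design.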

    Read the article

  • Linux: setting up an elite/high-anonymity Web proxy on a dedicated server

    - by YellowSquirrel
I'm renting a dedicated server which I'd like to use to "surf the Web": basically I want to always surf the Web from the same static IP (the one of my dedicated server). I can do it by running Xvnc/FreeNX on the dedicated server, but this is kinda slow and clumsy (I tried it). What are the steps needed to install an "elite/high-anonymity" Web proxy on a dedicated (Debian) Linux server, knowing that my two requirements are:

- I'm the only person that needs access to the proxy
- all I want is that my broadband (dynamic) IP is completely hidden (I want to always surf from my dedicated server's IP).

Note that using the static IP people can find my domains and my real name and I'm perfectly fine with that (actually it is what I want). What I don't want is people knowing from which dynamic IP (broadband) I'm connecting. What are the steps needed to do that? (Basically I don't care about "anonymity" as such; what I want is to appear to surf from a static IP, and I think I need what is called an "elite" Web proxy to do that, but I'm not sure.) Technical info and a sample configuration are most welcome :)
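If the goal is simply that every site sees the dedicated server's static IP, one suggestion: an SSH dynamic tunnel already behaves like an "elite" proxy, because the traffic genuinely originates from the server and there are no proxy headers to leak the broadband IP. A minimal sketch, assuming sshd is already running on the Debian box:

# on the local machine: open a SOCKS5 proxy on localhost:1080 that exits through the server
ssh -N -D 1080 you@your-dedicated-server.example.com

# then point the browser at SOCKS host 127.0.0.1, port 1080
# (enable remote DNS for SOCKS in the browser so lookups also happen server-side)

If it really has to be an HTTP proxy, Squid on the server with via off and forwarded_for off (plus an ACL restricting access to just you) gives the same header-free behaviour, but the SSH tunnel needs no extra software and is not reachable by anyone else by default.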

    Read the article

  • Connect ViewModel and View using Unity

    - by brainbox
In this post I want to describe the approach of connecting View and ViewModel which I'm using in my last project. The main idea is to do it during resolve inside of the unity container. It can be achieved using InjectionFactory, introduced in Unity 2.0:

public static class MVVMUnityExtensions
{
    public static void RegisterView<TView, TViewModel>(this IUnityContainer container) where TView : FrameworkElement
    {
        container.RegisterView<TView, TView, TViewModel>();
    }

    public static void RegisterView<TViewFrom, TViewTo, TViewModel>(this IUnityContainer container)
        where TViewTo : FrameworkElement, TViewFrom
    {
        container.RegisterType<TViewFrom>(new InjectionFactory(
            c =>
            {
                var model = c.Resolve<TViewModel>();
                var view = Activator.CreateInstance<TViewTo>();
                view.DataContext = model;
                return view;
            }
        ));
    }
}

And here is a sample of how it could be used:

var unityContainer = new UnityContainer();
unityContainer.RegisterView<IFooView, FooView, FooViewModel>();
IFooView view = unityContainer.Resolve<IFooView>(); // view with injected viewmodel in its datacontext

Please tell me your preferred way to connect viewmodel and view.

    Read the article

  • Inside the Concurrent Collections: ConcurrentBag

    - by Simon Cooper
    Unlike the other concurrent collections, ConcurrentBag does not really have a non-concurrent analogy. As stated in the MSDN documentation, ConcurrentBag is optimised for the situation where the same thread is both producing and consuming items from the collection. We'll see how this is the case as we take a closer look. Again, I recommend you have ConcurrentBag open in a decompiler for reference. Thread Statics ConcurrentBag makes heavy use of thread statics - static variables marked with ThreadStaticAttribute. This is a special attribute that instructs the CLR to scope any values assigned to or read from the variable to the executing thread, not globally within the AppDomain. This means that if two different threads assign two different values to the same thread static variable, one value will not overwrite the other, and each thread will see the value they assigned to the variable, separately to any other thread. This is a very useful function that allows for ConcurrentBag's concurrency properties. You can think of a thread static variable: [ThreadStatic] private static int m_Value; as doing the same as: private static Dictionary<Thread, int> m_Values; where the executing thread's identity is used to automatically set and retrieve the corresponding value in the dictionary. In .NET 4, this usage of ThreadStaticAttribute is encapsulated in the ThreadLocal class. Lists of lists ConcurrentBag, at its core, operates as a linked list of linked lists: Each outer list node is an instance of ThreadLocalList, and each inner list node is an instance of Node. Each outer ThreadLocalList is owned by a particular thread, accessible through the thread local m_locals variable: private ThreadLocal<ThreadLocalList<T>> m_locals It is important to note that, although the m_locals variable is thread-local, that only applies to accesses through that variable. The objects referenced by the thread (each instance of the ThreadLocalList object) are normal heap objects that are not specific to any thread. Thinking back to the Dictionary analogy above, if each value stored in the dictionary could be accessed by other means, then any thread could access the value belonging to other threads using that mechanism. Only reads and writes to the variable defined as thread-local are re-routed by the CLR according to the executing thread's identity. So, although m_locals is defined as thread-local, the m_headList, m_nextList and m_tailList variables aren't. This means that any thread can access all the thread local lists in the collection by doing a linear search through the outer linked list defined by these variables. Adding items So, onto the collection operations. First, adding items. This one's pretty simple. If the current thread doesn't already own an instance of ThreadLocalList, then one is created (or, if there are lists owned by threads that have stopped, it takes control of one of those). Then the item is added to the head of that thread's list. That's it. Don't worry, it'll get more complicated when we account for the other operations on the list! Taking & Peeking items This is where it gets tricky. If the current thread's list has items in it, then it peeks or removes the head item (not the tail item) from the local list and returns that. However, if the local list is empty, it has to go and steal another item from another list, belonging to a different thread. 
It iterates through all the thread local lists in the collection using the m_headList and m_nextList variables until it finds one that has items in it, and it steals one item from that list. Up to this point, the two threads had been operating completely independently. To steal an item from another thread's list, the stealing thread has to do it in such a way as to not step on the owning thread's toes. Recall how adding and removing items both operate on the head of the thread's linked list? That gives us an easy way out - a thread trying to steal items from another thread can pop in round the back of another thread's list using the m_tail variable, and steal an item from the back without the owning thread knowing anything about it. The owning thread can carry on completely independently, unaware that one of its items has been nicked. However, this only works when there are at least 3 items in the list, as that guarantees there will be at least one node between the owning thread performing operations on the list head and the thread stealing items from the tail - there's no chance of the two threads operating on the same node at the same time and causing a race condition. If there's less than three items in the list, then there does need to be some synchronization between the two threads. In this case, the lock on the ThreadLocalList object is used to mediate access to a thread's list when there's the possibility of contention. Thread synchronization In ConcurrentBag, this is done using several mechanisms: Operations performed by the owner thread only take out the lock when there are less than three items in the collection. With three or greater items, there won't be any conflict with a stealing thread operating on the tail of the list. If a lock isn't taken out, the owning thread sets the list's m_currentOp variable to a non-zero value for the duration of the operation. This indicates to all other threads that there is a non-locked operation currently occuring on that list. The stealing thread always takes out the lock, to prevent two threads trying to steal from the same list at the same time. After taking out the lock, the stealing thread spinwaits until m_currentOp has been set to zero before actually performing the steal. This ensures there won't be a conflict with the owning thread when the number of items in the list is on the 2-3 item borderline. If any add or remove operations are started in the meantime, and the list is below 3 items, those operations try to take out the list's lock and are blocked until the stealing thread has finished. This allows a thread to steal an item from another thread's list without corrupting it. What about synchronization in the collection as a whole? Collection synchronization Any thread that operates on the collection's global structure (accessing anything outside the thread local lists) has to take out the collection's global lock - m_globalListsLock. This single lock is sufficient when adding a new thread local list, as the items inside each thread's list are unaffected. However, what about operations (such as Count or ToArray) that need to access every item in the collection? In order to ensure a consistent view, all operations on the collection are stopped while the count or ToArray is performed. This is done by freezing the bag at the start, performing the global operation, and unfreezing at the end: The global lock is taken out, to prevent structural alterations to the collection. m_needSync is set to true. 
This notifies all the threads that they need to take out their list's lock irregardless of what operation they're doing. All the list locks are taken out in order. This blocks all locking operations on the lists. The freezing thread waits for all current lockless operations to finish by spinwaiting on each m_currentOp field. The global operation can then be performed while the bag is frozen, but no other operations can take place at the same time, as all other threads are blocked on a list's lock. Then, once the global operation has finished, the locks are released, m_needSync is unset, and normal concurrent operation resumes. Concurrent principles That's the essence of how ConcurrentBag operates. Each thread operates independently on its own local list, except when they have to steal items from another list. When stealing, only the stealing thread is forced to take out the lock; the owning thread only has to when there is the possibility of contention. And a global lock controls accesses to the structure of the collection outside the thread lists. Operations affecting the entire collection take out all locks in the collection to freeze the contents at a single point in time. So, what principles can we extract here? Threads operate independently Thread-static variables and ThreadLocal makes this easy. Threads operate entirely concurrently on their own structures; only when they need to grab data from another thread is there any thread contention. Minimised lock-taking Even when two threads need to operate on the same data structures (one thread stealing from another), they do so in such a way such that the probability of actually blocking on a lock is minimised; the owning thread always operates on the head of the list, and the stealing thread always operates on the tail. Management of lockless operations Any operations that don't take out a lock still have a 'hook' to force them to lock when necessary. This allows all operations on the collection to be stopped temporarily while a global snapshot is taken. Hopefully, such operations will be short-lived and infrequent. That's all the concurrent collections covered. I hope you've found it as informative and interesting as I have. Next, I'll be taking a closer look at ThreadLocal, which I came across while analyzing ConcurrentBag. As you'll see, the operation of this class deserves a much closer look.
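As a small aside to the thread-static discussion above, here is a stand-alone sketch (not from the article) showing that each thread really does get its own slot, both with [ThreadStatic] and with the .NET 4 ThreadLocal wrapper:

using System;
using System.Threading;

class ThreadLocalDemo
{
    [ThreadStatic]
    private static int _threadStaticValue;             // one slot per thread, default 0 on each

    private static ThreadLocal<int> _threadLocalValue =
        new ThreadLocal<int>(() => 42);                 // initializer runs once per thread

    static void Main()
    {
        Thread t = new Thread(() =>
        {
            _threadStaticValue = 1;
            Console.WriteLine("worker: {0}, {1}", _threadStaticValue, _threadLocalValue.Value);
        });
        t.Start();
        t.Join();

        // the worker's assignment is invisible here - this thread has its own slots
        Console.WriteLine("main:   {0}, {1}", _threadStaticValue, _threadLocalValue.Value);
        // prints: worker: 1, 42
        //         main:   0, 42
    }
}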

    Read the article

  • Dual Nic, one keeps dropping

    - by user1215018
I'm running Windows Server 2008 R2 on a Dell PowerEdge 2850. I have 2 NICs: one is configured behind a firewall with a DHCP server on the main local LAN, and the other has its own dedicated connection to one of our 13 static IPs. So in a nutshell we have 2 of our static IPs going to this server, one indirectly through a firewall/DHCP server, and the other directly. I am trying to reach IIS on port 80 and port 443. The problem is that the NIC with the direct connection (NIC2) keeps dropping and says either "No internet connection" or "Unauthenticated". However, the NIC behind the firewall (NIC1) has no problems at all. Update: This is the second time this has happened in 3 days, and each time the fix has been enabling the DHCP client on the NIC, allowing it to error out to a 169.x.x.x address, then re-enabling the NIC with its static IP assignment.
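A guess only, but a common cause on 2008 R2 with two NICs is having a default gateway configured on both interfaces: NLA then fails to classify the second one, which shows up exactly as "Unidentified"/"Unauthenticated" and keeps flapping. Something worth trying (a sketch; the interface name, addresses and subnets are placeholders) is to keep the default gateway on one NIC only and give the other a plain address, plus explicit routes if anything beyond its local subnet must be reached through it:

rem give NIC2 its static address and mask only, with no default gateway
netsh interface ipv4 set address name="NIC2" static 203.0.113.10 255.255.255.240

rem add a persistent route for anything that must leave via NIC2's upstream router
route -p add 198.51.100.0 mask 255.255.255.0 203.0.113.1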

    Read the article

  • DocumentDB - Another Azure NoSQL Storage Service

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/08/25/documentdb---another-azure-nosql-storage-service.aspxMicrosoft just released a bunch of new features for Azure on 22nd and one of them I was interested in most is DocumentDB, a document NoSQL database service on the cloud.   Quick Look at DocumentDB We can try DocumentDB from the new azure preview portal. Just click the NEW button and select the item named DocumentDB to create a new account. Specify the name of the DocumentDB, which will be the endpoint we are going to use to connect later. Select the capacity unit, resource group and subscription. In resource group section we can select which region our DocumentDB will be located. Same as other azure services select the same location with your consumers of the DocumentDB, for example the website, web services, etc.. After several minutes the DocumentDB will be ready. Click the KEYS button we can find the URI and primary key, which will be used when connecting. Now let's open Visual Studio and try to use the DocumentDB we had just created. Create a new console application and install the DocumentDB .NET client library from NuGet with the keyword "DocumentDB". You need to select "Include Prerelase" in NuGet Package Manager window since this library was not yet released. Next we will create a new database and document collection under our DocumentDB account. The code below created an instance of DocumentClient with the URI and primary key we just copied from azure portal, and create a database and collection. And it also prints the document and collection link string which will be used later to insert and query documents. 1: static void Main(string[] args) 2: { 3: var endpoint = new Uri("https://shx.documents.azure.com:443/"); 4: var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA=="; 5:  6: var client = new DocumentClient(endpoint, key); 7: Run(client).Wait(); 8:  9: Console.WriteLine("done"); 10: Console.ReadKey(); 11: } 12:  13: static async Task Run(DocumentClient client) 14: { 15:  16: var database = new Database() { Id = "testdb" }; 17: database = await client.CreateDatabaseAsync(database); 18: Console.WriteLine("database link = {0}", database.SelfLink); 19:  20: var collection = new DocumentCollection() { Id = "testcol" }; 21: collection = await client.CreateDocumentCollectionAsync(database.SelfLink, collection); 22: Console.WriteLine("collection link = {0}", collection.SelfLink); 23: } Below is the result from the console window. We need to copy the collection link string for future usage. Now if we back to the portal we will find a database was listed with the name we specified in the code. Next we will insert a document into the database and collection we had just created. In the code below we pasted the collection link which copied in previous step, create a dynamic object with several properties defined. As you can see we can add some normal properties contains string, integer, we can also add complex property for example an array, a dictionary and an object reference, unless they can be serialized to JSON. 
1: static void Main(string[] args) 2: { 3: var endpoint = new Uri("https://shx.documents.azure.com:443/"); 4: var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA=="; 5:  6: var client = new DocumentClient(endpoint, key); 7:  8: // collection link pasted from the result in previous demo 9: var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/"; 10:  11: // document we are going to insert to database 12: dynamic doc = new ExpandoObject(); 13: doc.firstName = "Shaun"; 14: doc.lastName = "Xu"; 15: doc.roles = new string[] { "developer", "trainer", "presenter", "father" }; 16:  17: // insert the docuemnt 18: InsertADoc(client, collectionLink, doc).Wait(); 19:  20: Console.WriteLine("done"); 21: Console.ReadKey(); 22: } the insert code will be very simple as below, just provide the collection link and the object we are going to insert. 1: static async Task InsertADoc(DocumentClient client, string collectionLink, dynamic doc) 2: { 3: var document = await client.CreateDocumentAsync(collectionLink, doc); 4: Console.WriteLine(await JsonConvert.SerializeObjectAsync(document, Formatting.Indented)); 5: } Below is the result after the object had been inserted. Finally we will query the document from the database and collection. Similar to the insert code, we just need to specify the collection link so that the .NET SDK will help us to retrieve all documents in it. 1: static void Main(string[] args) 2: { 3: var endpoint = new Uri("https://shx.documents.azure.com:443/"); 4: var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA=="; 5:  6: var client = new DocumentClient(endpoint, key); 7:  8: var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/"; 9:  10: SelectDocs(client, collectionLink); 11:  12: Console.WriteLine("done"); 13: Console.ReadKey(); 14: } 15:  16: static void SelectDocs(DocumentClient client, string collectionLink) 17: { 18: var docs = client.CreateDocumentQuery(collectionLink + "docs/").ToList(); 19: foreach(var doc in docs) 20: { 21: Console.WriteLine(doc); 22: } 23: } Since there's only one document in my collection below is the result when I executed the code. As you can see all properties, includes the array was retrieve at the same time. DocumentDB also attached some properties we didn't specified such as "_rid", "_ts", "_self" etc., which is controlled by the service.   DocumentDB Benefit DocumentDB is a document NoSQL database service. Different from the traditional database, document database is truly schema-free. In a short nut, you can save anything in the same database and collection if it could be serialized to JSON. We you query the document database, all sub documents will be retrieved at the same time. This means you don't need to join other tables when using a traditional database. Document database is very useful when we build some high performance system with hierarchical data structure. For example, assuming we need to build a blog system, there will be many blog posts and each of them contains the content and comments. The comment can be commented as well. If we were using traditional database, let's say SQL Server, the database schema might be defined as below. When we need to display a post we need to load the post content from the Posts table, as well as the comments from the Comments table. We also need to build the comment tree based on the CommentID field. But if were using DocumentDB, what we need to do is to save the post as a document with a list contains all comments. 
Under a comment all sub comments will be a list in it. When we display this post we just need to to query the post document, the content and all comments will be loaded in proper structure. 1: { 2: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 3: "title": "xxxxx", 4: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 5: "postedOn": "08/25/2014 13:55", 6: "comments": 7: [ 8: { 9: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 10: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 11: "commentedOn": "08/25/2014 14:00", 12: "commentedBy": "xxx" 13: }, 14: { 15: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 16: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 17: "commentedOn": "08/25/2014 14:10", 18: "commentedBy": "xxx", 19: "comments": 20: [ 21: { 22: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 23: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 24: "commentedOn": "08/25/2014 14:18", 25: "commentedBy": "xxx", 26: "comments": 27: [ 28: { 29: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 30: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 31: "commentedOn": "08/25/2014 18:22", 32: "commentedBy": "xxx", 33: } 34: ] 35: }, 36: { 37: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 38: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 39: "commentedOn": "08/25/2014 15:02", 40: "commentedBy": "xxx", 41: } 42: ] 43: }, 44: { 45: "id": "xxxxx-xxxxx-xxxxx-xxxxx", 46: "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.", 47: "commentedOn": "08/25/2014 14:30", 48: "commentedBy": "xxx" 49: } 50: ] 51: }   DocumentDB vs. Table Storage DocumentDB and Table Storage are all NoSQL service in Microsoft Azure. One common question is "when we should use DocumentDB rather than Table Storage". Here are some ideas from me and some MVPs. First of all, they are different kind of NoSQL database. DocumentDB is a document database while table storage is a key-value database. Second, table storage is cheaper. DocumentDB supports scale out from one capacity unit to 5 in preview period and each capacity unit provides 10GB local SSD storage. The price is $0.73/day includes 50% discount. For storage service the highest price is $0.061/GB, which is almost 10% of DocumentDB. Third, table storage provides local-replication, geo-replication, read access geo-replication while DocumentDB doesn't support. Fourth, there is local emulator for table storage but none for DocumentDB. We have to connect to the DocumentDB on cloud when developing locally. But, DocumentDB supports some cool features that table storage doesn't have. It supports store procedure, trigger and user-defined-function. It supports rich indexing while table storage only supports indexing against partition key and row key. It supports transaction, table storage supports as well but restricted with Entity Group Transaction scope. And the last, table storage is GA but DocumentDB is still in preview.   Summary In this post I have a quick demonstration and introduction about the new DocumentDB service in Azure. It's very easy to interact through .NET and it also support REST API, Node.js SDK and Python SDK. Then I explained the concept and benefit of  using document database, then compared with table storage.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
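As a small follow-up to the query section above: besides pulling back every document in the collection, the same preview SDK accepts a SQL-like filter string. This is a rough sketch against that preview API (the collection link is the one copied earlier, and the property name matches the document inserted in the demo):

1: static void SelectByFirstName(DocumentClient client, string collectionLink, string firstName)
2: {
3:     // SQL-style query over the documents feed of the collection
4:     var docs = client.CreateDocumentQuery(collectionLink + "docs/",
5:         string.Format("SELECT * FROM docs d WHERE d.firstName = '{0}'", firstName)).ToList();
6: 
7:     foreach (var doc in docs)
8:     {
9:         Console.WriteLine(doc);
10:     }
11: }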

    Read the article

  • KVM-Guests can't get past bridge - no internet connection

    - by tmn29a
I'm running a backported KVM on Debian Squeeze. At the moment the KVM guests can't connect to the internet through the bridge I have set up. The guests can reach each other and the host, but nothing outside. I can neither ping, nslookup nor do anything else to a remote address. The guests are configured with static IPs. When I didn't have the bridge but the virtual bridge (the KVM default), the guests could connect fine. After setting up the bridge things broke, so I think the problem lies there.

# The loopback network interface
auto lo br0
iface lo inet loopback

# Bonding Interface
auto bond0
iface bond0 inet static
    address 10.XXX.XXX.84
    netmask 255.255.255.192
    network 10.XXX.XXX.64
    gateway 10.XXX.XXX.65
    slaves eth0 eth1
    bond_mode active-backup
    bond_miimon 100
    bond_downdelay 200
    bond_updelay 200

iface br0 inet static
    bridge_ports eth0 eth1
    address 172.xxx.xxx.65
    broadcast 172.xxx.xxx.127
    netmask 255.255.255.192
    gateway 172.xxx.xxx.65
    bridge_stp on
    bridge_maxwait 0

Thanks in advance for your help!
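For comparison, the layout that normally works for this kind of setup is to enslave the physical NICs only to the bond and put the bridge on top of the bond, with a single default gateway on the host and the guests' static IPs taken from the bridged subnet. A sketch of /etc/network/interfaces along those lines (a suggestion only; keep whichever of the two subnets the guests are actually supposed to live on, and note the gateway must be the router's address, not the bridge's own):

auto lo
iface lo inet loopback

auto bond0
iface bond0 inet manual
    slaves eth0 eth1
    bond_mode active-backup
    bond_miimon 100
    bond_downdelay 200
    bond_updelay 200

auto br0
iface br0 inet static
    address 10.XXX.XXX.84
    netmask 255.255.255.192
    gateway 10.XXX.XXX.65
    bridge_ports bond0
    bridge_stp off
    bridge_maxwait 0

In the file quoted in the question, eth0 and eth1 are listed both as bond slaves and as bridge_ports, and br0's gateway is set to br0's own address; either of those on its own is enough to break forwarding for the guests.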

    Read the article

  • Django Dying on Shared Hosting Environment (Too Many MySQL Connections)

    - by Tom
    I've had a Django site up and running on HostGator (client requirement), following these instructions, for a few weeks now. I had seen two error emails about pages dying with (1040: Too many MySQL connections) but had never been able to recreate the problem. As of today, the site is completely unresponsive and all pages, even the static files, are dying with that error. Two questions: What can I do to fix this (other than caching more stuff)? Why would static files be dying like that? I can request them directly without a problem, so how are they getting run through Django? The shared hosting setup doesn't allow for a <Location> block, but there's a flag in the rewrite rule that says only requests for files that don't exist in the filesystem should be processed. All of my static files exist on the system, though they are symbolically linked files if it matters.
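Two guesses, based on the usual HostGator FastCGI layout such instructions describe: every spawned FCGI process opens its own MySQL connection, so a burst of processes can hit the account's connection cap all at once (capping the process count, e.g. the maxchildren option on the flup/runfcgi side, usually tames it), and the static files are probably only "dying" because their requests are falling through the rewrite into the crashed FCGI application instead of being served from disk. The .htaccess guard normally looks something like this (file names are placeholders; compare it with what is actually deployed):

RewriteEngine On
# anything that exists on disk is served directly and never reaches Django
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ mysite.fcgi/$1 [QSA,L]

Since the static files are symlinks, FollowSymLinks (or SymLinksIfOwnerMatch) also has to be enabled for that directory; without it, the -f test and the direct serving can behave differently than expected.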

    Read the article

  • C# 4.0: Covariance And Contravariance In Generics

    - by Paulo Morgado
C# 4.0 (and .NET 4.0) introduced covariance and contravariance to generic interfaces and delegates. But what is this variance thing? According to Wikipedia, in multilinear algebra and tensor analysis, covariance and contravariance describe how the quantitative description of certain geometrical or physical entities changes when passing from one coordinate system to another.(*) But what does this have to do with C# or .NET? In type theory, the type T is greater (≥) than the type S if S is a subtype of (derives from) T, which means that there is a quantitative description for types in a type hierarchy. So, how do covariance and contravariance apply to C# (and .NET) generic types? In C# (and .NET), variance applies to generic type parameters and not to the resulting generic type. A generic type parameter is:

covariant if the ordering of the generic types follows the ordering of the generic type parameters: Generic<T> ≥ Generic<S> for T ≥ S.

contravariant if the ordering of the generic types is reversed from the ordering of the generic type parameters: Generic<T> ≤ Generic<S> for T ≥ S.

invariant if neither of the above apply.

If this definition is applied to arrays, we can see that arrays have always been covariant because this is valid code: object[] objectArray = new string[] { "string 1", "string 2" }; objectArray[0] = "string 3"; objectArray[1] = new object(); However, when we try to run this code, the second assignment will throw an ArrayTypeMismatchException. Although the compiler was fooled into thinking this was valid code because an object is being assigned to an element of an array of object, at run time, there is always a type check to guarantee that the runtime type of the definition of the elements of the array is greater or equal to the instance being assigned to the element. In the above example, because the runtime type of the array is array of string, the first assignment of array elements is valid because string ≥ string, and the second is invalid because string ≥ object does not hold. This leads to the conclusion that, although arrays have always been covariant, they are not safely covariant – code that compiles is not guaranteed to run without errors. In C#, the way to define a generic type parameter as covariant is using the out generic modifier: public interface IEnumerable<out T> { IEnumerator<T> GetEnumerator(); } public interface IEnumerator<out T> { T Current { get; } bool MoveNext(); } Notice the convenient use of the pre-existing out keyword. Besides the benefit of not having to remember a new hypothetical covariant keyword, out is easier to remember because it defines that the generic type parameter can only appear in output positions — read-only properties and method return values. In a similar way, the way to define a type parameter as contravariant is using the in generic modifier: public interface IComparer<in T> { int Compare(T x, T y); } Once again, the use of the pre-existing in keyword makes it easier to remember that the generic type parameter can only be used in input positions — write-only properties and method non ref and non out parameters. Because covariance and contravariance apply only to the generic type parameters, a generic type definition can have both covariant and contravariant generic type parameters in its definition: public delegate TResult Func<in T, out TResult>(T arg); A generic type parameter that is not marked covariant (out) or contravariant (in) is invariant.
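Before moving on, a short sketch of what these annotations allow in practice (using the variant BCL interfaces shown above; this compiles under C# 4.0):

using System.Collections.Generic;

class VarianceExamples
{
    static void Demo()
    {
        // covariance: IEnumerable<out T>, so an IEnumerable<string> can be used as an IEnumerable<object>
        IEnumerable<string> strings = new List<string> { "string 1", "string 2" };
        IEnumerable<object> objects = strings;               // compiles, and is statically safe

        // contravariance: IComparer<in T>, so a comparer of objects can stand in for a comparer of strings
        IComparer<object> objectComparer = Comparer<object>.Default;
        IComparer<string> stringComparer = objectComparer;   // compiles: T only flows in

        // invariance: List<T> declares no variance, so the line below would not compile
        // List<object> invalid = new List<string>();
    }
}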
All the types in the .NET Framework where variance could be applied to its generic type parameters have been modified to take advantage of this new feature. In summary, the rules for variance in C# (and .NET) are: Variance in type parameters are restricted to generic interface and generic delegate types. A generic interface or generic delegate type can have both covariant and contravariant type parameters. Variance applies only to reference types; if you specify a value type for a variant type parameter, that type parameter is invariant for the resulting constructed type. Variance does not apply to delegate combination. That is, given two delegates of types Action<Derived> and Action<Base>, you cannot combine the second delegate with the first although the result would be type safe. Variance allows the second delegate to be assigned to a variable of type Action<Derived>, but delegates can combine only if their types match exactly. If you want to learn more about variance in C# (and .NET), you can always read: Covariance and Contravariance in Generics — MSDN Library Exact rules for variance validity — Eric Lippert Events get a little overhaul in C# 4, Afterward: Effective Events — Chris Burrows Note: Because variance is a feature of .NET 4.0 and not only of C# 4.0, all this also applies to Visual Basic 10.

    Read the article

  • Cannot Access Web Interface on HP 2510G

    - by Stephen
I am currently setting up a new infrastructure with HP 2510s as edge switches and an HP E5406 as the main switch. I also have a DHCP and DNS server running on the same network. When I first set up one of my 2510 switches, I gave it a static IP through the console and then went to the web interface to continue my configuration. Later, I realized that I had assigned it the wrong IP address, so I went through the web interface and changed the IP address to the correct one. Now, I can't access the web interface. I can telnet to the switch on the new IP address, but the web interface will not load. If I switch from static IP to DHCP, it loads the web interface. Any ideas on what could be causing the web server in the 2510 not to load with the new static IP address?
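A guess from similar ProCurve setups: if the corrected address sits in a different subnet from the management station, the switch may simply be missing a matching default gateway after the change, and going back to DHCP works because the gateway comes along with the lease. Worth double-checking from the CLI (commands as on a 2510; addresses are placeholders):

configure
vlan 1
   ip address 192.0.2.10 255.255.255.0
   exit
ip default-gateway 192.0.2.1
web-management plaintext
write memory
show ip

If the gateway already looks right in show ip, trying the bare IP over http:// and https:// from a host in the same subnet at least narrows it down to the switch's web server rather than routing.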

    Read the article

< Previous Page | 184 185 186 187 188 189 190 191 192 193 194 195  | Next Page >