Search Results

Search found 51790 results on 2072 pages for 'long running'.


  • Video Recording Not Working in ICS

    - by Nirav Ranpara
    I have implement code Record video in Android Phone . This code is working in 2.2 , 2.3 . not in ICS But when I checked in ICS code is not working ? here I posted code and xml file. videorecord.java import java.io.File; import java.io.IOException; import android.app.Activity; import android.app.AlertDialog; import android.content.Context; import android.content.DialogInterface; import android.content.Intent; import android.content.SharedPreferences; import android.hardware.Camera; import android.media.CamcorderProfile; import android.media.MediaRecorder; import android.os.Bundle; import android.os.CountDownTimer; import android.os.Environment; import android.util.Log; import android.view.Display; import android.view.KeyEvent; import android.view.SurfaceHolder; import android.view.SurfaceView; import android.view.View; import android.widget.EditText; import android.widget.FrameLayout; import android.widget.ImageView; import android.widget.LinearLayout; import android.widget.TextView; import android.widget.Toast; public class videorecord extends Activity{ SharedPreferences.Editor pre; String filename; CountDownTimer t; private Camera myCamera; private MyCameraSurfaceView myCameraSurfaceView; private MediaRecorder mediaRecorder; Integer cnt=0; LinearLayout myButton; TextView myButton1; SurfaceHolder surfaceHolder; boolean recording; private TextView txtcount; private ImageView btnplay; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); recording = false; setContentView(R.layout.videorecord); init(); myCamera = getCameraInstance(); if(myCamera == null){ } myCameraSurfaceView = new MyCameraSurfaceView(this, myCamera); FrameLayout myCameraPreview = (FrameLayout)findViewById(R.id.videoview); Display display = getWindowManager().getDefaultDisplay(); int width = display.getWidth(); int height = display.getHeight(); myCameraSurfaceView.setLayoutParams(new LinearLayout.LayoutParams(width, height-60)); myCameraPreview.addView(myCameraSurfaceView); myButton = (LinearLayout)findViewById(R.id.mybutton); btnplay.setOnClickListener(myButtonOnClickListener); } private void init() { txtcount = (TextView) findViewById(R.id.txtcounter); //myButton1 = (TextView) findViewById(R.id.mybutton1); btnplay = (ImageView)findViewById(R.id.btnplay); t = new CountDownTimer( Long.MAX_VALUE , 1000) { @Override public void onTick(long millisUntilFinished) { cnt++; String time = new Integer(cnt).toString(); long millis = cnt; int seconds = (int) (millis / 60); int minutes = seconds / 60; seconds = seconds % 60; txtcount.setText(String.format("%d:%02d:%02d", minutes, seconds,millis)); } @Override public void onFinish() { } }; } @Override public boolean onKeyDown(int keyCode, KeyEvent event) { if ((keyCode == KeyEvent.KEYCODE_BACK)) { if(recording) { new AlertDialog.Builder(videorecord.this).setTitle("Do you want to save Video ?") .setPositiveButton("OK", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int which) { filename(); //finish(); } }).setNegativeButton("Cancle", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int which) { // TODO Auto-generated method stub } }).show(); } else { if ((keyCode == KeyEvent.KEYCODE_BACK)) { //Intent homeIntent= new Intent(Intent.ACTION_MAIN); //homeIntent.addCategory(Intent.CATEGORY_HOME); //homeIntent.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP); //startActivity(homeIntent); //this.finishActivity(1); finish(); } //moveTaskToBack(true); // finish(); return 
super.onKeyDown(keyCode, event); } } else { // Toast.makeText(getApplicationContext(), "asd", Toast.LENGTH_LONG).show(); android.os.Process.killProcess(android.os.Process.myPid()) ; } return super.onKeyDown(keyCode, event); } ImageView.OnClickListener myButtonOnClickListener = new ImageView.OnClickListener(){ public void onClick(View v) { if(recording){ Log.e("Record error", "error in recording ."); mediaRecorder.stop(); t.cancel(); filename(); releaseMediaRecorder(); }else{ releaseCamera(); Log.e("Record Stop error", "error in recording ."); // if(!prepareMediaRecorder()){ prepareMediaRecorder(); finish(); } mediaRecorder.start(); recording = true; // myButton1.setText("STOP Recording"); // btnplay.setImageResource(android.R.drawable.ic_media_pause); btnplay.setImageResource(R.drawable.stoprec); t.start(); } }}; private Camera getCameraInstance(){ Camera c = null; try { c = Camera.open(); } catch (Exception e){ } return c; } private void filename() { AlertDialog.Builder alert = new AlertDialog.Builder(this); alert.setTitle("Save Video"); alert.setMessage("Enter File Name"); final EditText input = new EditText(this); alert.setView(input); alert.setPositiveButton("Ok", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int whichButton) { if(input.getText().length()>=1) { filename = input.getText().toString(); File sdcard = new File(Environment.getExternalStorageDirectory() + "/VideoRecord"); File from = new File(sdcard,"null.mp4"); File to = new File(sdcard,filename+".mp4"); from.renameTo(to); SharedPreferences sp = videorecord.this.getSharedPreferences("data", MODE_WORLD_WRITEABLE); pre = sp.edit(); pre.clear(); pre.commit(); pre.putString("lastvideo", filename+".mp4"); pre.commit(); //btnplay.setImageResource(android.R.drawable.ic_media_play); btnplay.setImageResource(R.drawable.startrec); // Intent intent = new Intent(videorecord.this,StopVidoWatch_Activity.class); // startActivity(intent); Intent myIntent = new Intent(getApplicationContext(), StopVidoWatch_Activity.class).setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP); startActivity(myIntent); } else { filename(); } } }); alert.setNegativeButton("Cancel", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int whichButton) { // Intent intent = new Intent(videorecord.this,StopVidoWatch_Activity.class); // startActivity(intent); File file = new File(Environment.getExternalStorageDirectory() + "/VideoRecord/null.mp4"); //boolean deleted = file.delete(); file.delete(); finish(); } }); alert.show(); } private boolean prepareMediaRecorder(){ myCamera = getCameraInstance(); mediaRecorder = new MediaRecorder(); myCamera.unlock(); mediaRecorder.setCamera(myCamera); mediaRecorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER); mediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA); mediaRecorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH)); File folder = new File(Environment.getExternalStorageDirectory() + "/VideoRecord"); boolean success = false; if (!folder.exists()) { success = folder.mkdir(); } if (!success) { } else { } mediaRecorder.setOutputFile("/sdcard/VideoRecord/"+filename+".mp4"); mediaRecorder.setMaxDuration(60000); mediaRecorder.setMaxFileSize(5000000); Display display = getWindowManager().getDefaultDisplay(); int width = display.getHeight(); int height = display.getWidth(); String s = new String(); s= s.valueOf(width); String s1 = new String(); s1= s1.valueOf(height); // Toast.makeText(videorecord.this, "Width : " + s , 
Toast.LENGTH_LONG).show(); // Toast.makeText(videorecord.this, "Height : " + s1 , Toast.LENGTH_LONG).show(); mediaRecorder.setVideoSize(height, width); mediaRecorder.setPreviewDisplay(myCameraSurfaceView.getHolder().getSurface()); try { mediaRecorder.prepare(); } catch (IllegalStateException e) { releaseMediaRecorder(); return false; } catch (IOException e) { releaseMediaRecorder(); return false; } return true; } @Override protected void onPause() { super.onPause(); releaseMediaRecorder(); releaseCamera(); } private void releaseMediaRecorder() { if (mediaRecorder != null) { mediaRecorder.reset(); mediaRecorder.release(); mediaRecorder = null; myCamera.lock(); } } private void releaseCamera(){ if (myCamera != null){ myCamera.release(); myCamera = null; } } public class MyCameraSurfaceView extends SurfaceView implements SurfaceHolder.Callback{ private SurfaceHolder mHolder; private Camera mCamera; public MyCameraSurfaceView(Context context, Camera camera) { super(context); mCamera = camera; mHolder = getHolder(); mHolder.addCallback(this); mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS); } public void surfaceChanged(SurfaceHolder holder, int format, int weight, int height) { if (mHolder.getSurface() == null){ return; } try { mCamera.stopPreview(); } catch (Exception e){ } try { mCamera.setPreviewDisplay(mHolder); mCamera.startPreview(); } catch (Exception e){ } } public void surfaceCreated(SurfaceHolder holder) { try { mCamera.setPreviewDisplay(holder); mCamera.startPreview(); } catch (IOException e) { } } public void surfaceDestroyed(SurfaceHolder holder) { } } } videorecord.xml <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent" > <FrameLayout android:layout_width="fill_parent" android:layout_height="fill_parent" > <FrameLayout android:id="@+id/videoview" android:layout_width="fill_parent" android:layout_height="fill_parent"></FrameLayout> <LinearLayout android:id="@+id/mybutton" android:layout_width="fill_parent" android:layout_marginBottom="0dip" android:layout_height="wrap_content" android:orientation="horizontal" android:layout_weight="0" > <!-- <TextView android:text="START Recording" android:id="@+id/mybutton1" android:layout_height="wrap_content" android:layout_width="wrap_content" style="@style/savestyle" android:layout_weight="1" android:gravity="left" > </TextView> --> <ImageView android:layout_height="wrap_content" android:id="@+id/btnplay" android:padding="5dip" android:background="#A0000000" android:textColor="#ffffffff" android:layout_width="wrap_content" android:src="@drawable/startrec" /> </LinearLayout> <TextView android:text="00:00:00" android:id="@+id/txtcounter" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="right|bottom" android:padding="5dip" android:background="#A0000000" android:textColor="#ffffffff" /> </FrameLayout> <RelativeLayout android:layout_width="fill_parent" android:layout_height="fill_parent" android:background="@color/bgcolor" > <LinearLayout android:layout_above="@+id/mybutton" android:orientation="horizontal" android:layout_width="fill_parent" android:layout_height="fill_parent" > </LinearLayout> </RelativeLayout> </LinearLayout>

    Read the article

  • SQL SERVER – Guest Posts – Feodor Georgiev – The Context of Our Database Environment – Going Beyond the Internal SQL Server Waits – Wait Type – Day 21 of 28

    - by pinaldave
    This guest post is submitted by Feodor. Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. In this article Feodor explains the server-client-server process, and concentrates on the mutual waits between client and SQL Server. This is essential in grasping the concept of waits in a ‘global’ application plan. Recently I was asked to write a blog post about the wait statistics in SQL Server and since I had been thinking about writing it for quite some time now, here it is. It is a widespread idea that the wait statistics in SQL Server will tell you everything about your performance. Well, almost. Or should I say – barely. The reason for this is that SQL Server is always a part of a bigger system – there are always other players in the game: whether it is a client application, web service, any other kind of data import/export process and so on. In short, the SQL Server surroundings look like this: This means that SQL Server, aside from its internal waits, also depends on external waits and settings. As we can see in the picture above, SQL Server needs to have an interface in order to communicate with the surrounding clients over the network. For this communication, SQL Server uses protocol interfaces. I will not go into detail about which protocols are best, but you can read this article. Also, review the information about the TDS (Tabular Data Stream). As we all know, our system is only as fast as its slowest component. This means that when we look at our environment as a whole, the SQL Server might be a victim of external pressure, no matter how well we have tuned our database server performance. Let’s dive into an example: let’s say that we have a web server, hosting a web application which is using data from our SQL Server, hosted on another server. The network card of the web server for some reason is malfunctioning (think of a hardware failure, driver failure, or just improper setup) and does not send/receive data faster than 10Mbps. On the other end, our SQL Server will not be able to send/receive data at a faster rate either. This means that the application users will notify the support team and will say: “My data is coming very slow.” Now, let’s move on to a bit more exciting example: imagine that there is a similar setup to the example above – one web server and one database server, and the application is not using any stored procedure calls, but instead for every user request the application is sending an 80kb query over the network to the SQL Server. (I really thought this did not happen in real life until I saw it one day.) So, what happens in this case? To make things worse, let’s say that the 80kb query text is submitted from the application to the SQL Server at least 100 times per minute, and as often as 300 times per minute in peak times. Here is what happens: in order for this query to reach the SQL Server, it will have to be broken into a number of network packets (according to the packet size settings) – and will travel over the network. 
On the other side, our SQL Server network card will receive the packets, will pass them to our network layer, the packets will get assembled, and eventually SQL Server will start processing the query – parsing, algebrizing, generating the query execution plan and so on. So far, we have already had a serious network overhead by waiting for the packets to reach our Database Engine. There will certainly be some processing overhead – until the database engine deals with the 80kb query and its 20 subqueries. The waits you see in the DMVs are actually collected from the point the query reaches the SQL Server and the packets are assembled. Let’s say that our query is processed and it finally returns 15000 rows. These rows have a certain size as well, depending on the data types returned. This means that the data will have to be converted to packets (depending on the network packet size settings) and will have to reach the application server. There will also be waits; however, this time you will be able to see a wait type in the DMVs called ASYNC_NETWORK_IO. What this wait type indicates is that the client is not consuming the data fast enough and the network buffers are filling up. Recently Pinal Dave posted a blog on Client Statistics. What Client Statistics does is capture the physical flow characteristics of the query between the client (Management Studio, in this case) and the server, and back to the client. As you see in the image, there are three categories: Query Profile Statistics, Network Statistics and Time Statistics. Number of server roundtrips – a roundtrip consists of a request sent to the server and a reply from the server to the client. For example, if your query has three select statements, and they are separated by the ‘GO’ command, then there will be three different roundtrips. TDS Packets sent from the client – TDS (Tabular Data Stream) is the language which SQL Server speaks, and in order for applications to communicate with SQL Server, they need to pack the requests in TDS packets. TDS Packets sent from the client is the number of packets sent from the client; in case the request is large, then it may need more buffers, and eventually might even need more server roundtrips. TDS packets received from server – is the number of TDS packets sent by the server to the client during the query execution. Bytes sent from client – is the volume of the data sent to our SQL Server, measured in bytes; i.e. how big a query we have sent to the SQL Server. This is why it is best to use stored procedures, since the reusable code (which already exists as an object in the SQL Server) will only be called as a name of procedure + parameters, and this will minimize the network pressure. Bytes received from server – is the amount of data the SQL Server has sent to the client, measured in bytes. Depending on the number of rows and the datatypes involved, this number will vary. But still, think about the network load when you request data from SQL Server. Client processing time – is the amount of time spent in milliseconds between the first received response packet and the last received response packet by the client. Wait time on server replies – is the time in milliseconds between the last request packet which left the client and the first response packet which came back from the server to the client. 
Total execution time – is the sum of client processing time and wait time on server replies (the SQL Server internal processing time) Here is an illustration of the Client-server communication model which should help you understand the mutual waits in a client-server environment. Keep in mind that a query with a large ‘wait time on server replies’ means the server took a long time to produce the very first row. This is usual on queries that have operators that need the entire sub-query to evaluate before they proceed (for example, sort and top operators). However, a query with a very short ‘wait time on server replies’ means that the query was able to return the first row fast. However a long ‘client processing time’ does not necessarily imply the client spent a lot of time processing and the server was blocked waiting on the client. It can simply mean that the server continued to return rows from the result and this is how long it took until the very last row was returned. The bottom line is that developers and DBAs should work together and think carefully of the resource utilization in the client-server environment. From experience I can say that so far I have seen only cases when the application developers and the Database developers are on their own and do not ask questions about the other party’s world. I would recommend using the Client Statistics tool during new development to track the performance of the queries, and also to find a synchronous way of utilizing resources between the client – server – client. Here is another example: think about similar setup as above, but add another server to the game. Let’s say that we keep our media on a separate server, and together with the data from our SQL Server we need to display some images on the webpage requested by our user. No matter how simple or complicated the logic to get the images is, if the images are 500kb each our users will get the page slowly and they will still think that there is something wrong with our data. Anyway, I don’t mean to get carried away too far from SQL Server. Instead, what I would like to say is that DBAs should also be aware of ‘the big picture’. I wrote a blog post a while back on this topic, and if you are interested, you can read it here about the big picture. And finally, here are some guidelines for monitoring the network performance and improving it: Run a trace and outline all queries that return more than 1000 rows (in Profiler you can actually filter and sort the captured trace by number of returned rows). This is not a set number; it is more of a guideline. The general thought is that no application user can consume that many rows at once. Ask yourself and your fellow-developers: ‘why?’. Monitor your network counters in Perfmon: Network Interface:Output queue length, Redirector:Network errors/sec, TCPv4: Segments retransmitted/sec and so on. Make sure to establish a good friendship with your network administrator (buy them coffee, for example J ) and get into a conversation about the network settings. Have them explain to you how the network cards are setup – are they standalone, are they ‘teamed’, what are the settings – full duplex and so on. Find some time to read a bit about networking. In this short blog post I hope I have turned your attention to ‘the big picture’ and the fact that there are other factors affecting our SQL Server, aside from its internal workings. 
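A quick way to check whether this kind of client/network pressure is showing up on your own server is to look at the cumulative figures for the ASYNC_NETWORK_IO wait described above. The following query is a minimal sketch (not part of the original post) against the standard sys.dm_os_wait_stats DMV:

    -- Cumulative since the last SQL Server restart (or a manual clear of the stats)
    SELECT wait_type,
           waiting_tasks_count,
           wait_time_ms,
           signal_wait_time_ms
    FROM   sys.dm_os_wait_stats
    WHERE  wait_type = 'ASYNC_NETWORK_IO';

A steadily growing wait_time_ms here usually points at the client or the network consuming results too slowly, not at the Database Engine itself.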
As a further reading I would still highly recommend the Wait Stats series on this blog, also I would recommend you have the coffee break conversation with your network admin as soon as possible. This guest post is written by Feodor Georgiev. Read all the post in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL

    Read the article

  • Ada and 'The Book'

    - by Phil Factor
    The long friendship between Charles Babbage and Ada Lovelace created one of the most exciting and mysterious of collaborations ever to have resulted in a technological breakthrough. The fireworks that created by the collision of two prodigious mathematical and creative talents resulted in an invention, the Analytical Engine, which went on to change society fundamentally. However, beyond that, we just don't know what the bulk of their collaborative work was about:;  it was done in strictest secrecy. Even the known outcome of their friendship, the first programmable computer, was shrouded in mystery. At the time, nobody, except close friends and family, had any idea of Ada Byron's contribution to the invention of the ‘Engine’, and how to program it. Her great insight was published in August 1843, under the initials AAL, standing for Ada Augusta Lovelace, her title then being the Countess of Lovelace. It was contained in a lengthy ‘note’ to her translation of a publication that remains the best description of Babbage's amazing Analytical Engine. The secret identity of the person behind those enigmatic initials was finally revealed by Prince de Polignac who, seventy years later, wrote to Ada's daughter to seek confirmation that her mother had, indeed, been the author of the brilliant sentences that described so accurately how Babbage's mechanical computer could be programmed with punch-cards. L.F. Menabrea's paper on the Analytical Engine first appeared in the 'Bibliotheque Universelle de Geneve' in October 1842, and Ada translated it anonymously for Taylor's 'Scientific Memoirs'. Charles Babbage was surprised that she had not written an original paper as she already knew a surprising amount about the way the machine worked. He persuaded her to at least write some explanatory notes. These notes ended up extending to four times the length of the original article and represented the first published account of how a machine could be programmed to perform any calculation. Her example of programming the Bernoulli sequence would have worked on the Analytical engine had the device’s construction been completed, and gave Ada an unassailable claim to have invented the art of programming. What was the reason for Ada's secrecy? She was the only legitimate child of Lord Byron, who was probably the best known celebrity of the age, so she was already famous. She was a senior aristocrat, with titles, a fortune in money and vast estates in the Midlands. She had political influence, and was the cousin of Lord Melbourne, who was the Prime Minister at that time. She was friendly with the young Queen Victoria. Her mathematical activities were a pastime, and not one that would be considered by others to be in keeping with her roles and responsibilities. You wouldn't dare to dream up a fictional heroine like Ada. She was dazzlingly beautiful and talented. She could speak several languages fluently, and play some musical instruments with professional skill. Contemporary accounts refer to her being 'accomplished in science, art and literature'. On top of that, she was a brilliant mathematician, a talent inherited from her mother, Annabella Milbanke. In her mother's circle of literary and scientific friends was Charles Babbage, and Ada's friendship with him dates from her teenage zest for Mathematics. She was one of the first people he'd ever met who understood what he had attempted to achieve with the 'Difference Engine', and with whom he could converse as intellectual equals. 
He arranged for her to have an education from the most talented academics in the country. Ada melted the heart of the cantankerous genius to the point that he became a faithful and loyal father-figure to her. She was one of the very few who could grasp the principles of the later, and very different, ‘Analytical Engine’ which was designed from the start to tackle a variety of tasks. Sadly, Ada Byron's life ended less than a decade after completing the work that assured her long-term fame, in November 1852. She was dying of cancer, her gambling habits had caused her to run up huge debts, she'd had more than one affairs, and she was being blackmailed. Her brilliant but unempathic mother was nursing her in her final illness, destroying her personal letters and records, and repaying her debts. Her husband was distraught but helpless. Charles Babbage, however, maintained his steadfast paternalistic friendship to the end. She appointed her loyal friend to be her executor. For years, she and Babbage had been working together on a secret project, known only as 'The Book'. We have a clue to what it was in a letter written by her nine years earlier, on 11th August 1843. It was a joint project by herself and Lord Lovelace, her husband, and was intended to involve Babbage's 'undivided energies'. It involved 'consulting your Engine' (it required Babbage’s computer). The letter gives no hint about the project except for the high-minded nature of its purpose, and its highly mathematical nature.  From then on, the surviving correspondence between the two gives only veiled references to 'The Book'. There isn't much, since Babbage later destroyed any letters that could have damaged her reputation within the Establishment. 'I cannot spare the book today, which I am very sorry for. At the moment I want it for constant reference, but I think you can have it tomorrow' (Oct 1844)  And 'I will send you the book directly, and you can say, when you receive it, how long you will want to keep it'. (Nov 1844)  The two of them were obviously intent on the work: She writes, four years later, 'I have an engagement for Wednesday which will prevent me from attending to your wishes about the book' (Dec 1848). This was something that they both needed to work on, but could not do in parallel: 'I will send the book on Tuesday, and it can be left with you till Friday' (11 Feb 1849). After six years work, it had been so well-handled that it was beginning to fall apart: 'Don't forget the new cover you promised for the book. The poor book is very shabby and wants one' (20 Sept 1849). So what was going on? The word 'book' was not a code-word: it was a real book, probably a 'printer's blank', plain paper, but properly bound so printers and publishers could show off how the published work might look. The hints from the correspondence are of advanced mathematics. It is obvious that the book was travelling between them, back and forth, each one working on it for less than a week before passing it back. Ada and her husband were certainly involved in gambling large sums of money on the horses, and so most biographers have concluded that the three of them were trying to calculate the mathematical odds on the horses. This theory has three large problems. Firstly, Ada's original letter proposing the project refers to its high-minded nature. Babbage was temperamentally opposed to gambling and would scarcely have given so much time to the project, even though he was devoted to Ada. 
Secondly, Babbage would have very soon have realized the hopelessness of trying to beat the bookies. This sort of betting never attracts his type of intellectual background. The third problem is that any work on calculating the odds on horses would not need a well-thumbed book to pass back and forth between them; they would have not had to work in series. The original project was instigated by Ada, along with her husband, William King-Noel, 1st Earl of Lovelace. Charles Babbage was invited to join the project after the couple had come up with the idea. What could William have contributed? One might assume that William was a Bertie Wooster character, addicted only to the joys of the turf, but this was far from the truth. He was a scientist, a Cambridge graduate who was later elected to be a Fellow of the Royal Society. After Eton, he went to Trinity College, Cambridge. On graduation, he entered the diplomatic service and acted as secretary under Lord Nugent, who was Lord Commissioner of the Ionian Islands. William was very friendly with Babbage too, able to discuss scientific matters on equal terms. He was a capable engineer who invented a process for bending large timbers by the application of steam heat. He delivered a paper to the Institution of Civil Engineers in 1849, and received praise from the great engineer, Isambard Kingdom Brunel. As well as being Lord Lieutenant of the County of Surrey for most of Victoria's reign, he had time for a string of scientific and engineering achievements. Whatever the project was, it is unlikely that William was a junior partner. After Ada's death, the project disappeared. Then, two years later, Babbage, through one of his occasional outbursts of temper, demonstrated that he was able to decrypt one of the most powerful of secret codes, Vigenère's autokey cipher.  All contemporary diplomatic and military messages used a variant of this cipher. Babbage had made three important discoveries, namely, the mathematical law of this cipher, the principle of the key periodicity, and the technique of the symmetry of position. The technique is now known as the Kasiski examination, also called the Kasiski test, but Babbage got there first. At one time, he listed amongst his future projects, the writing of a book 'The Philosophy of Decyphering', but it never came to anything. This discovery was going to change the course of history, since it was used to decipher the Russians’ military dispatches in the Crimean war. Babbage himself played a role during the Crimean War as a cryptographical adviser to his friend, Rear-Admiral Sir Francis Beaufort of the Admiralty. This is as much as we can be certain about in trying to make sense of the bulk of the time that Charles Babbage and Ada Lovelace worked together. Nine years of intensive work, involving the 'Engine' and a great deal of mathematics and research seems to have been lost: or has it? I've argued in the past http://www.simple-talk.com/community/blogs/philfactor/archive/2008/06/13/59614.aspx that the cracking of the Vigenère autokey cipher, was a fundamental motive behind the British Government's support and funding of the 'Difference Engine'. The Duke of Wellington, whose understanding of the military significance of being able to read enemy dispatches, was the most steadfast advocate of the project. 
If the three friends were actually doing the work of cracking codes by mathematical techniques that used the techniques of key periodicity, and symmetry of position (the use of a book being passed quickly to and fro is very suggestive), intending to then use the 'Engine' to do the routine cracking of each dispatch, then this is a rather different story. The project was Ada and William's idea. (William had served in the diplomatic service and would be familiar with the use of codes). This makes Ada Lovelace the initiator of a project which, by giving both Britain, and probably the USA, a diplomatic and military advantage in the second part of the Nineteenth century, changed world history. Ada would never have wanted any credit for cracking the cipher, and developing the method that rendered all contemporary military and diplomatic ciphering techniques nugatory; quite the reverse. And it is clear from the gaps in the record of the letters between the collaborators that the evidence was destroyed, probably on her request by her irascible but intensely honorable executor, Charles Babbage. Charles Babbage toyed with the idea of going public, but the Crimean war put an end to that. The British Government had a valuable secret, and intended to keep it that way. Ada and Charles had quite often discussed possible moneymaking projects that would fund the development of the Analytic Engine, the first programmable computer, but their secret work was never in the running as a potential cash cow. I suspect that the British Government was, even then, working on the concealment of a discovery whose value to the nation depended on it remaining so. The success of code-breaking in the Crimean war, and the American Civil war, led to the British and Americans  subsequently giving much more weight and funding to the science of decryption. Paradoxically, this makes Ada's contribution even closer to the creation of Colossus, the first digital computer, at Bletchley Park, specifically to crack the Nazi’s secret codes.

    Read the article

  • SQL Server service accounts and SPNs

    - by simonsabin
    Service Principal Names (SPNs) are a must for Kerberos authentication, which is a must when using SharePoint, Reporting Services and SQL Server where you access one server that then needs to access another resource; this is called the double hop. The reason this is a complex problem is that the second hop has to be done with impersonation/delegation. For this to work there needs to be a way for the security system to make sure that the service in the middle is allowed to impersonate you; after all, you are not giving the service your password. To do this you need to be using Kerberos. The following is my simple interpretation of how Kerberos works. I find the Kerberos documentation ridiculously complex, so the following might be slightly wrong, but I think it's close enough. Kerberos works on a ticketing system; the principle is that you get a security token from AD and then you can pass that to the service in the middle, which can then use that token to impersonate you. For that to work AD has to be able to identify who is allowed to use the token, in this case the service account. But how do you as a client know what service account the service in the middle is configured with? The answer is SPNs. The SPN is the mapping between your logical connection and the service account. One type of SPN is for the DNS name for the server and the port, i.e. MySQL.mydomain.com and 1433. You can see how this maps to SQL Server on that server, but how does it map to the account? Well, it can be done in two ways: either you can have a mapping defined in AD or AD can use a default mapping (this is something I didn't know about). To map the SPN in AD you have to add the SPN to the user account; this is documented in the first link below, either directly or using a tool called SetSPN. You might say that is complex. Well it is, and that's why SQL Server tries to do it for you: at startup it tries to connect to AD and set the SPN on the account it is running as. Clearly that can only happen IF SQL is running as a domain account AND, importantly, it has permission to do so. By default a normal domain user account doesn't have the correct permission, and that is why so many people have this problem. If the account is a domain admin then it will have permission, but none of us run SQL using domain admin accounts, do we? You might also note that the SPN contains the port number (this isn't a requirement now in SQL 2008, but I won't go into that), so if you set it manually and you are using dynamic ports (the default for a named instance) what do you do? Well, every time the port changes you need to change the SPN allocated to the account. That's why it's advised to let SQL Server register the SPN itself. You may also have thought, well what happens if I change my service account, won't that lead to two accounts with the same SPN? Possibly. Having two accounts with the same SPN is definitely a problem. Why? Well, because if there are two accounts Kerberos can't identify the exact account that the service is running as; it could be either account, and so your security falls back to NTLM. SETSPN is useful for finding duplicate SPNs. Reading this you will probably be thinking, oh my goodness this is really difficult. It is. However, I've found today, while investigating something else, that there is an easy option: use Network Service as your service account. Network Service is a special account and is tied to the computer. It appears that Network Service has the update rights in AD to set an SPN mapping for the computer account. 
This then allows the SPN mapping to work. I believe this also works for the local system account. To get all the SPNs in your AD run the following (it could be a large file, so you might want to restrict it to a specific OU or CN): ldifde -d "DC=<domain>" -l servicePrincipalName -F spn.txt You will read in the links below that you need SQL Server to register the SPN and how this is done:
How to use Kerberos authentication in SQL Server - http://support.microsoft.com/kb/319723
Using Kerberos with SQL Server - http://blogs.msdn.com/sql_protocols/archive/2005/10/12/479871.aspx
Understanding Kerberos and NTLM authentication in SQL Server Connections - http://blogs.msdn.com/sql_protocols/archive/2006/12/02/understanding-kerberos-and-ntlm-authentication-in-sql-server-connections.aspx
Summary The only reason I personally know to use a domain account is when you can't get Kerberos to work and you want to do BULK INSERT or another network service that requires access to a remote server. In this case you have to resort to using SQL authentication, and SQL Server uses its service account to access the remote service, and thus you need a domain account. You might need this if using some forms of replication. I've always found Kerberos awkward to set up and so fallen back to this domain account approach. So in summary, to get Kerberos to work try using the Network Service or Local System accounts. For a great post from Adam Saxton of the SQL Server support team go to http://blogs.msdn.com/psssql/archive/2010/03/09/what-spn-do-i-use-and-how-does-it-get-there.aspx 
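To complement the ldifde dump above, the SetSPN tool mentioned earlier can list, register and check SPNs directly. The commands below are only a sketch; the account and host names are placeholders, and setspn -X (duplicate detection) is only available in newer versions of the tool:

    rem List the SPNs currently registered against the SQL Server service account
    setspn -L MYDOMAIN\sqlservice

    rem Manually register an SPN for a SQL Server instance (requires rights on the account)
    setspn -A MSSQLSvc/MySQL.mydomain.com:1433 MYDOMAIN\sqlservice

    rem Search for duplicate SPNs
    setspn -X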

    Read the article

  • Project Navigation and File Nesting in ASP.NET MVC Projects

    - by Rick Strahl
    More and more I’m finding myself getting lost in the files in some of my larger Web projects. There’s so much freaking content to deal with – HTML Views, several derived CSS pages, page level CSS, script libraries, application wide scripts and page specific script files etc. etc. Thankfully I use Resharper and the Ctrl-T Go to Anything which autocompletes you to any file, type, member rapidly. Awesome except when I forget – or when I’m not quite sure of the name of what I’m looking for. Project navigation is still important. Sometimes while working on a project I seem to have 30 or more files open and trying to locate another new file to open in the solution often ends up being a mental exercise – “where did I put that thing?” It’s those little hesitations that tend to get in the way of workflow frequently. To make things worse most NuGet packages for client side frameworks and scripts, dump stuff into folders that I generally don’t use. I’ve never been a fan of the ‘Content’ folder in MVC which is just an empty layer that doesn’t serve much of a purpose. It’s usually the first thing I nuke in every MVC project. To me the project root is where the actual content for a site goes – is there really a need to add another folder to force another path into every resource you use? It’s ugly and also inefficient as it adds additional bytes to every resource link you embed into a page. Alternatives I’ve been playing around with different folder layouts recently and found that moving my cheese around has actually made project navigation much easier. In this post I show a couple of things I’ve found useful and maybe you find some of these useful as well or at least get some ideas what can be changed to provide better project flow. The first thing I’ve been doing is add a root Code folder and putting all server code into that. I’m a big fan of treating the Web project root folder as my Web root folder so all content comes from the root without unneeded nesting like the Content folder. By moving all server code out of the root tree (except for Code) the root tree becomes a lot cleaner immediately as you remove Controllers, App_Start, Models etc. and move them underneath Code. Yes this adds another folder level for server code, but it leaves only code related things in one place that’s easier to jump back and forth in. Additionally I find myself doing a lot less with server side code these days, more with client side code so I want the server code separated from that. The root folder itself then serves as the root content folder. Specifically I have the Views folder below it, as well as the Css and Scripts folders which serve to hold only common libraries and global CSS and Scripts code. These days of building SPA style application, I also tend to have an App folder there where I keep my application specific JavaScript files, as well as HTML View templates for client SPA apps like Angular. Here’s an example of what this looks like in a relatively small project: The goal is to keep things that are related together, so I don’t end up jumping around so much in the solution to get to specific project items. The Code folder may irk some of you and hark back to the days of the App_Code folder in non Web-Application projects, but these days I find myself messing with a lot less server side code and much more with client side files – HTML, CSS and JavaScript. Generally I work on a single controller at a time – once that’s open it’s open that’s typically the only server code I work with regularily. 
Business logic lives in another project altogether, so other than the controller and maybe ViewModels there’s not a lot of code being accessed in the Code folder. So throwing that off the root and isolating seems like an easy win. Nesting Page specific content In a lot of my existing applications that are pure server side MVC application perhaps with some JavaScript associated with them , I tend to have page level javascript and css files. For these types of pages I actually prefer the local files stored in the same folder as the parent view. So typically I have a .css and .js files with the same name as the view in the same folder. This looks something like this: In order for this to work you have to also make a configuration change inside of the /Views/web.config file, as the Views folder is blocked with the BlockViewHandler that prohibits access to content from that folder. It’s easy to fix by changing the path from * to *.cshtml or *.vbhtml so that view retrieval is blocked:<system.webServer> <handlers> <remove name="BlockViewHandler"/> <add name="BlockViewHandler" path="*.cshtml" verb="*" preCondition="integratedMode" type="System.Web.HttpNotFoundHandler" /> </handlers> </system.webServer> With this in place, from inside of your Views you can then reference those same resources like this:<link href="~/Views/Admin/QuizPrognosisItems.css" rel="stylesheet" /> and<script src="~/Views/Admin/QuizPrognosisItems.js"></script> which works fine. JavaScript and CSS files in the Views folder deploy just like the .cshtml files do and can be referenced from this folder as well. Making this happen is not really as straightforward as it should be with just Visual Studio unfortunately, as there’s no easy way to get the file nesting from the VS IDE directly (you have to modify the .csproj file). However, Mads Kristensen has a nice Visual Studio Add-in that provides file nesting via a short cut menu option. Using this you can select each of the ‘child’ files and then nest them under a parent file. In the case above I select the .js and .css files and nest them underneath the .cshtml view. I was even toying with the idea of throwing the controller.cs files into the Views folder, but that’s maybe going a little too far :-) It would work however as Visual Studio doesn’t publish .cs files and the compiler doesn’t care where the files live. There are lots of options and if you think that would make life easier it’s another option to help group related things together. Are there any downside to this? Possibly – if you’re using automated minification/packaging tools like ASP.NET Bundling or Grunt/Gulp with Uglify, it becomes a little harder to group script and css files for minification as you may end up looking in multiple folders instead of a single folder. But – again that’s a one time configuration step that’s easily handled and much less intrusive then constantly having to search for files in your project. Client Side Folders The particular project shown above in the screen shots above is a traditional server side ASP.NET MVC application with most content rendered into server side Razor pages. There’s a fair amount of client side stuff happening on these pages as well – specifically several of these pages are self contained single page Angular applications that deal with 1 or maybe 2 separate views and the layout I’ve shown above really focuses on the server side aspect where there are Razor views with related script and css resources. 
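Going back to the file nesting above: the add-in simply writes nesting metadata into the .csproj file for you. If you prefer to do it by hand, it boils down to a DependentUpon element per child item, roughly like this (a sketch using the example file names from above, not the exact markup the add-in produces):

    <ItemGroup>
      <Content Include="Views\Admin\QuizPrognosisItems.cshtml" />
      <Content Include="Views\Admin\QuizPrognosisItems.css">
        <DependentUpon>QuizPrognosisItems.cshtml</DependentUpon>
      </Content>
      <Content Include="Views\Admin\QuizPrognosisItems.js">
        <DependentUpon>QuizPrognosisItems.cshtml</DependentUpon>
      </Content>
    </ItemGroup>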
For applications that are more client centric and have a lot more script and HTML template based content I tend to use the same layout for the server components, but the client side code can often be broken out differently. In SPA type applications I tend to follow the App folder approach where all the application pieces that make the SPA applications end up below the App folder. Here’s what that looks like for me – here this is an AngularJs project: In this case the App folder holds both the application specific js files, and the partial HTML views that get loaded into this single SPA page application. In this particular Angular SPA application that has controllers linked to particular partial views, I prefer to keep the script files that are associated with the views – Angular Js Controllers in this case – with the actual partials. Again I like the proximity of the view with the main code associated with the view, because 90% of the UI application code that gets written is handled between these two files. This approach works well, but only if controllers are fairly closely aligned with the partials. If you have many smaller sub-controllers or lots of directives where the alignment between views and code is more segmented this approach starts falling apart and you’ll probably be better off with separate folders in js folder. Following Angular conventions you’d have controllers/directives/services etc. folders. Please note that I’m not saying any of these ways are right or wrong  – this is just what has worked for me and why! Skipping Project Navigation altogether with Resharper I’ve talked a bit about project navigation in the project tree, which is a common way to navigate and which we all use at least some of the time, but if you use a tool like Resharper – which has Ctrl-T to jump to anything, you can quickly navigate with a shortcut key and autocomplete search. Here’s what Resharper’s jump to anything looks like: Resharper’s Goto Anything box lets you type and quick search over files, classes and members of the entire solution which is a very fast and powerful way to find what you’re looking for in your project, by passing the solution explorer altogether. As long as you remember to use (which I sometimes don’t) and you know what you’re looking for it’s by far the quickest way to find things in a project. It’s a shame that this sort of a simple search interface isn’t part of the native Visual Studio IDE. Work how you like to work Ultimately it all comes down to workflow and how you like to work, and what makes *you* more productive. Following pre-defined patterns is great for consistency, as long as they don’t get in the way you work. A lot of the default folder structures in Visual Studio for ASP.NET MVC were defined when things were done differently. These days we’re dealing with a lot more diverse project content than when ASP.NET MVC was originally introduced and project organization definitely is something that can get in the way if it doesn’t fit your workflow. So take a look and see what works well and what might benefit from organizing files differently. As so many things with ASP.NET, as things evolve and tend to get more complex I’ve found that I end up fighting some of the conventions. The good news is that you don’t have to follow the conventions and you have the freedom to do just about anything that works for you. 
Even though what I’ve shown here diverges from conventions, I don’t think anybody would stumble over these relatively minor changes and not immediately figure out where things live, even in larger projects. But nevertheless think long and hard before breaking those conventions – if there isn’t a good reason to break them or the changes don’t provide improved workflow then it’s not worth it. Break the rules, but only if there’s a quantifiable benefit. You may not agree with how I’ve chosen to divert from the standard project structures in this article, but maybe it gives you some ideas of how you can mix things up to make your existing project flow a little nicer and make it easier to navigate for your environment. © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET, MVC

    Read the article

  • Information Indepth Newsletter - Linux Edition

    - by Paulo Folgado
    INFORMATION INDEPTH NEWSLETTERLinux Edition February 2011 Stay Connected:  NEWS Now Available: Oracle Linux 6 Get the latest release of Oracle Linux 6, which includes Unbreakable Enterprise Kernel.Download Oracle Linux 6 Read More Customers Succeed by Using Oracle Exadata with Oracle Linux Watch IT executives from Bank of America, Linkshare, and Johns Hopkins as they talk about the business challenges they faced and why they chose to use Oracle Linux along with Oracle Exadata as the solution. Watch Now Video Interview: Oracle Senior Vice President Wim Coekaerts Watch Wim Coekaerts, senior vice president, Linux and Virtualization Engineering, as he talks about use cases for Oracle VM Templates as well as the Unbreakable Enterprise Kernel for Linux.Watch Now Hot Off the Press: Migrate Your IBM AIX Environment to Oracle Linux This new white paper provides recommendations for planning and implementing the migration of applications from an IBM Power System running AIX to Oracle's Sun Fire X4800 Server with Intel Xeon 7560 Processor running Oracle Linux 5.5.Read More  Back to Top BLOGOSPHERE Just Launched: The Oracle Linux Blog Follow our new Oracle Linux blog  to hear the latest updates, product news, upcoming events, and all the latest happenings, directly from the Linux team at Oracle. Back to Top TECH DIVE NEW: Linux/Oracle Solaris CommandComparo Site from Oracle Technology NetworkThis site gives equivalent command syntax in Oracle Solaris 10 and Oracle Enterprise Linux 5 for common administrative tasks--focusing particularly on tasks that have tricky syntax or that you frequently need to double check. It acts as a quick reference for administrators who operate in these two OS environments. Free Download: Oracle Linux Release 5.6Did you know that by using Oracle Linux 5.5 or 5.6 along with the Unbreakable Enterprise Kernel, you can get all the benefits of Linux mainline kernel 2.6.32 and more, right now, without the need to reinstall or migrate to a new operating system such as RHEL6?Read Release NotesDownload Oracle Linux 5.6 LSB 4.0 Certification Completed for Oracle Linux 5.5Oracle Linux 5.5 with Unbreakable Enterprise Kernel successfully completed the LSB 4.0 certification.  Back to Top WEBCASTS Boost Your Linux Performance with Oracle's Enhancements in Infiniband and RDSRegister to hear Director of Kernel Engineering Chris Mason cover scalability and performance improvements in Linux environment. Get the Facts Oracle's Unbreakable Enterprise KernelSVP Wim Coekaerts and Senior Director Monica Kumar cover the facts about and benefits of using Unbreakable Enterprise Kernel.  View Other Webcasts on Demand   Back to Top EVENTS Collaborate 2011April 10-14 Orlando, Florida Cloud Summit Events, WorldwideVarious dates (check the city for date/time of event) Datacenter Efficiency Events WorldwideThese events include Linux and Oracle VM sessions.Various dates (check the city for date/time of event) Virtualization Events in North America Find an Oracle Event  Back to Top EDUCATION Get Oracle Linux Certified from Oracle University Oracle University offers courses in both Oracle Linux and the administration of Oracle Database on Linux.  Back to Top CUSTOMER SPOTLIGHT Pella Corporation Improves IT Performance and Efficiency with Oracle Linux and Oracle VM To improve IT performance and efficiency and lower operational costs, Pella Corporation, has standardized on Oracle VM and Oracle Linux. 
Read More Disney Store Deploys POS in 330 Stores and 7 Countries on Oracle Linux Disney Store is running 1,500 registers worldwide on a broad Oracle technology software stack including Oracle Database 11g, Oracle Fusion Middleware, and Oracle Linux. Read More Back to Top PARTNER SPOTLIGHT Emulex and Oracle Announce Data Integrity Features The Unbreakable Enterprise Kernel provides data integrity checking between Oracle Database applications and Emulex 8Gb/s LightPulse Fibre Channel Host Bus Adapters. Read More Dell Inc. Dell Inc. tested and validated configurations support Oracle Linux. Back to Top STAY IN TOUCH Follow @ORCL_Linux on Twitter for the latest penguin tweets Bookmark Oracle.com/Linux Read the Oracle Linux blog Back to Top  Oracle Information InDepth newsletters bring targeted news, articles, customer stories, and special offers to business people who want to find out how to streamline enterprise information management, measure results, improve business processes, and communicate a single truth to their constituents. Please send questions or comments to [email protected]. For answers to questions about subscribing, unsubscribing, and managing your Oracle e-mail communications preferences, please see the Oracle E-Mail Communications page. Copyright © 2011, Oracle Corporation and/or its affiliates. All rights reserved. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. This document is provided for information purposes only, and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor is it subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document, and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission. 

    Read the article

  • PHP OCI8 and Oracle 11g DRCP Connection Pooling in Pictures

    - by christopher.jones
    Here is a screen shot from a PHP OCI8 connection pooling demo that I like to run. It graphically shows how little database host memory is needed when using DRCP connection pooling with Oracle Database 11g. Migrating to DRCP can be as simple as starting the pool and changing the connection string in your PHP application. The script that generated the data for this graph was a simple "Parts" query application being run under various simulated user loads. I was running the database on a small Oracle Linux server with just 2G of memory. I used PHP OCI8 1.4. Apache is in pre-fork mode, as needed for PHP. Each graph has time on the horizontal access in arbitrary 'tick' time units. Click the image to see it full sized. Pooled connections Beginning with the top left graph, At tick time 65 I used Apache's 'ab' tool to start 100 concurrent 'users' running the application. These users connected to the database using DRCP: $c = oci_pconnect('phpdemo', 'welcome', 'myhost/orcl:pooled'); A second hundred DRCP users were added to the system at tick 80 and a final hundred users added at tick 100. At about tick 110 I stopped the test and restarted Apache. This closed all the connections. The bottom left graph shows the number of statements being executed by the database per second, with some spikes for background database activity and some variability for this small test. Each extra batch of users adds another 'step' of load to the system. Looking at the top right Server Process graph shows the database server processes doing the query work for each web user. As user load is added, the DRCP server pool increases (in green). The pool is initially at its default size 4 and quickly ramps up to about (I'm guessing) 35. At tick time 100 the pool increases to my configured maximum of 40 processes. Those 40 processes are doing the query work for all 300 web users. When I stopped the test at tick 110, the pooled processes remained open waiting for more users to connect. If I had left the test quiet for the DRCP 'inactivity_timeout' period (300 seconds by default), the pool would have shrunk back to 4 processes. Looking at the bottom right, you can see the amount of memory being consumed by the database. During the initial quiet period about 500M of memory was in use. The absolute number is just an indication of my particular DB configuration. As the number of pooled processes increases, each process needs more memory. You can see the shape of the memory graph echoes the Server Process graph above it. Each of the 300 web users will also need a few kilobytes but this is almost too small to see on the graph. Non-pooled connections Compare the DRCP case with using 'dedicated server' processes. At tick 140 I started 100 web users who did not use pooled connections: $c = oci_pconnect('phpdemo', 'welcome', 'myhost/orcl'); This connection string change is the only difference between the two tests. At ticks 155 and 165 I started two more batches of 100 simulated users each. At about tick 195 I stopped the user load but left Apache running. Apache then gradually returned to its quiescent state, killing idle httpd processes and producing the downward slope at the right of the graphs as the persistent database connection in each Apache process was closed. The Executions per Second graph on the bottom left shows the same step increases as for the earlier DRCP case. The database is handling this load. But look at the number of Server processes on the top right graph. 
There is now a one-to-one correspondence between Apache/PHP processes and DB server processes. Each PHP process has one DB server process dedicated to it. Hence the term 'dedicated server'. The memory required on the database is proportional to all those database server processes started. Almost all my system's memory was consumed. I doubt it would have coped with any more user load. Summary Oracle Database 11g DRCP connection pooling significantly reduces database host memory requirements, allowing more system memory to be allocated to the SGA and allowing the system to scale to handle thousands of concurrent PHP users. Even for small systems, using DRCP allows more web users to be active. More information about PHP and DRCP can be found in the PHP Scalability and High Availability chapter of The Underground PHP and Oracle Manual.

    Read the article

  • Clean up after Visual Studio

    - by psheriff
    As programmer’s we know that if we create a temporary file during the running of our application we need to make sure it is removed when the application or process is complete. We do this, but why can’t Microsoft do it? Visual Studio leaves tons of temporary files all over your hard drive. This is why, over time, your computer loses hard disk space. This blog post will show you some of the most common places where these files are left and which ones you can safely delete..NET Left OversVisual Studio is a great development environment for creating applications quickly. However, it will leave a lot of miscellaneous files all over your hard drive. There are a few locations on your hard drive that you should be checking to see if there are left-over folders or files that you can delete. I have attempted to gather as much data as I can about the various versions of .NET and operating systems. Of course, your mileage may vary on the folders and files I list here. In fact, this problem is so prevalent that PDSA has created a Computer Cleaner specifically for the Visual Studio developer.  Instructions for downloading our PDSA Developer Utilities (of which Computer Cleaner is one) are at the end of this blog entry.Each version of Visual Studio will create “temporary” files in different folders. The problem is that the files created are not always “temporary”. Most of the time these files do not get cleaned up like they should. Let’s look at some of the folders that you should periodically review and delete files within these folders.Temporary ASP.NET FilesAs you create and run ASP.NET applications from Visual Studio temporary files are placed into the <sysdrive>:\Windows\Microsoft.NET\Framework[64]\<vernum>\Temporary ASP.NET Files folder. The folders and files under this folder can be removed with no harm to your development computer. Do not remove the "Temporary ASP.NET Files" folder itself, just the folders underneath this folder. If you use IIS for ASP.NET development, you may need to run the iisreset.exe utility from the command prompt prior to deleting any files/folder under this folder. IIS will sometimes keep files in use in this folder and iisreset will release the locks so the files/folders can be deleted.Website CacheThis folder is similar to the ASP.NET Temporary Files folder in that it contains files from ASP.NET applications run from Visual Studio. This folder is located in each users local settings folder. The location will be a little different on each operating system. For example on Windows Vista/Windows 7, the folder is located at <sysdrive>:\Users\<UserName>\AppData\Local\Microsoft\WebsiteCache. If you are running Windows XP this folder is located at <sysdrive>:\ Documents and Settings\<UserName>\Local Settings\Application Data\Microsoft\WebsiteCache. Check these locations periodically and delete all files and folders under this directory.Visual Studio BackupThis backup folder is used by Visual Studio to store temporary files while you develop in Visual Studio. This folder never gets cleaned out, so you should periodically delete all files and folders under this directory. On Windows XP, this folder is located at <sysdrive>:\Documents and Settings\<UserName>\My Documents\Visual Studio 200[5|8]\Backup Files. On Windows Vista/Windows 7 this folder is located at <sysdrive>:\Users\<UserName>\Documents\Visual Studio 200[5|8]\.Assembly CacheNo, this is not the global assembly cache (GAC). 
It appears that this cache is only created when doing WPF or Silverlight development with Visual Studio 2008 or Visual Studio 2010. This folder is located in <sysdrive>:\ Users\<UserName>\AppData\Local\assembly\dl3 on Windows Vista/Windows 7. On Windows XP this folder is located at <sysdrive>:\ Documents and Settings\<UserName>\Local Settings\Application Data\assembly. If you have not done any WPF or Silverlight development, you may not find this particular folder on your machine.Project AssembliesThis is yet another folder where Visual Studio stores temporary files. You will find a folder for each project you have opened and worked on. This folder is located at <sysdrive>:\Documents and Settings\<UserName>Local Settings\Application Data\Microsoft\Visual Studio\[8|9].0\ProjectAssemblies on Windows XP. On Microsoft Vista/Windows 7 you will find this folder at <sysdrive>:\Users\<UserName>\AppData\Local\Microsoft\Visual Studio\[8|9].0\ProjectAssemblies.Remember not all of these folders will appear on your particular machine. Which ones do show up will depend on what version of Visual Studio you are using, whether or not you are doing desktop or web development, and the operating system you are using.SummaryTaking the time to periodically clean up after Visual Studio will aid in keeping your computer running quickly and increase the space on your hard drive. Another place to make sure you are cleaning up is your TEMP folder. Check your OS settings for the location of your particular TEMP folder and be sure to delete any files in here that are not in use. I routinely clean up the files and folders described in this blog post and I find that I actually eliminate errors in Visual Studio and I increase my hard disk space.NEW! PDSA has just published a “pre-release” of our PDSA Developer Utilities at http://www.pdsa.com/DeveloperUtilities that contains a Computer Cleaner utility which will clean up the above-mentioned folders, as well as a lot of other miscellaneous folders that get Visual Studio build-up. You can download a free trial at http://www.pdsa.com/DeveloperUtilities. If you wish to purchase our utilities through the month of November, 2011 you can use the RSVP code: DUNOV11 to get them for only $39. This is $40 off the regular price.NOTE: You can download this article and many samples like the one shown in this blog entry at my website. http://www.pdsa.com/downloads. Select “Tips and Tricks”, then “Developer Machine Clean Up” from the drop down list.Good Luck with your Coding,Paul Sheriff** SPECIAL OFFER FOR MY BLOG READERS **We frequently offer a FREE gift for readers of my blog. Visit http://www.pdsa.com/Event/Blog for your FREE gift!
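    If you would rather script the "Temporary ASP.NET Files" cleanup described above instead of (or in addition to) using a utility, a minimal C# sketch along these lines works. Note the framework version folder (v4.0.30319 here) and the choice of Framework vs. Framework64 are assumptions you will need to adjust for your own machine:

    using System;
    using System.IO;

    class TempAspNetCleaner
    {
        static void Main()
        {
            // "Temporary ASP.NET Files" location described above; adjust the framework
            // version folder and Framework vs. Framework64 for your machine.
            string windir = Environment.GetEnvironmentVariable("windir");
            string tempAspNet = Path.Combine(windir,
                @"Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files");

            if (!Directory.Exists(tempAspNet))
                return;

            // Delete only the folders *under* "Temporary ASP.NET Files", never the folder itself.
            foreach (string dir in Directory.GetDirectories(tempAspNet))
            {
                try
                {
                    Directory.Delete(dir, true); // recursive delete
                }
                catch (IOException)
                {
                    // Likely still locked by IIS - run iisreset first, as noted above, or skip it.
                }
                catch (UnauthorizedAccessException)
                {
                    // Skip anything we lack rights to remove.
                }
            }
        }
    }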

    Read the article

  • Poor home office network performance and cannot figure out where the issue is

    - by Jeff Willener
    This is the most bizarre issue. I have worked with small to mid size networks for quite a long time and can say I'm comfortable connecting hardware. Where you will start to lose me is with managed switches and firewalls. To start, let me describe my network (sigh, shouldn't but I MUST solve this). 1) Comcast Cable Internet 2) Motorola SURFboard eXtreme Cable Modem. a) Model: SB6120 b) DOCSIS 3.0 and 2.0 support c) IPv4 and IPv6 support 3-A) Cisco Small Business RV220W Wireless N Firewall a) Latest firmware b) Model: RV220W-A-K9-NA c) WAN Port to Modem (2) d) vlan 1: work e) vlan 2: everything else. 3-B) D-Link DIR-615 Draft 802.11 N Wireless Router a) Latest firmware b) WAN Port to Modem (2) 4) Servers connected directly to firewall a) If firewall 3-A, then vlan 1 b) CAT5e patch cables c) Dell PowerEdge 1400SC w/ 10/100 integrated NIC (Domain Controller, DNS, former DHCP) d) Dell PowerEdge 400SC w/ 10/100/1000 integrated NIC (VMWare Server) 4) Linksys EZXS88W unmanaged Workgroup 10/100 Switch a) If firewall 3-A, then vlan 2 b) 25' CAT5e patch cable to firewall (3-A or 3-B) c) Connects xBox 360, Blu-Ray player, PC at TV 5) Office equipment connected directly to firewall a) If firewall 3-A, then vlan 1 b) ~80' CAT6 or CAT5e patch cable to firewall (3-A or 3-B) c) Connects 1) Dell Latitude laptop 10/100/1000 2) Dell Inspiron laptop 10/100 3) Dell Workstation 10/100/1000 (Pristine host, VMWare Workstation 7.x with many bridged VM's) 4) Brother Laser Printer 10/100 5) Epson All-In-One Workforce 310 10/100 5-A) NetGear FS116 unmanaged 10/100 switch a) I've had this switch for a long time and never had issues. 5-B) NetGear GS108 unmanaged 10/100/1000 switch a) Bought new for this issue and returned. 5-C) Linksys SE2500 unmanaged 10/100/1000 switch a) Bought new for this issue and returned. 5-D) TP-Link TL-SG10008D unmanaged 10/100/1000 a) Bought new for this issue and still have. 6) VLan 1 Wireless Connections (on same subnet if 3-B) a) Any of those at 5c b) HP Laptop 7) VLan 2 Wireless Connection (on same subnet if 3-B) a) IPad, IPod b) Compaq Laptop c) Epson Wireless Printer Shew, without hosting a diagram I hope that paints a good picture. The Issue The breakdown here is at item 5. No matter what I do I cannot have a switch at 5 and have to run everything wireless regardless of router. Issues related to using a switch (point 5 above) SpeedTest is good. Poor throughput to other devices if can communicate at all. Usually cannot ping other devices even on the same switch although, when able, ping times are good. Eventual lose of connectivity and can "sometimes" be restored by unplugging everything for several days, not minutes or hours but we're talking a week if at all. Directly connect to computer gives good internet connection however throughput to other devices connected to firewall is at best horrible. Yet printing doesn't seem to be an issue as long as they are connected via wireless. I have to force the RV220W to 1000Mb on the respective port if using a Gig Switch Issues related to using wireless in place of a switch (point 5 above) Poor throughput to other devices if can communicate. SpeedTest is good. Bottom line Internet speeds are awesome. By the way, Comcast went WAY above and beyond to make sure it was not them. They rewired EVERYTHING which did solve internet drops. Computer to computer connections are garbage Cannot get switch at 5 to work, yet other at 4 has never had an issue. Direct connection, bypass switch, is good for DHCP and internet. 
DNS must be on the server, not the firewall. Cisco insists it's my switches, but as you can see I have used four different switches and two different cables with the same result. My gut feeling is something is happening with routing, but I'm not smart enough to know that answer. I run a lot of VM's at 5-c-3, could that cause it? What's different compared to my previous house is I have introduced Gigabit hardware (firewall/switches/computers). Some of my computers might have IPv6 turned on if I haven't turned it off already. I'm truly at a loss and hope anyone has some crazy idea how to solve this. Bottom line, I need a switch in my office behind the firewall. I've changed everything. The real crux is I will find a working solution and, again, after days it will stop working. So this means I cannot isolate if it's a computer since I have to use them. Oh, and a solution is not throwing more money at this. I'm well into $1k already. Yah, lame.

    Read the article

  • When Your Boss Doesn't Want you to Succeed

    - by Phil Factor
    You're working hard to get an application finished. You are programming long into the evenings sometimes, and eating sandwiches at your desk instead of taking a lunch break. Then one day you glance up at the IT manager, serene in his mysterious round of meetings, and think 'Does he actually care whether this project succeeds or not?'. The question may seem absurd. Of course the project must succeed. The truth, as always, is often far more complex. Your manager may even be doing his best to make sure you don't succeed. Why? There have always been rich pickings for the unscrupulous in IT.  In extreme cases, where administrators struggle with scarcely-comprehended technical issues, huge sums of money can be lost and gained without any perceptible results. In a very few cases can fraud be proven: most of the time, the intricacies of the 'game' are such that one can do little more than harbor suspicion.  Where does over-enthusiastic salesmanship end and fraud begin? The Business of Information Technology provides rich opportunities for White-collar crime. The poor developer has his, or her, hands full with the task of wrestling with the sheer complexity of building an application. He, or she, has no time for following the complexities of the chicanery of the management that is directing affairs.  Most likely, the developers wouldn't even suspect that their company management had ulterior motives. I'll illustrate what I mean with an entirely fictional, hypothetical, example. The Opportunist and the Aged Charities often do good, unexciting work that is funded by the income from a bequest that dates back maybe hundreds of years.  In our example, it isn't exciting work, for it involves the welfare of elderly people who have fallen on hard times.  Volunteers visit, giving a smile and a chat, and check that they are all right, but are able to spend a little money on their discretion to ameliorate any pressing needs for these old folk.  The money is made to work very hard and the charity averts a great deal of suffering and eases the burden on the state. Daisy hears the garden gate creak as Mrs Rainer comes up the path. She looks forward to her twice-weekly visit from the nice lady from the trust. She always asked ‘is everything all right, Love’. Cheeky but nice. She likes her cheery manner. She seems interested in hearing her memories, and talking about her far-away family. She helps her with those chores in the house that she couldn’t manage and once even paid to fill the back-shed with coke, the other year. Nice, Mrs. Rainer is, she thought as she goes to open the door. The trustees are getting on in years themselves, and worry about the long-term future of the charity: is it relevant to modern society? Is it likely to attract a new generation of workers to take it on. They are instantly attracted by the arrival to the board of a smartly dressed University lecturer with the ear of the present Government. Alain 'Stalin' Jones is earnest, persuasive and energetic. The trustees welcome him to the board and quickly forgive his humorless political-correctness. He talks of 'diversity', 'relevance', 'social change', 'equality' and 'communities', but his eye is on that huge bequest. Alain first came to notice as a Trotskyite union official, who insinuated himself into one of the duller Trades Unions and turned it, through his passionate leadership, into a radical, headline-grabbing organization.  
Middle age, and the rise of European federal socialism, had brought him quiet prosperity and charcoal suits, an ear in the current government, and a wide influence as a member of various Quangos (government bodies staffed by well-paid unelected courtiers).  He was employed as a 'consultant' by several organizations that relied on government contracts. After gaining the confidence of the trustees, and showing a surprising knowledge of mundane processes and the regulatory framework of charities, Alain launches his plan.  The trust will expand their work by means of a bold IT initiative that will coordinate the interventions of several 'caring agencies', and provide  emergency cover, a special Website so anxious relatives can see how their elderly charges are doing, and a vastly more efficient way of coordinating the work of the volunteer carers. It will also provide a special-purpose site that gives 'social networking' facilities, rather like Facebook, to the few elderly folk on the lists with access to the internet. The trustees perk up. Their own experience of the internet is restricted to the occasional scanning of railway timetables, but they can see that it is 'relevant'. In his next report to the other trustees, Alain proudly announces that all this glamorous and exciting technology can be paid for by a grant from the government. He admits darkly that he has influence. True to his word, the government promises a grant of a size that is an order of magnitude greater than any budget that the trustees had ever handled. There was the understandable proviso that the company that would actually do the IT work would have to be one of the government's preferred suppliers and the work would need to be tendered under EU competition rules. The only company that tenders, a multinational IT company with a long track record of government work, quotes ten million pounds for the work. A trustee questions the figure as it seems enormous for the reasonably trivial internet facilities being built, but the IT Salesmen dazzle them with presentations and three-letter acronyms until they subside into quiescent acceptance. After all, they can’t stay locked in the Twentieth century practices can they? The work is put in hand with a large project team, in a splendid glass building near west London. The trustees see rooms of programmers working diligently at screens, and who talk with enthusiasm of the project. Paul, the project manager, looked through his resource schedule with growing unease. His initial excitement at being given his first major project hadn’t lasted. He’d been allocated a lackluster team of developers whose skills didn’t seem right, and he was allowed only a couple of contractors to make good the deficit. Strangely, the presentation he’d given to his management, where he’d saved time and resources with a OTS solution to a great deal of the development work, and a sound conservative architecture, hadn’t gone down nearly as big as he’d hoped. He almost got the feeling they wanted a more radical and ambitious solution. The project starts slipping its dates. The costs build rapidly. There are certain uncomfortable extra charges that appear, such as the £600-a-day charge by the 'Business Manager' appointed to act as a point of liaison between the charity and the IT Company.  When he appeared, his face permanently split by a 'Mr Sincerity' smile, they'd thought he was provided at the cost of the IT Company. 
Derek, the DBA, didn’t have to go to the server room quite some much as he did: but It got him away from the poisonous despair of the development group. Wave after wave of events had conspired to delay the project.  Why the management had imposed hideous extra bureaucracy to cover ISO 9000 and 9001:2008 accreditation just as the project was struggling to get back on-schedule was  beyond belief.  Then  the Business manager was coming back with endless changes in scope, sorrowing saying that the Trustees were very insistent, though hopelessly out in touch with the reality of technical challenges. Suddenly, the costs mount to the point of consuming the government grant in its entirety. The project remains tantalizingly just out of reach. Alain Jones gives an emotional rallying speech at the trustees review meeting, urging them not to lose their nerve. Sadly, the trustees dip into the accumulated capital of the trust, the seed-corn of all their revenues, in order to save the IT project. A few months later it is all over. The IT project is never delivered, even though it had seemed so incredibly close.  With the trust's capital all gone, the activities it funded have to be terminated and the trust becomes just a shell. There aren't even the funds to mount a legal challenge against the IT company, even had the trust's solicitor advised such a foolish thing. Alain leaves as suddenly as he had arrived, only to pop up a few months later, bronzed and rested, at another charity. The IT workers who were permanent employees are dispersed to other projects, and the contractors leave to other contracts. Within months the entire project is but a vague memory. One or two developers remain  puzzled that their managers had been so obstructive when they should have welcomed progress toward completion of the project, but they put it down to incompetence and testosterone. Few suspected that they were actively preventing the project from getting finished. The relationships between the IT consultancy, and the government of the day are intricate, and made more complex by the Private Finance initiatives and political patronage.  The losers in this case were the taxpayers, and the beneficiaries of the trust, and, perhaps the soul of the original benefactor of the trust, whose bid to give his name some immortality had been scuppered by smooth-talking white-collar political apparatniks.  Even now, nobody is certain whether a crime was ever committed. The perfect heist, I guess. Where’s the victim? "I hear that Daisy’s cottage is up for sale. She’s had to go into a care home.  She didn’t want to at all, but then there is nobody to keep an eye on her since she had that minor stroke a while back.  A charity used to help out. The ‘social’ don’t have the funding, evidently for community care. Yes, her old cat was put down. There was a good clearout, and now the house is all scrubbed and cleared ready for sale. The skip was full of old photos and letters, memories. No room in her new ‘home’."

    Read the article

  • FireFox 6 Super Slow? Cache Settings Corruption

    - by Rick Strahl
    For those of you that follow me on Twitter, you've probably seen some of my tweets regarding major performance problems I've seen with the install of FireFox 6.0. FireFox 6.0 was released a couple of weeks ago and is treated as a 'force feed' update for FireFox 5.0. I'm not sure what the deal is with this braindead versioning that Mozilla is doing with major version releases coming out, what now every other month? Seriously that's retarded especially given the limited number of new features these releases bring, and the upgrade pain for plug-ins that the major version release causes. Anyway, after the FireFox updater bugged me long enough I finally gave in last week and updated to FireFox 6. Immediately after install I noticed terrible performance. Everything was running at a snail's pace with Web pages loading slowly and most content actually slowly 'painting' the page. A typical sign of content downloading slowly. However these are pages that should be mostly cached on my system and even repeated accesses ran just as slow. Just for a reality check I ran the same sites in Chrome (blazing fast) and IE (fast enough :-)) but FireFox - dog on a stick. Why so slow Boss? While complaining lots of people recommended to ditch FireFox - use Chrome, yada yada yada. Yeah, Chrome is fast and getting better but I have a number of plug-ins that I use in FF that I can't easily give up. So I suffered and started looking around more closely at what was happening. The first thing I noticed when accessing pages was that I continually saw accesses to the Google CDN downloading jQuery and jQuery UI. UI especially is pretty heavy in size and currently I'm in a location with a fairly slow IP connection where large files are a bit of an issue. However, seeing the CDN urls pop up repeatedly raised a flag with me. That stuff should be caching and it looked like each and every hit was reloading these scripts and various images over and over again. Fired up FireBug and sure enough I saw something like this on a repeated hit to my blog: Those two highlights are jquery and the main CSS file for the site and both are being loaded fully and taking a while to load. However, since this page had been loaded before, these items should be cached and show 304 requests instead of the full HTTP requests returning 200 result codes. In short it looked like FireFox was not caching ANY content at all and constantly reloading all page resources. No wonder things were running dog slow. Once I realized what the problem was I took a look in the about:config settings and lo and behold a bunch of the cache settings were set to not cache: In my case ALL the main cache flags were set to false for some reason that I can't figure out.  It appears that after the FireFox 6 update these flags somehow mysteriously changed and performance took a nose dive. Switching the .enable flags back to true and resetting all the cache settings tote default reverted performance back to the way it's supposed to be: reasonably fast and snappy as soon as content is cached and accessed again  from cache. I try not to muck with the about:config settings much (other than turning off the IPV6 option) but when there are problems access to these features can be really nice. However, I treat this as a last resort so it took me quite some time before I started looking through ALL the settings. This takes a while, not knowing what I was looking for exactly. If Web load performance is slow it's a good idea to check the cache settings. 
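If you want to push the fix from a file instead of flipping flags by hand, the cache preferences can also be set from a user.js file in the FireFox profile folder. This is only a sketch of the two .enable flags mentioned above - I'm certain about these two preference names but nothing beyond them, so treat anything else as a "reset to default in about:config" exercise:

user_pref("browser.cache.disk.enable", true);    // turn the disk cache back on
user_pref("browser.cache.memory.enable", true);  // turn the memory cache back on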
I have no idea what hosed these settings for me - I certainly didn't explicitly set them in about:config and while in FireFox's Options dialog I didn't see any option that would affect global caching like this, so this remains a mystery to me. Anyway, I hope that this is helpful to some, in case some of you end up running into a similar issue. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in FireFox

    Read the article

  • HPET for x86 BSP (how to build it for WCE8)

    - by Werner Willemsens
    Originally posted on: http://geekswithblogs.net/WernerWillemsens/archive/2014/08/02/157895.aspx"I needed a timer". That is how we started a few blogs ago our series about APIC and ACPI. Well, here it is. HPET (High Precision Event Timer) was introduced by Intel in early 2000 to: Replace old style Intel 8253 (1981!) and 8254 timers Support more accurate timers that could be used for multimedia purposes. Hence Microsoft and Intel sometimes refers to HPET as Multimedia timers. An HPET chip consists of a 64-bit up-counter (main counter) counting at a frequency of at least 10 MHz, and a set of (at least three, up to 256) comparators. These comparators are 32- or 64-bit wide. The HPET is discoverable via ACPI. The HPET circuit in recent Intel platforms is integrated into the SouthBridge chip (e.g. 82801) All HPET timers should support one-shot interrupt programming, while optionally they can support periodic interrupts. In most Intel SouthBridges I worked with, there are three HPET timers. TIMER0 supports both one-shot and periodic mode, while TIMER1 and TIMER2 are one-shot only. Each HPET timer can generate interrupts, both in old-style PIC mode and in APIC mode. However in PIC mode, interrupts cannot freely be chosen. Typically IRQ11 is available and cannot be shared with any other interrupt! Which makes the HPET in PIC mode virtually unusable. In APIC mode however more IRQs are available and can be shared with other interrupt generating devices. (Check the datasheet of your SouthBridge) Because of this higher level of freedom, I created the APIC BSP (see previous posts). The HPET driver code that I present you here uses this APIC mode. Hpet.reg [HKEY_LOCAL_MACHINE\Drivers\BuiltIn\Hpet] "Dll"="Hpet.dll" "Prefix"="HPT" "Order"=dword:10 "IsrDll"="giisr.dll" "IsrHandler"="ISRHandler" "Priority256"=dword:50 Because HPET does not reside on the PCI bus, but can be found through ACPI as a memory mapped device, you don't need to specify the "Class", "SubClass", "ProgIF" and other PCI related registry keys that you typically find for PCI devices. If a driver needs to run its internal thread(s) at a certain priority level, by convention in Windows CE you add the "Priority256" registry key. Through this key you can easily play with the driver's thread priority for better response and timer accuracy. See later. Hpet.cpp (Hpet.dll) This cpp file contains the complete HPET driver code. The file is part of a folder that you typically integrate in your BSP (\src\drivers\Hpet). It is written as sample (example) code, you most likely want to change this code to your specific needs. There are two sets of #define's that I use to control how the driver works. _TRIGGER_EVENT or _TRIGGER_SEMAPHORE: _TRIGGER_EVENT will let your driver trigger a Windows CE Event when the timer expires, _TRIGGER_SEMAPHORE will trigger a Windows CE counting Semaphore. The latter guarantees that no events get lost in case your application cannot always process the triggers fast enough. _TIMER0 or _TIMER2: both timers will trigger an event or semaphore periodically. _TIMER0 will use a periodic HPET timer interrupt, while _TIMER2 will reprogram a one-shot HPET timer after each interrupt. The one-shot approach is interesting if the frequency you wish to generate is not an even multiple of the HPET main counter frequency. The sample code uses an algorithm to generate a more correct frequency over a longer period (by reducing rounding errors). _TIMER1 is not used in the sample source code. 
HPT_Init() will locate the HPET I/O memory space, setup the HPET counter (_TIMER0 or _TIMER2) and install the Interrupt Service Thread (IST). Upon timer expiration, the IST will run and on its turn will generate a Windows CE Event or Semaphore. In case of _TIMER2 a new one-shot comparator value is calculated and set for the timer. The IRQ of the HPET timers are programmed to IRQ22, but you can choose typically from 20-23. The TIMERn_INT_ROUT_CAP bits in the TIMn_CONF register will tell you what IRQs you can choose from. HPT_IOControl() can be used to set a new HPET counter frequency (actually you configure the counter timeout value in microseconds), start and stop the timer, and request the current HPET counter value. The latter is interesting because the Windows CE QueryPerformanceCounter() and QueryPerformanceFrequency() APIs implement the same functionality, albeit based on other counter implementations. HpetDrvIst() contains the IST code. DWORD WINAPI HpetDrvIst(LPVOID lpArg) { psHpetDeviceContext pHwContext = (psHpetDeviceContext)lpArg; DWORD mainCount = READDWORD(pHwContext->g_hpet_va, GenCapIDReg + 4); // Main Counter Tick period (fempto sec 10E-15) DWORD i = 0; while (1) { WaitForSingleObject(pHwContext->g_isrEvent, INFINITE); #if defined(_TRIGGER_SEMAPHORE) LONG p = 0; BOOL b = ReleaseSemaphore(pHwContext->g_triggerEvent, 1, &p); #elif defined(_TRIGGER_EVENT) BOOL b = SetEvent(pHwContext->g_triggerEvent); #else #pragma error("Unknown TRIGGER") #endif #if defined(_TIMER0) DWORD currentCount = READDWORD(pHwContext->g_hpet_va, MainCounterReg); DWORD comparator = READDWORD(pHwContext->g_hpet_va, Tim0_ComparatorReg + 0); SETBIT(pHwContext->g_hpet_va, GenIntStaReg, 0); // clear interrupt on HPET level InterruptDone(pHwContext->g_sysIntr); // clear interrupt on OS level _LOGMSG(ZONE_INTERRUPT, (L"%s: HpetDrvIst 0 %06d %08X %08X", pHwContext->g_id, i++, currentCount, comparator)); #elif defined(_TIMER2) DWORD currentCount = READDWORD(pHwContext->g_hpet_va, MainCounterReg); DWORD previousComparator = READDWORD(pHwContext->g_hpet_va, Tim2_ComparatorReg + 0); pHwContext->g_counter2.QuadPart += pHwContext->g_comparator.QuadPart; // increment virtual counter (higher accuracy) DWORD comparator = (DWORD)(pHwContext->g_counter2.QuadPart >> 8); // "round" to real value WRITEDWORD(pHwContext->g_hpet_va, Tim2_ComparatorReg + 0, comparator); SETBIT(pHwContext->g_hpet_va, GenIntStaReg, 2); // clear interrupt on HPET level InterruptDone(pHwContext->g_sysIntr); // clear interrupt on OS level _LOGMSG(ZONE_INTERRUPT, (L"%s: HpetDrvIst 2 %06d %08X %08X (%08X)", pHwContext->g_id, i++, currentCount, comparator, comparator - previousComparator)); #else #pragma error("Unknown TIMER") #endif } return 1; } The following figure shows how the HPET hardware interrupt via ISR -> IST is translated in a Windows CE Event or Semaphore by the HPET driver. The Event or Semaphore can be used to trigger a Windows CE application. HpetTest.cpp (HpetTest.exe)This cpp file contains sample source how to use the HPET driver from an application. The file is part of a separate (smart device) VS2013 solution. It contains code to measure the generated Event/Semaphore times by means of GetSystemTime() and QueryPerformanceCounter() and QueryPerformanceFrequency() APIs. HPET evaluation If you scan the internet about HPET, you'll find many remarks about buggy HPET implementations and bad performance. Unfortunately that is true. I tested the HPET driver on an Intel ICH7M SBC (release date 2008). 
When a HPET timer expires on the ICH7M, an interrupt indeed is generated, but right after you clear the interrupt, a few more unwanted interrupts (too soon!) occur as well. I tested and debugged it for a loooong time, but I couldn't get it to work. I concluded the ICH7M's HPET is buggy Intel hardware. I tested the HPET driver successfully on a more recent NM10 SBC (release date 2013). With the NM10 chipset however, I am not fully convinced about the timer's frequency accuracy. In the long run - on average - all is fine, but occasionally I experienced up to 20-microsecond delays (which were immediately compensated on the next interrupt). Of course, this was all measured by software, but I still experienced the occasional delay when both the HPET driver IST thread and the application thread ran at CeSetThreadPriority(1). If it is not the hardware, only the kernel can cause this delay. But Windows CE is an RTOS and I have never experienced such long delays with previous versions of Windows CE. I tested and developed this on WCE8; I am not heavily experienced with it yet. Internet forum threads however mention inaccurate HPET timer implementations as well. At this moment I haven't figured out what is going on here. Useful references:
http://www.intel.com/content/dam/www/public/us/en/documents/technical-specifications/software-developers-hpet-spec-1-0a.pdf
http://en.wikipedia.org/wiki/High_Precision_Event_Timer
http://wiki.osdev.org/HPET
Windows CE BSP source file package for HPET in MyBsp. Note that this source code is "As Is". It is still under development and I cannot (and never will) guarantee the correctness of the code. Use it as a guide for your own HPET integration.

    Read the article

  • C#: My World Clock

    - by Bruce Eitman
    [Placeholder:  I will post the entire project soon] I have been working on cleaning my office of 8 years of stuff from several engineers working on many projects.  It turns out that we have a few extra single board computers with displays, so at the end of the day last Friday I though why not create a little application to display the time, you know, a clock.  How difficult could that be?  It turns out that it is quite simple – until I decided to gold plate the project by adding time displays for our offices around the world. I decided to use C#, which actually made creating the main clock quite easy.   The application was simply a text box and a timer.  I set the timer to fire a couple of times a second, and when it does use a DateTime object to get the current time and retrieve a string to display. And I could have been done, but of course that gold plating came up.   Seems simple enough, simply offset the time from the local time to the location that I want the time for and display it.    Sure enough, I had the time displayed for UK, Italy, Kansas City, Japan and China in no time at all. But it is October, and for those of us still stuck with Daylight Savings Time, we know that the clocks are about to change.   My first attempt was to simply check to see if the local time was DST or Standard time, then change the offset for China.  China doesn’t have Daylight Savings Time. If you know anything about the time changes around the world, you already know that my plan is flawed – in a big way.   It turns out that the transitions in and out of DST take place at different times around the world.   If you didn’t know that, do a quick search for “Daylight Savings” and you will find many WEB sites dedicated to tracking the time changes dates, and times. Now the real challenge of this application; how do I programmatically find out when the time changes occur and handle them correctly?  After a considerable amount of research it turns out that the solution is to read the data from the registry and parse it to figure out when the time changes occur. Reading Time Change Information from the Registry Reading the data from the registry is simple, using the data is a little more complicated.  First, reading from the registry can be done like:             byte[] binarydata = (byte[])Registry.GetValue("HKEY_LOCAL_MACHINE\\Time Zones\\Eastern Standard Time", "TZI", null);   Where I have hardcoded the registry key for example purposes, but in the end I will use some variables.   We now have a binary blob with the data, but it needs to be converted to use the real data.   To start we will need a couple of structs to hold the data and make it usable.   We will need a SYSTEMTIME and REG_TZI_FORMAT.   You may have expected that we would need a TIME_ZONE_INFORMATION struct, but we don’t.   The data is stored in the registry as a REG_TZI_FORMAT, which excludes some of the values found in TIME_ZONE_INFORMATION.     struct SYSTEMTIME     {         internal short wYear;         internal short wMonth;         internal short wDayOfWeek;         internal short wDay;         internal short wHour;         internal short wMinute;         internal short wSecond;         internal short wMilliseconds;     }       struct REG_TZI_FORMAT     {         internal long Bias;         internal long StdBias;         internal long DSTBias;         internal SYSTEMTIME StandardStart;         internal SYSTEMTIME DSTStart;     }   Now we need to convert the binary blob to a REG_TZI_FORMAT.   
To do that I created the following helper functions:         private void BinaryToSystemTime(ref SYSTEMTIME ST, byte[] binary, int offset)         {             ST.wYear = (short)(binary[offset + 0] + (binary[offset + 1] << 8));             ST.wMonth = (short)(binary[offset + 2] + (binary[offset + 3] << 8));             ST.wDayOfWeek = (short)(binary[offset + 4] + (binary[offset + 5] << 8));             ST.wDay = (short)(binary[offset + 6] + (binary[offset + 7] << 8));             ST.wHour = (short)(binary[offset + 8] + (binary[offset + 9] << 8));             ST.wMinute = (short)(binary[offset + 10] + (binary[offset + 11] << 8));             ST.wSecond = (short)(binary[offset + 12] + (binary[offset + 13] << 8));             ST.wMilliseconds = (short)(binary[offset + 14] + (binary[offset + 15] << 8));         }             private REG_TZI_FORMAT ConvertFromBinary(byte[] binarydata)         {             REG_TZI_FORMAT RTZ = new REG_TZI_FORMAT();               RTZ.Bias = binarydata[0] + (binarydata[1] << 8) + (binarydata[2] << 16) + (binarydata[3] << 24);             RTZ.StdBias = binarydata[4] + (binarydata[5] << 8) + (binarydata[6] << 16) + (binarydata[7] << 24);             RTZ.DSTBias = binarydata[8] + (binarydata[9] << 8) + (binarydata[10] << 16) + (binarydata[11] << 24);             BinaryToSystemTime(ref RTZ.StandardStart, binarydata, 4 + 4 + 4);             BinaryToSystemTime(ref RTZ.DSTStart, binarydata, 4 + 16 + 4 + 4);               return RTZ;         }   I am the first to admit that there may be a better way to get the settings from the registry and into the REG_TXI_FORMAT, but I am not a great C# programmer which I have said before on this blog.   So sometimes I chose brute force over elegant. Now that we have the Bias information and the start date information, we can start to make sense of it.   The bias is an offset, in minutes, from local time (if already in local time for the time zone in question) to get to UTC – or as Microsoft defines it: UTC = local time + bias.  Standard bias is an offset to adjust for standard time, which I think is usually zero.   And DST bias is and offset to adjust for daylight savings time. Since we don’t have the local time for a time zone other than the one that the computer is set to, what we first need to do is convert local time to UTC, which is simple enough using:                 DateTime.Now.ToUniversalTime(); Then, since we have UTC we need to do a little math to alter the formula to: local time = UTC – bias.  In other words, we need to subtract the bias minutes. I am ahead of myself though, the standard and DST start dates really aren’t dates.   Instead they indicate the month, day of week and week number of the time change.   The dDay member of SYSTEM time will be set to the week number of the date change indicating that the change happens on the first, second… day of week of the month.  So we need to convert them to dates so that we can determine which bias to use, and when to change to a different bias.   To do that, I wrote the following function:         private DateTime SystemTimeToDateTimeStart(SYSTEMTIME Time, int Year)         {             DayOfWeek[] Days = { DayOfWeek.Sunday, DayOfWeek.Monday, DayOfWeek.Tuesday, DayOfWeek.Wednesday, DayOfWeek.Thursday, DayOfWeek.Friday, DayOfWeek.Saturday };             DateTime InfoTime = new DateTime(Year, Time.wMonth, Time.wDay == 1 ? 
1 : ((Time.wDay - 1) * 7) + 1, Time.wHour, Time.wMinute, Time.wSecond, DateTimeKind.Utc);             DateTime BestGuess = InfoTime;             while (BestGuess.DayOfWeek != Days[Time.wDayOfWeek])             {                 BestGuess = BestGuess.AddDays(1);             }             return BestGuess;         }   SystemTimeToDateTimeStart gets two parameters; a SYSTEMTIME and a year.   The reason is that we will try this year and next year because we are interested in start dates that are in the future, not the past.  The function starts by getting a new Datetime with the first possible date and then looking for the correct date. Using the start dates, we can then determine the correct bias to use, and the next date that time will change:             NextTimeChange = StandardChange;             CurrentBias = TimezoneSettings.Bias + TimezoneSettings.DSTBias;             if (DSTChange.Year != 1 && StandardChange.Year != 1)             {                 if (DSTChange.CompareTo(StandardChange) < 0)                 {                     NextTimeChange = DSTChange;                     CurrentBias = TimezoneSettings.StdBias + TimezoneSettings.Bias;                 }             }             else             {                 // I don't like this, but it turns out that China Standard Time                 // has a DSTBias of -60 on every Windows system that I tested.                 // So, if no DST transitions, then just use the Bias without                 // any offset                 CurrentBias = TimezoneSettings.Bias;             }   Note that some time zones do not change time, in which case the years will remain set to 1.   Further, I found that the registry settings are actually wrong in that the DST Bias is set to -60 for China even though there is not DST in China, so I ignore the standard and DST bias for those time zones. There is one thing that I have not solved, and don’t plan to solve.  If the time zone for this computer changes, this application will not update the clock using the new time zone.  I tell  you this because you may need to deal with it – I do not because I won’t let the user get to the control panel applet to change the timezone. Copyright © 2012 – Bruce Eitman All Rights Reserved
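To make the flow above concrete, here is a minimal sketch (my own, not part of the project code) of how the registry blob turns into a remote clock reading for a zone without DST transitions. It assumes the REG_TZI_FORMAT struct and the ConvertFromBinary() helper shown earlier, plus a using Microsoft.Win32; for the Registry class; "China Standard Time" is just an example key name:

        private DateTime GetRemoteTime(string timeZoneKeyName)
        {
            // Read the REG_TZI_FORMAT blob for the requested zone, e.g. "China Standard Time".
            byte[] binarydata = (byte[])Registry.GetValue(
                "HKEY_LOCAL_MACHINE\\Time Zones\\" + timeZoneKeyName, "TZI", null);
            REG_TZI_FORMAT tzi = ConvertFromBinary(binarydata);

            // UTC = local time + bias, so the remote local time = UTC - bias (bias is in minutes).
            // When the remote zone is currently in daylight time, subtract tzi.DSTBias as well,
            // i.e. AddMinutes(-(tzi.Bias + tzi.DSTBias)), matching the CurrentBias logic above.
            return DateTime.Now.ToUniversalTime().AddMinutes(-tzi.Bias);
        }

In the real application, the NextTimeChange date computed above tells you when to flip between the standard and daylight bias values.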

    Read the article

  • Create Orchard Module in a Separate Project

    - by Steve Michelotti
    The Orchard Project is a new OOS Microsoft project that is being developed up on CodePlex. From the Orchard home page on CodePlex, it states “Orchard project is focused on delivering a .NET-based CMS application that will allow users to rapidly create content-driven Websites, and an extensibility framework that will allow developers and customizers to provide additional functionality through modules and themes.” The Orchard Project site contains additional information including documentation and walkthroughs. The ability to create a composite solution based on a collection of modules is a compelling feature. In Orchard, these modules can just be created as simple MVC Areas or they can also be created inside of stand-alone web application projects.  The walkthrough for writing an Orchard module that is available on the Orchard site uses a simple Area that is created inside of the host application. It is based on the Orchard MIX presentation. This walkthrough does an effective job introducing various Orchard concepts such as hooking into the navigation system, theme/layout system, content types, and more.  However, creating an Orchard module in a separate project does not seem to be concisely documented anywhere. Orchard ships with several module OOTB that are in separate assemblies – but again, it’s not well documented how to get started building one from scratch. The following are the steps I took to successfully get an Orchard module in a separate project up and running. Step 1 – Download the OrchardIIS.zip file from the Orchard Release page. Unzip and open up the solution. Step 2 – Add your project to the solution. I named my project “Orchard.Widget” and used and “MVC 2 Empty Web Application” project type. Make sure you put the physical path inside the “Modules” sub-folder to the main project like this: At this point the solution should look like: Step 3 – Add assembly references to Orchard.dll and Orchard.Core.dll. Step 4 – Add a controller and view.  I’ll just create a Hello World controller and view. Notice I created the view as a partial view (*.ascx). Also add the [Themed] attribute to the top of the HomeController class just like the normal Orchard walk through shows it. Step 5 – Add Module.txt to the project root. The is a very important step. Orchard will not recognize your module without this text file present.  It can contain just the name of your module: name: Widget Step 6 – Add Routes.cs. Notice I’ve given an area name of “Orchard.Widget” on lines 26 and 33. 1: using System; 2: using System.Collections.Generic; 3: using System.Web.Mvc; 4: using System.Web.Routing; 5: using Orchard.Mvc.Routes; 6:   7: namespace Orchard.Widget 8: { 9: public class Routes : IRouteProvider 10: { 11: public void GetRoutes(ICollection<RouteDescriptor> routes) 12: { 13: foreach (var routeDescriptor in GetRoutes()) 14: { 15: routes.Add(routeDescriptor); 16: } 17: } 18:   19: public IEnumerable<RouteDescriptor> GetRoutes() 20: { 21: return new[] { 22: new RouteDescriptor { 23: Route = new Route( 24: "Widget/{controller}/{action}/{id}", 25: new RouteValueDictionary { 26: {"area", "Orchard.Widget"}, 27: {"controller", "Home"}, 28: {"action", "Index"}, 29: {"id", ""} 30: }, 31: new RouteValueDictionary(), 32: new RouteValueDictionary { 33: {"area", "Orchard.Widget"} 34: }, 35: new MvcRouteHandler()) 36: } 37: }; 38: } 39: } 40: } Step 7 – Add MainMenu.cs. This will make sure that an item appears in the main menu called “Widget” which points to the module. 
1: using System; 2: using Orchard.UI.Navigation; 3:   4: namespace Orchard.Widget 5: { 6: public class MainMenu : INavigationProvider 7: { 8: public void GetNavigation(NavigationBuilder builder) 9: { 10: builder.Add(menu => menu.Add("Widget", item => item.Action("Index", "Home", new 11: { 12: area = "Orchard.Widget" 13: }))); 14: } 15:   16: public string MenuName 17: { 18: get { return "main"; } 19: } 20: } 21: } Step 8 – Clean up web.config. By default Visual Studio adds numerous sections to the web.config. The sections that can be removed are: appSettings, connectionStrings, authentication, membership, profile, and roleManager. Step 9 – Delete Global.asax. This project will ultimately be running from inside the Orchard host so this “sub-site” should not have its own Global.asax.   Now you’re ready the run the app.  When you first run it, the “Widget” menu item will appear in the main menu because of the MainMenu.cs file we added: We can then click the “Widget” link in the main menu to send us over to our view:   Packaging From start to finish, it’s a relatively painless experience but it could be better. For example, a Visual Studio project template that encapsulates aspects from this blog post would definitely make it a lot easier to get up and running with creating an Orchard module.  Another aspect I found interesting is that if you read the first paragraph of the walkthrough, it says, “You can also develop modules as separate projects, to be packaged and shared with other users of Orchard CMS (the packaging story is still to be defined, along with marketplaces for sharing modules).” In particular, I will be extremely curious to see how the “packaging story” evolves. The first thing that comes to mind for me is: what if we explored MvcContrib Portable Areas as a potential mechanism for this packaging? This would certainly make things easy since all artifacts (aspx, aspx, images, css, javascript) are all wrapped up into a single assembly. Granted, Orchard does have its own infrastructure for layouts and themes but it seems like integrating portable areas into this pipeline would not be a difficult undertaking. Maybe that’ll be the next research task. :)
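One piece the walkthrough above does not list is the Hello World controller from Step 4. A minimal sketch of it is below; the namespace and view wiring are assumptions on my part, and I believe the [Themed] attribute comes from the Orchard.Themes namespace, but check that against your Orchard drop:

using System.Web.Mvc;
using Orchard.Themes;

namespace Orchard.Widget.Controllers
{
    // [Themed] asks Orchard to wrap the result in the current theme/layout, as in Step 4.
    [Themed]
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            // Renders the partial view added in Step 4 (an .ascx under Views/Home).
            return View();
        }
    }
}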

    Read the article

  • C#/.NET &ndash; Finding an Item&rsquo;s Index in IEnumerable&lt;T&gt;

    - by James Michael Hare
    Sorry for the long blogging hiatus.  First it was, of course, the holidays hustle and bustle, then my brother and his wife gave birth to their son, so I’ve been away from my blogging for two weeks. Background: Finding an item’s index in List<T> is easy… Many times in our day to day programming activities, we want to find the index of an item in a collection.  Now, if we have a List<T> and we’re looking for the item itself this is trivial: 1: // assume have a list of ints: 2: var list = new List<int> { 1, 13, 42, 64, 121, 77, 5, 99, 132 }; 3:  4: // can find the exact item using IndexOf() 5: var pos = list.IndexOf(64); This will return the position of the item if it’s found, or –1 if not.  It’s easy to see how this works for primitive types where equality is well defined.  For complex types, however, it will attempt to compare them using EqualityComparer<T>.Default which, in a nutshell, relies on the object’s Equals() method. So what if we want to search for a condition instead of equality?  That’s also easy in a List<T> with the FindIndex() method: 1: // assume have a list of ints: 2: var list = new List<int> { 1, 13, 42, 64, 121, 77, 5, 99, 132 }; 3:  4: // finds index of first even number or -1 if not found. 5: var pos = list.FindIndex(i => i % 2 == 0);   Problem: Finding an item’s index in IEnumerable<T> is not so easy... This is all well and good for lists, but what if we want to do the same thing for IEnumerable<T>?  A collection of IEnumerable<T> has no indexing, so there’s no direct method to find an item’s index.  LINQ, as powerful as it is, gives us many tools to get us this information, but not in one step.  As with almost any problem involving collections, there are several ways to accomplish the same goal.  And once again as with almost any problem involving collections, the choice of the solution somewhat depends on the situation. So let’s look at a few possible alternatives.  I’m going to express each of these as extension methods for simplicity and consistency. Solution: The TakeWhile() and Count() combo One of the things you can do is to perform a TakeWhile() on the list as long as your find condition is not true, and then do a Count() of the items it took.  The only downside to this method is that if the item is not in the list, the index will be the full Count() of items, and not –1.  So if you don’t know the size of the list beforehand, this can be confusing. 1: // a collection of extra extension methods off IEnumerable<T> 2: public static class EnumerableExtensions 3: { 4: // Finds an item in the collection, similar to List<T>.FindIndex() 5: public static int FindIndex<T>(this IEnumerable<T> list, Predicate<T> finder) 6: { 7: // note if item not found, result is length and not -1! 8: return list.TakeWhile(i => !finder(i)).Count(); 9: } 10: } Personally, I don’t like switching the paradigm of not found away from –1, so this is one of my least favorites.  Solution: Select with index Many people don’t realize that there is an alternative form of the LINQ Select() method that will provide you an index of the item being selected: 1: list.Select( (item,index) => do something here with the item and/or index... ) This can come in handy, but must be treated with care.  This is because the index provided is only as pertains to the result of previous operations (if any).  
For example: 1: // assume have a list of ints: 2: var list = new List<int> { 1, 13, 42, 64, 121, 77, 5, 99, 132 }; 3:  4: // you'd hope this would give you the indexes of the even numbers 5: // which would be 2, 3, 8, but in reality it gives you 0, 1, 2 6: list.Where(item => item % 2 == 0).Select((item,index) => index); The reason the example gives you the collection { 0, 1, 2 } is because the where clause passes over any items that are odd, and therefore only the even items are given to the select and only they are given indexes. Conversely, we can’t select the index and then test the item in a Where() clause, because then the Where() clause would be operating on the index and not the item! So, what we have to do is to select the item and index and put them together in an anonymous type.  It looks ugly, but it works: 1: // extensions defined on IEnumerable<T> 2: public static class EnumerableExtensions 3: { 4: // finds an item in a collection, similar to List<T>.FindIndex() 5: public static int FindIndex<T>(this IEnumerable<T> list, Predicate<T> finder) 6: { 7: // if you don't name the anonymous properties they are the variable names 8: return list.Select((item, index) => new { item, index }) 9: .Where(p => finder(p.item)) 10: .Select(p => p.index + 1) 11: .FirstOrDefault() - 1; 12: } 13: }     So let’s look at this, because i know it’s convoluted: First Select() joins the items and their indexes into an anonymous type. Where() filters that list to only the ones matching the predicate. Second Select() picks the index of the matches and adds 1 – this is to distinguish between not found and first item. FirstOrDefault() returns the first item found from the previous clauses or default (zero) if not found. Subtract one so that not found (zero) will be –1, and first item (one) will be zero. The bad thing is, this is ugly as hell and creates anonymous objects for each item tested until it finds the match.  This concerns me a bit but we’ll defer judgment until compare the relative performances below. Solution: Convert ToList() and use FindIndex() This solution is easy enough.  We know any IEnumerable<T> can be converted to List<T> using the LINQ extension method ToList(), so we can easily convert the collection to a list and then just use the FindIndex() method baked into List<T>. 1: // a collection of extension methods for IEnumerable<T> 2: public static class EnumerableExtensions 3: { 4: // find the index of an item in the collection similar to List<T>.FindIndex() 5: public static int FindIndex<T>(this IEnumerable<T> list, Predicate<T> finder) 6: { 7: return list.ToList().FindIndex(finder); 8: } 9: } This solution is simplicity itself!  It is very concise and elegant and you need not worry about anyone misinterpreting what it’s trying to do (as opposed to the more convoluted LINQ methods above). But the main thing I’m concerned about here is the performance hit to allocate the List<T> in the ToList() call, but once again we’ll explore that in a second. Solution: Roll your own FindIndex() for IEnumerable<T> Of course, you can always roll your own FindIndex() method for IEnumerable<T>.  It would be a very simple for loop which scans for the item and counts as it goes.  
There are many ways to do this, but one such way might look like:

 1: // extension methods for IEnumerable<T>
 2: public static class EnumerableExtensions
 3: {
 4:     // Finds an item matching a predicate in the enumeration, much like List<T>.FindIndex()
 5:     public static int FindIndex<T>(this IEnumerable<T> list, Predicate<T> finder)
 6:     {
 7:         int index = 0;
 8:         foreach (var item in list)
 9:         {
10:             if (finder(item))
11:             {
12:                 return index;
13:             }
14:
15:             index++;
16:         }
17:
18:         return -1;
19:     }
20: }

Well, it's not quite simplicity, and those less familiar with LINQ may prefer it since it doesn't include all of the lambdas and behind-the-scenes iterators that come with deferred execution. But does having this long, blown-out method really gain us much in performance?

Comparison of Proposed Solutions

So we've now seen four solutions, let's analyze their collective performance. I took each of the four methods described above and ran them over 100,000 iterations of lists of size 10, 100, 1000, and 10000 and here's the performance results. Then I looked for targets at the beginning of the list (best case), middle of the list (the average case) and not in the list (worst case, as it must scan all of the list). Each of the times below is the average time in milliseconds for one execution as computed over the 100,000 iterations:

Searches Matching First Item (Best Case)

            10       100      1000     10000
TakeWhile   0.0003   0.0003   0.0003   0.0003
Select      0.0005   0.0005   0.0005   0.0005
ToList      0.0002   0.0003   0.0013   0.0121
Manual      0.0001   0.0001   0.0001   0.0001

Searches Matching Middle Item (Average Case)

            10       100      1000     10000
TakeWhile   0.0004   0.0020   0.0191   0.1889
Select      0.0008   0.0042   0.0387   0.3802
ToList      0.0002   0.0007   0.0057   0.0562
Manual      0.0002   0.0013   0.0129   0.1255

Searches Where Not Found (Worst Case)

            10       100      1000     10000
TakeWhile   0.0006   0.0039   0.0381   0.3770
Select      0.0012   0.0081   0.0758   0.7583
ToList      0.0002   0.0012   0.0100   0.0996
Manual      0.0003   0.0026   0.0253   0.2514

Notice something interesting here: you'd think the "roll your own" loop would be the most efficient, but it only wins when the item is first (or very close to it) regardless of list size. In almost all other cases, and in particular the average case and worst case, the ToList()/FindIndex() combo wins for performance, even though it is creating some temporary memory to hold the List<T>. If you examine the algorithm, the reason why is most likely because once it's in a ToList() form, internally FindIndex() scans the internal array which is much more efficient to iterate over. Thus, it takes a one-time performance hit (not including any GC impact) to create the List<T>, but after that the performance is much better.

Summary

If you're concerned about too many throw-away objects, you can always roll your own FindIndex() method, but for sheer simplicity and overall performance, using the ToList()/FindIndex() combo performs best on nearly all list sizes in the average and worst cases.

Technorati Tags: C#,.NET,Little Wonders,BlackRabbitCoder,Software,LINQ,List
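One postscript on usage: whichever of the four implementations you decide to keep (keep only one EnumerableExtensions class to avoid ambiguity), the call site looks the same. A trivial sketch of my own:

using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        // Where() returns an IEnumerable<int> with no indexer of its own.
        IEnumerable<int> numbers = new[] { 1, 13, 42, 64, 121, 77, 5, 99, 132 }
            .Where(n => n > 10);

        // Uses whichever FindIndex() extension method from above is in scope.
        int pos = numbers.FindIndex(n => n % 2 == 0);   // first even number
        Console.WriteLine(pos);                         // prints 1 (13 is at index 0, 42 at index 1)
    }
}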

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and delete the Azure deployment. The standing joke with the audience was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective. The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one-hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will take a run through how I achieved this.

Ray Tracing

Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate whether the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.

Pin-Board Toys

Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that became popular in the 80’s, and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.

PolyRay

Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file. 
background Midnight_Blue   static define matte surface { ambient 0.1 diffuse 0.7 } define matte_white texture { matte { color white } } define matte_black texture { matte { color dark_slate_gray } } define position_cylindrical 3 define lookup_sawtooth 1 define light_wood <0.6, 0.24, 0.1> define median_wood <0.3, 0.12, 0.03> define dark_wood <0.05, 0.01, 0.005>     define wooden texture { noise surface { ambient 0.2  diffuse 0.7  specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1  lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } } define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }} define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7  } define steely_blue texture { shiny { color black } } define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }   viewpoint {     from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60     resolution 640, 480 aspect 1.6 image_format 0 }       light <-10, 30, 20> light <-10, 30, -20>   object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }   object { sphere <0.000, 0.000, 0.000>, 1.00 chrome } object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }   After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later. Modeling the Pin Board The frame of the pin-board is made up of three boxes, and six cylinders, the front box is modeled using a clear, slightly reflective solid, with the same refractive index of glass. The other shapes are modeled as metal. object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass } object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue } object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue } object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }   In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code. 
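The console application mentioned above is not included in the article, but a minimal sketch of the kind of nested-loop generator it describes might look like the following. The grid size, pin spacing, radii and output file name are illustrative assumptions, not the values actually used for the 11,481-pin board.

// sketch of a console app that emits PolyRay "object" lines for two offset grids
// of pins (a pin head sphere plus a shaft cylinder, as in the single-pin example).
// All constants here are assumptions for illustration only.
using System.IO;

class PinBoardGenerator
{
    static void Main()
    {
        const int gridSize = 75;            // assumed pins per side, per grid
        const double spacing = 0.14;        // assumed distance between pin centres
        const double headRadius = 0.05;     // assumed pin head radius
        const double shaftRadius = 0.025;   // assumed pin shaft radius

        using (var writer = new StreamWriter("pins.pi"))
        {
            for (int grid = 0; grid < 2; grid++)          // two intersecting matrixes of pins
            {
                double offset = grid * spacing / 2;       // second grid offset by half the spacing
                for (int x = 0; x < gridSize; x++)
                {
                    for (int y = 0; y < gridSize; y++)
                    {
                        double px = -5 + offset + x * spacing;
                        double py = -1 + offset + y * spacing;
                        writer.WriteLine("object {{ sphere <{0:0.000}, {1:0.000}, 0.000>, {2:0.000} chrome }}",
                            px, py, headRadius);
                        writer.WriteLine("object {{ cylinder <{0:0.000}, {1:0.000}, 0.000>, <{0:0.000}, {1:0.000}, -0.500>, {2:0.000} chrome }}",
                            px, py, shaftRadius);
                    }
                }
            }
        }
    }
}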
For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of each pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.

Windows Kinect

The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode that allows depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

Creating a Depth Field Animation

The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation. The Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.

Rendering a Test Frame

In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board. 
The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

Windows Azure Worker Roles

The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

Number of Servers    Cost
1                    $500
16                   $8,000
256                  $128,000

As well as the cost of the servers, there would be additional costs for networking, racks, etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor-intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

Number of Worker Roles    Render Time    Cost
1                         256 hours      $30.72
16                        16 hours       $30.72
256                       1 hour         $30.72

Using worker roles in Windows Azure results in the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

Creating a Render Farm in Windows Azure

The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

On-Premise
· Windows Kinect – Used in combination with the Kinect Explorer to create a stream of depth images.
· Animation Creator – This application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue.
· Process Monitor – This application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
· Image Downloader – This application polls the image queue and downloads the rendered animation files once they are complete.

Windows Azure
· Azure Storage – Queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image. 
The service definition for the worker role with the local storage configuration highlighted is shown below. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="CloudRay" >   <WorkerRole name="CloudRayWorkerRole" vmsize="Small">     <Imports>     </Imports>     <ConfigurationSettings>       <Setting name="DataConnectionString" />     </ConfigurationSettings>     <LocalResources>       <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />     </LocalResources>   </WorkerRole> </ServiceDefinition>     The two executable programs, PolyRay.exe and DTA.exe are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker roll will use the following process to render the animation frames. 1.       The worker process polls the job queue, if a job is available the scene description file is downloaded from blob storage to local storage. 2.       PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file. 3.       DTA.exe is started in a process with the appropriate command line arguments convert the TGA file to a JPG file. 4.       The JPG file is uploaded from local storage to the images blob container. 5.       A message is placed on the images queue to indicate a new image is available for download. 6.       The job message is deleted from the job queue. 7.       The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used. The code for this is shown below. public override void Run() {     // Set environment variables     string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);     string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);       LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");     string localStorageRootPath = rayStorage.RootPath;       JobQueue jobQueue = new JobQueue("renderjobs");     JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");     CloudRayBlob sceneBlob = new CloudRayBlob("scenes");     CloudRayBlob imageBlob = new CloudRayBlob("images");     RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();       Frames = 0;       while (true)     {         // Get the render job from the queue         CloudQueueMessage jobMsg = jobQueue.Get();           if (jobMsg != null)         {             // Get the file details             string sceneFile = jobMsg.AsString;             string tgaFile = sceneFile.Replace(".pi", ".tga");             string jpgFile = sceneFile.Replace(".pi", ".jpg");               string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);             string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);             string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);               // Copy the scene file to local storage             sceneBlob.DownloadFile(sceneFilePath);               // Run the ray tracer.             
string polyrayArguments =                 string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);             Process polyRayProcess = new Process();             polyRayProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);             polyRayProcess.StartInfo.Arguments = polyrayArguments;             polyRayProcess.Start();             polyRayProcess.WaitForExit();               // Convert the image             string dtaArguments =                 string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName (jpgFilePath));             Process dtaProcess = new Process();             dtaProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);             dtaProcess.StartInfo.Arguments = dtaArguments;             dtaProcess.Start();             dtaProcess.WaitForExit();               // Upload the image to blob storage             imageBlob.UploadFile(jpgFilePath);               // Add a download job.             downloadQueue.Add(jpgFile);               // Delete the render job message             jobQueue.Delete(jobMsg);               Frames++;         }         else         {             Thread.Sleep(1000);         }           // Log the worker role activity.         roleLifecycleDataSource.Alive             ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);     } }     Monitoring Worker Role Instance Lifecycle In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.   public class RoleLifecycle : TableServiceEntity {     public string ServerName { get; set; }     public string Status { get; set; }     public DateTime StartTime { get; set; }     public DateTime EndTime { get; set; }     public long SecondsRunning { get; set; }     public DateTime LastActiveTime { get; set; }     public int Frames { get; set; }     public string Comment { get; set; }       public RoleLifecycle()     {     }       public RoleLifecycle(string roleName)     {         PartitionKey = roleName;         RowKey = Utils.GetAscendingRowKey();         Status = "Started";         StartTime = DateTime.UtcNow;         LastActiveTime = StartTime;         EndTime = StartTime;         SecondsRunning = 0;         Frames = 0;     } }     A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used be the monitoring application to determine the effectiveness of use of resources in the render farm. Rendering the Animation The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below. 
<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="CloudRayWorkerRole">
    <Instances count="16" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
        value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

About six minutes after deploying the application, the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames. Five minutes after the first worker role became active, the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation. With 16 worker roles up and running, it can be seen that one hour and 45 minutes of CPU time has been used to render 32 frames, with a render time of just under 10 minutes. At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour, more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="CloudRayWorkerRole">
    <Instances count="256" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
        value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Six minutes after the new configuration has been applied, 75 new worker roles have activated and are processing their first frames. Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds. We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes. The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes, with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute. 
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in distributing the load across the 256 worker role instances. The first 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.   Completed Animation I’ve uploaded the completed animation to YouTube, a low resolution preview is shown below. Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles   The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc Effective Use of Resources According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes CPU to render, this works out at 152 hours of compute time, rounded up to the nearest hour. As the usage for the worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilized the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively. Grid Computing Scenarios Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective. ·         Windows Azure can provide massive compute power, on demand, in a matter of minutes. ·         The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution. ·         Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget. ·         No charges for inbound data transfer makes the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.) Tips for using Windows Azure for Grid Computing Scenarios I found the implementation of a render farm using Windows Azure a fairly simple scenario to implement. I was impressed by ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes, in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure. ·         Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances. ·         Windows Azure accounts are typically limited to 20 cores. 
If you need to use more than this, a call to support and a credit card check will be required.
· Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
· Monitor the utilization of the resources you are provisioning; ensure that you are not paying for worker roles that are idle.
· If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
· Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
· Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal, it could get very expensive!
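As a small illustration of the idle-worker and queue tips above, one way to keep idle worker roles cheap is to back off the polling interval when the job queue is empty. The sketch below reuses the JobQueue wrapper and CloudQueueMessage type from the worker role code shown earlier; RenderFrame() is a hypothetical helper standing in for steps 2 to 7 of the render process, so this is a sketch of the idea rather than code from the article.

// hedged sketch: exponential back-off around the JobQueue wrapper shown earlier,
// so that idle worker roles poll the queue less aggressively.
private void PollWithBackOff(JobQueue jobQueue)
{
    int delayMs = 1000;                       // start by sleeping 1 second when idle
    const int maxDelayMs = 30000;             // never sleep longer than 30 seconds

    while (true)
    {
        CloudQueueMessage jobMsg = jobQueue.Get();   // same wrapper call as the original Run() loop
        if (jobMsg != null)
        {
            delayMs = 1000;                   // reset the back-off once work arrives
            RenderFrame(jobMsg);              // hypothetical helper: render, convert, upload, delete
        }
        else
        {
            Thread.Sleep(delayMs);
            delayMs = Math.Min(delayMs * 2, maxDelayMs);
        }
    }
}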

    Read the article

  • Installing nGinX Reverse Proxy on CentOS 5

    - by heavymark
    I'm trying to install nGinX as a reverse proxy on CentOS 5 with apache. The instructions to do this are here: http://wiki.mediatemple.net/w/(dv):Configure_nginx_as_reverse_proxy_web_server Note- in the instructions, for the url to get nginx I'm using the following: http://nginx.org/download/nginx-1.0.10.tar.gz Now here is my problem. After installing the required packages and running .configure I get the following: checking for OS + Linux 2.6.18-028stab094.3 x86_64 checking for C compiler ... found + using GNU C compiler + gcc version: 4.1.2 20080704 (Red Hat 4.1.2-51) checking for gcc -pipe switch ... found checking for gcc builtin atomic operations ... found checking for C99 variadic macros ... found checking for gcc variadic macros ... found checking for unistd.h ... found checking for inttypes.h ... found checking for limits.h ... found checking for sys/filio.h ... not found checking for sys/param.h ... found checking for sys/mount.h ... found checking for sys/statvfs.h ... found checking for crypt.h ... found checking for Linux specific features checking for epoll ... found checking for sendfile() ... found checking for sendfile64() ... found checking for sys/prctl.h ... found checking for prctl(PR_SET_DUMPABLE) ... found checking for sched_setaffinity() ... found checking for crypt_r() ... found checking for sys/vfs.h ... found checking for nobody group ... found checking for poll() ... found checking for /dev/poll ... not found checking for kqueue ... not found checking for crypt() ... not found checking for crypt() in libcrypt ... found checking for F_READAHEAD ... not found checking for posix_fadvise() ... found checking for O_DIRECT ... found checking for F_NOCACHE ... not found checking for directio() ... not found checking for statfs() ... found checking for statvfs() ... found checking for dlopen() ... not found checking for dlopen() in libdl ... found checking for sched_yield() ... found checking for SO_SETFIB ... not found checking for SO_ACCEPTFILTER ... not found checking for TCP_DEFER_ACCEPT ... found checking for accept4() ... not found checking for int size ... 4 bytes checking for long size ... 8 bytes checking for long long size ... 8 bytes checking for void * size ... 8 bytes checking for uint64_t ... found checking for sig_atomic_t ... found checking for sig_atomic_t size ... 4 bytes checking for socklen_t ... found checking for in_addr_t ... found checking for in_port_t ... found checking for rlim_t ... found checking for uintptr_t ... uintptr_t found checking for system endianess ... little endianess checking for size_t size ... 8 bytes checking for off_t size ... 8 bytes checking for time_t size ... 8 bytes checking for setproctitle() ... not found checking for pread() ... found checking for pwrite() ... found checking for sys_nerr ... found checking for localtime_r() ... found checking for posix_memalign() ... found checking for memalign() ... found checking for mmap(MAP_ANON|MAP_SHARED) ... found checking for mmap("/dev/zero", MAP_SHARED) ... found checking for System V shared memory ... found checking for POSIX semaphores ... not found checking for POSIX semaphores in libpthread ... found checking for struct msghdr.msg_control ... found checking for ioctl(FIONBIO) ... found checking for struct tm.tm_gmtoff ... found checking for struct dirent.d_namlen ... not found checking for struct dirent.d_type ... found checking for PCRE library ... found checking for system md library ... not found checking for system md5 library ... 
not found checking for OpenSSL md5 crypto library ... found checking for sha1 in system md library ... not found checking for OpenSSL sha1 crypto library ... found checking for zlib library ... found creating objs/Makefile Configuration summary + using system PCRE library + OpenSSL library is not used + md5: using system crypto library + sha1: using system crypto library + using system zlib library nginx path prefix: "/usr/local/nginx" nginx binary file: "/usr/local/nginx/sbin/nginx" nginx configuration prefix: "/usr/local/nginx/conf" nginx configuration file: "/usr/local/nginx/conf/nginx.conf" nginx pid file: "/usr/local/nginx/logs/nginx.pid" nginx error log file: "/usr/local/nginx/logs/error.log" nginx http access log file: "/usr/local/nginx/logs/access.log" nginx http client request body temporary files: "client_body_temp" nginx http proxy temporary files: "proxy_temp" nginx http fastcgi temporary files: "fastcgi_temp" nginx http uwsgi temporary files: "uwsgi_temp" nginx http scgi temporary files: "scgi_temp" It says if you get errors to stop and make sure packages are installed. I didn't get errors but as you can see I got several "not founds". Are those considered errors? If so how do I resolve that. And as noted in the link, I cannot install through yum, because it wont work with plesk then. Thanks!

    Read the article

  • .NET Security Part 2

    - by Simon Cooper
    So, how do you create partial-trust appdomains? Where do you come across them? There are two main situations in which your assembly runs as partially-trusted using the Microsoft .NET stack: Creating a CLR assembly in SQL Server with anything other than the UNSAFE permission set. The permissions available in each permission set are given here. Loading an assembly in ASP.NET in any trust level other than Full. Information on ASP.NET trust levels can be found here. You can configure the specific permissions available to assemblies using ASP.NET policy files. Alternatively, you can create your own partially-trusted appdomain in code and directly control the permissions and the full-trust API available to the assemblies you load into the appdomain. This is the scenario I’ll be concentrating on in this post. Creating a partially-trusted appdomain There is a single overload of AppDomain.CreateDomain that allows you to specify the permissions granted to assemblies in that appdomain – this one. This is the only call that allows you to specify a PermissionSet for the domain. All the other calls simply use the permissions of the calling code. If the permissions are restricted, then the resulting appdomain is referred to as a sandboxed domain. There are three things you need to create a sandboxed domain: The specific permissions granted to all assemblies in the domain. The application base (aka working directory) of the domain. The list of assemblies that have full-trust if they are loaded into the sandboxed domain. The third item is what allows us to have a fully-trusted API that is callable by partially-trusted code. I’ll be looking at the details of this in a later post. Granting permissions to the appdomain Firstly, the permissions granted to the appdomain. This is encapsulated in a PermissionSet object, initialized either with no permissions or full-trust permissions. For sandboxed appdomains, the PermissionSet is initialized with no permissions, then you add permissions you want assemblies loaded into that appdomain to have by default: PermissionSet restrictedPerms = new PermissionSet(PermissionState.None); // all assemblies need Execution permission to run at all restrictedPerms.AddPermission( new SecurityPermission(SecurityPermissionFlag.Execution)); // grant general read access to C:\config.xml restrictedPerms.AddPermission( new FileIOPermission(FileIOPermissionAccess.Read, @"C:\config.xml")); // grant permission to perform DNS lookups restrictedPerms.AddPermission( new DnsPermission(PermissionState.Unrestricted)); It’s important to point out that the permissions granted to an appdomain, and so to all assemblies loaded into that appdomain, are usable without needing to go through any SafeCritical code (see my last post if you’re unsure what SafeCritical code is). That is, partially-trusted code loaded into an appdomain with the above permissions (and so running under the Transparent security level) is able to create and manipulate a FileStream object to read from C:\config.xml directly. It is only for operations requiring permissions that are not granted to the appdomain that partially-trusted code is required to call a SafeCritical method that then asserts the missing permissions and performs the operation safely on behalf of the partially-trusted code. 
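To make that last point concrete, a minimal sketch of such a SafeCritical method is shown below. The file path and method name are purely illustrative assumptions; the original post does not include this code.

// minimal sketch (not from the original post) of a SafeCritical method in a
// full-trust API assembly: it asserts a permission the sandbox was not granted,
// performs the privileged operation on behalf of partially-trusted callers,
// then reverts the assert. Uses System.IO, System.Security and
// System.Security.Permissions; the path is illustrative only.
[SecuritySafeCritical]
public static string ReadServerSecret()
{
    var permission = new FileIOPermission(FileIOPermissionAccess.Read, @"C:\secrets\server.key");
    permission.Assert();
    try
    {
        return File.ReadAllText(@"C:\secrets\server.key");
    }
    finally
    {
        CodeAccessPermission.RevertAssert();
    }
}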
The application base of the domain

This is simply set as a property on an AppDomainSetup object, and is used as the default directory assemblies are loaded from:

AppDomainSetup appDomainSetup = new AppDomainSetup {
    ApplicationBase = @"C:\temp\sandbox",
};

If you've read the documentation around sandboxed appdomains, you'll notice that it mentions a security hole if this parameter is not set correctly. I'll be looking at this, and other pitfalls that will break the sandbox when using sandboxed appdomains, in a later post.

Full-trust assemblies in the appdomain

Finally, we need the strong names of the assemblies that, when loaded into the appdomain, will be run as full-trust, regardless of the permissions specified on the appdomain. These assemblies will contain methods and classes decorated with SafeCritical and Critical attributes. I'll be covering the details of creating full-trust APIs for partial-trust appdomains in a later post. This is how you get the strong name of an assembly to be executed as full-trust in the sandbox:

// get the Assembly object for the assembly
Assembly assemblyWithApi = ...

// get the StrongName from the assembly's collection of evidence
StrongName apiStrongName = assemblyWithApi.Evidence.GetHostEvidence<StrongName>();

Creating the sandboxed appdomain

So, putting these three together, you create the appdomain like so:

AppDomain sandbox = AppDomain.CreateDomain(
    "Sandbox",
    null,
    appDomainSetup,
    restrictedPerms,
    apiStrongName);

You can then load and execute assemblies in this appdomain like any other. For example, to load an assembly into the appdomain and get an instance of the Sandboxed.Entrypoint class, implementing IEntrypoint, you do this:

IEntrypoint o = (IEntrypoint)sandbox.CreateInstanceFromAndUnwrap(
    @"C:\temp\sandbox\SandboxedAssembly.dll",
    "Sandboxed.Entrypoint");

// call the Execute method on this object within the sandbox
o.Execute();

The second parameter to CreateDomain is for security evidence used in the appdomain. This was a feature of the .NET 2 security model, and has been (mostly) obsoleted in the .NET 4 model. Unless the evidence is needed elsewhere (e.g. isolated storage), you can pass in null for this parameter.

Conclusion

That's the basics of sandboxed appdomains. The most important object is the PermissionSet that defines the permissions available to assemblies running in the appdomain; it is this object that defines the appdomain as full or partial-trust. The appdomain also needs a default directory used for assembly lookups as the ApplicationBase parameter, and you can specify an optional list of the strong names of assemblies that will be given full-trust permissions if they are loaded into the sandboxed appdomain. Next time, I'll be looking closer at full-trust assemblies running in a sandboxed appdomain, and what you need to do to make an API available to partial-trust code.
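For completeness, the sandboxed assembly used in the CreateInstanceFromAndUnwrap() example above might look something like the following. This is my own sketch of a plausible shape, not code from the post; the interface would need to live in an assembly visible to both appdomains.

// sketch (not from the original post) of what SandboxedAssembly.dll might contain
public interface IEntrypoint
{
    void Execute();
}

namespace Sandboxed
{
    // derives from MarshalByRefObject so the instance is proxied across the
    // appdomain boundary rather than serialized into the caller's domain
    public class Entrypoint : MarshalByRefObject, IEntrypoint
    {
        public void Execute()
        {
            // runs with only the permissions granted to the sandboxed appdomain
            Console.WriteLine("Hello from the sandbox");
        }
    }
}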

    Read the article

  • Why is my external USB hard drive sometimes completely inaccessible?

    - by Eliah Kagan
    I have an external USB hard drive, consisting of an 1 TB SATA drive in a Rosewill RX35-AT-SU SLV Aluminum 3.5" Silver USB 2.0 External Enclosure, plugged into my SONY VAIO VGN-NS310F laptop. It is plugged directly into the computer (not through a hub). The drive inside the enclosure is a 7200 rpm Western Digital, but I don't remember the exact model. I can remove the drive from the enclosure (again), if people think it's necessary to know that detail. The drive is formatted ext4. I mount it dynamically with udisks on my Lubuntu 11.10 system, usually automatically via PCManFM. (I have had Lubuntu 12.04 on this machine, and experienced all this same behavior with that too.) Every once in a while--once or twice a day--it becomes inaccessible, and difficult to unmount. Attempting to unmount it with sudo umount ... gives an error message saying the drive is in use and suggesting fuser and lsof to find out what is using it. Killing processes found to be using the drive with fuser and lsof is sometimes sufficient to let me unmount it, but usually isn't. Once the drive is unmounted or the machine is rebooted, the drive will not mount. Plugging in the drive and turning it on registers nothing on the computer. dmesg is unchanged. The drive's access light usually blinks vigorously, as though the drive is being accessed constantly. Then eventually, after I keep the drive off for a while (half an hour), I am able to mount it again. While the drive doesn't work on this machine for a while, it will work immediately on another machine running the same version of Ubuntu. Sometimes bringing it back over from the other machine seems to "fix" it. Sometimes it doesn't. The drive doesn't always stop being accessible while mounted, before becoming unmountable. Sometimes it works fine, I turn off the computer, I turn the computer back on, and I cannot mount the drive. Currently this is the only drive with which I have this problem, but I've had problems that I think are the same as this, with different drives, on different Ubuntu machines. This laptop has another external USB drive plugged into it regularly, which doesn't have this problem. Unplugging that drive before plugging in the "problem" drive doesn't fix the problem. I've opened the drive up and made sure the connections were tight in the past, and that didn't seem to help (any more than waiting the same amount of time that it took to open and close the drive, before attempting to remount it). Does anyone have any ideas about what could be causing this, what troubleshooting steps I should perform, and/or how I could fix this problem altogether? Update: I tried replacing the USB data cable (from the enclosure to the laptop), as Merlin suggested. I should've tried that long ago, since it fits the symptoms perfectly (the drive works on another machine, which would make sense because the cable would be bent at a different angle, possibly completing a circuit of frayed wires). Unfortunately, though, this did not help--I have the same problem with the new cable. I'll try to provide additional detailed information about the drive inside the enclosure, next time I'm able to get the drive working. (At the moment I don't have another machine available to attach it.) Major Update (28 June 2012) The drive seems to have deteriorated considerably. I think this is so, because I've attached it to another machine and gotten lots of errors about invalid characters, when copying files from it. 
I am less interested in recovering data from the drive than I am in figuring out what is wrong with it. I specifically want to figure out if the problem is the drive or the enclosure. Now, when I plug the drive into the original machine where I was having the problems, it still doesn't appear (including with sudo fdisk -l), but it is recognized by the kernel and messages are added to dmesg. Most of the message consist of errors like this, repeated many times: [ 7.707593] sd 5:0:0:0: [sdc] Unhandled sense code [ 7.707599] sd 5:0:0:0: [sdc] Result: hostbyte=invalid driverbyte=DRIVER_SENSE [ 7.707606] sd 5:0:0:0: [sdc] Sense Key : Medium Error [current] [ 7.707614] sd 5:0:0:0: [sdc] Add. Sense: Unrecovered read error [ 7.707621] sd 5:0:0:0: [sdc] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00 [ 7.707636] end_request: critical target error, dev sdc, sector 0 [ 7.707641] Buffer I/O error on device sdc, logical block 0 Here are all the lines from dmesg starting with when the drive is recognized. Please note that: I'm back to running Lubuntu 12.04 on this machine (and perhaps that's a factor in better error messages). Now that the drive has been plugged into another machine and back into this one, and also now that this machine is back to running 12.04, the drive's access light doesn't blink as I had described. Looking at the drive, it would appear as though it is working normally, with low or no access. This behavior (the errors) occurs when rebooting the machine with the drive plugged in, and also when manually plugging in the drive. A few of the messages are about /dev/sdb. That drive is working fine. The bad drive is /dev/sdc. I just didn't want to edit anything out from the middle.

    Read the article

  • Trouble recovering MySQL InnoDB database after server crash

    - by Andy Shinn
    I had a server crash due to a broken iSCSI link (the filesystem went into read-only mode). After repairing the link and rebooting the machine (CentOS 5 / MySQL 5.1), the MySQL server would not start and gave the following error: 100603 19:11:46 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 100603 19:11:46 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... InnoDB: Warning: database page corruption or a failed InnoDB: file read of page 112541. InnoDB: Trying to recover it from the doublewrite buffer. InnoDB: Dump of the page: 100603 19:11:46 InnoDB: Page dump in ascii and hex (16384 bytes): lots of binary data 100603 19:11:46 InnoDB: Page checksum 953720272, prior-to-4.0.14-form checksum 2641912043 InnoDB: stored checksum 617821918, prior-to-4.0.14-form stored checksum 2080617765 InnoDB: Page lsn 115 2632899642, low 4 bytes of lsn at page end 2641594600 InnoDB: Page number (if stored to page already) 112541, InnoDB: space id (if created with = MySQL-4.1.1 and stored already) 0 InnoDB: Page may be an index page where index id is 0 616 InnoDB: Dump of corresponding page in doublewrite buffer: 100603 19:11:46 InnoDB: Page dump in ascii and hex (16384 bytes): more binary data 100603 19:11:46 InnoDB: Page checksum 908374788, prior-to-4.0.14-form checksum 824841363 InnoDB: stored checksum 912869634, prior-to-4.0.14-form stored checksum 2210927931 InnoDB: Page lsn 115 2635312169, low 4 bytes of lsn at page end 2633173354 InnoDB: Page number (if stored to page already) 112541, InnoDB: space id (if created with = MySQL-4.1.1 and stored already) 0 InnoDB: Page may be an index page where index id is 0 616 InnoDB: Also the page in the doublewrite buffer is corrupt. InnoDB: Cannot continue operation. InnoDB: You can try to recover the database with the my.cnf InnoDB: option: InnoDB: innodb_force_recovery=6 100603 19:11:46 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended Per the error message, I have tried setting set-variable=innodb_force_recovery=6 in the my.cnf to get to the data. This allows the MySQL server to start. But when I try to do a mysqldump of the database or a SELECT * INTO OUTFILE "filename" FROM broken_table; it seems to endlessly just export the same line over and over again. I have also tried http://code.google.com/p/innodb-tools/. But this tool fails with an error that 'blob' type is not supported. If I try to access the data using the PHP application it crashes MySQL: `100603 19:19:19 - mysqld got signal 11 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=8384512 read_buffer_size=131072 max_used_connections=2 max_threads=151 threads_connected=2 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 338317 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd: 0x15d33f0 Attempting backtrace. 
You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... stack_bottom = 0x453aff00 thread_stack 0x40000 /usr/libexec/mysqld(my_print_stacktrace+0x24) [0x874364] /usr/libexec/mysqld(handle_segfault+0x346) [0x5c9166] /lib64/libpthread.so.0 [0x3a6e40eb10] /usr/libexec/mysqld(rec_get_offsets_func+0x30) [0x7cc310] /usr/libexec/mysqld [0x7674d8] /usr/libexec/mysqld(btr_search_info_update_slow+0x638) [0x768d48] /usr/libexec/mysqld(btr_cur_search_to_nth_level+0xc7d) [0x75f86d] /usr/libexec/mysqld [0x7dd1c1] /usr/libexec/mysqld(row_search_for_mysql+0x18b0) [0x7e03d0] /usr/libexec/mysqld(ha_innobase::general_fetch(unsigned char*, unsigned int, unsigned int)+0x7c) [0x7526fc] /usr/libexec/mysqld(handler::read_multi_range_next(st_key_multi_range**)+0x29) [0x6aed09] /usr/libexec/mysqld(QUICK_RANGE_SELECT::get_next()+0x194) [0x69a964] /usr/libexec/mysqld [0x6aafe9] /usr/libexec/mysqld(sub_select(JOIN*, st_join_table*, bool)+0x56) [0x635196] /usr/libexec/mysqld [0x63f9cd] /usr/libexec/mysqld(JOIN::exec()+0x950) [0x6497c0] /usr/libexec/mysqld(mysql_select(THD*, Item*, TABLE_LIST*, unsigned int, List&, Item*, unsigned int, st_order*, st_order*, Item*, st_order*, unsign ed long long, select_result*, st_select_lex_unit*, st_select_lex*)+0x17b) [0x64b34b] /usr/libexec/mysqld(handle_select(THD*, st_lex*, select_result*, unsigned long)+0x169) [0x64bc79] /usr/libexec/mysqld [0x5d34b6] /usr/libexec/mysqld(mysql_execute_command(THD*)+0x4e5) [0x5d6b45] /usr/libexec/mysqld(mysql_parse(THD*, char const*, unsigned int, char const)+0x211) [0x5dc321] /usr/libexec/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x10b8) [0x5dd3f8] /usr/libexec/mysqld(do_command(THD*)+0xe6) [0x5dd9e6] /usr/libexec/mysqld(handle_one_connection+0x73d) [0x5d036d] /lib64/libpthread.so.0 [0x3a6e40673d] /lib64/libc.so.6(clone+0x6d) [0x3a6d4d3d1d] Trying to get some variables. Some pointers may be invalid and cause the dump to abort... thd-query at 0x15de5e0 = SELECT DISTINCT count(DISTINCT i.itemid) as rowscount,i.hostid FROM items i WHERE ((i.itemid BETWEEN 000000000000000 AND 0999999 99999999)) AND i.type<9 AND (i.hostid IN (10017,10047,10050,10054,10056,10059,10062,10063,10064,10065,10066,10067,10068,10069,10070,10071,10072,10073,100 74,10075,10076,10077,10078,10079,10080,10081,10082,10084,10088,10089,10090,10091,10092,10093,10094,10095,10096,10097,10098,10099,10100,10101,10102,10103,10 104,10105,10106,10107,10108,10109)) GROUP BY i.hostid thd-thread_id=3 thd-killed=NOT_KILLED The manual page at contains information that should help you find out what is causing the crash. 100603 19:19:19 mysqld_safe Number of processes running now: 0 100603 19:19:19 mysqld_safe mysqld restarted` Before recovering form an older backup as a last resort I am looking for anymore suggestions. Thanks!

    Read the article

  • Memory Efficient Windows SOA Server

    - by Antony Reynolds
    Installing a Memory Efficient SOA Suite 11.1.1.6 on Windows Server Well 11.1.1.6 is now available for download so I thought I would build a Windows Server environment to run it.  I will minimize the memory footprint of the installation by putting all functionality into the Admin Server of the SOA Suite domain. Required Software 64-bit JDK SOA Suite If you want 64-bit then choose “Generic” rather than “Microsoft Windows 32bit JVM” or “Linux 32bit JVM” This has links to all the required software. If you choose “Generic” then the Repository Creation Utility link does not show, you still need this so change the platform to “Microsoft Windows 32bit JVM” or “Linux 32bit JVM” to get the software. Similarly if you need a database then you need to change the platform to get the link to XE for Windows or Linux. If possible I recommend installing a 64-bit JDK as this allows you to assign more memory to individual JVMs. Windows XE will work, but it is better if you can use a full Oracle database because of the limitations on XE that sometimes cause it to run out of space with large or multiple SOA deployments. Installation Steps The following flow chart outlines the steps required in installing and configuring SOA Suite. The steps in the diagram are explained below. 64-bit? Is a 64-bit installation required?  The Windows & Linux installers will install 32-bit versions of the Sun JDK and JRockit.  A separate JDK must be installed for 64-bit. Install 64-bit JDK The 64-bit JDK can be either Hotspot or JRockit.  You can choose either JDK 1.7 or 1.6. Install WebLogic If you are using 64-bit then install WebLogic using “java –jar wls1036_generic.jar”.  Make sure you include Coherence in the installation, the easiest way to do this is to accept the “Typical” installation. SOA Suite Required? If you are not installing SOA Suite then you can jump straight ahead and create a WebLogic domain. Install SOA Suite Run the SOA Suite installer and point it at the existing Middleware Home created for WebLogic.  Note to run the SOA installer on Windows the user must have admin privileges.  I also found that on Windows Server 2008R2 I had to start the installer from a command prompt with administrative privileges, granting it privileges when it ran caused it to ignore the jreLoc parameter. Database Available? Do you have access to a database into which you can install the SOA schema.  SOA Suite requires access to an Oracle database (it is supported on other databases but I would always use an oracle database). Install Database I use an 11gR2 Oracle database to avoid XE limitations.  Make sure that you set the database character set to be unicode (AL32UTF8).  I also disabled the new security settings because they get in the way for a developer database.  Don’t forget to check that number of processes is at least 150 and number of sessions is not set, or is set to at least 200 (in the DB init parameters). Run RCU The SOA Suite database schemas are created by running the Repository Creation Utility.  Install the “SOA and BPM Infrastructure” component to support SOA Suite.  If you keep the schema prefix as “DEV” then the config wizard is easier to complete. Run Config Wizard The Config wizard creates the domain which hosts the WebLogic server instances.  To get a minimum footprint SOA installation choose the “Oracle Enterprise Manager” and “Oracle SOA Suite for developers” products.  All other required products will be automatically selected. 
The “for developers” installs target the appropriate components at the AdminServer rather than creating a separate managed server to house them.  This reduces the number of JVMs required to run the system and hence the amount of memory required.  This is not suitable for anything other than a developer environment as it mixes the admin and runtime functions together in a single server.  It also takes a long time to load all the required modules, making start up a slow process. If it exists I would recommend running the config wizard found in the “oracle_common/common/bin” directory under the middleware home.  This should have access to all the templates, including SOA. If you also want to run BAM in the same JVM as everything else then you need to “Select Optional Configuration” for “Managed Servers, Clusters and Machines”. To target BAM at the AdminServer delete the “bam_server1” managed server that is created by default.  This will result in BAM being targeted at the AdminServer. Installation Issues I had a few problems when I came to test everything in my mega-JVM. Following applications were not targeted and so I needed to target them at the AdminServer: b2bui composer Healthcare UI FMW Welcome Page Application (11.1.0.0.0) How Memory Efficient is It? On a Windows 2008R2 Server running under VirtualBox I was able to bring up both the 11gR2 database and SOA/BPM/BAM in 3G memory.  I allocated a minimum 512M to the PermGen and a minimum of 1.5G for the heap.  The setting from setSOADomainEnv are shown below: set DEFAULT_MEM_ARGS=-Xms1536m -Xmx2048m set PORT_MEM_ARGS=-Xms1536m -Xmx2048m set DEFAULT_MEM_ARGS=%DEFAULT_MEM_ARGS% -XX:PermSize=512m -XX:MaxPermSize=768m set PORT_MEM_ARGS=%PORT_MEM_ARGS% -XX:PermSize=512m -XX:MaxPermSize=768m I arrived at these numbers by monitoring JVM memory usage in JConsole. Task Manager showed total system memory usage at 2.9G – just below the 3G I allocated to the VM. Performance is not stellar but it runs and I could run JDeveloper alongside it on my 8G laptop, so in that sense it was a result!


  • Announcing the New Windows Azure Web Sites Shared Scaling Tier

    - by Clint Edmonson
Windows Azure Web Sites has added a new pricing tier that will solve the #1 blocker for the web development community. The shared tier now supports custom domain names mapped to shared-instance web sites. This post will outline the plan changes and elaborate on how the new pricing model makes Windows Azure Web Sites an even richer option for web development shops of all sizes.

             Free                 Shared                            Reserved
# of Sites   10                   100                               100
Egress       165MB/Day            5GB/Month Included                5GB/Month Included
Storage      1GB                  1GB                               10GB
Throttling   CPU/Memory/Egress    CPU/Memory                        Unlimited
Price        Free                 $.02/hr per site, per instance    $.08/hr per core

Setting the Stage
In June, we released the first public preview of Windows Azure Web Sites, which gave web developers a great platform on which to get web sites running using their web development framework of choice. PHP, Node.js, classic ASP, and ASP.NET developers can all utilize the Windows Azure platform to create and launch their web sites. Likewise, these developers have a series of data storage options using Windows Azure SQL Databases, MySQL, or Windows Azure Storage. The Windows Azure Web Sites free offer enabled startups to get their site up and running on Windows Azure with a minimal investment, and with multiple deployment and continuous integration features such as Git, Team Foundation Services, FTP, and Web Deploy. The response to the Windows Azure Web Sites offer has been overwhelmingly positive. Since the addition of the service on June 12th, tens of thousands of web sites have been deployed to Windows Azure and the volume of adoption is increasing every week.

Preview Feedback
In spite of the growth and success of the product, the community has had questions about features lacking in the free preview offer. The main question web developers asked regarding Windows Azure Web Sites relates to the lack of the free offer’s support for domain name mapping. During the preview launch period, customer feedback made it obvious that the lack of domain name mapping support was an area of concern. We’re happy to announce that this #1 request has been delivered as a feature of the new shared plan.

New Shared Tier Portal Features
In the screen shot below, the “Scale” tab in the portal shows the new tiers – Free, Shared, and Reserved – and gives the user the ability to quickly move any of their free web sites into the shared tier. With a single mouse-click, the user can move their site into the shared tier. Once a site has been moved into the shared tier, a new Manage Domains button appears in the bottom action bar of the Windows Azure Portal, giving site owners the ability to manage their domain names for a shared site. This button brings up the domain-management dialog, which can be used to enter a specific domain name that will be mapped to the Windows Azure Web Site.

Shared Tier Benefits
Startups and large web agencies will both benefit from this plan change. Here are a few examples of scenarios which fit the new pricing model:
- Startups no longer have to select the reserved plan to map domain names to their sites. Instead, they can use the free option to develop their sites and choose on a site-by-site basis which sites they elect to move into the shared plan, paying only for the sites that are finished and ready to be domain-mapped.
- Agencies who manage dozens of sites will realize a lower cost of ownership over the long term by moving their sites into reserved mode.
Once multi-site companies reach a certain price point in the shared tier, it is much more cost-effective to move sites to a reserved tier. Long-term, it’s easy to see how the new Windows Azure Web Sites shared pricing tier makes Windows Azure Web Sites a great choice for both startups and agency customers, as it enables rapid growth and upgrades while keeping the cost to a minimum. Large agencies will be able to have all of their sites in their own instances, and startups will have the capability to scale up to multiple shared instances for minimal cost and eventually move to reserved instances without worrying about continually incurring additional costs. Customers can feel confident they have the power of the Microsoft Windows Azure brand and our world-class support, at prices competitive in the market. Plus, in addition to realizing the cost savings, they’ll have the whole family of Windows Azure features available.

Continuous Deployment from GitHub and CodePlex
Along with this new announcement are two other exciting new features. I’m proud to announce that web developers can now publish their web sites directly from CodePlex or GitHub.com repositories. Once connections are established between these services and your web sites, Windows Azure will automatically be notified every time a check-in occurs. This will then trigger Windows Azure to pull the source and compile/deploy the new version of your app to your web site automatically. Walk-through videos on how to perform these functions are below:
- Publishing to an Azure Web Site from CodePlex
- Publishing to an Azure Web Site from GitHub.com

These changes, as well as the enhancements to the reserved plan model, make Windows Azure Web Sites a truly competitive hosting option. It’s never been easier or cheaper for a web developer to get up and running. Check out the free Windows Azure web site offering and see for yourself. Stay tuned to my twitter feed for Windows Azure announcements, updates, and links: @clinted
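To put a rough number on the “certain price point” mentioned above, here is a back-of-the-envelope sketch. It is my own illustration, not from the post: it compares the hourly cost of N shared-tier sites at $0.02/hr per site (assuming a single shared instance) against one single-core reserved instance at $0.08/hr, and it ignores egress, storage, and billing granularity. On those assumptions the crossover sits at roughly four sites.

// Rough break-even comparison between the shared and reserved tiers,
// using only the hourly rates from the pricing table above.
public class AzureTierBreakEven {
    static final double SHARED_PER_SITE_HOUR = 0.02;   // shared: per site, per instance
    static final double RESERVED_PER_CORE_HOUR = 0.08; // reserved: per core

    public static void main(String[] args) {
        for (int sites = 1; sites <= 8; sites++) {
            double shared = sites * SHARED_PER_SITE_HOUR;
            double reserved = RESERVED_PER_CORE_HOUR; // one core hosting all the sites
            System.out.printf("%d site(s): shared=$%.2f/hr, reserved=$%.2f/hr -> %s%n",
                    sites, shared, reserved,
                    shared < reserved ? "shared is cheaper" : "reserved is cheaper or equal");
        }
    }
}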


  • Solaris: What comes next?

    - by alanc
As you probably know by now, a few months ago, we released Solaris 11 after years of development. That of course means we now need to figure out what comes next - if Solaris 11 is “The First Cloud OS”, then what do we need to make future releases of Solaris be, to be modern and competitive when they're released? So we've been having planning and brainstorming meetings, and I've captured some notes here from just one of those we held a couple weeks ago with a number of the Silicon Valley based engineers.

Now before someone sees an idea here and calls their product rep wanting to know what's up, please be warned: what follows are rough ideas, and as I'll discuss later, none of them have any commitment, schedule, working code, or even plan for integration in any possible future product at this time. (Please don't make me force you to read the full Oracle future product disclaimer here, you should know it by heart already from the front of every Oracle product slide deck.)

To start with, we did some background research, looking at ideas from other Oracle groups, and competitive OSes. We examined what was hot in the technology arena and where the interesting startups were heading. We then looked at Solaris to see where we could apply those ideas.

Making Network Admins into Socially Networking Admins
We all know an admin who has grumbled about being the only one stuck late at work to fix a problem on the server, or having to work the weekend alone to do scheduled maintenance. But admins are humans (at least most are), and crave companionship and community with their fellow humans. And even when they're alone in the server room, they're never far from a network connection, allowing access to the wide world of wonders on the Internet. Our solution here is not building a new social network - there's enough of those already, and Oracle even has its own Oracle Mix social network already. What we proposed is integrating Solaris features to help engage our system admins with these social networks, building community and bringing them recognition in the workplace, using achievement recognition systems as found in many popular gaming platforms. For instance, if you had a Facebook account, and a group of admin friends there, you could register it with our Social Network Utility For Facebook, and then your friends might see:

Alan earned the achievement Critically Patched (April 2012) for patching all his servers. Matt is only at 50% - encourage him to complete this achievement today!

To avoid any undue risk of advertising who has unpatched servers that are easier targets for hackers to break into, this information would be tightly protected via Facebook's world-renowned privacy settings to avoid it falling into the wrong hands.

A related form of gamification we considered was replacing simple certifications with role-playing-game-style Experience Levels. Instead of just knowing an admin passed a test establishing a given level of competency, these would provide recruiters with a more detailed level of how much real-world experience an admin has. Achievements such as the one above would feed into it, but larger numbers of experience points would be gained by tougher or more critical tasks - such as recovering a down system, or migrating a service to a new platform. (As long as it was an Oracle platform of course - migrating to an HP or IBM platform would cause the admin to lose points with us.) Unfortunately, we couldn't figure out a good way to prevent (if you will) “gaming” the system.
For instance, a disgruntled admin might decide to start ignoring warnings from FMA that a part is beginning to fail, or skip preventative maintenance, in the hopes that they'd cause a catastrophic failure to earn more points for bolstering their resume as they look for a job elsewhere, and not worrying about the effect on your business of a mission critical server going down.

More Z's for ZFS
Our suggested new feature for ZFS was inspired by the world's most successful Z-startup of all time: Zynga. Using the Social Network Utility For Facebook described above, we'd tie it in with ZFS monitoring to help you out when you find yourself in a jam needing more disk space than you have, and can't wait a month to get a purchase order through channels to buy more. Instead, with the click of a button you could post to your group:

Alan can't find any space in his server farm! Can you help?

Friends could loan you some space on their connected servers for a few weeks, knowing that you'd return the favor when needed. ZFS would create a new filesystem for your use on their system, and securely share it with your system using Kerberized NFS. If none of your friends have space, then you could buy temporary use space in small increments at affordable rates right there in Facebook, using your Facebook credits, and then file an expense report later, after the urgent need has passed.

Universal Single Sign On
One thing all the engineers agreed on was that we still had far too many "Single" sign ons to deal with in our daily work. On the web, every web site used to have its own password database, forcing us to hope we could remember what login name was still available on each site when we signed up, and which unique password we came up with to avoid having to disclose our other passwords to a new site. In recent years, the web services world has finally been reducing the number of logins we have to manage, with many services allowing you to login using your identity from Google, Twitter or Facebook. So we proposed following their lead, introducing PAM modules for web services - no more would you have to type in whatever login name IT assigned and try to remember the password you chose the last time password aging forced you to change it - you'd simply choose which web service you wanted to authenticate against, and would login to your Solaris account upon receipt of a cookie from their identity service.

Pinning notes to the cloud
We also all noted that we all have our own pile of notes we keep in our daily work - in text files in our home directory, in notebooks we carry around, on white boards in offices and common areas, on sticky notes on our monitors, or on scraps of paper pinned to our bulletin boards. The contents of the notes vary: some are things just for us, some are useful for our groups, some we would share with the world. For instance, when our group moved to a new building a couple years ago, we had a white board in the hallway listing all the NIS & DNS servers, subnets, and other network configuration information we needed to set up our Solaris machines after the move. Similarly, as Solaris 11 was finishing and we were all learning the new network configuration commands, we shared notes in wikis and e-mails with our fellow engineers. Users may also remember that one of the popular features of Sun's old BigAdmin site was a section for sharing scripts and tips such as these. Meanwhile, the online "pin board" at Pinterest is taking the web by storm. So we thought, why not mash those up to solve this problem?
We proposed a new BigAddPin site where users could “pin” notes, command snippets, configuration information, and so on. For instance, once they had worked out the ideal Automated Installation manifest for their app server, they could pin it up to share with the rest of their group, or choose to make it public as an example for the world. Localized data, such as our group's notes on the servers for our subnet, could be shared only to users connecting from that subnet. And notes that they didn't want others to see at all could be marked private, such as the list of phone numbers to call for late night pizza delivery to the machine room, the birthdays and anniversaries they can never remember but would be sleeping on the couch if they forgot, or the list of automatically generated, completely random, impossible to remember root passwords to all their servers. For greater integration with Solaris, we'd put support right into the command shells — redirect output to a pinned note, set your path to include pinned notes as scripts you can run, or bring up your recent shell history and pin a set of commands to save for the next time you need to remember how to do that operation.

Location service for Solaris servers
A longer term plan would involve convincing the hardware design groups to put GPS locators with wireless transmitters in future server designs. This would help both admins and service personnel trying to find servers in today's massive data centers, and could feed into location presence apps to help show potential customers that while they may not see many Solaris machines on the desktop any more, they are all around. For instance, while walking down Wall Street it might show “There are over 2000 Solaris computers in this block.” [Note: this proposal was made before the recent media coverage of a location service aggregator app with less noble intentions, and in hindsight, we failed to consider what happens when such data similarly falls into the wrong hands. We certainly wouldn't want our app to be misinterpreted as “There are over $20 million dollars of SPARC servers in this building, waiting for you to steal them.” so it's probably best it was rejected.]

Harnessing the power of the GPU for Security
Most modern OSes make use of the widespread availability of high powered GPU hardware in today's computers, with desktop environments requiring 3-D graphics acceleration, whether in Ubuntu Unity, GNOME Shell on Fedora, or Aero Glass on Windows, but we haven't yet made Solaris fully take advantage of this, beyond our basic offering of Compiz on the desktop. Meanwhile, more businesses are interested in increasing security by using biometric authentication, but must also comply with laws in many countries preventing discrimination against employees with physical limitations such as missing eyes or fingers, not to mention the lost productivity when employees can't login due to tinted contacts throwing off a retina scan or a paper cut changing their fingerprint appearance until it heals. Fortunately, the two groups considering these problems put their heads together and found a common solution, using 3D technology to enable authentication using the one body part all users are guaranteed to have - pam_phrenology.so, a new PAM module that uses an array of USB-attached web cams (or just one if the user is willing to spin their chair during login) to take pictures of the user's head from all angles, create a 3D model and compare it to the one in the authentication database.
While Mythbusters has shown how easy it can be to fool common fingerprint scanners, we have not yet seen any evidence that people can impersonate the shape of another user's cranium, no matter how long they spend beating their head against the wall to reshape it. This could possibly be extended to group users, using modern versions of some of the older phrenological studies, such as giving all users with long grey beards access to the System Architect role, or automatically placing users with pointy spikes in their hair into an easy-use mode. Unfortunately, there are still some unsolved technical challenges we haven't figured out how to overcome. Currently, a visit to the hair salon causes your existing authentication to expire, and some users have found that shaving their heads is the only way to avoid bad hair days becoming bad login days.

Reaction to these ideas
After gathering all our notes on these ideas from the engineering brainstorming meeting, we took them in to present to our management. Unfortunately, most of their reaction cannot be printed here, and they chose not to accept any of these ideas as they were, but they did have some feedback for us to consider as they sent us back to the drawing board. They strongly suggested our ideas would be better presented if we weren't trying to decipher ink blotches that had been smeared by the condensation when we put our pint glasses on the napkins we were taking notes on, and to that end let us know they would not be approving any more engineering offsites in Irish-themed pubs on the Friday of a Saint Patrick's Day weekend. (Hopefully they mean that situation specifically and aren't going to deny the funding for travel to this year's X.Org Developer's Conference just because it happens to be in Bavaria and ending on the Friday of the weekend Oktoberfest starts.) They recommended our research techniques could be improved over just sitting around reading blogs and checking our Facebook, Twitter, and Pinterest accounts, such as considering input from alternate viewpoints on topics such as gamification. They also mentioned that Oracle hadn't fully adopted some of Sun's common practices and we might have to try harder to get those to be accepted now that we are one unified company. So as I said at the beginning, don't pester your sales rep just yet for any of these, since they didn't get approved, but if you have better ideas, pass them on and maybe they'll get into our next batch of planning.


  • The Java Specialist: An Interview with Java Champion Heinz Kabutz

    - by Janice J. Heiss
Dr. Heinz Kabutz is well known for his Java Specialists’ Newsletter, initiated in November 2000, where he displays his acute grasp of the intricacies of the Java platform for an estimated 70,000 readers; for his work as a consultant; and for his workshops and trainings at his home on the Island of Crete, where he has lived since 2006 -- where he is known to curl up on the beach with his laptop to hack away, in between dips in the Mediterranean. Kabutz was born of German parents and raised in Cape Town, South Africa, where he developed a love of programming in junior high school through his explorations on a ZX Spectrum computer. He received a B.S. from the University of Cape Town, and at 25, a Ph.D., both in computer science. He will be leading a two-hour hands-on lab session, HOL6500 – “Finding and Solving Java Deadlocks,” at this year’s JavaOne that will explore what causes deadlocks and how to solve them.

Q: Tell us about your JavaOne plans.
A: I am arriving on Sunday evening and have just one hands-on lab to do on Monday morning. This is the first time that a non-Oracle team is doing a HOL at JavaOne under Oracle's stewardship and we are all a bit nervous about how it will turn out. Oracle has been immensely helpful in getting us set up. I have a great team helping me: Kirk Pepperdine, Dario Laverde, Benjamin Evans and Martijn Verburg from jClarity, Nathan Reynolds from Oracle, Henri Tremblay of OCTO Technology and Jeff Genender of Savoir Technologies. Monday will be hard work, but after that, I will hopefully get to network with fellow Java experts, attend interesting sessions and just enjoy San Francisco. Oh, and my kids have already given me a shopping list of things to get, like a GoPro Hero 2 dive housing for shooting those nice videos of Crete. (That's me at the beginning diving down.)

Q: What sessions are you attending that we should know about?
A: Sometimes the most unusual sessions are the best. I avoid the "big names". They often are spread too thin with all their sessions, which makes it difficult for them to deliver what I would consider deep content. I also avoid entertainers who might be good at presenting but who do not say that much. In 2010, I attended a session by Vladimir Yaroslavskiy where he talked about sorting. Although he struggled to speak English, what he had to say was spectacular. There was hardly anybody in the room, having not heard of Vladimir before. To me that was the highlight of 2010. Funnily enough, he was supposed to speak with Joshua Bloch, but if you remember, Google cancelled. If Bloch had been there, the room would have been packed to capacity.

Q: Give us an update on the Java Specialists’ Newsletter.
A: The Java Specialists' Newsletter continues being read by an elite audience around the world. The apostrophe in the name is significant. It is a newsletter for Java specialists. When I started it twelve years ago, I was trying to find non-obvious things in Java to write about. Things that would be interesting to an advanced audience. As an April Fool's joke, I told my readers in Issue 44 that subscribing would remain free, but that they would have to pay US$5 to US$7 depending on their geographical location. I received quite a few angry emails from that one. I would not have earned that much from unsubscriptions. Most readers stay for a very long time. After Oracle bought Sun, the Java community held its breath for about two years whilst Oracle was figuring out what to do with Java.
For a while, we were quite concerned that there was not much progress shown by Oracle. My newsletter still continued, but it was quite difficult finding new things to write about. We have probably about 70,000 readers, which is quite a small number for a Java publication. However, our readers are the top in the Java industry. So I don't mind having "only" 70,000 readers, as long as they are the top 0.7%.

Java concurrency is a very important topic that programmers think they should know about, but often neglect to fully understand. I continued writing about that and made some interesting discoveries. For example, in Issue 165, I showed how we can get thread starvation with the ReadWriteLock. This was a bug in Java 5, which was corrected in Java 6, but perhaps a bit too much. Whereas we could get starvation of writers in Java 5, in Java 6 we could now get starvation of readers. All of these interesting findings make their way into my courseware to help companies avoid these pitfalls.

Another interesting discovery was how polymorphism works in the Server HotSpot compiler in Issue 157 and Issue 158. HotSpot can inline methods from interfaces that have only one implementation class in the JVM. When a new subclass is instantiated and called for the first time, the JVM will undo the previous optimization and re-optimize differently.

Here is a little memory puzzle for your readers:

public class JavaMemoryPuzzle {
  private final int dataSize =
      (int) (Runtime.getRuntime().maxMemory() * 0.6);

  public void f() {
    {
      byte[] data = new byte[dataSize];
    }
    byte[] data2 = new byte[dataSize];
  }

  public static void main(String[] args) {
    JavaMemoryPuzzle jmp = new JavaMemoryPuzzle();
    jmp.f();
  }
}

When you run this you will always get an OutOfMemoryError, even though the local variable data is no longer visible outside of the code block. So here comes the puzzle that I'd like you to ponder a bit. If you very politely ask the VM to release memory, then you don't get an OutOfMemoryError:

public class JavaMemoryPuzzlePolite {
  private final int dataSize =
      (int) (Runtime.getRuntime().maxMemory() * 0.6);

  public void f() {
    {
      byte[] data = new byte[dataSize];
    }
    for (int i = 0; i < 10; i++) {
      System.out.println("Please be so kind and release memory");
    }
    byte[] data2 = new byte[dataSize];
  }

  public static void main(String[] args) {
    JavaMemoryPuzzlePolite jmp = new JavaMemoryPuzzlePolite();
    jmp.f();
    System.out.println("No OutOfMemoryError");
  }
}

Why does this work? When I published this in my newsletter, I received over 400 emails from excited readers around the world, most of whom sent me the wrong explanation. After the 300th wrong answer, my replies unfortunately became a bit curt. Have a look at Issue 174 for a detailed explanation, but before you do, put on your thinking caps and try to figure it out yourself.

Q: What do you think Java developers should know that they currently do not know?
A: They should definitely get to know more about concurrency. It is a tough subject that most programmers try to avoid. Unfortunately we do come in contact with it. And when we do, we need to know how to protect ourselves and how to solve tricky system errors. Knowing your IDE is also useful. Most IDEs have a ton of shortcuts, which can make you a lot more productive in moving code around. Another thing that is useful is being able to read GC logs. Kirk Pepperdine has a great talk at JavaOne that I can recommend if you want to learn more.
It's this: CON5405 – “Are Your Garbage Collection Logs Speaking to You?”

Q: What are you looking forward to in Java 8?
A: I'm quite excited about lambdas, though I must confess that I have not studied them in detail yet. Maurice Naftalin's Lambda FAQ is quite a good start to document what you can do with them. I'm looking forward to finding all the interesting bugs that we will now get due to lambdas obscuring what is really going on underneath, just like we had with generics. I am quite impressed with what the team at Oracle did with OpenJDK's performance. A lot of the benchmarks now run faster. Hopefully Java 8 will come with JSR 310, the Date and Time API. It still boggles my mind that such an important API has been left out in the cold for so long. What I am not looking forward to is losing perm space. Even though some systems run out of perm space, at least the problem is contained and they usually manage to work around it. In most cases, this is due to a memory leak in that region of memory. Once they bundle perm space with the old generation, I predict that memory leaks in perm space will be harder to find. More contracts for us, but also more pain for our customers.

Originally published on blogs.oracle.com/javaone.
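The starvation discussion above refers to java.util.concurrent.locks.ReadWriteLock but shows no code for it, so here is a minimal sketch of the pattern being talked about. It is my own example, not code from the newsletter or Issue 165; the boolean passed to ReentrantReadWriteLock requests the fair mode, which hands the lock out in roughly arrival order rather than letting a steady stream of readers or writers crowd out the other side.

import java.util.concurrent.locks.ReentrantReadWriteLock;

// Minimal read/write lock usage: many concurrent readers, exclusive writers.
// The 'true' argument asks for fair ordering, a common way to trade a little
// throughput for protection against one side starving the other.
public class FairCounter {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock(true);
    private long value;

    public long read() {
        lock.readLock().lock();   // shared: several readers may hold this at once
        try {
            return value;
        } finally {
            lock.readLock().unlock();
        }
    }

    public void increment() {
        lock.writeLock().lock();  // exclusive: blocks readers and other writers
        try {
            value++;
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        FairCounter counter = new FairCounter();
        counter.increment();
        System.out.println(counter.read()); // prints 1
    }
}

Whether fair mode is the right trade-off depends on the workload; it generally lowers throughput in exchange for more predictable ordering under contention.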


