Search Results

Search found 16427 results on 658 pages for 'brendan long'.


  • C++ WCHAR: Cannot allocate an array of constant size 0

    - by Rosarch
    I'm trying to create a WCHAR array: LONG bufferSize = foo.bar() + 1; WCHAR wszBaz[bufferSize]; The compiler issues errors: error C2057: expected constant expression; error C2466: cannot allocate an array of constant size 0; error C2133: 'wszBaz' : unknown size. What am I doing wrong? UPDATE: I added const but it still gives the same errors: const LONG bufferSize = foo.bar() + 1; WCHAR wszBaz[bufferSize];
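
    Since bufferSize is computed at run time (even with const it is not a compile-time constant), MSVC will not accept it as a stack array dimension; the buffer has to be allocated dynamically. A minimal sketch of one workaround, assuming the Windows WCHAR/LONG typedefs and using a hypothetical stand-in for the question's foo.bar() call:

        #include <windows.h>
        #include <vector>

        // Hypothetical stand-in for the foo.bar() call in the question.
        static LONG RequiredLength() { return 31; }

        int main() {
            LONG bufferSize = RequiredLength() + 1;      // not a compile-time constant
            std::vector<WCHAR> wszBaz(bufferSize, 0);    // heap-allocated, zero-initialized buffer
            // wszBaz.data() can be passed to APIs expecting a WCHAR* of that length.
            return 0;
        }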

    Read the article

  • syntax question of c++

    - by lukmac
    Hi people: I came across some code like this that I am not sure about: unsigned long byte_count : 32; unsigned long byte_count2 : 28; What role does the ':' play here? Thanks, peace! lukmac
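
    These are bit-field declarations: inside a struct or class, the number after the ':' gives the member's width in bits, and consecutive bit-fields are packed into the declared underlying type. A small sketch (the struct name and values are made up for illustration):

        #include <cstdio>

        struct PacketHeader {                  // hypothetical container for the two fields
            unsigned long byte_count  : 32;    // this member occupies 32 bits
            unsigned long byte_count2 : 28;    // this member occupies 28 bits
        };

        int main() {
            PacketHeader h = {};
            h.byte_count2 = (1UL << 27);       // fits: 28 bits hold values up to 2^28 - 1
            std::printf("%lu, sizeof(PacketHeader) = %zu\n",
                        (unsigned long)h.byte_count2, sizeof(PacketHeader));
            return 0;
        }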

    Read the article

  • In Vim, how to swap 2 non adjacent patterns?

    - by ThG
    I have lines of text, all with the same structure, and would like to swap 2 elements on every line: 1257654 some text (which may be long) #Foo 1543098 some other text #Barbar 1238769 whatever #Baz 2456874 something else #Quux I want to obtain: #Foo some text (which may be long) 1257654 #Barbar some other text 1543098 #Baz whatever 1238769 #Quux something else 2456874 This is where I am stuck: :%s/\(\d\{7\}\)\(#.\{-}\)/\2\1/ Where did I go wrong?

    Read the article

  • Multilingual dates in gridview

    - by Moox
    I'm replacing a website that was previously HTML only. I put a GridView on the French and the English pages and I'm getting 2 fields: a date and a currency. For the date, I need to transform it into a long date format. The English part isn't very hard, but when I get to the French page, I can't transform it into a long date format with French words (e.g. mardi instead of Tuesday). Is there any way to format the string with French date words instead?

    Read the article

  • c language: printf help

    - by dydx
    Hi again, here is my code, which gives me the warning 'unknown conversion type character 0x20 in format': int subtotal; long long a,b,c,d,e,f,g,h,i,j,k,l,m; subtotal = (1*(a+c+e+g+i+k))+(3*(b+d+f+h+j+l)); printf(" = %d % 10 = %d; (10 - %d) % 10 = %lld\n", subtotal,subtotal%10,subtotal%10,m); Any idea why this is wrong?
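
    The 0x20 in the warning is the space character: in a printf format string a bare '%' starts a conversion specification, so "% 1..." is parsed as a (broken) conversion rather than the literal text "% 10". A literal percent sign has to be written as %%. A minimal sketch of the corrected call, with example values standing in for the question's computed ones:

        #include <cstdio>

        int main() {
            int subtotal = 47;      // example value; the question computes this from a..l
            long long m = 3;        // example value
            // Literal '%' signs are escaped as "%%" inside the format string.
            std::printf(" = %d %% 10 = %d; (10 - %d) %% 10 = %lld\n",
                        subtotal, subtotal % 10, subtotal % 10, m);
            return 0;
        }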

    Read the article

  • Moving to 64-bit on OS X?

    - by ML
    Hi All, What is the best practice for moving to 64-bit on OS X? I am using the 10.6 SDK and 64-bit Intel as my SDK and target. I have int32 types to change. Does OS X have an 'int64', or would one use a 'long long'? Where might I find a resource on the available data types? What other issues are there?
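
    One portable option (not OS X specific, just the standard C99/C++11 headers) is to use the fixed-width typedefs from <stdint.h>/<cstdint>, which keep their size regardless of the 32- vs 64-bit target. A small sketch:

        #include <cstdint>
        #include <cstdio>

        int main() {
            int32_t small = 42;                         // always 32 bits
            int64_t big   = 42LL * 1000 * 1000 * 1000;  // does not fit in 32 bits
            // On LP64 targets such as 64-bit OS X, both 'long' and 'long long' are 64 bits,
            // but the fixed-width names make the intent explicit and survive porting.
            std::printf("%d %lld\n", small, (long long)big);
            return 0;
        }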

    Read the article

  • Dynamic allocation in C

    - by Kerby82
    I'm writing a program and I have the following problem: char *tmp; sprintf (tmp,"%ld",(long)time_stamp_for_file_name); Could someone explain how much memory to allocate for the string tmp? How many chars does a long take? I would also appreciate a link to an exhaustive resource on this kind of information. Thank you
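
    A hedged sketch of one common approach: ask snprintf (with a null buffer) how many characters the formatted number needs, then allocate exactly that plus one byte for the terminator. The variable name is kept from the question; its value here is only an example:

        #include <cstdio>
        #include <cstdlib>

        int main() {
            long time_stamp_for_file_name = 1273581296L;   // example value

            // snprintf with a null buffer returns the length needed, excluding the '\0'.
            int needed = std::snprintf(NULL, 0, "%ld", time_stamp_for_file_name);
            char *tmp = (char *)std::malloc(needed + 1);   // +1 for the terminating '\0'
            if (tmp != NULL) {
                std::snprintf(tmp, needed + 1, "%ld", time_stamp_for_file_name);
                std::printf("%s\n", tmp);
                std::free(tmp);
            }
            return 0;
        }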

    Read the article

  • CUDA memory transfer issue

    - by Vaibhav Sundriyal
    I am trying to execute a code which first transfers data from CPU to GPU memory and vice-versa. In spite of increasing the volume of data, the data transfer time remains the same as if no data transfer is actually taking place. I am posting the code. #include <stdio.h> /* Core input/output operations */ #include <stdlib.h> /* Conversions, random numbers, memory allocation, etc. */ #include <math.h> /* Common mathematical functions */ #include <time.h> /* Converting between various date/time formats */ #include <cuda.h> /* CUDA related stuff */ #include <sys/time.h> __global__ void device_volume(float *x_d,float *y_d) { int index = blockIdx.x * blockDim.x + threadIdx.x; } int main(void) { float *x_h,*y_h,*x_d,*y_d,*z_h,*z_d; long long size=9999999; long long nbytes=size*sizeof(float); timeval t1,t2; double et; x_h=(float*)malloc(nbytes); y_h=(float*)malloc(nbytes); z_h=(float*)malloc(nbytes); cudaMalloc((void **)&x_d,size*sizeof(float)); cudaMalloc((void **)&y_d,size*sizeof(float)); cudaMalloc((void **)&z_d,size*sizeof(float)); gettimeofday(&t1,NULL); cudaMemcpy(x_d, x_h, nbytes, cudaMemcpyHostToDevice); cudaMemcpy(y_d, y_h, nbytes, cudaMemcpyHostToDevice); cudaMemcpy(z_d, z_h, nbytes, cudaMemcpyHostToDevice); gettimeofday(&t2,NULL); et = (t2.tv_sec - t1.tv_sec) * 1000.0; // sec to ms et += (t2.tv_usec - t1.tv_usec) / 1000.0; // us to ms printf("\n %ld\t\t%f\t\t",nbytes,et); et=0.0; //printf("%f %d\n",seconds,CLOCKS_PER_SEC); // launch a kernel with a single thread to greet from the device //device_volume<<<1,1>>>(x_d,y_d); gettimeofday(&t1,NULL); cudaMemcpy(x_h, x_d, nbytes, cudaMemcpyDeviceToHost); cudaMemcpy(y_h, y_d, nbytes, cudaMemcpyDeviceToHost); cudaMemcpy(z_h, z_d, nbytes, cudaMemcpyDeviceToHost); gettimeofday(&t2,NULL); et = (t2.tv_sec - t1.tv_sec) * 1000.0; // sec to ms et += (t2.tv_usec - t1.tv_usec) / 1000.0; // us to ms printf("%f\n",et); cudaFree(x_d); cudaFree(y_d); cudaFree(z_d); return 0; } Can anybody help me with this issue? Thanks
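
    A hedged side note: cudaMemcpy on ordinary pageable host memory is synchronous, so the gettimeofday bracket should normally reflect the copy; the CUDA event API is another way to time the transfers, using device-side timestamps, and makes it explicit that the work has finished before the clock is read. A minimal sketch of event-based timing for a single host-to-device copy (same sizes as the question, error checking omitted):

        #include <cstdio>
        #include <cstdlib>
        #include <cuda_runtime.h>

        int main() {
            const long long size = 9999999;                 // element count from the question
            const size_t nbytes = size * sizeof(float);

            float *x_h = (float *)malloc(nbytes);
            float *x_d = NULL;
            cudaMalloc((void **)&x_d, nbytes);

            cudaEvent_t start, stop;
            cudaEventCreate(&start);
            cudaEventCreate(&stop);

            cudaEventRecord(start, 0);
            cudaMemcpy(x_d, x_h, nbytes, cudaMemcpyHostToDevice);
            cudaEventRecord(stop, 0);
            cudaEventSynchronize(stop);                     // wait until the copy has completed

            float ms = 0.0f;
            cudaEventElapsedTime(&ms, start, stop);         // elapsed milliseconds between events
            std::printf("H2D copy of %zu bytes took %f ms\n", nbytes, ms);

            cudaEventDestroy(start);
            cudaEventDestroy(stop);
            cudaFree(x_d);
            free(x_h);
            return 0;
        }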

    Read the article

  • access javascript variable in java

    - by chetan
    function test(custId){ List<Account> list=accountDelegate.findAccountEntityByGroupId((long)%> custId<%)%>; <script> } My question here is how to send custId as a parameter from JavaScript to the Java method, where the Java method takes a long argument.

    Read the article

  • Number of different elements in an array.

    - by AB
    Is it possible to compute the number of distinct elements in an array in linear time and constant space? Let us say it's an array of long integers, and you cannot allocate an array of length sizeof(long). P.S. Not homework, just curious. I've got a book that sort of implies that it is possible.
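
    For contrast, here is the obvious baseline that does not meet those bounds: sort a copy of the data (O(n log n) time, O(n) extra space if the original must stay intact) and count the boundaries between runs of equal values. A sketch with made-up sample data:

        #include <algorithm>
        #include <cstdio>
        #include <vector>

        int main() {
            std::vector<long> a = {4, 7, 4, 1, 7, 7, 9};   // example data

            // Baseline: O(n log n) time, not the linear-time / constant-space bound asked about.
            std::vector<long> copy = a;
            std::sort(copy.begin(), copy.end());
            long distinct = copy.empty() ? 0 : 1;
            for (size_t i = 1; i < copy.size(); ++i)
                if (copy[i] != copy[i - 1]) ++distinct;

            std::printf("%ld distinct elements\n", distinct);   // prints 4
            return 0;
        }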

    Read the article

  • cut text after (x) amount of characters

    - by Blackbird
    This is in WordPress (not sure that makes a difference). This bit of PHP outputs the post title: <?php echo $data['nameofpost']; ?> It's simple text which can be anywhere up to 100 chars long. What I'd like is: if the output is over 20 chars long, cut it there and display '...' (or simply nothing at all). Thanks

    Read the article

  • Fix: SqlDeploy Task Fails with NullReferenceException at ExtractPassword

    Still working on getting a TeamCity build working (see my last post).  Latest exception is: C:\Program Files\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(120, 5): error MSB4018: The "SqlDeployTask" task failed unexpectedly. System.NullReferenceException: Object reference not set to an instance of an object. at Microsoft.Data.Schema.Common.ConnectionStringPersistence.ExtractPassword(String partialConnection, String dbProvider) at Microsoft.Data.Schema.Common.ConnectionStringPersistence.RetrieveFullConnection(String partialConnection, String provider, Boolean presentUI, String password) at Microsoft.Data.Schema.Sql.Build.SqlDeployment.ConfigureConnectionString(String connectionString, String databaseName) at Microsoft.Data.Schema.Sql.Build.SqlDeployment.OnBuildConnectionString(String partialConnectionString, String databaseName) at Microsoft.Data.Schema.Build.Deployment.FinishInitialize(String targetConnectionString) at Microsoft.Data.Schema.Build.Deployment.Initialize(FileInfo sourceDbSchemaFile, ErrorManager errors, String targetConnectionString) at Microsoft.Data.Schema.Build.DeploymentConstructor.ConstructServiceImplementation() at Microsoft.Data.Schema.Extensibility.ServiceConstructor'1.ConstructService() at Microsoft.Data.Schema.Tasks.DBDeployTask.Execute() at Microsoft.Build.BuildEngine.TaskEngine.ExecuteInstantiatedTask(EngineProxy engineProxy, ItemBucket bucket, TaskExecutionMode howToExecuteTask, ITask task, Boolean& taskResult)   This time searching yielded some good stuff, including this thread that talks about how to resolve this via permissions.  The short answer is that the account that your build server runs under needs to have the necessary permissions in SQL Server.  You'll need to create a Login and then ensure at least the minimum rights are configured as described here: Required Permissions in Database Edition. Alternately, you can just make your build server account an admin on the database (which is probably running on the same machine anyway) and at that point it should be able to do whatever it needs to. If you're certain the account has the necessary permissions, but you're still getting the error, the problem may be that the account has never logged into the build server.  In this case, there won't be any entry in the HKCU hive in the registry, which the system is checking for permissions (see this thread).  The solution in this case is quite simple: log into the machine (once is enough) with the build server account.  Then, open Visual Studio (thanks Brendan for the answer in this thread). Summary: (1) Make sure the build service account has the necessary database permissions. (2) Make sure the account has logged into the server so it has the necessary registry hive info. (3) Make sure the account has run Visual Studio at least once so its settings are established. In my case I went through all 3 of these steps before I resolved the problem.

    Read the article

  • Running Teamsite User Admin tool IWUSERADM.exe from ASP.NET

    - by Narendra Tiwari
    It has really been a head-scratching task for me. I've tried many options but nothing worked. Finally I found a workaround on Google to achieve this via the Task Scheduler. PROBLEM When we run the Teamsite user administration command-line tool IWUSERADM.exe through ASP.NET it gives the following error: Application popup: cmd.exe - Application Error : The application failed to initialize properly (0xc0000142). Click on OK to terminate the application. CAUSE No specific cause; it seems to be a bug, supposed to be resolved with this Microsoft patch: http://support.microsoft.com/kb/960266. There is nothing related to a permission issue; my web application is impersonated with an administrator account. Of course, running a bat file from an admin account is a potential security threat, but for this scenario let's confine our discussion to running the command-line tool. RESOLUTION I have not tried this patch as I am not permitted to run this patch on the server. Below are the steps to achieve the requirement. 1/ Create a batch file which runs IWUSERADM.exe:         echo - Add Teamsite User    CD E:\Appli\GN00\iw-home\bin    iwuseradm add-user %1 2/ Temporarily create a scheduled task and run the .bat file from ASP.NET code using the TaskScheduler library: http://www.codeproject.com/KB/cs/tsnewlib.aspx. 3/ Here is the function: private int AddTeamsiteUser(string strBatchFilePath, string strUser) { //Get a ScheduledTasks object for the local computer. ScheduledTasks st = new ScheduledTasks(); // Create a task Task t; try{ t = st.CreateTask("~AddTeamsiteUser"); } catch { throw new Exception("Scheduled Task ~AddTeamsiteUser already exists."); } t.ApplicationName = strBatchFilePath; t.Parameters = strUser; t.Comment = "Adding user to Teamsite Application"; t.SetAccountInformation(yourLogin, yourPassword); //Set the account under which the task should run.  t.Save();  t.Run(); Thread.Sleep(2000); //for sync issue //Remove the scheduled task st.DeleteTask("~AddTeamsiteUser"); return t.ExitCode; }   Below are a few resources related to the above scenario: - Task Scheduler Class Library for .NET  http://www.codeproject.com/KB/cs/tsnewlib.aspx - Run a .BAT file from ASP.NET  http://codebetter.com/blogs/brendan.tompkins/archive/2004/05/13/13484.aspx - TaskScheduler Class  http://msdn.microsoft.com/en-us/library/system.threading.tasks.taskscheduler.aspx - Application Hangs while running iwuseradm.exe through ASP.NET  http://bytes.com/topic/asp-net/answers/733098-system-diagnostics-process-hangs

    Read the article

  • Capturing and Transforming ASP.NET Output with Response.Filter

    - by Rick Strahl
    During one of my Handlers and Modules session at DevConnections this week one of the attendees asked a question that I didn’t have an immediate answer for. Basically he wanted to capture response output completely and then apply some filtering to the output – effectively injecting some additional content into the page AFTER the page had completely rendered. Specifically the output should be captured from anywhere – not just a page and have this code injected into the page. Some time ago I posted some code that allows you to capture ASP.NET Page output by overriding the Render() method, capturing the HtmlTextWriter() and reading its content, modifying the rendered data as text then writing it back out. I’ve actually used this approach on a few occasions and it works fine for ASP.NET pages. But this obviously won’t work outside of the Page class environment and it’s not really generic – you have to create a custom page class in order to handle the output capture. [updated 11/16/2009 – updated ResponseFilterStream implementation and a few additional notes based on comments] Enter Response.Filter However, ASP.NET includes a Response.Filter which can be used – well to filter output. Basically Response.Filter is a stream through which the OutputStream is piped back to the Web Server (indirectly). As content is written into the Response object, the filter stream receives the appropriate Stream commands like Write, Flush and Close as well as read operations although for a Response.Filter that’s uncommon to be hit. The Response.Filter can be programmatically replaced at runtime which allows you to effectively intercept all output generation that runs through ASP.NET. A common Example: Dynamic GZip Encoding A rather common use of Response.Filter hooking up code based, dynamic  GZip compression for requests which is dead simple by applying a GZipStream (or DeflateStream) to Response.Filter. The following generic routines can be used very easily to detect GZip capability of the client and compress response output with a single line of code and a couple of library helper routines: WebUtils.GZipEncodePage(); which is handled with a few lines of reusable code and a couple of static helper methods: /// <summary> ///Sets up the current page or handler to use GZip through a Response.Filter ///IMPORTANT:  ///You have to call this method before any output is generated! 
/// </summary> public static void GZipEncodePage() {     HttpResponse Response = HttpContext.Current.Response;     if(IsGZipSupported())     {         stringAcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];         if(AcceptEncoding.Contains("deflate"))         {             Response.Filter = newSystem.IO.Compression.DeflateStream(Response.Filter,                                        System.IO.Compression.CompressionMode.Compress);             Response.AppendHeader("Content-Encoding", "deflate");         }         else        {             Response.Filter = newSystem.IO.Compression.GZipStream(Response.Filter,                                       System.IO.Compression.CompressionMode.Compress);             Response.AppendHeader("Content-Encoding", "gzip");                            }     }     // Allow proxy servers to cache encoded and unencoded versions separately    Response.AppendHeader("Vary", "Content-Encoding"); } /// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } GZipStream and DeflateStream are streams that are assigned to Response.Filter and by doing so apply the appropriate compression on the active Response. Response.Filter content is chunked So to implement a Response.Filter effectively requires only that you implement a custom stream and handle the Write() method to capture Response output as it’s written. At first blush this seems very simple – you capture the output in Write, transform it and write out the transformed content in one pass. And that indeed works for small amounts of content. But you see, the problem is that output is written in small buffer chunks (a little less than 16k it appears) rather than just a single Write() statement into the stream, which makes perfect sense for ASP.NET to stream data back to IIS in smaller chunks to minimize memory usage en route. Unfortunately this also makes it a more difficult to implement any filtering routines since you don’t directly get access to all of the response content which is problematic especially if those filtering routines require you to look at the ENTIRE response in order to transform or capture the output as is needed for the solution the gentleman in my session asked for. So in order to address this a slightly different approach is required that basically captures all the Write() buffers passed into a cached stream and then making the stream available only when it’s complete and ready to be flushed. As I was thinking about the implementation I also started thinking about the few instances when I’ve used Response.Filter implementations. Each time I had to create a new Stream subclass and create my custom functionality but in the end each implementation did the same thing – capturing output and transforming it. I thought there should be an easier way to do this by creating a re-usable Stream class that can handle stream transformations that are common to Response.Filter implementations. Creating a semi-generic Response Filter Stream Class What I ended up with is a ResponseFilterStream class that provides a handful of Events that allow you to capture and/or transform Response content. 
The class implements a subclass of Stream and then overrides Write() and Flush() to handle capturing and transformation operations. By exposing events it’s easy to hook up capture or transformation operations via single focused methods. ResponseFilterStream exposes the following events: CaptureStream, CaptureString Captures the output only and provides either a MemoryStream or String with the final page output. Capture is hooked to the Flush() operation of the stream. TransformStream, TransformString Allows you to transform the complete response output with events that receive a MemoryStream or String respectively and can you modify the output then return it back as a return value. The transformed output is then written back out in a single chunk to the response output stream. These events capture all output internally first then write the entire buffer into the response. TransformWrite, TransformWriteString Allows you to transform the Response data as it is written in its original chunk size in the Stream’s Write() method. Unlike TransformStream/TransformString which operate on the complete output, these events only see the current chunk of data written. This is more efficient as there’s no caching involved, but can cause problems due to searched content splitting over multiple chunks. Using this implementation, creating a custom Response.Filter transformation becomes as simple as the following code. To hook up the Response.Filter using the MemoryStream version event: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformStream += filter_TransformStream; Response.Filter = filter; and the event handler to do the transformation: MemoryStream filter_TransformStream(MemoryStream ms) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = encoding.GetString(ms.ToArray()); output = FixPaths(output); ms = new MemoryStream(output.Length); byte[] buffer = encoding.GetBytes(output); ms.Write(buffer,0,buffer.Length); return ms; } private string FixPaths(string output) { string path = HttpContext.Current.Request.ApplicationPath; // override root path wonkiness if (path == "/") path = ""; output = output.Replace("\"~/", "\"" + path + "/").Replace("'~/", "'" + path + "/"); return output; } The idea of the event handler is that you can do whatever you want to the stream and return back a stream – either the same one that’s been modified or a brand new one – which is then sent back to as the final response. The above code can be simplified even more by using the string version events which handle the stream to string conversions for you: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; and the event handler to do the transformation calling the same FixPaths method shown above: string filter_TransformString(string output) { return FixPaths(output); } The events for capturing output and capturing and transforming chunks work in a very similar way. By using events to handle the transformations ResponseFilterStream becomes a reusable component and we don’t have to create a new stream class or subclass an existing Stream based classed. By the way, the example used here is kind of a cool trick which transforms “~/” expressions inside of the final generated HTML output – even in plain HTML controls not HTML controls – and transforms them into the appropriate application relative path in the same way that ResolveUrl would do. 
So you can write plain old HTML like this: <a href=”~/default.aspx”>Home</a>  and have it turned into: <a href=”/myVirtual/default.aspx”>Home</a>  without having to use an ASP.NET control like Hyperlink or Image or having to constantly use: <img src=”<%= ResolveUrl(“~/images/home.gif”) %>” /> in MVC applications (which frankly is one of the most annoying things about MVC especially given the path hell that extension-less and endpoint-less URLs impose). I can’t take credit for this idea. While discussing the Response.Filter issues on Twitter I got a hint from Dylan Beattie, who pointed me at one of his examples which does something similar. I thought the idea was cool enough to use as an example for future demos of Response.Filter functionality in ASP.NET next time I do the Modules and Handlers talk (which was great fun BTW). How practical this is is debatable however since there’s definitely some overhead to using a Response.Filter in general and especially on one that caches the output and then re-writes it later. Make sure to test for performance anytime you use a Response.Filter hookup and make sure it doesn’t end up killing perf on you. You’ve been warned :-}. How does ResponseFilterStream work? The big win of this implementation IMHO is that it’s a reusable component – so for implementation there’s no new class, no subclassing – you simply attach to an event to implement an event handler method with a straightforward signature to retrieve the stream or string you’re interested in. The implementation is based on a subclass of Stream as is required in order to handle the Response.Filter requirements. What’s different from other implementations I’ve seen in various places is that it supports capturing output as a whole to allow retrieving the full response output for capture or modification. The exceptions are the TransformWrite and TransformWriteString events, which operate only on the active chunk of data written by the Response. For captured output, the Write() method captures output into an internal MemoryStream that is cached until writing is complete. So Write() is called when ASP.NET writes to the Response stream, but the filter doesn’t pass on the Write immediately to the filter’s internal stream. The data is cached and only when the Flush() method is called to finalize the Stream’s output do we actually send the cached stream off for transformation (if the events are hooked up) and THEN finally write out the returned content in one big chunk. Here’s the implementation of ResponseFilterStream: /// <summary> /// A semi-generic Stream implementation for Response.Filter with /// an event interface for handling Content transformations via /// Stream or String. /// <remarks> /// Use with care for large output as this implementation copies /// the output into a memory stream and so increases memory usage. 
/// </remarks> /// </summary> public class ResponseFilterStream : Stream { /// <summary> /// The original stream /// </summary> Stream _stream; /// <summary> /// Current position in the original stream /// </summary> long _position; /// <summary> /// Stream that original content is read into /// and then passed to TransformStream function /// </summary> MemoryStream _cacheStream = new MemoryStream(5000); /// <summary> /// Internal pointer that that keeps track of the size /// of the cacheStream /// </summary> int _cachePointer = 0; /// <summary> /// /// </summary> /// <param name="responseStream"></param> public ResponseFilterStream(Stream responseStream) { _stream = responseStream; } /// <summary> /// Determines whether the stream is captured /// </summary> private bool IsCaptured { get { if (CaptureStream != null || CaptureString != null || TransformStream != null || TransformString != null) return true; return false; } } /// <summary> /// Determines whether the Write method is outputting data immediately /// or delaying output until Flush() is fired. /// </summary> private bool IsOutputDelayed { get { if (TransformStream != null || TransformString != null) return true; return false; } } /// <summary> /// Event that captures Response output and makes it available /// as a MemoryStream instance. Output is captured but won't /// affect Response output. /// </summary> public event Action<MemoryStream> CaptureStream; /// <summary> /// Event that captures Response output and makes it available /// as a string. Output is captured but won't affect Response output. /// </summary> public event Action<string> CaptureString; /// <summary> /// Event that allows you transform the stream as each chunk of /// the output is written in the Write() operation of the stream. /// This means that that it's possible/likely that the input /// buffer will not contain the full response output but only /// one of potentially many chunks. /// /// This event is called as part of the filter stream's Write() /// operation. /// </summary> public event Func<byte[], byte[]> TransformWrite; /// <summary> /// Event that allows you to transform the response stream as /// each chunk of bytep[] output is written during the stream's write /// operation. This means it's possibly/likely that the string /// passed to the handler only contains a portion of the full /// output. Typical buffer chunks are around 16k a piece. /// /// This event is called as part of the stream's Write operation. /// </summary> public event Func<string, string> TransformWriteString; /// <summary> /// This event allows capturing and transformation of the entire /// output stream by caching all write operations and delaying final /// response output until Flush() is called on the stream. /// </summary> public event Func<MemoryStream, MemoryStream> TransformStream; /// <summary> /// Event that can be hooked up to handle Response.Filter /// Transformation. Passed a string that you can modify and /// return back as a return value. The modified content /// will become the final output. 
/// </summary> public event Func<string, string> TransformString; protected virtual void OnCaptureStream(MemoryStream ms) { if (CaptureStream != null) CaptureStream(ms); } private void OnCaptureStringInternal(MemoryStream ms) { if (CaptureString != null) { string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray()); OnCaptureString(content); } } protected virtual void OnCaptureString(string output) { if (CaptureString != null) CaptureString(output); } protected virtual byte[] OnTransformWrite(byte[] buffer) { if (TransformWrite != null) return TransformWrite(buffer); return buffer; } private byte[] OnTransformWriteStringInternal(byte[] buffer) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = OnTransformWriteString(encoding.GetString(buffer)); return encoding.GetBytes(output); } private string OnTransformWriteString(string value) { if (TransformWriteString != null) return TransformWriteString(value); return value; } protected virtual MemoryStream OnTransformCompleteStream(MemoryStream ms) { if (TransformStream != null) return TransformStream(ms); return ms; } /// <summary> /// Allows transforming of strings /// /// Note this handler is internal and not meant to be overridden /// as the TransformString Event has to be hooked up in order /// for this handler to even fire to avoid the overhead of string /// conversion on every pass through. /// </summary> /// <param name="responseText"></param> /// <returns></returns> private string OnTransformCompleteString(string responseText) { if (TransformString != null) TransformString(responseText); return responseText; } /// <summary> /// Wrapper method form OnTransformString that handles /// stream to string and vice versa conversions /// </summary> /// <param name="ms"></param> /// <returns></returns> internal MemoryStream OnTransformCompleteStringInternal(MemoryStream ms) { if (TransformString == null) return ms; //string content = ms.GetAsString(); string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray()); content = TransformString(content); byte[] buffer = HttpContext.Current.Response.ContentEncoding.GetBytes(content); ms = new MemoryStream(); ms.Write(buffer, 0, buffer.Length); //ms.WriteString(content); return ms; } /// <summary> /// /// </summary> public override bool CanRead { get { return true; } } public override bool CanSeek { get { return true; } } /// <summary> /// /// </summary> public override bool CanWrite { get { return true; } } /// <summary> /// /// </summary> public override long Length { get { return 0; } } /// <summary> /// /// </summary> public override long Position { get { return _position; } set { _position = value; } } /// <summary> /// /// </summary> /// <param name="offset"></param> /// <param name="direction"></param> /// <returns></returns> public override long Seek(long offset, System.IO.SeekOrigin direction) { return _stream.Seek(offset, direction); } /// <summary> /// /// </summary> /// <param name="length"></param> public override void SetLength(long length) { _stream.SetLength(length); } /// <summary> /// /// </summary> public override void Close() { _stream.Close(); } /// <summary> /// Override flush by writing out the cached stream data /// </summary> public override void Flush() { if (IsCaptured && _cacheStream.Length > 0) { // Check for transform implementations _cacheStream = OnTransformCompleteStream(_cacheStream); _cacheStream = OnTransformCompleteStringInternal(_cacheStream); OnCaptureStream(_cacheStream); 
OnCaptureStringInternal(_cacheStream); // write the stream back out if output was delayed if (IsOutputDelayed) _stream.Write(_cacheStream.ToArray(), 0, (int)_cacheStream.Length); // Clear the cache once we've written it out _cacheStream.SetLength(0); } // default flush behavior _stream.Flush(); } /// <summary> /// /// </summary> /// <param name="buffer"></param> /// <param name="offset"></param> /// <param name="count"></param> /// <returns></returns> public override int Read(byte[] buffer, int offset, int count) { return _stream.Read(buffer, offset, count); } /// <summary> /// Overriden to capture output written by ASP.NET and captured /// into a cached stream that is written out later when Flush() /// is called. /// </summary> /// <param name="buffer"></param> /// <param name="offset"></param> /// <param name="count"></param> public override void Write(byte[] buffer, int offset, int count) { if ( IsCaptured ) { // copy to holding buffer only - we'll write out later _cacheStream.Write(buffer, 0, count); _cachePointer += count; } // just transform this buffer if (TransformWrite != null) buffer = OnTransformWrite(buffer); if (TransformWriteString != null) buffer = OnTransformWriteStringInternal(buffer); if (!IsOutputDelayed) _stream.Write(buffer, offset, buffer.Length); } } The key features are the events and corresponding OnXXX methods that handle the event hookups, and the Write() and Flush() methods of the stream implementation. All the rest of the members tend to be plain jane passthrough stream implementation code without much consequence. I do love the way Action<t> and Func<T> make it so easy to create the event signatures for the various events – sweet. A few Things to consider Performance Response.Filter is not great for performance in general as it adds another layer of indirection to the ASP.NET output pipeline, and this implementation in particular adds a memory hit as it basically duplicates the response output into the cached memory stream which is necessary since you may have to look at the entire response. If you have large pages in particular this can cause potentially serious memory pressure in your server application. So be careful of wholesale adoption of this (or other) Response.Filters. Make sure to do some performance testing to ensure it’s not killing your app’s performance. Response.Filter works everywhere A few questions came up in comments and discussion as to capturing ALL output hitting the site and – yes you can definitely do that by assigning a Response.Filter inside of a module. If you do this however you’ll want to be very careful and decide which content you actually want to capture especially in IIS 7 which passes ALL content – including static images/CSS etc. through the ASP.NET pipeline. So it is important to filter only on what you’re looking for – like the page extension or maybe more effectively the Response.ContentType. Response.Filter Chaining Originally I thought that filter chaining doesn’t work at all due to a bug in the stream implementation code. But it’s quite possible to assign multiple filters to the Response.Filter property. 
So the following actually works to both compress the output and apply the transformed content: WebUtils.GZipEncodePage(); ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; However the following does not work resulting in invalid content encoding errors: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; WebUtils.GZipEncodePage(); In other words multiple Response filters can work together but it depends entirely on the implementation whether they can be chained or in which order they can be chained. In this case running the GZip/Deflate stream filters apparently relies on the original content length of the output and chokes when the content is modified. But if attaching the compression first it works fine as unintuitive as that may seem. Resources Download example code Capture Output from ASP.NET Pages © Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET  

    Read the article

  • value types in the vm

    - by john.rose
    Or, enduring values for a changing world. Introduction A value type is a data type which, generally speaking, is designed for being passed by value in and out of methods, and stored by value in data structures. The only value types which the Java language directly supports are the eight primitive types. Java indirectly and approximately supports value types, if they are implemented in terms of classes. For example, both Integer and String may be viewed as value types, especially if their usage is restricted to avoid operations appropriate to Object. In this note, we propose a definition of value types in terms of a design pattern for Java classes, accompanied by a set of usage restrictions. We also sketch the relation of such value types to tuple types (which are a JVM-level notion), and point out JVM optimizations that can apply to value types. This note is a thought experiment to extend the JVM’s performance model in support of value types. The demonstration has two phases.  Initially the extension can simply use design patterns, within the current bytecode architecture, and in today’s Java language. But if the performance model is to be realized in practice, it will probably require new JVM bytecode features, changes to the Java language, or both.  We will look at a few possibilities for these new features. An Axiom of Value In the context of the JVM, a value type is a data type equipped with construction, assignment, and equality operations, and a set of typed components, such that, whenever two variables of the value type produce equal corresponding values for their components, the values of the two variables cannot be distinguished by any JVM operation. Here are some corollaries: A value type is immutable, since otherwise a copy could be constructed and the original could be modified in one of its components, allowing the copies to be distinguished. Changing the component of a value type requires construction of a new value. The equals and hashCode operations are strictly component-wise. If a value type is represented by a JVM reference, that reference cannot be successfully synchronized on, and cannot be usefully compared for reference equality. A value type can be viewed in terms of what it doesn’t do. We can say that a value type omits all value-unsafe operations, which could violate the constraints on value types.
These operations, which are ordinarily allowed for Java object types, are pointer equality comparison (the acmp instruction), synchronization (the monitor instructions), all the wait and notify methods of class Object, and non-trivial finalize methods. The clone method is also value-unsafe, although for value types it could be treated as the identity function. Finally, and most importantly, any side effect on an object (however visible) also counts as an value-unsafe operation. A value type may have methods, but such methods must not change the components of the value. It is reasonable and useful to define methods like toString, equals, and hashCode on value types, and also methods which are specifically valuable to users of the value type. Representations of Value Value types have two natural representations in the JVM, unboxed and boxed. An unboxed value consists of the components, as simple variables. For example, the complex number x=(1+2i), in rectangular coordinate form, may be represented in unboxed form by the following pair of variables: /*Complex x = Complex.valueOf(1.0, 2.0):*/ double x_re = 1.0, x_im = 2.0; These variables might be locals, parameters, or fields. Their association as components of a single value is not defined to the JVM. Here is a sample computation which computes the norm of the difference between two complex numbers: double distance(/*Complex x:*/ double x_re, double x_im,         /*Complex y:*/ double y_re, double y_im) {     /*Complex z = x.minus(y):*/     double z_re = x_re - y_re, z_im = x_im - y_im;     /*return z.abs():*/     return Math.sqrt(z_re*z_re + z_im*z_im); } A boxed representation groups component values under a single object reference. The reference is to a ‘wrapper class’ that carries the component values in its fields. (A primitive type can naturally be equated with a trivial value type with just one component of that type. In that view, the wrapper class Integer can serve as a boxed representation of value type int.) The unboxed representation of complex numbers is practical for many uses, but it fails to cover several major use cases: return values, array elements, and generic APIs. The two components of a complex number cannot be directly returned from a Java function, since Java does not support multiple return values. The same story applies to array elements: Java has no ’array of structs’ feature. (Double-length arrays are a possible workaround for complex numbers, but not for value types with heterogeneous components.) By generic APIs I mean both those which use generic types, like Arrays.asList and those which have special case support for primitive types, like String.valueOf and PrintStream.println. Those APIs do not support unboxed values, and offer some problems to boxed values. Any ’real’ JVM type should have a story for returns, arrays, and API interoperability. The basic problem here is that value types fall between primitive types and object types. Value types are clearly more complex than primitive types, and object types are slightly too complicated. Objects are a little bit dangerous to use as value carriers, since object references can be compared for pointer equality, and can be synchronized on. Also, as many Java programmers have observed, there is often a performance cost to using wrapper objects, even on modern JVMs. Even so, wrapper classes are a good starting point for talking about value types. 
If there were a set of structural rules and restrictions which would prevent value-unsafe operations on value types, wrapper classes would provide a good notation for defining value types. This note attempts to define such rules and restrictions. Let’s Start Coding Now it is time to look at some real code. Here is a definition, written in Java, of a complex number value type. @ValueSafe public final class Complex implements java.io.Serializable {     // immutable component structure:     public final double re, im;     private Complex(double re, double im) {         this.re = re; this.im = im;     }     // interoperability methods:     public String toString() { return "Complex("+re+","+im+")"; }     public List<Double> asList() { return Arrays.asList(re, im); }     public boolean equals(Complex c) {         return re == c.re && im == c.im;     }     public boolean equals(@ValueSafe Object x) {         return x instanceof Complex && equals((Complex) x);     }     public int hashCode() {         return 31*Double.valueOf(re).hashCode()                 + Double.valueOf(im).hashCode();     }     // factory methods:     public static Complex valueOf(double re, double im) {         return new Complex(re, im);     }     public Complex changeRe(double re2) { return valueOf(re2, im); }     public Complex changeIm(double im2) { return valueOf(re, im2); }     public static Complex cast(@ValueSafe Object x) {         return x == null ? ZERO : (Complex) x;     }     // utility methods and constants:     public Complex plus(Complex c)  { return new Complex(re+c.re, im+c.im); }     public Complex minus(Complex c) { return new Complex(re-c.re, im-c.im); }     public double abs() { return Math.sqrt(re*re + im*im); }     public static final Complex PI = valueOf(Math.PI, 0.0);     public static final Complex ZERO = valueOf(0.0, 0.0); } This is not a minimal definition, because it includes some utility methods and other optional parts.  The essential elements are as follows: The class is marked as a value type with an annotation. The class is final, because it does not make sense to create subclasses of value types. The fields of the class are all non-private and final.  (I.e., the type is immutable and structurally transparent.) From the supertype Object, all public non-final methods are overridden. The constructor is private. Beyond these bare essentials, we can observe the following features in this example, which are likely to be typical of all value types: One or more factory methods are responsible for value creation, including a component-wise valueOf method. There are utility methods for complex arithmetic and instance creation, such as plus and changeIm. There are static utility constants, such as PI. The type is serializable, using the default mechanisms. There are methods for converting to and from dynamically typed references, such as asList and cast. The Rules In order to use value types properly, the programmer must avoid value-unsafe operations.  A helpful Java compiler should issue errors (or at least warnings) for code which provably applies value-unsafe operations, and should issue warnings for code which might be correct but does not provably avoid value-unsafe operations.  No such compilers exist today, but to simplify our account here, we will pretend that they do exist. A value-safe type is any class, interface, or type parameter marked with the @ValueSafe annotation, or any subtype of a value-safe type.  If a value-safe class is marked final, it is in fact a value type.  
All other value-safe classes must be abstract.  The non-static fields of a value class must be non-public and final, and all its constructors must be private. Under the above rules, a standard interface could be helpful to define value types like Complex.  Here is an example: @ValueSafe public interface ValueType extends java.io.Serializable {     // All methods listed here must get redefined.     // Definitions must be value-safe, which means     // they may depend on component values only.     List<? extends Object> asList();     int hashCode();     boolean equals(@ValueSafe Object c);     String toString(); } //@ValueSafe inherited from supertype: public final class Complex implements ValueType { … The main advantage of such a conventional interface is that (unlike an annotation) it is reified in the runtime type system.  It could appear as an element type or parameter bound, for facilities which are designed to work on value types only.  More broadly, it might assist the JVM to perform dynamic enforcement of the rules for value types. Besides types, the annotation @ValueSafe can mark fields, parameters, local variables, and methods.  (This is redundant when the type is also value-safe, but may be useful when the type is Object or another supertype of a value type.)  Working forward from these annotations, an expression E is defined as value-safe if it satisfies one or more of the following: The type of E is a value-safe type. E names a field, parameter, or local variable whose declaration is marked @ValueSafe. E is a call to a method whose declaration is marked @ValueSafe. E is an assignment to a value-safe variable, field reference, or array reference. E is a cast to a value-safe type from a value-safe expression. E is a conditional expression E0 ? E1 : E2, and both E1 and E2 are value-safe. Assignments to value-safe expressions and initializations of value-safe names must take their values from value-safe expressions. A value-safe expression may not be the subject of a value-unsafe operation.  In particular, it cannot be synchronized on, nor can it be compared with the “==” operator, not even with a null or with another value-safe type. In a program where all of these rules are followed, no value-type value will be subject to a value-unsafe operation.  Thus, the prime axiom of value types will be satisfied, that no two value type will be distinguishable as long as their component values are equal. 
More Code To illustrate these rules, here are some usage examples for Complex: Complex pi = Complex.valueOf(Math.PI, 0); Complex zero = pi.changeRe(0);  //zero = pi; zero.re = 0; ValueType vtype = pi; @SuppressWarnings("value-unsafe")   Object obj = pi; @ValueSafe Object obj2 = pi; obj2 = new Object();  // ok List<Complex> clist = new ArrayList<Complex>(); clist.add(pi);  // (ok assuming List.add param is @ValueSafe) List<ValueType> vlist = new ArrayList<ValueType>(); vlist.add(pi);  // (ok) List<Object> olist = new ArrayList<Object>(); olist.add(pi);  // warning: "value-unsafe" boolean z = pi.equals(zero); boolean z1 = (pi == zero);  // error: reference comparison on value type boolean z2 = (pi == null);  // error: reference comparison on value type boolean z3 = (pi == obj2);  // error: reference comparison on value type synchronized (pi) { }  // error: synch of value, unpredictable result synchronized (obj2) { }  // unpredictable result Complex qq = pi; qq = null;  // possible NPE; warning: “null-unsafe" qq = (Complex) obj;  // warning: “null-unsafe" qq = Complex.cast(obj);  // OK @SuppressWarnings("null-unsafe")   Complex empty = null;  // possible NPE qq = empty;  // possible NPE (null pollution) The Payoffs It follows from this that either the JVM or the java compiler can replace boxed value-type values with unboxed ones, without affecting normal computations.  Fields and variables of value types can be split into their unboxed components.  Non-static methods on value types can be transformed into static methods which take the components as value parameters. Some common questions arise around this point in any discussion of value types. Why burden the programmer with all these extra rules?  Why not detect programs automagically and perform unboxing transparently?  The answer is that it is easy to break the rules accidently unless they are agreed to by the programmer and enforced.  Automatic unboxing optimizations are tantalizing but (so far) unreachable ideal.  In the current state of the art, it is possible exhibit benchmarks in which automatic unboxing provides the desired effects, but it is not possible to provide a JVM with a performance model that assures the programmer when unboxing will occur.  This is why I’m writing this note, to enlist help from, and provide assurances to, the programmer.  Basically, I’m shooting for a good set of user-supplied “pragmas” to frame the desired optimization. Again, the important thing is that the unboxing must be done reliably, or else programmers will have no reason to work with the extra complexity of the value-safety rules.  There must be a reasonably stable performance model, wherein using a value type has approximately the same performance characteristics as writing the unboxed components as separate Java variables. There are some rough corners to the present scheme.  Since Java fields and array elements are initialized to null, value-type computations which incorporate uninitialized variables can produce null pointer exceptions.  One workaround for this is to require such variables to be null-tested, and the result replaced with a suitable all-zero value of the value type.  That is what the “cast” method does above. Generically typed APIs like List<T> will continue to manipulate boxed values always, at least until we figure out how to do reification of generic type instances.  Use of such APIs will elicit warnings until their type parameters (and/or relevant members) are annotated or typed as value-safe.  
Retrofitting List<T> is likely to expose flaws in the present scheme, which we will need to engineer around.  Here are a couple of first approaches: public interface java.util.List<@ValueSafe T> extends Collection<T> { … public interface java.util.List<T extends Object|ValueType> extends Collection<T> { … (The second approach would require disjunctive types, in which value-safety is “contagious” from the constituent types.) With more transformations, the return value types of methods can also be unboxed.  This may require significant bytecode-level transformations, and would work best in the presence of a bytecode representation for multiple value groups, which I have proposed elsewhere under the title “Tuples in the VM”. But for starters, the JVM can apply this transformation under the covers, to internally compiled methods.  This would give a way to express multiple return values and structured return values, which is a significant pain-point for Java programmers, especially those who work with low-level structure types favored by modern vector and graphics processors.  The lack of multiple return values has a strong distorting effect on many Java APIs. Even if the JVM fails to unbox a value, there is still potential benefit to the value type.  Clustered computing systems something have copy operations (serialization or something similar) which apply implicitly to command operands.  When copying JVM objects, it is extremely helpful to know when an object’s identity is important or not.  If an object reference is a copied operand, the system may have to create a proxy handle which points back to the original object, so that side effects are visible.  Proxies must be managed carefully, and this can be expensive.  On the other hand, value types are exactly those types which a JVM can “copy and forget” with no downside. Array types are crucial to bulk data interfaces.  (As data sizes and rates increase, bulk data becomes more important than scalar data, so arrays are definitely accompanying us into the future of computing.)  Value types are very helpful for adding structure to bulk data, so a successful value type mechanism will make it easier for us to express richer forms of bulk data. Unboxing arrays (i.e., arrays containing unboxed values) will provide better cache and memory density, and more direct data movement within clustered or heterogeneous computing systems.  They require the deepest transformations, relative to today’s JVM.  There is an impedance mismatch between value-type arrays and Java’s covariant array typing, so compromises will need to be struck with existing Java semantics.  It is probably worth the effort, since arrays of unboxed value types are inherently more memory-efficient than standard Java arrays, which rely on dependent pointer chains. It may be sufficient to extend the “value-safe” concept to array declarations, and allow low-level transformations to change value-safe array declarations from the standard boxed form into an unboxed tuple-based form.  Such value-safe arrays would not be convertible to Object[] arrays.  Certain connection points, such as Arrays.copyOf and System.arraycopy might need additional input/output combinations, to allow smooth conversion between arrays with boxed and unboxed elements. Alternatively, the correct solution may have to wait until we have enough reification of generic types, and enough operator overloading, to enable an overhaul of Java arrays. 
Implicit Method Definitions The example of class Complex above may be unattractively complex.  I believe most or all of the elements of the example class are required by the logic of value types. If this is true, a programmer who writes a value type will have to write lots of error-prone boilerplate code.  On the other hand, I think nearly all of the code (except for the domain-specific parts like plus and minus) can be implicitly generated. Java has a rule for implicitly defining a class’s constructor, if no it defines no constructors explicitly.  Likewise, there are rules for providing default access modifiers for interface members.  Because of the highly regular structure of value types, it might be reasonable to perform similar implicit transformations on value types.  Here’s an example of a “highly implicit” definition of a complex number type: public class Complex implements ValueType {  // implicitly final     public double re, im;  // implicitly public final     //implicit methods are defined elementwise from te fields:     //  toString, asList, equals(2), hashCode, valueOf, cast     //optionally, explicit methods (plus, abs, etc.) would go here } In other words, with the right defaults, a simple value type definition can be a one-liner.  The observant reader will have noticed the similarities (and suitable differences) between the explicit methods above and the corresponding methods for List<T>. Another way to abbreviate such a class would be to make an annotation the primary trigger of the functionality, and to add the interface(s) implicitly: public @ValueType class Complex { … // implicitly final, implements ValueType (But to me it seems better to communicate the “magic” via an interface, even if it is rooted in an annotation.) Implicitly Defined Value Types So far we have been working with nominal value types, which is to say that the sequence of typed components is associated with a name and additional methods that convey the intention of the programmer.  A simple ordered pair of floating point numbers can be variously interpreted as (to name a few possibilities) a rectangular or polar complex number or Cartesian point.  The name and the methods convey the intended meaning. But what if we need a truly simple ordered pair of floating point numbers, without any further conceptual baggage?  Perhaps we are writing a method (like “divideAndRemainder”) which naturally returns a pair of numbers instead of a single number.  Wrapping the pair of numbers in a nominal type (like “QuotientAndRemainder”) makes as little sense as wrapping a single return value in a nominal type (like “Quotient”).  What we need here are structural value types commonly known as tuples. For the present discussion, let us assign a conventional, JVM-friendly name to tuples, roughly as follows: public class java.lang.tuple.$DD extends java.lang.tuple.Tuple {      double $1, $2; } Here the component names are fixed and all the required methods are defined implicitly.  The supertype is an abstract class which has suitable shared declarations.  The name itself mentions a JVM-style method parameter descriptor, which may be “cracked” to determine the number and types of the component fields. The odd thing about such a tuple type (and structural types in general) is it must be instantiated lazily, in response to linkage requests from one or more classes that need it.  
The JVM and/or its class loaders must be prepared to spin a tuple type on demand, given a simple name reference, $xyz, where the xyz is cracked into a series of component types.  (Specifics of naming and name mangling need some tasteful engineering.) Tuples also seem to demand, even more than nominal types, some support from the language.  (This is probably because notations for non-nominal types work best as combinations of punctuation and type names, rather than named constructors like Function3 or Tuple2.)  At a minimum, languages with tuples usually (I think) have some sort of simple bracket notation for creating tuples, and a corresponding pattern-matching syntax (or “destructuring bind”) for taking tuples apart, at least when they are parameter lists.  Designing such a syntax is no simple thing, because it ought to play well with nominal value types, and also with pre-existing Java features, such as method parameter lists, implicit conversions, generic types, and reflection.  That is a task for another day. Other Use Cases Besides complex numbers and simple tuples there are many use cases for value types.  Many tuple-like types have natural value-type representations. These include rational numbers, point locations and pixel colors, and various kinds of dates and addresses. Other types have a variable-length ‘tail’ of internal values. The most common example of this is String, which is (mathematically) a sequence of UTF-16 character values. Similarly, bit vectors, multiple-precision numbers, and polynomials are composed of sequences of values. Such types include, in their representation, a reference to a variable-sized data structure (often an array) which (somehow) represents the sequence of values. The value type may also include ’header’ information. Variable-sized values often have a length distribution which favors short lengths. In that case, the design of the value type can make the first few values in the sequence be direct ’header’ fields of the value type. In the common case where the header is enough to represent the whole value, the tail can be a shared null value, or even just a null reference. Note that the tail need not be an immutable object, as long as the header type encapsulates it well enough. This is the case with String, where the tail is a mutable (but never mutated) character array. Field types and their order must be a globally visible part of the API.  The structure of the value type must be transparent enough to have a globally consistent unboxed representation, so that all callers and callees agree about the type and order of components  that appear as parameters, return types, and array elements.  This is a trade-off between efficiency and encapsulation, which is forced on us when we remove an indirection enjoyed by boxed representations.  A JVM-only transformation would not care about such visibility, but a bytecode transformation would need to take care that (say) the components of complex numbers would not get swapped after a redefinition of Complex and a partial recompile.  Perhaps constant pool references to value types need to declare the field order as assumed by each API user. This brings up the delicate status of private fields in a value type.  It must always be possible to load, store, and copy value types as coordinated groups, and the JVM performs those movements by moving individual scalar values between locals and stack.  
If a component field is not public, what is to prevent hostile code from plucking it out of the tuple using a rogue aload or astore instruction?  Nothing but the verifier, so we may need to give it more smarts, so that it treats value types as inseparable groups of stack slots or locals (something like long or double).
My initial thought was to make the fields always public, which would make the security problem moot.  But public is not always the right answer; consider the case of String, where the underlying mutable character array must be encapsulated to prevent security holes.  I believe we can win back both sides of the tradeoff, by training the verifier never to split up the components in an unboxed value.  Just as the verifier encapsulates the two halves of a 64-bit primitive, it can encapsulate the header and body of an unboxed String, so that no code other than that of class String itself can take apart the values.
Similar to String, we could build an efficient multi-precision decimal type along these lines:

public final class DecimalValue extends ValueType {
    protected final long header;
    protected final BigInteger digits;
    public static DecimalValue valueOf(int value, int scale) {
        assert(scale >= 0);
        return new DecimalValue(((long)value << 32) + scale, null);
    }
    public static DecimalValue valueOf(long value, int scale) {
        if (value == (int) value)
            return valueOf((int)value, scale);
        return new DecimalValue(-scale, BigInteger.valueOf(value));
    }
}

Values of this type would be passed between methods as two machine words. Small values (those with a significand which fits into 32 bits) would be represented without any heap data at all, unless the DecimalValue itself were boxed.
(Note the tension between encapsulation and unboxing in this case.  It would be better if the header and digits fields were private, but depending on where the unboxing information must “leak”, it is probably safer to make a public revelation of the internal structure.)
Note that, although an array of Complex can be faked with a double-length array of double, there is no easy way to fake an array of unboxed DecimalValues.  (Either an array of boxed values or a transposed pair of homogeneous arrays would be reasonable fallbacks, in a current JVM.)  Getting the full benefit of unboxing and arrays will require some new JVM magic.
Although the JVM emphasizes portability, system-dependent code will benefit from using machine-level types larger than 64 bits.  For example, the back end of a linear algebra package might benefit from value types like Float4 which map to stock vector types.  This is probably only worthwhile if the unboxing arrays can be packed with such values.
More Daydreams
A more finely-divided design for dynamic enforcement of value safety could feature separate marker interfaces for each invariant.  An empty marker interface Unsynchronizable could cause suitable exceptions for monitor instructions on objects in marked classes.  More radically, an Interchangeable marker interface could cause JVM primitives that are sensitive to object identity to raise exceptions; the strangest result would be that the acmp instruction would have to be specified as raising an exception.
@ValueSafe
public interface ValueType extends java.io.Serializable,
        Unsynchronizable, Interchangeable { …

public class Complex implements ValueType {
    // inherits Serializable, Unsynchronizable, Interchangeable, @ValueSafe
    …

It seems possible that Integer and the other wrapper types could be retro-fitted as value-safe types.  This is a major change, since wrapper objects would be unsynchronizable and their references interchangeable.  It is likely that code which violates value-safety for wrapper types exists but is uncommon.  It is less plausible to retro-fit String, since the prominent operation String.intern is often used with value-unsafe code.
We should also reconsider the distinction between boxed and unboxed values in code.  The design presented above obscures that distinction.  As another thought experiment, we could imagine making a first-class distinction in the type system between boxed and unboxed representations.  Since only primitive types are named with a lower-case initial letter, we could define that the capitalized version of a value type name always refers to the boxed representation, while the initial lower-case variant always refers to the unboxed one.  For example:

complex pi = complex.valueOf(Math.PI, 0);
Complex boxPi = pi;  // convert to boxed
myList.add(boxPi);
complex z = myList.get(0);  // unbox

Such a convention could perhaps absorb the current difference between int and Integer, double and Double. It might also allow the programmer to express a helpful distinction among array types.
As said above, array types are crucial to bulk data interfaces, but are limited in the JVM.  Extending arrays beyond the present limitations is worth thinking about; for example, the Maxine JVM implementation has a hybrid object/array type.  Something like this which can also accommodate value type components seems worthwhile.  On the other hand, does it make sense for value types to contain short arrays?  And why should random-access arrays be the end of our design process, when bulk data is often sequentially accessed, and it might make sense to have heterogeneous streams of data as the natural “jumbo” data structure?  These considerations must wait for another day and another note.
More Work
It seems to me that a good sequence for introducing such value types would be as follows:
- Add the value-safety restrictions to an experimental version of javac.
- Code some sample applications with value types, including Complex and DecimalValue.
- Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so.  Ensure the feasibility of the performance model for the sample applications.
- Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes.
A staggered roll-out like this would decouple language changes from bytecode changes, which is always a convenient thing.
A similar investigation should be applied (concurrently) to array types.  In this case, it seems to me that the starting point is in the JVM:
- Add an experimental unboxing array data structure to a production JVM, perhaps along the lines of Maxine hybrids.  No bytecode or language support is required at first; everything can be done with encapsulated unsafe operations and/or method handles.
- Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so.
  Ensure the feasibility of the performance model for the sample applications.
- Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes.
That’s enough musing for now.  Back to work!

    Read the article

  • SQLAuthority News – Memories at Anniversary of SQL Wait Stats Book

    - by pinaldave
    SQL Wait Stats About a year ago, I had very proud moment. I had published my second book SQL Server Wait Stats with me as a primary author. It has been a long journey since then. The book got great response and it was widely accepted in the community. It was first of its kind of book written specifically on Wait Stats and Performance. The book was based on my earlier month long series written on the same subject SQL Server Wait Stats. Today, on the anniversary of the book, lots of things come to my mind let me share a few here. Idea behind Blog Series A very common question I often receive is why I wrote a 30 day series on Wait Stats. There were two reasons for it. 1) I have been working with SQL Server for a long time and have troubleshoot more than hundreds of SQL Server which are related to performance tuning. It was a great experience and it taught me a lot of new things. I always documented my experience. After a while I found that I was able to completely rely on my own notes when I was troubleshooting any servers. It is right then I decided to document my experience for the community. 2) While working with wait stats there were a few things, which I thought I knew it well as they were working. However, there was always a fear in the back of mind that what happens if what I believed was incorrect and I was on the wrong path all the time. There was only one way to get it validated. Put it out in front community with my understanding and request further help to improve my understanding. It worked, it worked beautifully. I received plenty of conversations, emails and comments. I refined my content based on various conversations and make it more relevant and near accurate. I guess above two are the major reasons for beginning my journey on writing Wait Stats blog series. Idea behind Book After writing a blog series there was a good amount of request I keep on receiving that I should convert it to eBook or proper book as reading blog posts is great but it goes not give a comprehensive understanding of the subject. The very common feedback from users who were beginning the subject that they will prefer to read it in a structured method. After hearing the feedback for more than 4 months, I decided to write a book based on the blog posts. When I envisioned book, I wanted to make sure this book addresses the wait stats concepts from the fundamentals and fill the gaps of blogs I wrote earlier. Rick Morelan and Joes 2 Pros Team I must acknowledge my co-author Rick Morelan for his unconditional support in writing this book. I had already authored one book before I published this book. The experience to write the book was out of the world. Writing blog posts are much much easier than writing books. The efforts it takes to write a book is 100 times more even though the content is ready. I could have not done it myself if there was not tremendous support of my co-author and editor’s team. We spend days and days researching and discussing various concepts covered in the book. When we were in doubt we reached out to experts as well did a practical reproduction of the scenarios to validate the concepts and claims. After continuous 3 months of hard work we were able to get this book out in the community. September 1st – the lucky day Well, we had to select any day to publish the books. When book was completed in August last week we felt very glad. We all had worked hard and having a sample draft book in hand was feeling like having a newborn baby in our hand. 
Every time my books are published I feel the same joy which I had when my daughter was born. The feeling of holding a new book in hand is the (almost) same feeling as holding newborn baby. I am excited. For me September 1st has been the luckiest day in mind life. My daughter Shaivi was born on September 1st. Since then every September first has been excellent day and have taken me to the next step in life. I believe anything and everything I do on September 1st it is turning out to be successful and blessed. Rick and I had finished a book in the last week of August. We sent it to the publisher (printer) and asked him to take the book live as soon as possible. We did not decide on any date as we wanted the book to get out as fast as it can. Interesting enough, the publisher/printer selected September 1st for publishing the book. He published the book on 1st September and I knew it at the same time that this book will go next level. Book Model – The Most Beautiful Girl We were done with book. We had no budget left for marketing. Rick and I had a long conversation regarding how to spread the words for the book so it can reach to many people. While we were talking about marketing Rick come up with the idea that we should hire a most beautiful girl around who acknowledge our book and genuinely care for book. It was a difficult task and Rick asked me to find a more beautiful girl. I am a father and the most beautiful girl for me my daughter. This was not a difficult task for me. Rick had given me task to find the most beautiful girl and I just could not think of anyone else than my own daughter. I still do not know what Rick thought about this idea but I had already made up my mind. You can see the detailed blog post here. The Fun Experiments Book Signing Event We had lots of fun moments along this book. We have given away more books to people for free than we have sold them actually. We had done book signing events, contests, and just plain give away when we found people can be benefited from this book. There was never an intention to make money and get rich. We just wanted that more and more people know about this new concept and learn from it. Today when I look back to the earnings there is nothing much we have earned if you talk about dollars. However the best reward which we have received is the satisfaction and love of community. The amount of emails, conversations we have so far received for this book is over thousands. We had fun writing this book, it was indeed a very satisfying journey. I have earned lots of friends while learning and exploring. Availability The book is one year old but still very relevant when it is about performance tuning. It is available at various online book stores. If you have read the book, do let me know what you think of it. Amazon | Kindle | Flipkart | Indiaplaza Reference:  Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority, SQLAuthority Book Review, T SQL, Technology

    Read the article

  • SQLAuthority News – Pluralsight Course Review – Practices for Software Startups – Part 2 of 2

    - by pinaldave
    This is the second part of the two part series of Practices for Software Startup Pluralsight Course. Please read the first part of this series over here. The course is written by Stephen Forte (Blog | Twitter). Stephen Forte is the Chief Strategy Officer of the venture backed company, Telerik. Personal Learning Schedule After these three sessions it was 6:30 am and time to do my own blog.  But for the rest of the day, I kept thinking about the course, and wanted to go back and finish.  I was wishing that I had woken up at 3 am so I could finish all at one go.  All day long I was digesting what I had learned.  At 10 pm, after my daughter had gone to bed, I sighed on again.  I was not disappointed by the long wait.  As I mentioned before, Stephen has started four to six companies, and all of them are very successful today. Here is the video I promised yesterday – it discusses the importance of Right Sizing Your Startup. The Heartbeat of Startup – Technology Stephen has combined all technology knowledge into one 30 minute session.  He discussed  how to start your project, how to deal with opinions, and how to deal with multiple ideas – every start up has multiple directions it can go. He spent a lot of time emphasized deciding which direction to go and how to decide which will be the best for you.  He called it a continuous development cycle. One of the biggest hazards for a start-up company is one person deciding the direction the company will go, until down the road another team member announces that there is a glitch in their part of the work and that everyone will have to start over.  Even though a team of two or five people can move quickly, often the decision has gone too long and cannot be easily fixed.   Stephen used an example from his own life:  he was biased for one type of technology, and his teammate for another.  In the end they opted for his teammate’s  choice , and in the end it was a good decision, even though he was unfamiliar with that particular program.  He argues that technology should not be a barrier to progress, that you cannot rely on your experience only.  This really spoke to me because I am a big fan of SQL, but I know there is more out there, and I should be more open to it.  I give my thanks to Stephen, I learned something in this module besides startups. Money, Success and Epic Win! The longest, but most interesting, the module was funding your start-up.  You need to fund the start-up right at the very beginning, if not done right you will run into trouble.  The good news is that a few years ago start-ups required a lot more money – think millions of dollars – but now start-ups can get off the ground for thousands.  Stephen used an example of a company that years ago would have needed a million dollars, but today could be started for $600.  It is true that things have changed, but you still need money.  For $600 you can start small and add dynamically, as needed.  But the truth is that if you have $600, $6000, or $6 million, it will be spent.  Don’t think of it as trying to save money, think of it as investing in your future.   You will need money, and you will need to (quickly) decide what you do with the money: shares, stakeholders, investing in a team, hiring a CEO.  This is so important because once you have money and start the company, the company IS your money.  It is your biggest currency – having a percentage of ownership in the company.  
Investors will want percentages as repayment for their investment, and they will want a say in the business as well.  You will have to decide how far you will dilute your shares, and how the company will be divided, if at all.  If you don’t plan in advance, you will find that after gaining three or four investors, suddenly you are the minority owner in your own dream.  You need to understand funding carefully.  This single module is worth all the money you would have spent on the whole course alone.  I encourage everyone to listen to this single module even if they don’t watch any of the others.     Press End to Start the Game – Exists! The final module is exit strategies.  You did all this work, dealt with all political and legal issues.  What are you going to get out of it? The answer is simple: money.  Maybe you want your company to be bought out, for you talent to bring you a profit.  You can sell the company to someone and still head it.  Many options are available.  You could sell and still work as an employee but no longer own the company.  There are many exit strategies.  This is where all your hard work comes into play.  It is important not to feel fooled at any step.  There are so many good ideas that end up in the garbage because of poor planning, so that if you find yourself successful, you don’t want to blow it at this step!  The exit is important.  I thought that this aspect of the course was completely unique, and I loved Stephen’s point of view.  I was lost deep in thought after this module ended.  I actually took two hours worth of notes on this section alone – and it was only a three hour course.  I am planning on attending this course one more time next week, just to catch up on all the small bits of wisdom I’m sure I missed. Thank you Stephen for bringing your real world experience with us!  I recommend that everyone attends this course, even if they don’t want to begin their own start-up company. It was indeed a long day for me. Do not forget to read part 1 of this story and attend course Practices for Software Startup Pluralsight Course. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Best Practices, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Thread scheduling Round Robin / scheduling dispatch

    - by MRP
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <semaphore.h>
#include <unistd.h>   /* for sleep() */

#define NUM_THREADS 4
#define COUNT_LIMIT 13

int done = 0;
int count = 0;
int quantum = 2;
int thread_ids[4] = {0,1,2,3};
int thread_runtime[4] = {0,5,4,7};
pthread_mutex_t count_mutex;
pthread_cond_t count_threshold_cv;
void *inc_count(void *arg);
static sem_t count_sem;
int quit = 0;

///////// Inc_Count ////////////////
void *inc_count(void *t)
{
    long my_id = (long)t;
    int i;
    sem_wait(&count_sem);    ///////////// CRIT SECTION /////////////
    printf("run_thread = %ld\n", my_id);
    printf("%d \n", thread_runtime[my_id]);
    for (i = 0; i < thread_runtime[my_id]; i++) {
        printf("runtime= %d\n", thread_runtime[my_id]);
        pthread_mutex_lock(&count_mutex);
        count++;
        if (count == COUNT_LIMIT) {
            pthread_cond_signal(&count_threshold_cv);
            printf("inc_count(): thread %ld, count = %d Threshold reached.\n", my_id, count);
        }
        printf("inc_count(): thread %ld, count = %d, unlocking mutex\n", my_id, count);
        pthread_mutex_unlock(&count_mutex);
        sleep(1);
    } // End For
    sem_post(&count_sem);    // Next Thread Enters Crit Section
    pthread_exit(NULL);
}

/////////// Count_Watch ////////////////
void *watch_count(void *t)
{
    long my_id = (long)t;
    printf("Starting watch_count(): thread %ld\n", my_id);
    pthread_mutex_lock(&count_mutex);
    if (count < COUNT_LIMIT) {
        pthread_cond_wait(&count_threshold_cv, &count_mutex);
        printf("watch_count(): thread %ld Condition signal received.\n", my_id);
        printf("watch_count(): thread %ld count now = %d.\n", my_id, count);
    }
    pthread_mutex_unlock(&count_mutex);
    pthread_exit(NULL);
}

////////////////// Main ////////////////
int main(int argc, char *argv[])
{
    int i;
    long t1 = 0, t2 = 1, t3 = 2, t4 = 3;
    pthread_t threads[4];
    pthread_attr_t attr;
    sem_init(&count_sem, 0, 1);

    /* Initialize mutex and condition variable objects */
    pthread_mutex_init(&count_mutex, NULL);
    pthread_cond_init(&count_threshold_cv, NULL);

    /* For portability, explicitly create threads in a joinable state */
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);
    pthread_create(&threads[0], &attr, watch_count, (void *)t1);
    pthread_create(&threads[1], &attr, inc_count, (void *)t2);
    pthread_create(&threads[2], &attr, inc_count, (void *)t3);
    pthread_create(&threads[3], &attr, inc_count, (void *)t4);

    /* Wait for all threads to complete */
    for (i = 0; i < NUM_THREADS; i++) {
        pthread_join(threads[i], NULL);
    }
    printf("Main(): Waited on %d threads. Done.\n", NUM_THREADS);

    /* Clean up and exit */
    pthread_attr_destroy(&attr);
    pthread_mutex_destroy(&count_mutex);
    pthread_cond_destroy(&count_threshold_cv);
    pthread_exit(NULL);
}

I am trying to learn thread scheduling; there is a lot of technical coding that I don't know. I do know in theory how it should work, but I'm having trouble getting started in code... I know (at least I think) this program is not real-time, and it's not meant to be. Somehow I need to create a scheduler dispatcher to control the threads in the order they should run... RR, FCFS, SJF, etc. Right now I don't have a dispatcher. What I do have are semaphores and a mutex to control the threads. This code does run FCFS, and I have been trying to use semaphores to create RR, but I'm having a lot of trouble. I believe it would be easier to create a dispatcher, but I don't know how. I need help; I am not looking for answers, just direction... some sample code would help me understand a bit more. Thank you.

    Read the article

  • How to create a simple adf dashboard application with EJB 3.0

    - by Rodrigues, Raphael
    In this month's Oracle Magazine, Frank Nimphius wrote a very good article about an Oracle ADF Faces dashboard application to support persistent user personalization. You can read this entire article clicking here. The idea in this article is to extend the dashboard application. My idea here is to create a similar dashboard application, but instead ADF BC model layer, I'm intending to use EJB3.0. There are just a one small trick here and I'll show you. I'm using the HR usual oracle schema. The steps are: 1. Create a ADF Fusion Application with EJB as a layer model 2. Generate the entities from table (I'm using Department and Employees only) 3. Create a new Session Bean. I called it: HRSessionEJB 4. Create a new method like that: public List getAllDepartmentsHavingEmployees(){ JpaEntityManager jpaEntityManager = (JpaEntityManager)em.getDelegate(); Query query = jpaEntityManager.createNamedQuery("Departments.allDepartmentsHavingEmployees"); JavaBeanResult.setQueryResultClass(query, AggregatedDepartment.class); return query.getResultList(); } 5. In the Departments entity, create a new native query annotation: @Entity @NamedQueries( { @NamedQuery(name = "Departments.findAll", query = "select o from Departments o") }) @NamedNativeQueries({ @NamedNativeQuery(name="Departments.allDepartmentsHavingEmployees", query = "select e.department_id, d.department_name , sum(e.salary), avg(e.salary) , max(e.salary), min(e.salary) from departments d , employees e where d.department_id = e.department_id group by e.department_id, d.department_name")}) public class Departments implements Serializable {...} 6. Create a new POJO called AggregatedDepartment: package oramag.sample.dashboard.model; import java.io.Serializable; import java.math.BigDecimal; public class AggregatedDepartment implements Serializable{ @SuppressWarnings("compatibility:5167698678781240729") private static final long serialVersionUID = 1L; private BigDecimal departmentId; private String departmentName; private BigDecimal sum; private BigDecimal avg; private BigDecimal max; private BigDecimal min; public AggregatedDepartment() { super(); } public AggregatedDepartment(BigDecimal departmentId, String departmentName, BigDecimal sum, BigDecimal avg, BigDecimal max, BigDecimal min) { super(); this.departmentId = departmentId; this.departmentName = departmentName; this.sum = sum; this.avg = avg; this.max = max; this.min = min; } public void setDepartmentId(BigDecimal departmentId) { this.departmentId = departmentId; } public BigDecimal getDepartmentId() { return departmentId; } public void setDepartmentName(String departmentName) { this.departmentName = departmentName; } public String getDepartmentName() { return departmentName; } public void setSum(BigDecimal sum) { this.sum = sum; } public BigDecimal getSum() { return sum; } public void setAvg(BigDecimal avg) { this.avg = avg; } public BigDecimal getAvg() { return avg; } public void setMax(BigDecimal max) { this.max = max; } public BigDecimal getMax() { return max; } public void setMin(BigDecimal min) { this.min = min; } public BigDecimal getMin() { return min; } } 7. Create the util java class called JavaBeanResult. The function of this class is to configure a native SQL query to return POJOs in a single line of code using the utility class. Credits: http://onpersistence.blogspot.com.br/2010/07/eclipselink-jpa-native-constructor.html package oramag.sample.dashboard.model.util; /******************************************************************************* * Copyright (c) 2010 Oracle. 
All rights reserved. * This program and the accompanying materials are made available under the * terms of the Eclipse Public License v1.0 and Eclipse Distribution License v. 1.0 * which accompanies this distribution. * The Eclipse Public License is available at http://www.eclipse.org/legal/epl-v10.html * and the Eclipse Distribution License is available at * http://www.eclipse.org/org/documents/edl-v10.php. * * @author shsmith ******************************************************************************/ import java.lang.reflect.Constructor; import java.lang.reflect.InvocationTargetException; import java.util.ArrayList; import java.util.List; import javax.persistence.Query; import org.eclipse.persistence.exceptions.ConversionException; import org.eclipse.persistence.internal.helper.ConversionManager; import org.eclipse.persistence.internal.sessions.AbstractRecord; import org.eclipse.persistence.internal.sessions.AbstractSession; import org.eclipse.persistence.jpa.JpaHelper; import org.eclipse.persistence.queries.DatabaseQuery; import org.eclipse.persistence.queries.QueryRedirector; import org.eclipse.persistence.sessions.Record; import org.eclipse.persistence.sessions.Session; /*** * This class is a simple query redirector that intercepts the result of a * native query and builds an instance of the specified JavaBean class from each * result row. The order of the selected columns musts match the JavaBean class * constructor arguments order. * * To configure a JavaBeanResult on a native SQL query use: * JavaBeanResult.setQueryResultClass(query, SomeBeanClass.class); * where query is either a JPA SQL Query or native EclipseLink DatabaseQuery. * * @author shsmith * */ public final class JavaBeanResult implements QueryRedirector { private static final long serialVersionUID = 3025874987115503731L; protected Class resultClass; public static void setQueryResultClass(Query query, Class resultClass) { JavaBeanResult javaBeanResult = new JavaBeanResult(resultClass); DatabaseQuery databaseQuery = JpaHelper.getDatabaseQuery(query); databaseQuery.setRedirector(javaBeanResult); } public static void setQueryResultClass(DatabaseQuery query, Class resultClass) { JavaBeanResult javaBeanResult = new JavaBeanResult(resultClass); query.setRedirector(javaBeanResult); } protected JavaBeanResult(Class resultClass) { this.resultClass = resultClass; } @SuppressWarnings("unchecked") public Object invokeQuery(DatabaseQuery query, Record arguments, Session session) { List results = new ArrayList(); try { Constructor[] constructors = resultClass.getDeclaredConstructors(); Constructor javaBeanClassConstructor = null; // (Constructor) resultClass.getDeclaredConstructors()[0]; Class[] constructorParameterTypes = null; // javaBeanClassConstructor.getParameterTypes(); List rows = (List) query.execute( (AbstractSession) session, (AbstractRecord) arguments); for (Object[] columns : rows) { boolean found = false; for (Constructor constructor : constructors) { javaBeanClassConstructor = constructor; constructorParameterTypes = javaBeanClassConstructor.getParameterTypes(); if (columns.length == constructorParameterTypes.length) { found = true; break; } // if (columns.length != constructorParameterTypes.length) { // throw new ColumnParameterNumberMismatchException( // resultClass); // } } if (!found) throw new ColumnParameterNumberMismatchException( resultClass); Object[] constructorArgs = new Object[constructorParameterTypes.length]; for (int j = 0; j < columns.length; j++) { Object columnValue = columns[j]; Class 
parameterType = constructorParameterTypes[j]; // convert the column value to the correct type--if possible constructorArgs[j] = ConversionManager.getDefaultManager() .convertObject(columnValue, parameterType); } results.add(javaBeanClassConstructor.newInstance(constructorArgs)); } } catch (ConversionException e) { throw new ColumnParameterMismatchException(e); } catch (IllegalArgumentException e) { throw new ColumnParameterMismatchException(e); } catch (InstantiationException e) { throw new ColumnParameterMismatchException(e); } catch (IllegalAccessException e) { throw new ColumnParameterMismatchException(e); } catch (InvocationTargetException e) { throw new ColumnParameterMismatchException(e); } return results; } public final class ColumnParameterMismatchException extends RuntimeException { private static final long serialVersionUID = 4752000720859502868L; public ColumnParameterMismatchException(Throwable t) { super( "Exception while processing query results-ensure column order matches constructor parameter order", t); } } public final class ColumnParameterNumberMismatchException extends RuntimeException { private static final long serialVersionUID = 1776794744797667755L; public ColumnParameterNumberMismatchException(Class clazz) { super( "Number of selected columns does not match number of constructor arguments for: " + clazz.getName()); } } } 8. Create the DataControl and a jsf or jspx page 9. Drag allDepartmentsHavingEmployees from DataControl and drop in your page 10. Choose Graph > Type: Bar (Normal) > any layout 11. In the wizard screen, Bars label, adds: sum, avg, max, min. In the X Axis label, adds: departmentName, and click in OK button 12. Run the page, the result is showed below: You can download the workspace here . It was using the latest jdeveloper version 11.1.2.2.
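If you want to sanity-check the native-query-to-POJO trick outside the container, a small Java SE client can exercise the same pattern. This is only a sketch under assumptions: the persistence unit name "HRModel" and the client class name are made up (adjust them to match the persistence.xml in your workspace), and it presumes the EclipseLink provider used above, since JavaBeanResult relies on EclipseLink internals.

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import javax.persistence.Query;

import oramag.sample.dashboard.model.AggregatedDepartment;
import oramag.sample.dashboard.model.util.JavaBeanResult;

public class DashboardQuerySmokeTest {
    public static void main(String[] args) {
        // "HRModel" is an assumed persistence unit name; change it to match your project
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("HRModel");
        EntityManager em = emf.createEntityManager();
        try {
            // Same pattern as the session bean: look up the named native query,
            // then ask EclipseLink to build AggregatedDepartment rows from it.
            Query query = em.createNamedQuery("Departments.allDepartmentsHavingEmployees");
            JavaBeanResult.setQueryResultClass(query, AggregatedDepartment.class);

            @SuppressWarnings("unchecked")
            List<AggregatedDepartment> rows = query.getResultList();
            for (AggregatedDepartment d : rows) {
                System.out.println(d.getDepartmentName() + ": sum=" + d.getSum()
                        + ", avg=" + d.getAvg() + ", max=" + d.getMax() + ", min=" + d.getMin());
            }
        } finally {
            em.close();
            emf.close();
        }
    }
}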

    Read the article

  • SPARC T3-1 Record Results Running JD Edwards EnterpriseOne Day in the Life Benchmark with Added Batch Component

    - by Brian
    Using Oracle's SPARC T3-1 server for the application tier and Oracle's SPARC Enterprise M3000 server for the database tier, a world record result was produced running the Oracle's JD Edwards EnterpriseOne applications Day in the Life benchmark run concurrently with a batch workload. The SPARC T3-1 server based result has 25% better performance than the IBM Power 750 POWER7 server even though the IBM result did not include running a batch component. The SPARC T3-1 server based result has 25% better space/performance than the IBM Power 750 POWER7 server as measured by the online component. The SPARC T3-1 server based result is 5x faster than the x86-based IBM x3650 M2 server system when executing the online component of the JD Edwards EnterpriseOne 9.0.1 Day in the Life benchmark. The IBM result did not include a batch component. The SPARC T3-1 server based result has 2.5x better space/performance than the x86-based IBM x3650 M2 server as measured by the online component. The combination of SPARC T3-1 and SPARC Enterprise M3000 servers delivered a Day in the Life benchmark result of 5000 online users with 0.875 seconds of average transaction response time running concurrently with 19 Universal Batch Engine (UBE) processes at 10 UBEs/minute. The solution exercises various JD Edwards EnterpriseOne applications while running Oracle WebLogic Server 11g Release 1 and Oracle Web Tier Utilities 11g HTTP server in Oracle Solaris Containers, together with the Oracle Database 11g Release 2. The SPARC T3-1 server showed that it could handle the additional workload of batch processing while maintaining the same number of online users for the JD Edwards EnterpriseOne Day in the Life benchmark. This was accomplished with minimal loss in response time. JD Edwards EnterpriseOne 9.0.1 takes advantage of the large number of compute threads available in the SPARC T3-1 server at the application tier and achieves excellent response times. The SPARC T3-1 server consolidates the application/web tier of the JD Edwards EnterpriseOne 9.0.1 application using Oracle Solaris Containers. Containers provide flexibility, easier maintenance and better CPU utilization of the server leaving processing capacity for additional growth. A number of Oracle advanced technology and features were used to obtain this result: Oracle Solaris 10, Oracle Solaris Containers, Oracle Java Hotspot Server VM, Oracle WebLogic Server 11g Release 1, Oracle Web Tier Utilities 11g, Oracle Database 11g Release 2, the SPARC T3 and SPARC64 VII+ based servers. This is the first published result running both online and batch workload concurrently on the JD Enterprise Application server. No published results are available from IBM running the online component together with a batch workload. The 9.0.1 version of the benchmark saw some minor performance improvements relative to 9.0. When comparing between 9.0.1 and 9.0 results, the reader should take this into account when the difference between results is small. Performance Landscape JD Edwards EnterpriseOne Day in the Life Benchmark Online with Batch Workload This is the first publication on the Day in the Life benchmark run concurrently with batch jobs. The batch workload was provided by Oracle's Universal Batch Engine. 
System | Rack Units | Online Users | Resp Time (sec) | Batch Concur (# of UBEs) | Batch Rate (UBEs/m) | Version
SPARC T3-1, 1 x SPARC T3 (1.65 GHz), Solaris 10; M3000, 1 x SPARC64 VII+ (2.86 GHz), Solaris 10 | 4 | 5000 | 0.88 | 19 | 10 | 9.0.1

Resp Time (sec) — Response time of online jobs reported in seconds
Batch Concur (# of UBEs) — Batch concurrency presented in the number of UBEs
Batch Rate (UBEs/m) — Batch transaction rate in UBEs/minute

JD Edwards EnterpriseOne Day in the Life Benchmark, Online Workload Only
These results are for the Day in the Life benchmark. They are run without any batch workload.

System | Rack Units | Online Users | Response Time (sec) | Version
SPARC T3-1, 1 x SPARC T3 (1.65 GHz), Solaris 10; M3000, 1 x SPARC64 VII (2.75 GHz), Solaris 10 | 4 | 5000 | 0.52 | 9.0.1
IBM Power 750, 1 x POWER7 (3.55 GHz), IBM i7.1 | 4 | 4000 | 0.61 | 9.0
IBM x3650 M2, 2 x Intel X5570 (2.93 GHz), OVM | 2 | 1000 | 0.29 | 9.0
IBM result from http://www-03.ibm.com/systems/i/advantages/oracle/; IBM used WebSphere.

Configuration Summary
Hardware Configuration:
1 x SPARC T3-1 server
  1 x 1.65 GHz SPARC T3
  128 GB memory
  16 x 300 GB 10000 RPM SAS
  1 x Sun Flash Accelerator F20 PCIe Card, 92 GB
  1 x 10 GbE NIC
1 x SPARC Enterprise M3000 server
  1 x 2.86 GHz SPARC64 VII+
  64 GB memory
  1 x 10 GbE NIC
2 x StorageTek 2540 + 2501
Software Configuration:
JD Edwards EnterpriseOne 9.0.1 with Tools 8.98.3.3
Oracle Database 11g Release 2
Oracle WebLogic Server 11g Release 1 version 10.3.2
Oracle Web Tier Utilities 11g
Oracle Solaris 10 9/10
Mercury LoadRunner 9.10 with Oracle Day in the Life kit for JD Edwards EnterpriseOne 9.0.1
Oracle's Universal Batch Engine - Short UBEs and Long UBEs

Benchmark Description
JD Edwards EnterpriseOne is an integrated applications suite of Enterprise Resource Planning (ERP) software. Oracle offers 70 JD Edwards EnterpriseOne application modules to support a diverse set of business operations. Oracle's Day in the Life (DIL) kit is a suite of scripts that exercises most common transactions of JD Edwards EnterpriseOne applications, including business processes such as payroll, sales order, purchase order, work order, and other manufacturing processes, such as ship confirmation. These are labeled by industry acronyms such as SCM, CRM, HCM, SRM and FMS. The kit's scripts execute transactions typical of a mid-sized manufacturing company. The workload consists of online transactions and the UBE workload of 15 short and 4 long UBEs. LoadRunner runs the DIL workload, collects the user's transaction response times and reports the key metric of Combined Weighted Average Transaction Response time. The UBE processes workload runs from the JD Enterprise Application server. Oracle's UBE processes come as three flavors:
Short UBEs (< 1 minute) engage in Business Report and Summary Analysis,
Mid UBEs (> 1 minute) create a large report of Account, Balance, and Full Address,
Long UBEs (> 2 minutes) simulate Payroll, Sales Order, night-only jobs.
The UBE workload generates large numbers of PDF file reports and log files. The UBE Queues are categorized as the QBATCHD, a single-threaded queue for large UBEs, and the QPROCESS queue for short UBEs run concurrently. One of the Oracle Solaris Containers ran 4 Long UBEs, while another Container ran 15 short UBEs concurrently. The mixed-size UBEs ran concurrently from the SPARC T3-1 server with the 5000 online users driven by the LoadRunner. Oracle's UBE process performance metric is Number of Maximum Concurrent UBE processes at transaction rate, UBEs/minute.
Key Points and Best Practices
Two JD Edwards EnterpriseOne Application Servers and two Oracle Fusion Middleware WebLogic Servers 11g R1, coupled with two Oracle Fusion Middleware 11g Web Tier HTTP Server instances on the SPARC T3-1 server, were hosted in four separate Oracle Solaris Containers to demonstrate consolidation of multiple application and web servers.
See Also
SPARC T3-1 (oracle.com)
SPARC Enterprise M3000 (oracle.com)
Oracle Solaris (oracle.com)
JD Edwards EnterpriseOne (oracle.com)
Oracle Database 11g Release 2 Enterprise Edition (oracle.com)
Disclosure Statement
Copyright 2011, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 6/27/2011.

    Read the article

< Previous Page | 83 84 85 86 87 88 89 90 91 92 93 94  | Next Page >