Search Results

Search found 23949 results on 958 pages for 'test me'.

  • fopen() fails to open stream: permission denied, yet permissions should be valid

    - by about blank
    So, I have this error: Warning: fopen(/path/to/test-in.txt) [function.fopen]: failed to open stream: Permission denied Performing ls -l in the directory where test-in.txt is produces the following output: -rw-r--r-- 1 $USER $USER 1921 Sep 6 20:09 test-in.txt -rw-r--r-- 1 $USER $USER 0 Sep 6 20:08 test-out.txt In order to get past this, I decided to perform the following: chgrp -R www-data /path/to/php/webroot And then did: chmod g+rw /path/to/php/webroot Yet, I still get this error when I run my php5 script to open the file. Why is this happening? I've tried this using LAMP as well as Cherokee through CGI, so the web server itself doesn't seem to be the culprit. Is there a solution of some sort? Edit: I'll also add that I'm just developing via localhost right now. Update - PHP fopen() line: $fullpath = $this->fileRoot . $this->fileInData['fileName']; $file_ptr = fopen( $fullpath, 'r+' ); I should also mention I'd like to stick with Cherokee if possible. What's the deal with setting file permissions for Apache/Cherokee?
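
    A common first debugging step for this kind of failure (a sketch, not necessarily the asker's eventual fix; the path is a placeholder and the POSIX extension is assumed to be available) is to ask what the PHP process itself can see, since the web server typically runs as www-data rather than as $USER, and chmod g+rw on the webroot alone is not recursive, so it never touches test-in.txt itself:

      <?php
      $path = '/path/to/test-in.txt';   // placeholder path

      // Which user is PHP actually running as? Often www-data or nobody.
      $who = posix_getpwuid(posix_geteuid());
      echo 'running as: ' . $who['name'] . "\n";

      // 'r+' needs read and write on the file itself, and every directory
      // above it needs execute (traverse) permission for that user.
      echo 'readable: ' . var_export(is_readable($path), true) . "\n";
      echo 'writable: ' . var_export(is_writable($path), true) . "\n";

      $fp = fopen($path, 'r+');
      if ($fp === false) {
          echo "still failing - check group ownership of the file and directory execute bits\n";
      }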

    Read the article

  • static member specialization of templated child class and templated base class

    - by b3nj1
    I'm trying to have a templated class (here C) that inherits from another templated class (here A) and perform static member specialization (of int var here), but I can't get the right syntax to do so (if it's possible): #include <iostream> template<typename derived> class A { public: static int var; }; //This one works fine class B :public A<B> { public: B() { std::cout << var << std::endl; } }; template<> int A<B>::var = 9; //This one doesn't work template<typename type> class C :public A<C<type> > { public: C() { std::cout << var << std::endl; } }; //template<> template<typename type> int A<C<type> >::a = 10; int main() { B b; C<int> c; return 0; } I put an example that works with a non-templated class (here B) and I can get the static member specialization of var, but for C that just doesn't work. Here is what gcc tells me: test.cpp: In constructor ‘C<type>::C()’: test.cpp:29:26: error: ‘var’ was not declared in this scope test.cpp: At global scope: test.cpp:34:18: error: template definition of non-template ‘int A<C<type> >::a’ I'm using gcc version 4.6.3. Thanks for any help.
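
    Two changes are usually needed here (a sketch of one possible fix, not necessarily the accepted answer): inside C, var sits in a dependent base, so it must be reached as this->var (or A<C<type> >::var), and the out-of-class definition has to be written for the primary template, since a partial specialization of just the static member is not valid - per-instantiation full specializations like the one for B remain possible on top of it:

      #include <iostream>

      template<typename derived>
      class A {
      public:
          static int var;
      };

      // One definition on the primary template covers every instantiation.
      template<typename derived>
      int A<derived>::var = 10;

      template<typename type>
      class C : public A<C<type> > {
      public:
          // A<C<type> > is a dependent base, so qualify the lookup.
          C() { std::cout << this->var << std::endl; }
      };

      // Full specializations for concrete instantiations are still allowed:
      // template<> int A<C<int> >::var = 42;

      int main() {
          C<int> c;   // prints 10
          return 0;
      }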

    Read the article

  • Problem with routes in functional testing

    - by Wishmaster
    Hi, I'm making a simple test project to prepare myself for my test. I'm fairly new to nested resources; in my example I have a newsitem and each newsitem has comments. The routing looks like this: resources :comments resources :newsitems do resources :comments end I'm setting up the functional tests for comments at the moment and I ran into some problems. This will get the index of the comments of a newsitem. @newsitem is declared in the setup, of course. test "should get index" do get :index,:newsitem_id => @newsitem assert_response :success assert_not_nil assigns(:newsitem) end But the problem lies here, in the "should get new". test "should get new" do get new_newsitem_comment_path(@newsitem) assert_response :success end I'm getting the following error. ActionController::RoutingError: No route matches {:controller=>"comments", :action=>"/newsitems/1/comments/new"} But when I look into the routes table, I see this: new_newsitem_comment GET /newsitems/:newsitem_id/comments/new(.:format) {:action=>"new", :controller=>"comments"} Can't I use the named path helper, or what am I doing wrong here? Thanks in advance.
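
    One hedged reading of the error: in a Rails functional (controller) test, get expects an action name plus params, not a URL from a named route helper - the helper's return value is being treated as the action, which is exactly the string shown in the RoutingError. A sketch of the corrected test:

      # Functional tests exercise a single controller, so pass the action
      # symbol and the nested resource id instead of a path helper.
      test "should get new" do
        get :new, :newsitem_id => @newsitem
        assert_response :success
      end

    Path helpers such as new_newsitem_comment_path(@newsitem) are the right tool in integration tests, where full URLs are actually routed.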

    Read the article

  • How best to organize project folders for unit tests in .NET?

    - by Dan Bailiff
    So I'm trying to introduce unit testing to my group. I've successfully upgraded a VS'05 web site project to a VS'08 web application, and now have a solution with the web app project and a unit test project. The issue now is how to fit this back into the source repository such that we don't break the build system and the unit test projects are persisted as well. Right now we have something like this: c:\root c:\root\projectA c:\root\projectB c:\root\projectC where projectA contains the sln file and all other related files/folders for the project. Now I have this new solution that looks like this: c:\root\projectA (parent folder) c:\root\projectA\projectA (the production code project) c:\root\projectA\projectA_Test (the unit test project) c:\root\projectA\TestResults c:\root\projecta\projectA.sln How do I integrate this new structure back into the code repository? I'd really prefer to keep the production code folder where it was in the source repository for the sake of the build, but is this necessary? If I keep the production code project in its usual place then where do I keep my unit test projects and how do I connect them with a sln file? Is it better to use this new structure and adjust the build process? I'd love to hear how other people are dealing with this issue of upgrading legacy projects to unit testing.

    Read the article

  • Unit-testing a directive with isolated scope and bidirectional value

    - by unludo
    I want to unit test a directive which looks like this: angular.module('myApp', []) .directive('myTest', function () { return { restrict: 'E', scope: { message: '='}, replace: true, template: '<div ng-if="message"><p>{{message}}</p></div>', link: function (scope, element, attrs) { } }; }); Here is my failing test: describe('myTest directive:', function () { var scope, compile, validHTML; validHTML = '<my-test message="message"></my-test>'; beforeEach(module('myApp')); beforeEach(inject(function($compile, $rootScope){ scope = $rootScope.$new(); compile = $compile; })); function create() { var elem, compiledElem; elem = angular.element(validHTML); compiledElem = compile(elem)(scope); scope.$digest(); return compiledElem; } it('should have a scope on root element', function () { scope.message = 'not empty'; var el = create(); console.log(el.text()); expect(el.text()).toBeDefined(); expect(el.text()).not.toBe(''); }); }); Can you spot why it's failing? The corresponding jsFiddle Thanks :)
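
    One plausible explanation (an assumption about ng-if's behaviour, not a confirmed diagnosis): with replace: true and ng-if on the template root, the compiled element becomes a comment placeholder and the rendered <div> is inserted as a sibling, so text() on the returned element stays empty. A common workaround is to compile the directive inside a wrapper element and assert against that wrapper, roughly:

      function create() {
        // Hypothetical wrapper so the ng-if clone has a parent to land in.
        var wrapper = angular.element('<div>' + validHTML + '</div>');
        compile(wrapper)(scope);
        scope.$digest();
        return wrapper;
      }

      it('renders the message when it is set', function () {
        scope.message = 'not empty';
        var el = create();
        expect(el.text()).toContain('not empty');
      });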

    Read the article

  • Create an Oracle function that returns a table

    - by Craig
    I'm trying to create a function in a package that returns a table. I hope to call the function once in the package, but be able to re-use its data multiple times. While I know I could create temp tables in Oracle, I was hoping to keep things DRY. So far, this is what I have: Header: CREATE OR REPLACE PACKAGE TEST AS TYPE MEASURE_RECORD IS RECORD ( L4_ID VARCHAR2(50), L6_ID VARCHAR2(50), L8_ID VARCHAR2(50), YEAR NUMBER, PERIOD NUMBER, VALUE NUMBER ); TYPE MEASURE_TABLE IS TABLE OF MEASURE_RECORD; FUNCTION GET_UPS( TIMESPAN_IN IN VARCHAR2 DEFAULT 'MONTLHY', STARTING_DATE_IN DATE, ENDING_DATE_IN DATE ) RETURN MEASURE_TABLE; END TEST; Body: CREATE OR REPLACE PACKAGE BODY TEST AS FUNCTION GET_UPS ( TIMESPAN_IN IN VARCHAR2 DEFAULT 'MONTLHY', STARTING_DATE_IN DATE, ENDING_DATE_IN DATE ) RETURN MEASURE_TABLE IS T MEASURE_TABLE; BEGIN SELECT ... INTO T FROM ... ; RETURN T; END GET_UPS; END TEST; The header compiles, the body does not. One error message is 'not enough values', which probably means that I should be selecting into the MEASURE_RECORD, rather than the MEASURE_TABLE. What am I missing?
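
    The 'not enough values' error is consistent with selecting scalar columns straight into a collection: a plain SELECT ... INTO fills a single record, while filling a nested table needs BULK COLLECT. A sketch of the body under that assumption (the real query stays elided as in the original, and the select list is assumed to match MEASURE_RECORD field for field):

      FUNCTION GET_UPS (
          TIMESPAN_IN      IN VARCHAR2 DEFAULT 'MONTLHY',
          STARTING_DATE_IN DATE,
          ENDING_DATE_IN   DATE
      ) RETURN MEASURE_TABLE
      IS
          T MEASURE_TABLE;
      BEGIN
          SELECT L4_ID, L6_ID, L8_ID, YEAR, PERIOD, VALUE  -- one column per MEASURE_RECORD field
          BULK COLLECT INTO T
          FROM ... ;                                       -- original query elided
          RETURN T;
      END GET_UPS;

    If the result also has to be queried from plain SQL via TABLE(...), the types generally need to be schema-level object/nested-table types (or the function made PIPELINED) rather than package-level records.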

    Read the article

  • How can I execute a Java program within a php script?

    - by user450775
    I am writing a simple web upload script. The goal is to upload a file using PHP, and then call a Java program to process this file. I have done the work for uploading the file, but I cannot get a Java program to run successfully from within the PHP script. I have tried exec(), shell_exec(), and system() with no results. For the command, I have used "java Test", "java <directory>/Test", and "/usr/bin/java <directory>/Test"; I have even set up the application as a jar file, with no results. The actual line of code I have used is: echo shell_exec("java Test"); Usually there is no output. However, if I have just shell_exec("java"), then the last line of the help from java ("show splash screen with specified image") is displayed, which shows that the command has been executed. If I use, for example, shell_exec("whoami") I get "nobody" returned, which is correct. The only thing the Java file does is create a file so that I can see that the application has been successfully run (the application runs successfully if I run it on the command line). I have set the permissions for the Java file to 777 to rule out any possibility of permission errors. I have been struggling with this for a while trying all sorts of options with no results - the file is never created (it is written with an absolute path, so it isn't simply being created somewhere I can't find it). Does anyone have any ideas? Thanks.
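
    A useful next step when debugging this (a sketch; the paths are placeholders) is to call java by its absolute path with an explicit classpath, and to capture stderr - shell_exec() only returns stdout, and JVM errors such as NoClassDefFoundError or a permission problem writing the output file go to stderr:

      <?php
      // Placeholder locations - adjust to the real JVM and class directory.
      $cmd = '/usr/bin/java -cp /var/www/classes Test 2>&1';
      $output = shell_exec($cmd);

      // With 2>&1 appended, whatever the JVM printed to stderr is visible here.
      var_dump($output);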

    Read the article

  • Event triggering inside prototype

    - by shivesh
    When I try to call the "Test" function I get an error. How can I fix that? (No jQuery!) Browser: Firefox. Error: TypeError: this.Test is not a function <head> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"> <title>Untitled Document</title> <script type="text/javascript"> MyClass = function(){ } MyClass.prototype = { Init: function(){ var txt = document.getElementById("text"); if (txt.addEventListener) { txt.addEventListener("keyup", this.Foo, true) } }, Foo: function(){ this.Test(); }, Test: function(){ alert('OK'); } } window.onload = function(){ obj = new MyClass; obj.Init(); } </script> </head> <body> <textarea id="text" rows="10"> </textarea> </div> </body>
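
    The usual explanation for this error: when Foo runs as an event handler, this is the element that fired the event (the textarea), not the MyClass instance, so this.Test is undefined. A minimal sketch of one fix, capturing the instance in a closure (or using bind where available):

      MyClass.prototype = {
          Init: function () {
              var self = this;                          // keep the MyClass instance
              var txt = document.getElementById("text");
              if (txt.addEventListener) {
                  txt.addEventListener("keyup", function () {
                      self.Foo();                       // 'this' inside Foo is now the instance
                  }, true);
                  // Alternative where supported: txt.addEventListener("keyup", this.Foo.bind(this), true);
              }
          },
          Foo: function () { this.Test(); },
          Test: function () { alert('OK'); }
      };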

    Read the article

  • jQuery class selector and click()

    - by Anonymous
    I'm currently trying to get an alert to happen when something is clicked. I can get it working in jsFiddle, but not in production code: jsFiddle example that works (jQuery 1.5 loaded) HTML (in case jsFiddle is inaccessible): <!DOCTYPE HTML><html><head><title>Test</title></head> <body> <h1>25 Feb 2011</h1><h3>ABC</h3><ul> <li class="todoitem">Test&mdash;5 minutes</li> </ul> </body></html> JavaScript: $(".todoitem").click(function() { alert('Item selected'); }); Non-working production example: <!DOCTYPE HTML><html><head><title>Test</title> <script src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.5.1.min.js" type="text/javascript"></script> <script type="text/javascript"> $(".todoitem").click(function() { alert('Item selected'); }); </script> </head> <body> <h1>25 Feb 2011</h1><h3>ABC</h3><ul><li class="todoitem">Test&mdash;5 minutes</li></ul> </body> </html> Safari's Inspector indicates that jQuery is being loaded correctly, so that's not the issue. As far as I can tell, these two pieces of code are essentially identical, but the latter isn't working. Can anyone see what I've done wrong?
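
    The most likely difference is timing rather than the markup (hedged, since it depends on the fiddle's settings - jsFiddle wraps code in an onload handler by default): in the production page the script runs in <head> before the <li> exists, so the selector matches nothing. Deferring the binding until the DOM is ready is the usual fix:

      $(document).ready(function () {
        // Runs after the document is parsed, so .todoitem exists when we bind.
        $(".todoitem").click(function () {
          alert('Item selected');
        });
      });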

    Read the article

  • jQuery.getJSON: how to avoid requesting the json-file on every refresh? (caching)

    - by Mr. Bombastic
    in this example you can see a generated HTML-list. On every refresh the script requests the data-file (ajax/test.json) and builds the list again. The generated file "ajax/test.json" is cached statically. But how can I avoid requesting this file on every refresh? // source: jquery.com $.getJSON('ajax/test.json', function(data) { var items = []; $.each(data, function(key, val) { items.push('<li id="' + key + '">' + val + '</li>'); }); $('<ul/>', { 'class': 'my-new-list', html: items. }).appendTo('body'); }); This doesn't work: list_data = $.cookie("list_data"); if (list_data == undefined || list_data == "") { $.getJSON('ajax/test.json', function(data) { list_data = data; }); } var items = []; $.each(data, function(key, val) { items.push('<li id="' + key + '">' + val + '</li>'); }); $('<ul/>', { 'class': 'my-new-list', html: items. }).appendTo('body'); Thanks in advance!
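
    One reason the cookie attempt cannot work as written is that $.getJSON is asynchronous: the list-building code runs before (and outside) the callback that receives data. A hedged sketch of one approach - build the list in a helper and cache the parsed JSON, with localStorage used here purely as an example store:

      function renderList(data) {
        var items = [];
        $.each(data, function (key, val) {
          items.push('<li id="' + key + '">' + val + '</li>');
        });
        $('<ul/>', { 'class': 'my-new-list', html: items.join('') }).appendTo('body');
      }

      var cached = localStorage.getItem('list_data');
      if (cached) {
        renderList(JSON.parse(cached));        // no request at all on a refresh
      } else {
        $.getJSON('ajax/test.json', function (data) {
          localStorage.setItem('list_data', JSON.stringify(data));
          renderList(data);                    // only runs once the response has arrived
        });
      }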

    Read the article

  • How can I set paperclip's storage mechanism based on the current Rails environment?

    - by John Reilly
    I have a rails application that has multiple models with paperclip attachments that are all uploaded to S3. This app also has a large test suite that is run quite often. The downside with this is that a ton of files are uploaded to our S3 account on every test run, making the test suite run slowly. It also slows down development a bit, and requires you to have an internet connection in order to work on the code. Is there a reasonable way to set the paperclip storage mechanism based on the Rails environment? Ideally, our test and development environments would use the local filesystem storage, and the production environment would use S3 storage. I'd also like to extract this logic into a shared module of some kind, since we have several models that will need this behavior. I'd like to avoid a solution like this inside of every model: ### We don't want to do this in our models... if Rails.env.production? has_attached_file :image, :styles => {...}, :storage => :s3, # ...etc... else has_attached_file :image, :styles => {...}, :storage => :filesystem, # ...etc... end Any advice or suggestions would be greatly appreciated! :-)
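
    One way to keep the per-environment switch out of every model (a sketch, with the initializer file name and S3 credentials path as placeholders): compute the storage options once and merge them into each has_attached_file call.

      # config/initializers/paperclip_defaults.rb (hypothetical location)
      PAPERCLIP_STORAGE_OPTIONS =
        if Rails.env.production?
          { :storage => :s3, :s3_credentials => "#{Rails.root}/config/s3.yml" }
        else
          { :storage => :filesystem }
        end

      # In any model with an attachment:
      class User < ActiveRecord::Base
        has_attached_file :image,
                          { :styles => { :thumb => '100x100>' } }.merge(PAPERCLIP_STORAGE_OPTIONS)
      end

    The same hash could equally live in a small shared module that each model includes, if a common attachment macro is preferred.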

    Read the article

  • Strange behavior when overloading methods in Java

    - by Sep
    I came across this weird (in my opinion) behavior today. Take this simple Test class: public class Test { public static void main(String[] args) { Test t = new Test(); t.run(); } private void run() { List<Object> list = new ArrayList<Object>(); list.add(new Object()); list.add(new Object()); method(list); } public void method(Object o) { System.out.println("Object"); } public void method(List<Object> o) { System.out.println("List of Objects"); } } It behaves the way you expect, printing "List of Objects". But if you change the following three lines: List<String> list = new ArrayList<String>(); list.add(""); list.add(""); you will get "Object" instead. I tried this a few other ways and got the same result. Is this a bug or is it a normal behavior? And if it is normal, can someone explain why? Thanks.
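
    This is normal behaviour rather than a bug: overload resolution happens at compile time, and a List<String> is not a subtype of List<Object>, so the only applicable overload for it is method(Object). A sketch of one way to make the list overload apply to any element type, using a wildcard (names as in the question):

      import java.util.ArrayList;
      import java.util.List;

      public class Test {
          public void method(Object o) {
              System.out.println("Object");
          }

          // List<?> accepts List<String>, List<Object>, and so on.
          public void method(List<?> o) {
              System.out.println("List of Objects");
          }

          public static void main(String[] args) {
              List<String> list = new ArrayList<String>();
              list.add("");
              new Test().method(list);   // prints "List of Objects"
          }
      }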

    Read the article

  • Dimensions of a collection, and how to traverse it in an efficient, elegant manner

    - by Bruce Ferguson
    I'm trying to find an elegant way to deal with multi-dimensional collections in Scala. My understanding is that I can have up to a 5 dimensional collection using tabulate, such as in the case of the following 2-Dimensional array: val test = Array.tabulate[Double](row,col)(_+_) and that I can access the elements of the array using for(i<-0 until row) { for(j<-0 until col) { test(i)(j) = 0.0 } } If I don't know a priori what I'm going to be handling, what might be a succinct way of determining the structure of the collection, and spanning it, without doing something like: case(Array(x)) => for(i<-1 until dim1) { test(i) = 0.0 } case(Array(x,y)) => for(i<-1 until dim1) { for(j<-1 until dim2) { test(i)(j) = 0.0 } } case(Array(x,y,z)) => ... The dimensional values n1, n2, n3, etc... are private, right? Also, would one use the same trick of unwrapping a 2-D array into a 1-D vector when dealing with n-Dimensional objects if I want a single case to handle the traversal? Thanks in advance Bruce

    Read the article

  • Unit Tests Architecture Question

    - by Tom Tresansky
    So I've started to layout unit tests for the following bit of code: public interface MyInterface { void MyInterfaceMethod1(); void MyInterfaceMethod2(); } public class MyImplementation1 implements MyInterface { void MyInterfaceMethod1() { // do something } void MyInterfaceMethod2() { // do something else } void SubRoutineP() { // other functionality specific to this implementation } } public class MyImplementation2 implements MyInterface { void MyInterfaceMethod1() { // do a 3rd thing } void MyInterfaceMethod2() { // do something completely different } void SubRoutineQ() { // other functionality specific to this implementation } } with several implementations and the expectation of more to come. My initial thought was to save myself time re-writing unit tests with something like this: public abstract class MyInterfaceTester { protected MyInterface m_object; @Setup public void setUp() { m_object = getTestedImplementation(); } public abstract MyInterface getTestedImplementation(); @Test public void testMyInterfaceMethod1() { // use m_object to run tests } @Test public void testMyInterfaceMethod2() { // use m_object to run tests } } which I could then subclass easily to test the implementation specific additional methods like so: public class MyImplementation1Tester extends MyInterfaceTester { public MyInterface getTestedImplementation() { return new MyImplementation1(); } @Test public void testSubRoutineP() { // use m_object to run tests } } and likewise for implmentation 2 onwards. So my question really is: is there any reason not to do this? JUnit seems to like it just fine, and it serves my needs, but I haven't really seen anything like it in any of the unit testing books and examples I've been reading. Is there some best practice I'm unwittingly violating? Am I setting myself up for heartache down the road? Is there simply a much better way out there I haven't considered? Thanks for any help.

    Read the article

  • Issue pushing object into an array JS

    - by Javacadabra
    I'm having an issue placing an object into my array in javascript. This is the code: $('.confirmBtn').click(function(){ //Get reference to the Value in the Text area var comment = $("#comments").val(); //Create Object var orderComment = { 'comment' : comment }; //Add Object to the Array productArray.push(orderComment); //update cookie $.cookie('order_cookie', JSON.stringify(productArray), { expires: 1, path: '/' }); }); However when I print the array this is the output: Array ( [0] => Array ( [stockCode] => CBL202659/A [quantity] => 8 ) [1] => Array ( [stockCode] => CBL201764 [quantity] => 6 ) [2] => TEST TEST ) I would like it to look like this: Array ( [0] => Array ( [stockCode] => CBL202659/A [quantity] => 8 ) [1] => Array ( [stockCode] => CBL201764 [quantity] => 6 ) [2] Array( [comment] => TEST TEST ) I added products to the array in a similar way and it worked fine: var productArray = []; // Will hold order Items $(".orderBtn").click(function(event){ //Check to ensure quantity > 0 if(quantity == 0){ console.log("Quantity must be greater than 0") }else{//It is so continue //Show the order Box $(".order-alert").show(); event.preventDefault(); //Get reference to the product clicked var stockCode = $(this).closest('li').find('.stock_code').html(); //Get reference to the quantity selected var quantity = $(this).closest('li').find('.order_amount').val(); //Order Item (contains stockCode and Quantity) - Can add whatever data I like here var orderItem = { 'stockCode' : stockCode, 'quantity' : quantity }; //Check if cookie exists if($.cookie('order_cookie') === undefined){ console.log("Creating new cookie"); //Add object to the Array productArray.push(orderItem);

    Read the article

  • Why does false invalidate validates_presence_of?

    - by DJTripleThreat
    Ok steps to reproduce this: prompt> rails test_app prompt> cd test_app prompt> script/generate model event_service published:boolean then go into the migration and add not null and default published to false: class CreateEventServices < ActiveRecord::Migration def self.up create_table :event_services do |t| t.boolean :published, :null => false, :default => false t.timestamps end end def self.down drop_table :event_services end end now migrate your changes and run your tests: prompt>rake db:migrate prompt>rake You should get no errors at this time. Now edit the model so that you validate_presence_of published: class EventService < ActiveRecord::Base validates_presence_of :published end Now edit the unit test event_service_test.rb: require 'test_helper' class EventServiceTest < ActiveSupport::TestCase test "the truth" do e = EventServer.new e.published = false assert e.valid? end end and run rake: prompt>rake You will get an error in the test. Now set e.published to true and rerun the test. IT WORKS! I think this probably has something to do with the field being boolean but I can't figure it out. Is this a bug in rails? or am I doing something wrong?
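
    This is documented Rails behaviour rather than a bug: validates_presence_of relies on blank?, and false.blank? is true, so a false boolean always fails the check. The commonly recommended alternative for boolean columns is an inclusion validation, sketched here:

      class EventService < ActiveRecord::Base
        # Accepts both true and false, but still rejects nil.
        validates_inclusion_of :published, :in => [true, false]
      end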

    Read the article

  • Rails 4 testing bug?

    - by Jamato
    Situation: if we add two identical line items into a cart, we update the line item quantity instead of adding a duplicate. In the browser everything works fine, but in unit testing something fails because of an empty loop in the code, which I wanted to use to update all prices. Why? Is that a unit test engine bug? LineItem.all and cart.line_items during testing produce two DIFFERENT structures. #<LineItem id: 980190964, product_id: 1, cart_id: 999, created_at: "2014-06-01 00:21:28", updated_at: "2014-06-01 00:21:28", quantity: 2, price: #<BigDecimal:ba0fb544,'0.4E1',9(27)>> #<LineItem id: 980190964, product_id: 1, cart_id: 999, created_at: "2014-06-01 00:21:28", updated_at: "2014-06-01 00:21:28", quantity: 1, price: #<BigDecimal:ba0d1b04,'0.4E1',9(27)>> The cart.line_items copy did not update the quantity. Code itself (produces a LineItem which is then saved in line_item_controller, which calls this method): class Cart < ActiveRecord::Base has_many :line_items, dependent: :destroy def add_product(product_id) # LOOK THIS CYCLE BREAKS UNIT TEST, SRSLY, I MEAN IT line_items.each do |item| end current_item = line_items.find_by(product_id: product_id) fresh_price = Product.find_by(id: product_id).price if current_item current_item.quantity += 1 else current_item = line_items.build(product_id: product_id, price: fresh_price) end return current_item end ... Unit test code test "non-unique item added" do cart = Cart.new(:id => 999) line_item0 = cart.add_product(2) line_item0.save line_item1 = cart.add_product(1) line_item1.save assert_equal 2, cart.line_items.size #success line_item2 = cart.add_product(1) line_item2.save assert_equal 2, cart.line_items.size, "what?" assert cart.total_price > 15 #fail, prices are not enough, quantity of product1 = 1 #we get total price from quantity, it's a simple method in model end And once again: IT DOES WORK in the browser as it should. Even with the loop. I feel so dumb right now...
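
    A plausible explanation (hedged, since only part of the app is shown): iterating cart.line_items loads and caches that association, so after add_product bumps the quantity on a freshly fetched record, the cached collection still holds the old value, while LineItem.all queries the database again. Forcing a reload before asserting is one way to check this, roughly:

      line_item2 = cart.add_product(1)
      line_item2.save

      cart.line_items.reload                 # drop the cached copy of the association
      assert_equal 2, cart.line_items.size
      assert cart.total_price > 15           # should now see quantity 2 for product 1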

    Read the article

  • Java PropertyChangeListener

    - by Laphroaig
    Hi, I'm trying to figure out how to listen for a property change on another class. This is my code. Class with the property to listen to: public class ClassWithProperty { private PropertyChangeSupport changes = new PropertyChangeSupport(this); private int usersOnline; public int getUsersOnline() { return usersOnline; } public ClassWithProperty() { usersOnline = 0; while (usersOnline<10) { changes.firePropertyChange("usersOnline", usersOnline, usersOnline++); } } public void addPropertyChangeListener( PropertyChangeListener l) { changes.addPropertyChangeListener(l); } public void removePropertyChangeListener( PropertyChangeListener l) { changes.removePropertyChangeListener(l); } } Class where I need to know when the property changes: public class Main { private static ClassWithProperty test; public static void main(String[] args) { test = new ClassWithProperty(); test.addPropertyChangeListener(listen()); } private static PropertyChangeListener listen() { System.out.println(test.getUsersOnline()); return null; } } I have the event fired only the last time (usersOnline=10). Sorry if this is a stupid question; I'm just learning Java and can't find a solution.
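
    Two things stand out, sketched below as one possible correction (the simulateUsers method is hypothetical, standing in for the counting loop moved out of the constructor): the events are fired inside the constructor before any listener exists, and listen() returns null, so nothing is ever registered. Separately, firePropertyChange("usersOnline", usersOnline, usersOnline++) passes equal old and new values because of the post-increment, and PropertyChangeSupport does not fire events whose old and new values are equal.

      import java.beans.PropertyChangeEvent;
      import java.beans.PropertyChangeListener;

      public class Main {
          public static void main(String[] args) {
              ClassWithProperty test = new ClassWithProperty();

              // Register a real listener before any events are fired.
              test.addPropertyChangeListener(new PropertyChangeListener() {
                  @Override
                  public void propertyChange(PropertyChangeEvent evt) {
                      System.out.println("usersOnline is now " + evt.getNewValue());
                  }
              });

              test.simulateUsers();   // hypothetical method holding the while-loop
          }
      }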

    Read the article

  • Drupal 7 - I can't pass post data in module function

    - by user2603290
    I can't pass POST data in my custom module. Filenames: mymodule.info, mymodule.module .info name = My Module description = My custom module. package = DEV version = 1.0 core = 7.x .module <?php function mymodule_menu() { $items = array(); $items['getcountries'] = array( 'title' => 'Get Countries', 'page callback' => 'getcountries', 'access arguments' => array('access content'), 'type' => MENU_CALLBACK, ); $items['getstates'] = array( 'title' => 'Get States', 'page callback' => 'getstates', 'access arguments' => array('access content'), 'type' => MENU_CALLBACK, ); return $items; } function getcountries() { $result = db_query("select distinct(country) from region"); $jsonarray = Array(); foreach ($result as $record) { $jsonarray[] = array( 'item' => $record->country, 'value' => $record->country ); } $json = json_encode($jsonarray); echo $json; } function getstates() { echo $_POST["test"]; } Ajax call $(document).ready(function(){ $.ajax({ url: '/getstates', type: 'POST', data: '{"test":"1"}', success : function () { alert('ok'); }, error : function (jqXHR, textStatus, errorThrown) { alert('error'); } }); }); The first item "getcountries" is working fine; however, the second one is not. I can browse to http://mysite.com/getstates OK, but when I call this function using AJAX it is not passing the value of "test" (which is "1") to $_POST["test"]. I am new to Drupal, so I am positive that I am missing something here. I think I need a new set of eyes.
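
    A likely cause (an inference from how jQuery and PHP handle request bodies, not anything Drupal-specific): data is given as a raw JSON string, so the request body is not form-encoded and PHP never populates $_POST['test']. Passing a plain object lets jQuery form-encode it; a sketch of the client side:

      $(document).ready(function () {
        $.ajax({
          url: '/getstates',
          type: 'POST',
          data: { test: '1' },          // object => form-encoded => arrives in $_POST['test']
          success: function (response) {
            alert('ok: ' + response);
          },
          error: function (jqXHR, textStatus, errorThrown) {
            alert('error: ' + textStatus);
          }
        });
      });

    If the payload really has to stay JSON, the callback can instead read the raw request body server-side, e.g. drupal_json_decode(file_get_contents('php://input')) in Drupal 7.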

    Read the article

  • C#: System.Collections.Concurrent.ConcurrentQueue vs. Queue

    - by James Michael Hare
    I love new toys, so of course when .NET 4.0 came out I felt like the proverbial kid in the candy store!  Now, some people get all excited about the IDE and it’s new features or about changes to WPF and Silver Light and yes, those are all very fine and grand.  But me, I get all excited about things that tend to affect my life on the backside of development.  That’s why when I heard there were going to be concurrent container implementations in the latest version of .NET I was salivating like Pavlov’s dog at the dinner bell. They seem so simple, really, that one could easily overlook them.  Essentially they are implementations of containers (many that mirror the generic collections, others are new) that have either been optimized with very efficient, limited, or no locking but are still completely thread safe -- and I just had to see what kind of an improvement that would translate into. Since part of my job as a solutions architect here where I work is to help design, develop, and maintain the systems that process tons of requests each second, the thought of extremely efficient thread-safe containers was extremely appealing.  Of course, they also rolled out a whole parallel development framework which I won’t get into in this post but will cover bits and pieces of as time goes by. This time, I was mainly curious as to how well these new concurrent containers would perform compared to areas in our code where we manually synchronize them using lock or some other mechanism.  So I set about to run a processing test with a series of producers and consumers that would be either processing a traditional System.Collections.Generic.Queue or a System.Collection.Concurrent.ConcurrentQueue. Now, I wanted to keep the code as common as possible to make sure that the only variance was the container, so I created a test Producer and a test Consumer.  The test Producer takes an Action<string> delegate which is responsible for taking a string and placing it on whichever queue we’re testing in a thread-safe manner: 1: internal class Producer 2: { 3: public int Iterations { get; set; } 4: public Action<string> ProduceDelegate { get; set; } 5: 6: public void Produce() 7: { 8: for (int i = 0; i < Iterations; i++) 9: { 10: ProduceDelegate(“Hello”); 11: } 12: } 13: } Then likewise, I created a consumer that took a Func<string> that would read from whichever queue we’re testing and return either the string if data exists or null if not.  Then, if the item doesn’t exist, it will do a 10 ms wait before testing again.  Once all the producers are done and join the main thread, a flag will be set in each of the consumers to tell them once the queue is empty they can shut down since no other data is coming: 1: internal class Consumer 2: { 3: public Func<string> ConsumeDelegate { get; set; } 4: public bool HaltWhenEmpty { get; set; } 5: 6: public void Consume() 7: { 8: bool processing = true; 9: 10: while (processing) 11: { 12: string result = ConsumeDelegate(); 13: 14: if(result == null) 15: { 16: if (HaltWhenEmpty) 17: { 18: processing = false; 19: } 20: else 21: { 22: Thread.Sleep(TimeSpan.FromMilliseconds(10)); 23: } 24: } 25: else 26: { 27: DoWork(); // do something non-trivial so consumers lag behind a bit 28: } 29: } 30: } 31: } Okay, now that we’ve done that, we can launch threads of varying numbers using lambdas for each different method of production/consumption.  
First let's look at the lambdas for a typical System.Collections.Generics.Queue with locking: 1: // lambda for putting to typical Queue with locking... 2: var productionDelegate = s => 3: { 4: lock (_mutex) 5: { 6: _mutexQueue.Enqueue(s); 7: } 8: }; 9:  10: // and lambda for typical getting from Queue with locking... 11: var consumptionDelegate = () => 12: { 13: lock (_mutex) 14: { 15: if (_mutexQueue.Count > 0) 16: { 17: return _mutexQueue.Dequeue(); 18: } 19: } 20: return null; 21: }; Nothing new or interesting here.  Just typical locks on an internal object instance.  Now let's look at using a ConcurrentQueue from the System.Collections.Concurrent library: 1: // lambda for putting to a ConcurrentQueue, notice it needs no locking! 2: var productionDelegate = s => 3: { 4: _concurrentQueue.Enqueue(s); 5: }; 6:  7: // lambda for getting from a ConcurrentQueue, once again, no locking required. 8: var consumptionDelegate = () => 9: { 10: string s; 11: return _concurrentQueue.TryDequeue(out s) ? s : null; 12: }; So I pass each of these lambdas and the number of producer and consumers threads to launch and take a look at the timing results.  Basically I’m timing from the time all threads start and begin producing/consuming to the time that all threads rejoin.  I won't bore you with the test code, basically it just launches code that creates the producers and consumers and launches them in their own threads, then waits for them all to rejoin.  The following are the timings from the start of all threads to the Join() on all threads completing.  The producers create 10,000,000 items evenly between themselves and then when all producers are done they trigger the consumers to stop once the queue is empty. These are the results in milliseconds from the ordinary Queue with locking: 1: Consumers Producers 1 2 3 Time (ms) 2: ---------- ---------- ------ ------ ------ --------- 3: 1 1 4284 5153 4226 4554.33 4: 10 10 4044 3831 5010 4295.00 5: 100 100 5497 5378 5612 5495.67 6: 1000 1000 24234 25409 27160 25601.00 And the following are the results in milliseconds from the ConcurrentQueue with no locking necessary: 1: Consumers Producers 1 2 3 Time (ms) 2: ---------- ---------- ------ ------ ------ --------- 3: 1 1 3647 3643 3718 3669.33 4: 10 10 2311 2136 2142 2196.33 5: 100 100 2480 2416 2190 2362.00 6: 1000 1000 7289 6897 7061 7082.33 Note that even though obviously 2000 threads is quite extreme, the concurrent queue actually scales really well, whereas the traditional queue with simple locking scales much more poorly. I love the new concurrent collections, they look so much simpler without littering your code with the locking logic, and they perform much better.  All in all, a great new toy to add to your arsenal of multi-threaded processing!

    Read the article

  • Cost Comparison Hard Disk Drive to Solid State Drive on Price per Gigabyte - dispelling a myth!

    - by tonyrogerson
    It is often said that Hard Disk Drive storage is significantly cheaper per GiByte than Solid State Devices – this is wholly inaccurate within the database space. People need to look at the cost of the complete solution and not just a single component part in isolation to what is really required to meet the business requirement. Buying a single Hitachi Ultrastar 600GB 3.5” SAS 15Krpm hard disk drive will cost approximately £239.60 (http://scan.co.uk, 22nd March 2012) compared to an OCZ 600GB Z-Drive R4 CM84 PCIe costing £2,316.54 (http://scan.co.uk, 22nd March 2012); I’ve not included FusionIO ioDrive because there is no public pricing available for it – something I never understand and personally when companies do this I immediately think what are they hiding, luckily in FusionIO’s case the product is proven though is expensive compared to OCZ enterprise offerings. On the face of it the single 15Krpm hard disk has a price per GB of £0.39, the SSD £3.86; this is what you will see in the press and this is what sales people will use in comparing the two technologies – do not be fooled by this bullshit people! What is the requirement? The requirement is the database will have a static size of 400GB kept static through archiving so growth and trim will balance the database size, the client requires resilience, there will be several hundred call centre staff querying the database where queries will read a small amount of data but there will be no hot spot in the data so the randomness will come across the entire 400GB of the database, estimates predict that the IOps required will be approximately 4,000IOps at peak times, because it’s a call centre system the IO latency is important and must remain below 5ms per IO. The balance between read and write is 70% read, 30% write. The requirement is now defined and we have three of the most important pieces of the puzzle – space required, estimated IOps and maximum latency per IO. Something to consider with regard SQL Server; write activity requires synchronous IO to the storage media specifically the transaction log; that means the write thread will wait until the IO is completed and hardened off until the thread can continue execution, the requirement has stated that 30% of the system activity will be write so we can expect a high amount of synchronous activity. The hardware solution needs to be defined; two possible solutions: hard disk or solid state based; the real question now is how many hard disks are required to achieve the IO throughput, the latency and resilience, ditto for the solid state. Hard Drive solution On a test on an HP DL380, P410i controller using IOMeter against a single 15Krpm 146GB SAS drive, the throughput given on a transfer size of 8KiB against a 40GiB file on a freshly formatted disk where the partition is the only partition on the disk thus the 40GiB file is on the outer edge of the drive so more sectors can be read before head movement is required: For 100% sequential IO at a queue depth of 16 with 8 worker threads 43,537 IOps at an average latency of 2.93ms (340 MiB/s), for 100% random IO at the same queue depth and worker threads 3,733 IOps at an average latency of 34.06ms (34 MiB/s). The same test was done on the same disk but the test file was 130GiB: For 100% sequential IO at a queue depth of 16 with 8 worker threads 43,537 IOps at an average latency of 2.93ms (340 MiB/s), for 100% random IO at the same queue depth and worker threads 528 IOps at an average latency of 217.49ms (4 MiB/s). 
From the result it is clear random performance gets worse as the disk fills up – I’m currently writing an article on short stroking which will cover this in detail. Given the work load is random in nature looking at the random performance of the single drive when only 40 GiB of the 146 GB is used gives near the IOps required but the latency is way out. Luckily I have tested 6 x 15Krpm 146GB SAS 15Krpm drives in a RAID 0 using the same test methodology, for the same test above on a 130 GiB for each drive added the performance boost is near linear, for each drive added throughput goes up by 5 MiB/sec, IOps by 700 IOps and latency reducing nearly 50% per drive added (172 ms, 94 ms, 65 ms, 47 ms, 37 ms, 30 ms). This is because the same 130GiB is spread out more as you add drives 130 / 1, 130 / 2, 130 / 3 etc. so implicit short stroking is occurring because there is less file on each drive so less head movement required. The best latency is still 30 ms but we have the IOps required now, but that’s on a 130GiB file and not the 400GiB we need. Some reality check here: a) the drive randomness is more likely to be 50/50 and not a full 100% but the above has highlighted the effect randomness has on the drive and the more a drive fills with data the worse the effect. For argument sake let us assume that for the given workload we need 8 disks to do the job, for resilience reasons we will need 16 because we need to RAID 1+0 them in order to get the throughput and the resilience, RAID 5 would degrade performance. Cost for hard drives: 16 x £239.60 = £3,833.60 For the hard drives we will need disk controllers and a separate external disk array because the likelihood is that the server itself won’t take the drives, a quick spec off DELL for a PowerVault MD1220 which gives the dual pathing with 16 disks 146GB 15Krpm 2.5” disks is priced at £7,438.00, note its probably more once we had two controller cards to sit in the server in, racking etc. Minimum cost taking the DELL quote as an example is therefore: {Cost of Hardware} / {Storage Required} £7,438.60 / 400 = £18.595 per GB £18.59 per GiB is a far cry from the £0.39 we had been told by the salesman and the myth. Yes, the storage array is composed of 16 x 146 disks in RAID 10 (therefore 8 usable) giving an effective usable storage availability of 1168GB but the actual storage requirement is only 400 and the extra disks have had to be purchased to get the  IOps up. Solid State Drive solution A single card significantly exceeds the IOps and latency required, for resilience two will be required. ( £2,316.54 * 2 ) / 400 = £11.58 per GB With the SSD solution only two PCIe sockets are required, no external disk units, no additional controllers, no redundant controllers etc. Conclusion I hope by showing you an example that the myth that hard disk drives are cheaper per GiB than Solid State has now been dispelled - £11.58 per GB for SSD compared to £18.59 for Hard Disk. I’ve not even touched on the running costs, compare the costs of running 18 hard disks, that’s a lot of heat and power compared to two PCIe cards!Just a quick note: I've left a fair amount of information out due to this being a blog! If in doubt, email me :)I'll also deal with the myth that SSD's wear out at a later date as well - that's just way over done still, yes, 5 years ago, but now - no.

    Read the article

  • DBCC CHECKDB on VVLDB and latches (Or: My Pain is Your Gain)

    - by Argenis
      Does your CHECKDB hurt, Argenis? There is a classic blog series by Paul Randal [blog|twitter] called “CHECKDB From Every Angle” which is pretty much mandatory reading for anybody who’s even remotely considering going for the MCM certification, or its replacement (the Microsoft Certified Solutions Master: Data Platform – makes my fingers hurt just from typing it). Of particular interest is the post “Consistency Options for a VLDB” – on it, Paul provides solid, timeless advice (I use the word “timeless” because it was written in 2007, and it all applies today!) on how to perform checks on very large databases. Well, here I was trying to figure out how to make CHECKDB run faster on a restored copy of one of our databases, which happens to exceed 7TB in size. The whole thing was taking several days on multiple systems, regardless of the storage used – SAS, SATA or even SSD…and I actually didn’t pay much attention to how long it was taking, or even bothered to look at the reasons why - as long as it was finishing okay and found no consistency errors. Yes – I know. That was a huge mistake, as corruption found in a database several days after taking place could only allow for further spread of the corruption – and potentially large data loss. In the last two weeks I increased my attention towards this problem, as we noticed that CHECKDB was taking EVEN LONGER on brand new all-flash storage in the SAN! I couldn’t really explain it, and were almost ready to blame the storage vendor. The vendor told us that they could initially see the server driving decent I/O – around 450Mb/sec, and then it would settle at a very slow rate of 10Mb/sec or so. “Hum”, I thought – “CHECKDB is just not pushing the I/O subsystem hard enough”. Perfmon confirmed the vendor’s observations. Dreaded @BlobEater What was CHECKDB doing all the time while doing so little I/O? Eating Blobs. It turns out that CHECKDB was taking an extremely long time on one of our frankentables, which happens to be have 35 billion rows (yup, with a b) and sucks up several terabytes of space in the database. We do have a project ongoing to purge/split/partition this table, so it’s just a matter of time before we deal with it. But the reality today is that CHECKDB is coming to a screeching halt in performance when dealing with this particular table. Checking sys.dm_os_waiting_tasks and sys.dm_os_latch_stats showed that LATCH_EX (DBCC_OBJECT_METADATA) was by far the top wait type. I remembered hearing recently about that wait from another post that Paul Randal made, but that was related to computed-column indexes, and in fact, Paul himself reminded me of his article via twitter. But alas, our pathologic table had no non-clustered indexes on computed columns. I knew that latches are used by the database engine to do internal synchronization – but how could I help speed this up? After all, this is stuff that doesn’t have a lot of knobs to tweak. (There’s a fantastic level 500 talk by Bob Ward from Microsoft CSS [blog|twitter] called “Inside SQL Server Latches” given at PASS 2010 – and you can check it out here. DISCLAIMER: I assume no responsibility for any brain melting that might ensue from watching Bob’s talk!) 
Failed Hypotheses Earlier on this week I flew down to Palo Alto, CA, to visit our Headquarters – and after having a great time with my Monkey peers, I was relaxing on the plane back to Seattle watching a great talk by SQL Server MVP and fellow MCM Maciej Pilecki [twitter] called “Masterclass: A Day in the Life of a Database Transaction” where he discusses many different topics related to transaction management inside SQL Server. Very good stuff, and when I got home it was a little late – that slow DBCC CHECKDB that I had been dealing with was way in the back of my head. As I was looking at the problem at hand earlier on this week, I thought “How about I set the database to read-only?” I remembered one of the things Maciej had (jokingly) said in his talk: “if you don’t want locking and blocking, set the database to read-only” (or something to that effect, pardon my loose memory). I immediately killed the CHECKDB which had been running painfully for days, and set the database to read-only mode. Then I ran DBCC CHECKDB against it. It started going really fast (even a bit faster than before), and then throttled down again to around 10Mb/sec. All sorts of expletives went through my head at the time. Sure enough, the same latching scenario was present. Oh well. I even spent some time trying to figure out if NUMA was hurting performance. Folks on Twitter made suggestions in this regard (thanks, Lonny! [twitter]) …Eureka? This past Friday I was still scratching my head about the whole thing; I was ready to start profiling with XPERF to see if I could figure out which part of the engine was to blame and then get Microsoft to look at the evidence. After getting a bunch of good news I’ll blog about separately, I sat down for a figurative smack down with CHECKDB before the weekend. And then the light bulb went on. A sparse column. I thought that I couldn’t possibly be experiencing the same scenario that Paul blogged about back in March showing extreme latching with non-clustered indexes on computed columns. Did I even have a non-clustered index on my sparse column? As it turns out, I did. I had one filtered non-clustered index – with the sparse column as the index key (and only column). To prove that this was the problem, I went and setup a test. Yup, that'll do it The repro is very simple for this issue: I tested it on the latest public builds of SQL Server 2008 R2 SP2 (CU6) and SQL Server 2012 SP1 (CU4). First, create a test database and a test table, which only needs to contain a sparse column: CREATE DATABASE SparseColTest; GO USE SparseColTest; GO CREATE TABLE testTable (testCol smalldatetime SPARSE NULL); GO INSERT INTO testTable (testCol) VALUES (NULL); GO 1000000 That’s 1 million rows, and even though you’re inserting NULLs, that’s going to take a while. In my laptop, it took 3 minutes and 31 seconds. Next, we run DBCC CHECKDB against the database: DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS; This runs extremely fast, as least on my test rig – 198 milliseconds. Now let’s create a filtered non-clustered index on the sparse column: CREATE NONCLUSTERED INDEX [badBadIndex] ON testTable (testCol) WHERE testCol IS NOT NULL; With the index in place now, let’s run DBCC CHECKDB one more time: DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS; In my test system this statement completed in 11433 milliseconds. 11.43 full seconds. Quite the jump from 198 milliseconds. 
I went ahead and dropped the filtered non-clustered indexes on the restored copy of our production database, and ran CHECKDB against that. We went down from 7+ days to 19 hours and 20 minutes. Cue the “Argenis is not impressed” meme, please, Mr. LaRock. My pain is your gain, folks. Go check to see if you have any of such indexes – they’re likely causing your consistency checks to run very, very slow. Happy CHECKDBing, -Argenis ps: I plan to file a Connect item for this issue – I consider it a pretty serious bug in the engine. After all, filtered indexes were invented BECAUSE of the sparse column feature – and it makes a lot of sense to use them together. Watch this space and my twitter timeline for a link.

    Read the article

  • Creating New Scripts Dynamically in Lua

    - by bazola
    Right now this is just a crazy idea that I had, but I was able to implement the code and get it working properly. I am not entirely sure of what the use cases would be just yet. What this code does is create a new Lua script file in the project directory. The ScriptWriter takes as arguments the file name, a table containing any arguments that the script should take when created, and a table containing any instance variables to create by default. My plan is to extend this code to create new functions based on inputs sent in during its creation as well. What makes this cool is that the new file is both generated and loaded dynamically on the fly. Theoretically you could get this code to generate and load any script imaginable. One use case I can think of is an AI that creates scripts to map out it's functions, and creates new scripts for new situations or environments. At this point, this is all theoretical, though. Here is the test code that is creating the new script and then immediately loading it and calling functions from it: function Card:doScriptWriterThing() local scriptName = "ScriptIAmMaking" local scripter = scriptWriter:new(scriptName, {"argumentName"}, {name = "'test'", one = 1}) scripter:makeFileForLoadedSettings() local loadedScript = require (scriptName) local scriptInstance = loadedScript:new("sayThis") print(scriptInstance:get_name()) --will print test print(scriptInstance:get_one()) -- will print 1 scriptInstance:set_one(10000) print(scriptInstance:get_one()) -- will print 10000 print(scriptInstance:get_argumentName()) -- will print sayThis scriptInstance:set_argumentName("saySomethingElse") print(scriptInstance:get_argumentName()) --will print saySomethingElse end Here is ScriptWriter.lua local ScriptWriter = {} local twoSpaceIndent = " " local equalsWithSpaces = " = " local newLine = "\n" --scriptNameToCreate must be a string --argumentsForNew and instanceVariablesToCreate must be tables and not nil function ScriptWriter:new(scriptNameToCreate, argumentsForNew, instanceVariablesToCreate) local instance = setmetatable({}, { __index = self }) instance.name = scriptNameToCreate instance.newArguments = argumentsForNew instance.instanceVariables = instanceVariablesToCreate instance.stringList = {} return instance end function ScriptWriter:makeFileForLoadedSettings() self:buildInstanceMetatable() self:buildInstanceCreationMethod() self:buildSettersAndGetters() self:buildReturn() self:writeStringsToFile() end --very first line of any script that will have instances function ScriptWriter:buildInstanceMetatable() table.insert(self.stringList, "local " .. self.name .. " = {}" .. newLine) table.insert(self.stringList, newLine) end --every script made this way needs a new method to create its instances function ScriptWriter:buildInstanceCreationMethod() --new() function declaration table.insert(self.stringList, ("function " .. self.name .. ":new(")) self:buildNewArguments() table.insert(self.stringList, ")" .. newLine) --first line inside :new() function table.insert(self.stringList, twoSpaceIndent .. "local instance = setmetatable({}, { __index = self })" .. newLine) --add designated arguments inside :new() self:buildNewArgumentVariables() --create the instance variables with the loaded values for key,value in pairs(self.instanceVariables) do table.insert(self.stringList, twoSpaceIndent .. "instance." .. key .. equalsWithSpaces .. value .. newLine) end --close the :new() function table.insert(self.stringList, twoSpaceIndent .. "return instance" .. 
newLine) table.insert(self.stringList, "end" .. newLine) table.insert(self.stringList, newLine) end function ScriptWriter:buildNewArguments() --if there are arguments for :new(), add them for key,value in ipairs(self.newArguments) do table.insert(self.stringList, value) table.insert(self.stringList, ", ") end if next(self.newArguments) ~= nil then --makes sure the table is not empty first table.remove(self.stringList) --remove the very last element, which will be the extra ", " end end function ScriptWriter:buildNewArgumentVariables() --add the designated arguments to :new() for key, value in ipairs(self.newArguments) do table.insert(self.stringList, twoSpaceIndent .. "instance." .. value .. equalsWithSpaces .. value .. newLine) end end --the instance variables need separate code because their names have to be the key and not the argument name function ScriptWriter:buildSettersAndGetters() for key,value in ipairs(self.newArguments) do self:buildArgumentSetter(value) self:buildArgumentGetter(value) table.insert(self.stringList, newLine) end for key,value in pairs(self.instanceVariables) do self:buildInstanceVariableSetter(key, value) self:buildInstanceVariableGetter(key, value) table.insert(self.stringList, newLine) end end --code for arguments passed in function ScriptWriter:buildArgumentSetter(variable) table.insert(self.stringList, "function " .. self.name .. ":set_" .. variable .. "(newValue)" .. newLine) table.insert(self.stringList, twoSpaceIndent .. "self." .. variable .. equalsWithSpaces .. "newValue" .. newLine) table.insert(self.stringList, "end" .. newLine) end function ScriptWriter:buildArgumentGetter(variable) table.insert(self.stringList, "function " .. self.name .. ":get_" .. variable .. "()" .. newLine) table.insert(self.stringList, twoSpaceIndent .. "return " .. "self." .. variable .. newLine) table.insert(self.stringList, "end" .. newLine) end --code for instance variable values passed in function ScriptWriter:buildInstanceVariableSetter(key, variable) table.insert(self.stringList, "function " .. self.name .. ":set_" .. key .. "(newValue)" .. newLine) table.insert(self.stringList, twoSpaceIndent .. "self." .. key .. equalsWithSpaces .. "newValue" .. newLine) table.insert(self.stringList, "end" .. newLine) end function ScriptWriter:buildInstanceVariableGetter(key, variable) table.insert(self.stringList, "function " .. self.name .. ":get_" .. key .. "()" .. newLine) table.insert(self.stringList, twoSpaceIndent .. "return " .. "self." .. key .. newLine) table.insert(self.stringList, "end" .. newLine) end --last line of any script that will have instances function ScriptWriter:buildReturn() table.insert(self.stringList, "return " .. self.name) end function ScriptWriter:writeStringsToFile() local fileName = (self.name .. 
".lua") file = io.open(fileName, 'w') for key,value in ipairs(self.stringList) do file:write(value) end file:close() end return ScriptWriter And here is what the code provided will generate: local ScriptIAmMaking = {} function ScriptIAmMaking:new(argumentName) local instance = setmetatable({}, { __index = self }) instance.argumentName = argumentName instance.name = 'test' instance.one = 1 return instance end function ScriptIAmMaking:set_argumentName(newValue) self.argumentName = newValue end function ScriptIAmMaking:get_argumentName() return self.argumentName end function ScriptIAmMaking:set_name(newValue) self.name = newValue end function ScriptIAmMaking:get_name() return self.name end function ScriptIAmMaking:set_one(newValue) self.one = newValue end function ScriptIAmMaking:get_one() return self.one end return ScriptIAmMaking All of this is generated with these calls: local scripter = scriptWriter:new(scriptName, {"argumentName"}, {name = "'test'", one = 1}) scripter:makeFileForLoadedSettings() I am not sure if I am correct that this could be useful in certain situations. What I am looking for is feedback on the readability of the code, and following Lua best practices. I would also love to hear whether this approach is a valid one, and whether the way that I have done things will be extensible.

    Read the article

  • Securing WebSocket applications on Glassfish

    - by Pavel Bucek
    Today we are going to cover deploying secured WebSocket applications on Glassfish and access to these services using WebSocket Client API. WebSocket server application setup Our server endpoint might look as simple as this: @ServerEndpoint("/echo") public class EchoEndpoint { @OnMessage   public String echo(String message) {     return message + " (from your server)";   } } Everything else must be configured on container level. We can start with enabling SSL, which will require web.xml to be added to your project. For starters, it might look as following: <web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee">   <security-constraint>     <web-resource-collection>       <web-resource-name>Protected resource</web-resource-name>       <url-pattern>/*</url-pattern>       <http-method>GET</http-method>     </web-resource-collection>     <!-- https -->     <user-data-constraint>       <transport-guarantee>CONFIDENTIAL</transport-guarantee>     </user-data-constraint>   </security-constraint> </web-app> This is minimal web.xml for this task - web-resource-collection just defines URL pattern and HTTP method(s) we want to put a constraint on and user-data-constraint defines that constraint, which is in our case transport-guarantee. More information about these properties and security settings for web application can be found in Oracle Java EE 7 Tutorial. I have some simple webpage attached as well, so I can test my endpoint right away. You can find it (along with complete project) in Tyrus workspace: [webpage] [whole project]. After deploying this application to Glassfish Application Server, you should be able to hit it using your favorite browser. URL where my application resides is https://localhost:8181/sample-echo-https/ (may be different, depends on other configuration). My browser warns me about untrusted certificate (I use what freshly built Glassfish provides - self signed certificates) and after adding an exception for this site, I can see my webpage and I am able to securely connect to wss://localhost:8181/sample-echo-https/echo. WebSocket client Already mentioned demo application also contains test client, but execution of this is skipped for normal build. Reason for this is that Glassfish uses these self-signed "random" untrusted certificates and you are (in most cases) not able to connect to these services without any additional settings. Creating test WebSocket client is actually quite similar to server side, only difference is that you have to somewhere create client container and invoke connect with some additional info. Java API for WebSocket allows you to use annotated and programmatic way to construct endpoints. Server side shows the annotated case, so let's see how the programmatic approach will look. final WebSocketContainer client = ContainerProvider.getWebSocketContainer(); client.connectToServer(new Endpoint() {   @Override   public void onOpen(Session session, EndpointConfig EndpointConfig) {     try {       // register message handler - will just print out the       // received message on standard output.       
session.addMessageHandler(new MessageHandler.Whole<String>() {       @Override         public void onMessage(String message) {          System.out.println("### Received: " + message);         }       });       // send a message       session.getBasicRemote().sendText("Do or do not, there is no try.");     } catch (IOException e) {       // do nothing     }   } }, ClientEndpointConfig.Builder.create().build(),    URI.create("wss://localhost:8181/sample-echo-https/echo")); This client should work with some secured endpoint with valid certificated signed by some trusted certificate authority (you can try that with wss://echo.websocket.org). Accessing our Glassfish instance will require some additional settings. You can tell Java which certificated you trust by adding -Djavax.net.ssl.trustStore property (and few others in case you are using linked sample). Complete command line when you are testing your service might need to look somewhat like: mvn clean test -Djavax.net.ssl.trustStore=$AS_MAIN/domains/domain1/config/cacerts.jks\ -Djavax.net.ssl.trustStorePassword=changeit -Dtyrus.test.host=localhost\ -DskipTests=false Where AS_MAIN points to your Glassfish instance. Note: you might need to setup keyStore and trustStore per client instead of per JVM; there is a way how to do it, but it is Tyrus proprietary feature: http://tyrus.java.net/documentation/1.2.1/user-guide.html#d0e1128. And that's it! Now nobody is able to "hear" what you are sending to or receiving from your WebSocket endpoint. There is always room for improvement, so the next step you might want to take is introduce some authentication mechanism (like HTTP Basic or Digest). This topic is more about container configuration so I'm not going to go into details, but there is one thing worth mentioning: to access services which require authorization, you might need to put this additional information to HTTP headers of first (Upgrade) request (there is not (yet) any direct support even for these fundamental mechanisms, user need to register Configurator and add headers in beforeRequest method invocation). I filed related feature request as TYRUS-228; feel free to comment/vote if you need this functionality.

    Read the article

  • Mscorlib mocking minus the attribute

    - by mehfuzh
    Mocking .NET framework members (a.k.a. mscorlib) is always a daunting task. It's a breed of static and final methods, full of surprises. Technically, intercepting mscorlib members is completely different from other class libraries, which is the reason it is dealt with differently. Generally, I prefer writing a wrapper around an mscorlib member (e.g. File.Delete("abc.txt")) and exposing it via an interface, but that is not always an easy task if you already have a years-old codebase. While mocking mscorlib members, the first thing that comes to people's mind is DateTime.Now. If you Google around, you will find tons of examples dealing with just that. Maybe it's the most important class that we can't ignore, and I will create an example using JustMock Q2 with the same.

    In Q2 2012, we got rid of the MockClassAttribute for mocking mscorlib members. JustMock is already attribute-free for mocking class libraries. We strongly believe that vendor-specific attributes only make your code smelly, and therefore decided the same for mscorlib.

    Now, I want to fake DateTime.Now for the following class:

        public class NestedDateTime
        {
            public DateTime GetDateTime()
            {
                return DateTime.Now;
            }
        }

    It is the simplest one there can be. The first thing here is that I tell JustMock "hey, we have a DateTime.Now in the NestedDateTime class that we want to mock". To do so, during the test initialization I write this:

        Mock.Replace(() => DateTime.Now).In<NestedDateTime>(x => x.GetDateTime());

    I can also define it for all the members in the class, but that's just a waste of extra watts:

        Mock.Replace(() => DateTime.Now).In<NestedDateTime>();

    Now the question: why should I bother doing it? The answer is that I am not using an attribute, and with this approach I can mock any framework member, not just File, FileInfo or DateTime. Note that we could already mock beyond those three, but when nested in a complex class, JustMock was not intercepting them correctly. Therefore, we decided to get rid of the attribute altogether, fixing the issue.

    Finally, I write my test as usual.
        [TestMethod]
        public void ShouldAssertMockingDateTimeFromNestedClass()
        {
            var expected = new DateTime(2000, 1, 1);
            Mock.Arrange(() => DateTime.Now).Returns(expected);

            Assert.Equal(new NestedDateTime().GetDateTime(), expected);
        }

    That's it, we are good. Now let me do the same for a random one; let's say I want to mock a member from DriveInfo:

        Mock.Replace<DriveInfo[]>(() => DriveInfo.GetDrives()).In<MsCorlibFixture>(x => x.ShouldReturnExpectedDriveWhenMocked());

    Moving forward, I write my test:

        [TestMethod]
        public void ShouldReturnExpectedDriveWhenMocked()
        {
            Mock.Arrange(() => DriveInfo.GetDrives()).MustBeCalled();

            DriveInfo.GetDrives();

            Mock.Assert(() => DriveInfo.GetDrives());
        }

    Here is one convention: you have to replace the mscorlib member before executing the target method that contains it. Here the call to DriveInfo is within the MsCorlibFixture, therefore it should be defined during test initialization or before executing the test method.

    Hope this gives you the idea.

    Read the article

< Previous Page | 185 186 187 188 189 190 191 192 193 194 195 196  | Next Page >