Search Results

Search found 8543 results on 342 pages for 'documentation'.

  • Zend_Navigation failing to load

    - by Grant Collins
    Hi, following on from my earlier question, I am still having issues with loading the XML file into Zend_Navigation. I am now getting the following error message:

        Fatal error: Uncaught exception 'Zend_Navigation_Exception' with message 'Invalid argument: Unable to determine class to instantiate' in C:\www\mysite\development\website\library\Zend\Navigation\Page.php:223

    I've tried to make my navigation.xml file look similar to the example in the Zend documentation; however, I just can't seem to get it to work. My XML file looks like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <configdata>
            <navigation>
                <default>
                    <label>Home</label>
                    <controller>index</controller>
                    <action>index</action>
                    <module>default</module>
                    <pages>
                        <tour>
                            <label>Tour</label>
                            <controller>tour</controller>
                            <action>index</action>
                            <module>default</module>
                        </tour>
                        <blog>
                            <label></label>
                            <uri>http://blog.mysite.com</uri>
                        </blog>
                        <support>
                            <label>Support</label>
                            <controller>support</controller>
                            <action>index</action>
                            <module>default</module>
                        </support>
                    </pages>
                </default>
                <users>
                    <label>Home</label>
                    <controller>index</controller>
                    <action>index</action>
                    <module>users</module>
                    <role>guser</role>
                    <resource>owner</resource>
                    <pages>
                        <jobmanger>
                            <label>Job Manager</label>
                            <controller>jobmanager</controller>
                            <action>index</action>
                            <module>users</module>
                            <role>guser</role>
                            <resource>owner</resource>
                        </jobmanger>
                        <myaccount>
                            <label>My Account</label>
                            <controller>profile</controller>
                            <action>index</action>
                            <role>guser</role>
                            <resource>owner</resource>
                            <module>users</module>
                            <pages>
                                <detail>
                                    <label>Account Details</label>
                                    <controller>profile</controller>
                                    <action>detail</action>
                                    <module>users</module>
                                    <role>guser</role>
                                    <resource>owner</resource>
                                    <pages>
                                        <history>
                                            <label>Account History</label>
                                            <controller>profile</controller>
                                            <action>history</action>
                                            <module>users</module>
                                            <role>guser</role>
                                            <resource>owner</resource>
                                        </history>
                                        <password>
                                            <label>Change Password</label>
                                            <controller>profile</controller>
                                            <action>changepwd</action>
                                            <module>users</module>
                                            <role>employer</role>
                                            <resource>employers</resource>
                                        </password>
                                    </pages>
                                </detail>
                                ...
        </navigation>
        </configdata>

    Now I confess that I may have totally got the wrong end of the stick here, but I'm rapidly running out of ideas, and it's been a long week. Thanks, Grant
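    For reference, a minimal sketch of the loading code this error implies - the config path and the section name passed to Zend_Config_Xml are assumptions, not taken from the post:

        // Sketch (assumed path/section): Zend_Config_Xml's second argument
        // selects a first-level section under the root element, so with a
        // <configdata> root, 'navigation' picks out the <navigation> block.
        $config = new Zend_Config_Xml(
            APPLICATION_PATH . '/configs/navigation.xml',
            'navigation'
        );
        $container = new Zend_Navigation($config);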

  • Job conditions conflicting with personal principles on software development - how much is too much?

    - by Baelnorn
    Sorry for the incoming wall'o'text (and for my probably bad English) but I just need to get this off somehow. I also accept that this question will probably be closed as subjective and argumentative, but I need to know one thing: "how much BS are programmers supposed to put up with before breaking?"

    My background: I'm 27 years old and have a B.Sc. in computer engineering with a graduation grade of 1.8 from a university of applied science. I went looking for a job right after graduation. I got three offers right away, with two of them paying vastly more than the last one, but that last one seemed more interesting, so I went for it.

    My situation: I've been working for the company for 17 months now, but it feels like more of a drag each day, primarily because the company (which has only 5 developers besides me, of whom I work with 4) turned out to be pretty much the antithesis of what I expected (and was taught in university) from a modern software company. I agreed to accept less than half of the usual pay appropriate to my qualification for the first year because I was promised a trainee program. However, the trainee program turned out to be "here's a computer, there are some links on the stuff we use, now do what your colleagues tell you". Further, during my whole time there (trainee or not) I haven't been given the grace of even a single code review - apparently nobody's interested in my work as long as it "just works". I was told in the job interview that "Microsoft technology plays a central role in the company", yet I've been slowly eroding my cognitive functions with Flex/ActionScript/Cairngorm ever since I started (despite having applied as a C#/.NET developer). Actually, the company's primary projects are based on Java/XSLT and Flex/ActionScript (with some SAP/ABAP stuff here and there, but I'm not involved in that), and they had been working on these since before I even applied. Having had no experience with that particular technology, the framework, the field (RIA), or developing business-scale applications, I obviously made several mistakes. However, my boss told me that he let me make those mistakes (which ate at least 2 months of development time on their own) on purpose, to provide some "learning experience". Even when I was still a trainee I was already tasked with working on a business-critical application. On my own. Without supervision. Without code reviews.

    My boss thinks agile methods are a waste of time/money and deems putting more than one developer on any project inefficient. Documentation is not necessary, and each developer should only document what he himself needs for his work. Recently he wanted us to do bug tracking with Excel and email instead of using an already existing Bugzilla, overriding a unanimous decision made by all developers and testers involved in the process - only after a senior developer had another hour-long private discussion with him did he agree to let us use the bug tracker. Project management is basically absent; there are only a few Excel sheets floating around in which the senior developer lists some things (not all, mind you) with time estimates ranging from days to months, trying to at least somehow organize the whole mess. A development process is also basically absent: each developer just works on his own, however he wants. There are not even coding conventions in the company. Testing is done manually with a single tester (sometimes two) per project, because automated testing wasn't given the least thought when the whole project was started. I guess it's no big surprise when I say that each developer also has his own share of hundreds of overtime hours (which are, of course, unpaid). Each developer is tasked with working on his own project(s), which in turn leads to very extensive knowledge monopolization - if one developer were to have an accident or become ill, there would be absolutely no one who could even hope to do his work. Considering that each developer has his own business-critical application to work on, I guess that's a pretty bad situation.

    I've been trying to change things for the better. I tried to introduce a development process, but my first attempt was pretty much shot down by my boss with "I don't want to discuss agile methods". After that I put together a process that at least resembled how most of the developers were already working, and then included stuff like automated (or at least organized) testing, coding conventions, etc. However, this was also shot down because it wasn't "simple" enough to be shown on a business slide (actually, I wasn't even given the 15 minutes I'd have needed to present the process in the meeting).

    My problem: I can't stand working there any longer. Seriously, I'm considering resigning on Monday, which would still leave me with 3 months to work there due to the notice period. My primary goal since I started studying computer science has been to be a good computer scientist, working with modern technologies and adhering to modern and proven principles and methods. However, the company I'm working for seems to make that impossible. Some days I feel as if I were living in a perverted real-life version of the Dilbert comics.

    My question: Am I overreacting? Is this the reality every university graduate has to face? Should I betray my sound principles and just accept these working conditions? Or should I gtfo of there? What's the opinion of other developers on this matter? Would you put up with all that stuff?

  • [ebp + 6] instead of +8 in a JIT compiler

    - by David Titarenco
    I'm implementing a simplistic JIT compiler in a VM I'm writing for fun (mostly to learn more about language design), and I'm getting some weird behavior; maybe someone can tell me why. First I define a JIT "prototype" both for C and C++:

        #ifdef __cplusplus
        typedef void* (*_JIT_METHOD) (...);
        #else
        typedef (*_JIT_METHOD) ();
        #endif

    I have a compile() function that will compile stuff into ASM and stick it somewhere in memory:

        void* compile (void* something) {
            // grab some memory
            unsigned char* buffer = (unsigned char*) malloc (1024);

            // xor eax, eax
            // inc eax
            // inc eax
            // inc eax
            // ret -> eax should be 3
            /* WORKS!
            buffer[0] = 0x67; buffer[1] = 0x31; buffer[2] = 0xC0;
            buffer[3] = 0x67; buffer[4] = 0x40;
            buffer[5] = 0x67; buffer[6] = 0x40;
            buffer[7] = 0x67; buffer[8] = 0x40;
            buffer[9] = 0xC3;
            */

            // xor eax, eax
            // mov eax, 9
            // ret 4 -> eax should be 9
            /* WORKS!
            buffer[0] = 0x67; buffer[1] = 0x31; buffer[2] = 0xC0;
            buffer[3] = 0x67; buffer[4] = 0xB8;
            buffer[5] = 0x09; buffer[6] = 0x00; buffer[7] = 0x00; buffer[8] = 0x00;
            buffer[9] = 0xC3;
            */

            // push ebp
            // mov ebp, esp
            // mov eax, [ebp + 6] ; wtf? shouldn't this be [ebp + 8]!?
            // mov esp, ebp
            // pop ebp
            // ret -> eax should be the first value sent to the function
            /* WORKS! */
            buffer[0]  = 0x66; buffer[1]  = 0x55;
            buffer[2]  = 0x66; buffer[3]  = 0x89; buffer[4]  = 0xE5;
            buffer[5]  = 0x66; buffer[6]  = 0x66; buffer[7]  = 0x8B; buffer[8] = 0x45; buffer[9] = 0x06;
            buffer[10] = 0x66; buffer[11] = 0x89; buffer[12] = 0xEC;
            buffer[13] = 0x66; buffer[14] = 0x5D;
            buffer[15] = 0xC3;

            // mov eax, 5
            // add eax, ecx
            // ret -> eax should be 50
            /* WORKS!
            buffer[0] = 0x67; buffer[1] = 0xB8;
            buffer[2] = 0x05; buffer[3] = 0x00; buffer[4] = 0x00; buffer[5] = 0x00;
            buffer[6] = 0x66; buffer[7] = 0x01; buffer[8] = 0xC8;
            buffer[9] = 0xC3;
            */

            return buffer;
        }

    And finally I have the main chunk of the program:

        void main (int argc, char **args) {
            DWORD oldProtect = (DWORD) NULL;
            int i = 667, j = 1, k = 5, l = 0;

            // generate some arbitrary function
            _JIT_METHOD someFunc = (_JIT_METHOD) compile(NULL);

            // windows only
            #if defined _WIN64 || defined _WIN32
            // set memory permissions and flush CPU code cache
            VirtualProtect(someFunc, 1024, PAGE_EXECUTE_READWRITE, &oldProtect);
            FlushInstructionCache(GetCurrentProcess(), someFunc, 1024);
            #endif

            // this asm just for some debugging/testing purposes
            __asm mov ecx, i

            // run compiled function (from wherever *someFunc is pointing to)
            l = (int)someFunc(i, k);

            // did it work?
            printf("result: %d", l);

            free (someFunc);
            _getch();
        }

    As you can see, the compile() function has a couple of tests I ran to make sure I get the expected results, and pretty much everything works, but I have a question... In most tutorials or documentation resources, to get the first int value passed to a function you do [ebp + 8], the second [ebp + 12], and so forth. For some reason, I have to do [ebp + 6], then [ebp + 10], and so forth. Could anyone tell me why?
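    For contrast, the conventional un-prefixed 32-bit encoding of the same prologue is sketched below. 0x66 is the operand-size override prefix, so the 0x66 0x55 in the working buffer encodes a 16-bit "push bp" that stores only two bytes instead of four, which is consistent with the first argument shifting from [ebp + 8] down to [ebp + 6].

        // Sketch for comparison (not from the post): the same frame without
        // the 0x66 prefixes, where the first argument sits at the textbook
        // [ebp + 8] (4-byte return address + 4-byte saved ebp).
        buffer[0] = 0x55;                                       // push ebp
        buffer[1] = 0x89; buffer[2] = 0xE5;                     // mov ebp, esp
        buffer[3] = 0x8B; buffer[4] = 0x45; buffer[5] = 0x08;   // mov eax, [ebp + 8]
        buffer[6] = 0x89; buffer[7] = 0xEC;                     // mov esp, ebp
        buffer[8] = 0x5D;                                       // pop ebp
        buffer[9] = 0xC3;                                       // ret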

  • Modelling boost::Lockable with semaphore rather than mutex (previously titled: Unlocking a mutex from a different thread)

    - by dan
    I'm using the C++ boost::thread library, which in my case means I'm using pthreads. Officially, a mutex must be unlocked from the same thread which locks it, and I want the effect of being able to lock in one thread and then unlock in another. There are many ways to accomplish this. One possibility would be to write a new mutex class which allows this behavior. For example:

        class inter_thread_mutex {
            bool locked;
            boost::mutex mx;
            boost::condition_variable cv;
        public:
            void lock() {
                boost::unique_lock<boost::mutex> lck(mx);
                while (locked) cv.wait(lck);
                locked = true;
            }
            void unlock() {
                {
                    boost::lock_guard<boost::mutex> lck(mx);
                    if (!locked) error();
                    locked = false;
                }
                cv.notify_one();
            }
            // bool try_lock(); void error(); etc.
        };

    I should point out that the above code doesn't guarantee FIFO access, since if one thread calls lock() while another calls unlock(), this first thread may acquire the lock ahead of other threads which are waiting. (Come to think of it, the boost::thread documentation doesn't appear to make any explicit scheduling guarantees for either mutexes or condition variables.) But let's just ignore that (and any other bugs) for now.

    My question is: if I decide to go this route, would I be able to use such a mutex as a model for the boost Lockable concept? For example, would anything go wrong if I use a boost::unique_lock<inter_thread_mutex> for RAII-style access, and then pass this lock to boost::condition_variable_any.wait(), etc.? On one hand I don't see why not. On the other hand, "I don't see why not" is usually a very bad way of determining whether something will work. The reason I ask is that if it turns out that I have to write wrapper classes for RAII locks and condition variables and whatever else, then I'd rather just find some other way to achieve the same effect.

    EDIT: The kind of behavior I want is basically as follows. I have an object, and it needs to be locked whenever it is modified. I want to lock the object from one thread, and do some work on it. Then I want to keep the object locked while I tell another worker thread to complete the work. So the first thread can go on and do something else while the worker thread finishes up. When the worker thread gets done, it unlocks the mutex. And I want the transition to be seamless so nobody else can get the mutex lock in between when thread 1 starts the work and thread 2 completes it. Something like inter_thread_mutex seems like it would work, and it would also allow the program to interact with it as if it were an ordinary mutex. So it seems like a clean solution. If there's a better solution, I'd be happy to hear that also.

    EDIT AGAIN: The reason I need locks to begin with is that there are multiple master threads, and the locks are there to prevent them from accessing shared objects concurrently in invalid ways. So the code already uses loop-level lock-free sequencing of operations at the master thread level. Also, in the original implementation, there were no worker threads, and the mutexes were ordinary kosher mutexes. The inter_thread_thingy came up as an optimization, primarily to improve response time. In many cases, it was sufficient to guarantee that the "first part" of operation A occurs before the "first part" of operation B. As a dumb example, say I punch object 1 and give it a black eye. Then I tell object 1 to change its internal structure to reflect all the tissue damage. I don't want to wait around for the tissue damage before I move on to punch object 2. However, I do want the tissue damage to occur as part of the same operation; for example, in the interim, I don't want any other thread to reconfigure the object in such a way that would make tissue damage an invalid operation. (Yes, this example is imperfect in many ways, and no, I'm not working on a game.) So we made the change to a model where ownership of an object can be passed to a worker thread to complete an operation, and it actually works quite nicely; each master thread is able to get a lot more operations done because it doesn't need to wait for them all to complete. And, since the event sequencing at the master thread level is still loop-based, it is easy to write high-level master-thread operations, as they can be based on the assumption that an operation is complete when the corresponding function call returns. Finally, I thought it would be nice to use inter_thread mutex/semaphore thingies using RAII with boost locks to encapsulate the necessary synchronization that is required to make the whole thing work.
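    A minimal sketch of that hand-off, assuming inter_thread_mutex also grows the try_lock() member that Lockable requires; the punch/damage functions are hypothetical stand-ins for the operations described above:

        inter_thread_mutex itm;   // shared between master and worker

        void master_thread() {
            boost::unique_lock<inter_thread_mutex> lk(itm);  // locks via RAII
            punch(object1);                // "first part" of the operation
            lk.release();                  // drop ownership WITHOUT unlocking
            hand_off_to_worker(&itm);      // hypothetical queueing call
            // free to go punch object 2 while the worker finishes up
        }

        void worker_thread() {
            apply_tissue_damage(object1);  // "second part" of the operation
            itm.unlock();                  // legal only for this custom mutex
        }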

  • What happens when you click a button using WebRat under Cucumber?

    - by Peter Tillemans
    I am trying to login to a Java web application. The login page has the following html : <html> <head><title>Login Page</title></head> <body onload='document.f.j_username.focus();'> <h3>Login with Username and Password</h3> <form name='f' action='/ui/j_spring_security_check' method='POST'> <table> <tr><td>User:</td><td><input type='text' name='j_username' value=''></td></tr> <tr><td>Password:</td><td><input type='password' name='j_password'/></td></tr> <tr> <td><input type='checkbox' name='_spring_security_remember_me'/> </td> <td>Remember me on this computer.</td> </tr> <tr><td colspan='2'><input name="submit" type="submit"/></td></tr> <tr><td colspan='2'><input name="reset" type="reset"/></td></tr> </table> </form> </body> </html> I use the following script: Given /^I am logged in as (.*) with password (.*)$/ do | user, password | visit "http://localhost:8080/ui" click_link "Projects" puts "Response Body:" puts response.body assert_contain "User:" fill_in "j_username", :with => user fill_in "j_password", :with => password puts "Response Body:" puts response.body click_button puts "Response Body:" puts response.body end This gives the following in the log file : [INFO] Response Body: [INFO] <html><head><title>Login Page</title></head><body onload='document.f.j_username.focus();'> [INFO] <h3>Login with Username and Password</h3><form name='f' action='/ui/j_spring_security_check' method='POST'> [INFO] <table> [INFO] <tr><td>User:</td><td><input type='text' name='j_username' value=''></td></tr> [INFO] <tr><td>Password:</td><td><input type='password' name='j_password'/></td></tr> [INFO] <tr><td><input type='checkbox' name='_spring_security_remember_me'/></td><td>Remember me on this computer.</td></tr> [INFO] <tr><td colspan='2'><input name="submit" type="submit"/></td></tr> [INFO] <tr><td colspan='2'><input name="reset" type="reset"/></td></tr> [INFO] </table> [INFO] </form></body></html> [INFO] Response Body: [INFO] <html><head><title>Login Page</title></head><body onload='document.f.j_username.focus();'> [INFO] <h3>Login with Username and Password</h3><form name='f' action='/ui/j_spring_security_check' method='POST'> [INFO] <table> [INFO] <tr><td>User:</td><td><input type='text' name='j_username' value=''></td></tr> [INFO] <tr><td>Password:</td><td><input type='password' name='j_password'/></td></tr> [INFO] <tr><td><input type='checkbox' name='_spring_security_remember_me'/></td><td>Remember me on this computer.</td></tr> [INFO] <tr><td colspan='2'><input name="submit" type="submit"/></td></tr> [INFO] <tr><td colspan='2'><input name="reset" type="reset"/></td></tr> [INFO] </table> [INFO] </form></body></html> [INFO] Response Body: [INFO] [INFO] Given I am logged in as pti with password ptipti # features/step_definitions/authentication_tests.rb:2 So apparently the response.body disappeared after clicking the submit button. I can see from the server log files that the script does not arrive on the Project page. I am new to webrat and quite new to ruby and I am now thoroughly confused. I have no idea why the response.body is gone. I have no idea where I am. I speculated that I had to wait for the page request, but all documentation says that webrat nicely waits till all redirects, pageloads, etc are finished. (At least I think I read that). Besides I find no method to wait for the page in the webrat API. Can someone give some tips on how to proceed with debugging this?
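    One hedged debugging idea (the locator behaviour here is an assumption, not verified against the webrat sources): give the valueless submit input a value attribute so the click can be made explicit, and dump the HTTP status to see whether j_spring_security_check answered with its usual 302 redirect:

        # In the page:  <input name="submit" type="submit" value="Log in"/>
        click_button "Log in"    # click by the button's value text
        # Hedged: only if the underlying response object exposes a status code
        puts response.code if response.respond_to?(:code)   # expect 302 on success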

  • PHP mtChart (new pChart): how do I control the angle of x-axis labels?

    - by gsquare567
    I am trying to graph the results of a survey, where the question is multiple choice, e.g. "How would you describe this website?" Format: option | number of times selected | percentage of users who selected that option

        Informative        1   50%
        All of the above   1   50%
        Interesting        0   0%
        Intelligent        0   0%
        Cool               0   0%
        Incredible         0   0%
        Sleek              0   0%
        Amazing

    The graph is a bar graph, where each bar represents one of those options, and the height of the bar depends on the number of times selected. However, the labels are slanted at a 45 degree angle and are barely readable! Here is my code:

        <?php
        require_once ("includes/common.php");
        require_once ("graph/mtChart.min.php");

        // type must be specified
        $type = $_GET['type'];

        if($type == "surveys_report_MC_or_CB")
        {
            // PARAMS
            $surveyID = $_GET['surveyID'];
            $questionID = $_GET['questionID'];
            // END PARAMS

            $question = SurveyQuestions::getSingle($questionID);
            $answers = SurveyAnswers::getAll($questionID);
            $options = SurveyQuestionOptions::getAll($question[SurveyQuestions::surveyQuestionID]);
            $others = SurveyAnswers::setOptionCounts($options, $answers);
            $printedOthers = false;

            // set graph
            $values = array();
            $axisLabels = array();
            foreach($options as $option)
            {
                $values[$option[SurveyQuestionOptions::optionText]] = $option['count'];
                $axisLabels[] = $option[SurveyQuestionOptions::optionText];
            }
            $graphs = array();
            $graphs[0] = $values;
            $xName = "Option";
            $yName = "Number of Times Selected";
            $graphTitle = $question[SurveyQuestions::question];
            $series = array("Total");
            $showLegend = false;
            $tall = false;
        }

        drawGraph($graphs, $axisLabels, $xName, $yName, $graphTitle, $series, $showLegend, $tall);

        function drawGraph($graphs, $axisLabels, $xName, $yName, $graphTitle, $series, $showLegend, $tall)
        {
            $Graph = ($tall) ? new mtChart(575,375) : new mtChart(575,275);

            // Dataset definition
            $avg = 0;
            $i = 0;
            foreach ($graphs as $key => $value)
            {
                $Graph->AddPoint($value,"series" . $key);
                $Graph->SetSerieName($series[$key],"series" . $key);

                // Get average
                $avg += array_sum($value);
                $size = sizeof($value);
                $i += $size;

                // Calculate x-axis tick interval
                $step = ceil($size / 25);
            }
            $Graph->AddPoint($axisLabels,"XLabel");
            $Graph->AddAllSeries();
            $Graph->RemoveSerie("XLabel");
            $Graph->SetAbsciseLabelSerie("XLabel");
            $Graph->SetXAxisName($xName);
            $Graph->SetYAxisName($yName);

            // Get from cache if it exists
            $Graph->enableCaching(NULL, 'graph/cache/');
            $Graph->GetFromCache();

            // Initialize the graph
            $Graph->setInterval($step);
            $Graph->setFontProperties("graph/tahoma.ttf",8);
            ($showLegend) ? $Graph->setGraphArea(45,30,475,200)
                          : $Graph->setGraphArea(75,30,505,200);
            $Graph->drawGraphArea(255,255,255,TRUE);
            $Graph->drawScale(SCALE_START0,100,100,100,TRUE,55,1,TRUE);
            $Graph->drawGrid(4,TRUE,230,230,230,50);

            // Draw the 0 line
            $Graph->setFontProperties("graph/tahoma.ttf",6);
            $Graph->drawTreshold(0,143,55,72,TRUE,TRUE);

            // Draw the bar graph
            $Graph->drawBarGraph();

            // Draw average line
            $Graph->drawTreshold($avg/$i, 0, 0, 0, FALSE, FALSE, 5);

            // Finish the graph
            $Graph->setFontProperties("graph/tahoma.ttf",8);
            if ($showLegend)
            {
                $Graph->drawLegend(482,30,255,255,255,255,255,255,100,100,100);
            }
            $Graph->setFontProperties("graph/tahoma.ttf",10);
            $Graph->drawTitle(0,22,$graphTitle,100,100,100,555);

            // Draw Graph
            $Graph->Stroke();
        }

    And here is where I use it on the page:

        <div class="graph_container">
            <img src="drawGraph.php?type=surveys_report_MC_or_CB&surveyID=<?php echo $survey[Surveys::surveyID] ?>&questionID=<?php echo $question[SurveyQuestions::surveyQuestionID] ?>" />

    Is there a setting I can apply to the graph which will make the text look nicer, or at least let me set the angle to 90 degrees so people can read it if they cock their head to the left? Thanks! BTW, mtChart is located here: http://code.google.com/p/mtchart/ and pChart (the original, which has mainly the same code) is here: http://pchart.sourceforge.net/documentation.php
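    In pChart 1.x, drawScale() takes a label-rotation angle right after the tick flag, and if mtChart kept that signature, the 55 already passed in the call above would be the slant you're seeing; raising it to 90 should stand the labels upright. A hedged sketch, with the argument order assumed from pChart's documentation:

        // Hedged: assumes mtChart kept pChart's drawScale() parameter order,
        // where the value after the tick flag is the label angle in degrees.
        $Graph->drawScale(SCALE_START0, 100, 100, 100, TRUE, 90, 1, TRUE);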

  • What does Ruby have that Python doesn't, and vice versa?

    - by Lennart Regebro
    There are a lot of discussions of Python vs Ruby, and I find them all completely unhelpful, because they all turn on why feature X sucks in language Y, or claim that language Y doesn't have X, although in fact it does. I also know exactly why I prefer Python, but that's also subjective, and wouldn't help anybody choosing, as they might not have the same tastes in development as I do. It would therefore be interesting to list the differences, objectively. So no "Python's lambdas suck". Instead explain what Ruby's lambdas can do that Python's can't. No subjectivity. Example code is good! Don't put several differences in one answer, please. And vote up the ones you know are correct, and down those you know are incorrect (or are subjective). Also, differences in syntax are not interesting. We know Python does with indentation what Ruby does with brackets and ends, and that @ is called self in Python.

    UPDATE: This is now a community wiki, so we can add the big differences here.

    Ruby has a class reference in the class body: In Ruby you have a reference to the class (self) already in the class body. In Python you don't have a reference to the class until after the class construction is finished. An example:

        class Kaka
          puts self
        end

    self in this case is the class, and this code would print out "Kaka". There is no way to print out the class name or in other ways access the class from the class definition body in Python.

    All classes are mutable in Ruby: This lets you develop extensions to core classes. Here's an example of a Rails extension:

        class String
          def starts_with?(other)
            head = self[0, other.length]
            head == other
          end
        end

    Ruby has Perl-like scripting features: Ruby has first-class regexps, $-variables, the awk/perl line-by-line input loop, and other features that make it more suited to writing small shell scripts that munge text files or act as glue code for other programs.

    Ruby has first-class continuations: Thanks to the callcc statement. In Python you can create continuations by various techniques, but there is no support built in to the language.

    Ruby has blocks: With the "do" statement you can create a multi-line anonymous function in Ruby, which will be passed in as an argument to the method in front of do, and called from there. In Python you would instead do this either by passing a method or with generators. Ruby:

        amethod { |here|
          many = lines + of + code
          goes(here)
        }

    Python:

        def function(here):
            many = lines + of + code
            goes(here)

        amethod(function)

    Interestingly, the convenience statement in Ruby for calling a block is called "yield", which in Python will create a generator. Ruby:

        def themethod
          yield 5
        end

        themethod do |foo|
          puts foo
        end

    Python:

        def themethod():
            yield 5

        for foo in themethod():
            print foo

    Although the principles are different, the result is strikingly similar.

    Python has built-in generators (which are used like Ruby blocks, as noted above): Python has support for generators in the language. In Ruby you could use the generator module, which uses continuations to create a generator from a block. Or, you could just use a block/proc/lambda! Moreover, in Ruby 1.9 Fibers are, and can be used as, generators. docs.python.org has this generator example:

        def reverse(data):
            for index in range(len(data)-1, -1, -1):
                yield data[index]

    Contrast this with the above block examples.

    Python has flexible name space handling: In Ruby, when you import a file with require, all the things defined in that file will end up in your global namespace. This causes namespace pollution. The solution to that is Ruby's modules. But if you create a namespace with a module, then you have to use that namespace to access the contained classes. In Python, the file is a module, and you can import its contained names with from themodule import *, thereby polluting the namespace if you want. But you can also import just selected names with from themodule import aname, another, or you can simply import themodule and then access the names with themodule.aname. If you want more levels in your namespace you can have packages, which are directories with modules and an __init__.py file.

    Python has docstrings: Docstrings are strings that are attached to modules, functions and methods and can be introspected at runtime. This helps for creating such things as the help command and automatic documentation.

        def frobnicate(bar):
            """frobnicate takes a bar and frobnicates it

            >>> bar = Bar()
            >>> bar.is_frobnicated()
            False
            >>> frobnicate(bar)
            >>> bar.is_frobnicated()
            True
            """

    Python has more libraries: Python has a vast amount of available modules and bindings for libraries.

    Python has multiple inheritance: Ruby does not ("on purpose" -- see Ruby's website, see here how it's done in Ruby). It does reuse the module concept as a sort of abstract class.

    Python has list/dict comprehensions. Python:

        res = [x*x for x in range(1, 10)]

    Ruby:

        res = (0..9).map { |x| x * x }

    Python:

        >>> (x*x for x in range(10))
        <generator object <genexpr> at 0xb7c1ccd4>
        >>> list(_)
        [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

    Ruby:

        p = proc { |x| x * x }
        (0..9).map(&p)

    Python:

        >>> {x:str(y*y) for x,y in {1:2, 3:4}.items()}
        {1: '4', 3: '16'}

    Ruby:

        >> Hash[{1=>2, 3=>4}.map{|x,y| [x,(y*y).to_s]}]
        => {1=>"4", 3=>"16"}

    Python has decorators: Things similar to decorators can be created in Ruby, and it can also be argued that they aren't as necessary as in Python.

  • Repeating Group Messages in QuickFIX C++

    - by Mark Jackson
    We cannot seem to process some group messages with QuickFix. I am trying to set up a connection with the ICE exchange using QuickFix (C++). I have created a custom data dictionary to handle ICE's non-standard messages. The first message to handle is a SecurityDefinition. The message contains about 13000 entries broken into blocks of 100. I attached the message below (the first two entries with CR/LF added for clarity). My question is in the data dictionary, I defined a group as part of the entry with all the fields they specify in the group. Yet the message is rejected before it gets to the cracker as having an invalid tag (tag = 305). Message 2 Rejected: Tag not defined for this message type:305 Does this dictionary entry look correct. Is there any documentation anywhere on how to handle group messages? Dictionary entry <message name='SecurityDefinition' msgcat='app' msgtype='d'> <field name='SecurityResponseID' required='Y' /> <field name='SecurityResponseType' required='Y' /> <field name='SecurityReqID' required='Y' /> <field name='TotNoRelatedSym' required='N' /> <field name='NoRpts' required='N' /> <field name='ListSeqNo' required='N' /> <group name='NoUnderlyings' required='N'> <field name='UnderlyingSymbol' required='N' /> <field name='UnderlyingSecurityID' required='N' /> <field name='UnderlyingSecurityIDSource' required='N' /> <field name='UnderlyingCFICode' required='N' /> <field name='UnderlyingSecurityDesc' required='N' /> <field name='UnderlyingMaturityDate' required='N' /> <field name='UnderlyingContractMultiplier' required='N' /> <field name='IncrementPrice' required='N' /> <field name='IncrementQty' required='N' /> <field name='LotSize' required='N' /> <field name='NumofCycles' required='N' /> <field name='LotSizeMultiplier' required='N' /> <field name='Clearable' required='N' /> <field name='StripId' required='N' /> <field name='StripType' required='N' /> <field name='StripName' required='N' /> <field name='HubId' required='N' /> <field name='HubName' required='N' /> <field name='HubAlias' required='N' /> <field name='UnderlyingUnitOfMeasure' required='N' /> <field name='PriceDenomination' required='N' /> <field name='PriceUnit' required='N' /> <field name='Granularity' required='N' /> <field name='NumOfDecimalPrice' required='N' /> <field name='NumOfDecimalQty' required='N' /> <field name='ProductId' required='N' /> <field name='ProductName' required='N' /> <field name='ProductDescription' required='N' /> <field name='TickValue' required='N' /> <field name='ImpliedType' required='N' /> <field name='PrimaryLegSymbol' required='N' /> <field name='SecondaryLegSymbol' required='N' /> <field name='IncrementStrike' required='N' /> <field name='MinStrike' required='N' /> <field name='MaxStrike' required='N' /> </group> </message> The actual message is 8=FIX.4.49=5004335=d49=ICE34=252=20121017-00:39:41.38556=600357=23322=3924323=4320=1393=1310382=13267=1711=100 311=1705282309=TEB SMG0013-TFL SMG0013305=8463=FXXXXX307=NG Basis Futures Spr - TETCO-ELA/TGP-500L - Feb13542=20130131436=1.09013=0.00059014=2500.09017=25009022=289024=19025=Y916=20130201917=201302289201=11969200=129202=Feb139300=60589301=Texas Eastern Transmission Corp. - East Louisiana Zone/Tennessee Gas Pipeline Co. 
- Zone L, 500 Leg Pool9302=TETCO-ELA/TGP-500L998=MMBtus9100=USD9101=USD / MMBtu9085=daily9083=49084=09061=4909062=NG Basis Futures Spr9063=Natural Gas Basis Futures Spread9032=1.259004=17051939005=1353778 311=1714677309=PGE SQF0014.H0014-SCB SQF0014.H0014305=8463=FXXXXX307=NG Basis Futures Spr - PG&E-Citygate/Socal-Citygate - Q1 14542=20131231436=1.09013=0.00059014=2500.09017=25009022=909024=19025=Y916=20140101917=201403319201=12339200=159202=Q1 149300=59979301=PG&E - Citygate/Socal - Citygate9302=PG&E-Citygate/Socal-Citygate998=MMBtus9100=USD9101=USD / MMBtu9085=daily9083=49084=09061=4909062=NG Basis Futures Spr9063=Natural Gas Basis Futures Spread9032=1.259004=13430529005=1344660
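    One thing worth double-checking, since the <fields> section of the dictionary isn't shown in the post: QuickFIX validates every tag inside a repeating group against the dictionary's <fields> declarations, so tag 305 must also be declared there under exactly the name the group references. A hedged sketch of what that section would contain:

        <fields>
            <!-- Hedged sketch: one declaration per custom/group tag used above,
                 including the NoUnderlyings counter itself (tag 711). -->
            <field number='711' name='NoUnderlyings' type='NUMINGROUP'/>
            <field number='311' name='UnderlyingSymbol' type='STRING'/>
            <field number='305' name='UnderlyingSecurityIDSource' type='STRING'/>
            <!-- ... remaining fields referenced by the NoUnderlyings group ... -->
        </fields>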

  • Expression Web 4 - Master Page Error

    - by Eric J.
    I created an ASP.Net Web Application in VS 2010. That in turn creates an example Site.Master, Default.aspx, and several other example files. I then opened Default.aspx in Expression Web 4 and get the error message The Master Page file 'Site.Master' cannot be loaded. Default.aspx can still be displayed fine in VS 2010. Source.Master: <%@ Master Language="C#" AutoEventWireup="true" CodeBehind="Site.master.cs" Inherits="SampleWebApp.SiteMaster" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en"> <head runat="server"> <title></title> <link href="~/Styles/Site.css" rel="stylesheet" type="text/css" /> <asp:ContentPlaceHolder ID="HeadContent" runat="server"> </asp:ContentPlaceHolder> <style type="text/css"> .style1 { font-family: Tunga; } </style> </head> <body> <form runat="server"> <div class="page"> <div class="header"> <div class="title"> <h1 class="style1"> My Application Master Page</h1> </div> <div class="loginDisplay"> <asp:LoginView ID="HeadLoginView" runat="server" EnableViewState="false"> <AnonymousTemplate> [ <a href="~/Account/Login.aspx" ID="HeadLoginStatus" runat="server">Log In</a> ] </AnonymousTemplate> <LoggedInTemplate> Welcome <span class="bold"><asp:LoginName ID="HeadLoginName" runat="server" /></span>! [ <asp:LoginStatus ID="HeadLoginStatus" runat="server" LogoutAction="Redirect" LogoutText="Log Out" LogoutPageUrl="~/"/> ] </LoggedInTemplate> </asp:LoginView> </div> <div class="clear hideSkiplink"> <asp:Menu ID="NavigationMenu" runat="server" CssClass="menu" EnableViewState="false" IncludeStyleBlock="false" Orientation="Horizontal"> <Items> <asp:MenuItem NavigateUrl="~/Default.aspx" Text="Home"> <asp:MenuItem NavigateUrl="~/Home/NewItem.aspx" Text="New Item" Value="New Item"></asp:MenuItem> <asp:MenuItem NavigateUrl="~/Home/AnotherItem.aspx" Text="Another Item" Value="Another Item"></asp:MenuItem> </asp:MenuItem> <asp:MenuItem NavigateUrl="~/About.aspx" Text="About"/> <asp:MenuItem NavigateUrl="~/ContactUs.aspx" Text="ContactUs" Value="ContactUs"> </asp:MenuItem> </Items> </asp:Menu> </div> </div> <div class="main"> <asp:ContentPlaceHolder ID="MainContent" runat="server"/> </div> <div class="clear"> </div> </div> <div class="footer"> </div> </form> </body> </html> Default.aspx: <%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.Master" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="SampleWebApp._Default" %> <asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent"> <style type="text/css"> .style2 { color: #669900; } .style3 { background-color: #FFFFCC; } </style> </asp:Content> <asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent"> <h2> Welcome to MY page! </h2> <p> To learn more <span class="style2"><strong><em><span class="style3">about</span></em></strong></span> ASP.NET visit <a href="http://www.asp.net" title="ASP.NET Website">www.asp.net</a>. </p> <p> You can also find <a href="http://go.microsoft.com/fwlink/?LinkID=152368&amp;clcid=0x409" title="MSDN ASP.NET Docs">documentation on ASP.NET at MSDN</a>. </p> </asp:Content> Any idea how to get the master page to work properly in Expression Web 4?

  • How do you get XML::Pastor to set xsi:type for programmatically generated elements?

    - by Derrick
    I'm learning how to use Perl as an automation test framework tool for a Java web service and running into trouble generating xml requests from the Pastor generated modules. The problem is that when including a type that extends from the required type for an element, the xsi:type is not included in the generated xml string. Say, for example, I want to generate the following xml request from the modules that XML::Pastor generated from my xsd: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <PromptAnswersRequest xmlns="http://mycompany.com/api"> <Uri>/some/url</Uri> <User ref="1"/> <PromptAnswers> <PromptAnswer xsi:type="textPromptAnswer" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <Prompt ref="2"/> <Children> <PromptAnswer xsi:type="choicePromptAnswer"> <Prompt ref="1"/> <Choice ref="2"/> </PromptAnswer> </Children> <Value>totally</Value> </PromptAnswer> </PromptAnswers> </PromptAnswersRequest> What I'm getting currently is this: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <PromptAnswersRequest xmlns="http://mycompany.com/api"> <Uri>/some/url</Uri> <User ref="1"/> <PromptAnswers> <PromptAnswer> <Prompt ref="2"/> <Children> <PromptAnswer> <Prompt ref="1"/> <Choice ref="2"/> </PromptAnswer> </Children> <Value>totally</Value> </PromptAnswer> </PromptAnswers> </PromptAnswersRequest> Here are some relavent snippets from the xsd: <xs:complexType name="request"> <xs:sequence> <xs:element name="Uri" type="xs:anyURI"/> </xs:sequence> </xs:complexType> <xs:complexType name="promptAnswersRequest"> <xs:complexContent> <xs:extension base="api:request"> <xs:sequence> <xs:element name="User" type="api:ref"/> <xs:element name="PromptAnswers" type="api:promptAnswerList"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="promptAnswerList"> <xs:sequence> <xs:element name="PromptAnswer" type="api:promptAnswer" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> <xs:complexType name="promptAnswer" abstract="true"> <xs:sequence> <xs:element name="Prompt" type="api:ref"/> <xs:element name="Children" type="api:promptAnswerList" minOccurs="0"/> </xs:sequence> </xs:complexType> <xs:complexType name="textPromptAnswer"> <xs:complexContent> <xs:extension base="promptAnswer"> <xs:sequence> <xs:element name="Value" type="api:nonEmptyString" minOccurs="0"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> And here are relavent parts of the script: my $promptAnswerList = new My::API::Type::promptAnswerList; my @promptAnswers; my $promptAnswerList2 = new My::API::Type::promptAnswerList; my @textPromptAnswerChildren; my $textPromptAnswer = new My::API::Type::textPromptAnswer; my $textPromptAnswerRef = new My::API::Type::ref; $textPromptAnswerRef->ref('2'); $textPromptAnswer->Prompt($textPromptAnswerRef); my $choicePromptAnswer = new My::API::Type::choicePromptAnswer; my $choicePromptAnswerPromptRef = new My::API::Type::ref; my $choicePromptAnswerChoiceRef = new My::API::Type::ref; $choicePromptAnswerPromptRef->ref('1'); $choicePromptAnswerChoiceRef->ref('2'); $choicePromptAnswer->Prompt($choicePromptAnswerPromptRef); $choicePromptAnswer->Choice($choicePromptAnswerChoiceRef); push(@textPromptAnswerChildren, $choicePromptAnswer); $promptAnswerList2->PromptAnswer(@textPromptAnswerChildren); $textPromptAnswer->Children($promptAnswerList2); $textPromptAnswer->Value('totally'); push(@promptAnswers, $pulseTextPromptAnswer); push(@promptAnswers, $textPromptAnswer); I haven't seen this addressed anywhere in the 
documentation for the XML::Pastor modules, so if anyone can point me at a good reference for its use, it would be greatly appreciated. Also, I'm only using XML::Pastor because I don't know of any other modules that can do this, so if any of you know of something either easier to use or better maintained, please let me know about that too!

  • How do I make a simple image-based button with visual states in Silverlight 3?

    - by Jacob
    At my previous company, we created our RIAs using Flex with graphical assets created in Flash. In Flash, you could simply lay out your graphics for different states, i.e. rollover, disabled. Now, I'm working on a Silverlight 3 project. I've been given a bunch of images that need to serve as the graphics for buttons that have a rollover, pressed, and normal state. I cannot figure out how to simply create buttons with different images for different visual states in Visual Studio 2008 or Expression Blend 3. Here's where I am currently. My button is defined like this in the XAML: <Button Style="{StaticResource MyButton}"/> The MyButton style appears as follows: <Style x:Key="MyButton" TargetType="Button"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="Button"> <Image Source="/Assets/Graphics/mybtn_up.png" Width="54" Height="24"> <VisualStateManager.VisualStateGroups> <VisualStateGroup x:Name="FocusStates"> <VisualState x:Name="Focused"/> <VisualState x:Name="Unfocused"/> </VisualStateGroup> <VisualStateGroup x:Name="CommonStates"> <VisualState x:Name="Normal"/> <VisualState x:Name="MouseOver"/> <VisualState x:Name="Pressed"/> <VisualState x:Name="Disabled"/> </VisualStateGroup> </VisualStateManager.VisualStateGroups> </Image> </ControlTemplate> </Setter.Value> </Setter> </Style> I cannot figure out how to assign a different template to different states, nor how to change the image's source based on which state I'm in. How do I do this? Also, if you know of any good documentation that describes how styles work in Silverlight, that would be great. All of the search results I can come up with are frustratingly unhelpful. Edit: I found a way to change the image via storyboards like this: <Style x:Key="MyButton" TargetType="Button"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="Button"> <Image Source="/Assets/Graphics/mybtn_up.png" Width="54" Height="24" x:Name="Image"> <VisualStateManager.VisualStateGroups> <VisualStateGroup x:Name="FocusStates"> <VisualState x:Name="Focused"/> <VisualState x:Name="Unfocused"/> </VisualStateGroup> <VisualStateGroup x:Name="CommonStates"> <VisualState x:Name="Normal"/> <VisualState x:Name="MouseOver"> <Storyboard Storyboard.TargetName="Image" Storyboard.TargetProperty="Source"> <ObjectAnimationUsingKeyFrames> <DiscreteObjectKeyFrame KeyTime="0" Value="/Assets/Graphics/mybtn_over.png"/> </ObjectAnimationUsingKeyFrames> </Storyboard> </VisualState> <VisualState x:Name="Pressed"> <Storyboard Storyboard.TargetName="Image" Storyboard.TargetProperty="Source"> <ObjectAnimationUsingKeyFrames> <DiscreteObjectKeyFrame KeyTime="0" Value="/Assets/Graphics/mybtn_active.png"/> </ObjectAnimationUsingKeyFrames> </Storyboard> </VisualState> <VisualState x:Name="Disabled"/> </VisualStateGroup> </VisualStateManager.VisualStateGroups> </Image> </ControlTemplate> </Setter.Value> </Setter> </Style> However, this seems like a strange way of doing things to me. Is there a more standard way of accomplishing this?
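    One common alternative (a sketch, not necessarily the canonical pattern, with hypothetical asset names): stack one Image per state and let the visual states animate Opacity rather than swapping Source, so nothing about the image URIs changes during transitions:

        <!-- Hedged sketch: three stacked images; states toggle Opacity. -->
        <ControlTemplate TargetType="Button">
          <Grid Width="54" Height="24">
            <VisualStateManager.VisualStateGroups>
              <VisualStateGroup x:Name="CommonStates">
                <VisualState x:Name="Normal"/>
                <VisualState x:Name="MouseOver">
                  <Storyboard>
                    <DoubleAnimation Storyboard.TargetName="OverImage"
                                     Storyboard.TargetProperty="Opacity"
                                     To="1" Duration="0"/>
                  </Storyboard>
                </VisualState>
                <VisualState x:Name="Pressed">
                  <Storyboard>
                    <DoubleAnimation Storyboard.TargetName="DownImage"
                                     Storyboard.TargetProperty="Opacity"
                                     To="1" Duration="0"/>
                  </Storyboard>
                </VisualState>
                <VisualState x:Name="Disabled"/>
              </VisualStateGroup>
            </VisualStateManager.VisualStateGroups>
            <Image Source="/Assets/Graphics/mybtn_up.png"/>
            <Image x:Name="OverImage" Source="/Assets/Graphics/mybtn_over.png" Opacity="0"/>
            <Image x:Name="DownImage" Source="/Assets/Graphics/mybtn_active.png" Opacity="0"/>
          </Grid>
        </ControlTemplate>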

  • New hire expectations... (Am I being unreasonable?)

    - by user295841
    I work for a very small custom software shop. We currently consist of me and my boss. My boss is an old FoxPro DOS developer, and OOP makes him uncomfortable. He is planning on taking a back seat in the next few years to hopefully enjoy a "partial retirement". I will be taking over the day-to-day operations, and we are now desperately looking for more help. We tried Monster.com, Dice.com, and others a few years ago when we started our search. We had no success. We have tried outsourcing overseas (total disaster), hiring kids right out of college (mostly a disaster, but that's where I came from), interns (good for them, not so good for us), and hiring laid-off "experienced" developers (there was a reason they were laid off). I have heard hiring practices discussed on podcasts, blogs, etc. and have tried a few. The "Fizz Buzz" test was a good one. One kid looked physically ill before he finally gave up.

    I think my problem is that I have grown so much as a developer since I started here that I now have a high standard. I hear/read very intelligent people's podcasts and blogs, and I know that there are lots of people out there who can do the job. I don't want to settle for less than a "good" developer. Perhaps my expectations are unreasonable. I expect any good developer (entry level or experienced) to be billable (at least paying their own wage) in under one month. I expect any good developer to be able to be productive (at least dangerous) in any language or technology with only a few days of research/training. I expect any good developer to be able to take a project from initial customer request to completion with little or no help from others.

    Am I being unreasonable? What constitutes a valuable developer? What should be expected of an entry-level developer? What should be expected of an experienced developer? I realize that everyone is different, but there has to be some sort of expectations standard, right? I have been giving the test project below to potential candidates to weed them out. Good idea? Too much? Too little? Please let me know what you think. Thanks.

    Project ID: T00001
    Description: Order Entry System
    Deadline: 1 Week

    Scope: The scope of this project is to develop a fully functional order entry system. Screen/form design must be user friendly and promote efficient data entry and modification. User experience (navigation, screen/form layouts, look and feel...) is at the developer's discretion. The system may be developed using any technologies that conform to the technical and system requirements.

    Deliverables:
    - Complete source code
    - Database setup instructions (scripts or restorable backup)
    - Application installation instructions (installer or installation procedure)
    - Any necessary documentation

    Technical Requirements:
    - Server platform: Windows XP / Windows Server 2003 / SBS
    - Client platform: Windows XP
    - Web browser (if applicable): IE 8
    - Database: at developer's discretion (must be a relational SQL database)
    - Language: at developer's discretion
    - All data must be normalized. (+)
    - All data must maintain referential integrity. (++)
    - All data must be indexed for optimal performance.
    - System must handle concurrency.

    System Requirements:

    Customer Maintenance
    - Customer records must have a unique ID.
    - Customer data will include Name, Address, Phone, etc.
    - User must be able to perform all CRUD (Create, Read, Update, and Delete) operations on the Customer table.
    - User must be able to enter a specific Customer ID to edit.
    - User must be able to pull up a sortable/queryable search grid/utility to find a customer to edit.
    - Validation must be performed prior to database commit.
    - Customer record cannot be deleted if the customer has an order in the system. (++)

    Inventory Maintenance
    - Part records must have a unique ID.
    - Part data will include Description, Price, UOM (Unit of Measure), etc.
    - User must be able to perform all CRUD operations on the Part table.
    - User must be able to enter a specific Part ID to edit.
    - User must be able to pull up a sortable/queryable search grid/utility to find a part to edit.
    - Validation must be performed prior to database commit.
    - Part record cannot be deleted if the part has been used in an order. (++)

    Order Entry
    - Order records must have a unique auto-incrementing key (Order Number).
    - Order data must be split into a header/detail structure. (+)
    - An order can contain an infinite number of detail records.
    - Order header data will include Order Number, Customer ID (++), Order Date, Order Status (Open/Closed), etc.
    - Order detail data will include Part Number (++), Quantity, Price, etc.
    - User must be able to perform all CRUD operations on the order tables.
    - User must be able to enter a specific Order Number to edit.
    - User must be able to pull up a sortable/queryable search grid/utility to find an order to edit.
    - User must be able to print an order form from within the order entry form.
    - Validation must be performed prior to database commit.

    Reports
    - Customer Listing: all customers in the system.
    - Inventory Listing: all parts in the system.
    - Open Order Listing: all open orders in the system.
    - Customer Order Listing: all orders for a specific customer.
    - All reports must include sort and filter functions where applicable, e.g. Customer Listing by range of Customer IDs, Open Order Listing by date range.

  • Why does use of H264 in sender/receiver pipelines introduce such a HUGE delay?

    - by Serguey Zefirov
    When I try to create pipeline that uses H264 to transmit video, I get some enormous delay, up to 10 seconds to transmit video from my machine to... my machine! This is unacceptable for my goals and I'd like to consult StackOverflow over what I (or someone else) do wrong. I took pipelines from gstrtpbin documentation page and slightly modified them to use Speex: This is sender pipeline: #!/bin/sh gst-launch -v gstrtpbin name=rtpbin \ v4l2src ! ffmpegcolorspace ! ffenc_h263 ! rtph263ppay ! rtpbin.send_rtp_sink_0 \ rtpbin.send_rtp_src_0 ! udpsink host=127.0.0.1 port=5000 \ rtpbin.send_rtcp_src_0 ! udpsink host=127.0.0.1 port=5001 sync=false async=false \ udpsrc port=5005 ! rtpbin.recv_rtcp_sink_0 \ pulsesrc ! audioconvert ! audioresample ! audio/x-raw-int,rate=16000 ! \ speexenc bitrate=16000 ! rtpspeexpay ! rtpbin.send_rtp_sink_1 \ rtpbin.send_rtp_src_1 ! udpsink host=127.0.0.1 port=5002 \ rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5003 sync=false async=false \ udpsrc port=5007 ! rtpbin.recv_rtcp_sink_1 Receiver pipeline: !/bin/sh gst-launch -v\ gstrtpbin name=rtpbin \ udpsrc caps="application/x-rtp,media=(string)video, clock-rate=(int)90000, encoding-name=(string)H263-1998" \ port=5000 ! rtpbin.recv_rtp_sink_0 \ rtpbin. ! rtph263pdepay ! ffdec_h263 ! xvimagesink \ udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0 \ rtpbin.send_rtcp_src_0 ! udpsink port=5005 sync=false async=false \ udpsrc caps="application/x-rtp,media=(string)audio, clock-rate=(int)16000, encoding-name=(string)SPEEX, encoding-params=(string)1, payload=(int)110" \ port=5002 ! rtpbin.recv_rtp_sink_1 \ rtpbin. ! rtpspeexdepay ! speexdec ! audioresample ! audioconvert ! alsasink \ udpsrc port=5003 ! rtpbin.recv_rtcp_sink_1 \ rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5007 sync=false async=false Those pipelines, a combination of H263 and Speex, work fine enough. I snap my fingers near camera and micropohne and then I see movement and hear sound at the same time. Then I changed pipelines to use H264 along the video path. The sender becomes: #!/bin/sh gst-launch -v gstrtpbin name=rtpbin \ v4l2src ! ffmpegcolorspace ! x264enc bitrate=300 ! rtph264pay ! rtpbin.send_rtp_sink_0 \ rtpbin.send_rtp_src_0 ! udpsink host=127.0.0.1 port=5000 \ rtpbin.send_rtcp_src_0 ! udpsink host=127.0.0.1 port=5001 sync=false async=false \ udpsrc port=5005 ! rtpbin.recv_rtcp_sink_0 \ pulsesrc ! audioconvert ! audioresample ! audio/x-raw-int,rate=16000 ! \ speexenc bitrate=16000 ! rtpspeexpay ! rtpbin.send_rtp_sink_1 \ rtpbin.send_rtp_src_1 ! udpsink host=127.0.0.1 port=5002 \ rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5003 sync=false async=false \ udpsrc port=5007 ! rtpbin.recv_rtcp_sink_1 And receiver becomes: #!/bin/sh gst-launch -v\ gstrtpbin name=rtpbin \ udpsrc caps="application/x-rtp,media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264" \ port=5000 ! rtpbin.recv_rtp_sink_0 \ rtpbin. ! rtph264depay ! ffdec_h264 ! xvimagesink \ udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0 \ rtpbin.send_rtcp_src_0 ! udpsink port=5005 sync=false async=false \ udpsrc caps="application/x-rtp,media=(string)audio, clock-rate=(int)16000, encoding-name=(string)SPEEX, encoding-params=(string)1, payload=(int)110" \ port=5002 ! rtpbin.recv_rtp_sink_1 \ rtpbin. ! rtpspeexdepay ! speexdec ! audioresample ! audioconvert ! alsasink \ udpsrc port=5003 ! rtpbin.recv_rtcp_sink_1 \ rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5007 sync=false async=false This is what happen under Ubuntu 10.04. 
I didn't notice such huge delays on Ubuntu 9.04 - the delays there were in the range of 2-3 seconds, AFAIR.
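    For what it's worth, much of that latency is usually the encoder itself: by default x264 buffers a lookahead window and B-frames, which alone can hold back dozens of frames before the first packet leaves the sender. A hedged variant of the sender's video branch (whether the tune property and its zerolatency value are available depends on the installed x264enc version; check with gst-inspect x264enc):

        #!/bin/sh
        # Hedged sketch: tune=zerolatency disables lookahead/B-frame buffering
        # in x264enc; the rest of the pipeline is unchanged from the above.
        gst-launch -v gstrtpbin name=rtpbin \
            v4l2src ! ffmpegcolorspace ! \
            x264enc bitrate=300 tune=zerolatency ! \
            rtph264pay ! rtpbin.send_rtp_sink_0 \
            ...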

  • Condition Variable in Shared Memory - is this code POSIX-conformant?

    - by GrahamS
    We've been trying to use a mutex and condition variable to synchronise access to named shared memory on a LynuxWorks LynxOS-SE system (POSIX-conformant). One shared memory block is called "/sync" and contains the mutex and condition variable, the other is "/data" and contains the actual data we are syncing access to. We're seeing failures from pthread_cond_signal() if both processes don't perform the mmap() calls in exactly the same order, or if one process mmaps in some other piece of shared memory before it mmaps the sync memory. This example code is about as short as I can make it: #include <sys/types.h> #include <sys/stat.h> #include <sys/mman.h> #include <sys/file.h> #include <stdlib.h> #include <pthread.h> #include <errno.h> #include <iostream> #include <string> using namespace std; static const string shm_name_sync("/sync"); static const string shm_name_data("/data"); struct shared_memory_sync { pthread_mutex_t mutex; pthread_cond_t condition; }; struct shared_memory_data { int a; int b; }; //Create 2 shared memory objects // - sync contains 2 shared synchronisation objects (mutex and condition) // - data not important void create() { // Create and map 'sync' shared memory int fd_sync = shm_open(shm_name_sync.c_str(), O_CREAT|O_RDWR, S_IRUSR|S_IWUSR); ftruncate(fd_sync, sizeof(shared_memory_sync)); void* addr_sync = mmap(0, sizeof(shared_memory_sync), PROT_READ|PROT_WRITE, MAP_SHARED, fd_sync, 0); shared_memory_sync* p_sync = static_cast<shared_memory_sync*> (addr_sync); // init the cond and mutex pthread_condattr_t cond_attr; pthread_condattr_init(&cond_attr); pthread_condattr_setpshared(&cond_attr, PTHREAD_PROCESS_SHARED); pthread_cond_init(&(p_sync->condition), &cond_attr); pthread_condattr_destroy(&cond_attr); pthread_mutexattr_t m_attr; pthread_mutexattr_init(&m_attr); pthread_mutexattr_setpshared(&m_attr, PTHREAD_PROCESS_SHARED); pthread_mutex_init(&(p_sync->mutex), &m_attr); pthread_mutexattr_destroy(&m_attr); // Create the 'data' shared memory int fd_data = shm_open(shm_name_data.c_str(), O_CREAT|O_RDWR, S_IRUSR|S_IWUSR); ftruncate(fd_data, sizeof(shared_memory_data)); void* addr_data = mmap(0, sizeof(shared_memory_data), PROT_READ|PROT_WRITE, MAP_SHARED, fd_data, 0); shared_memory_data* p_data = static_cast<shared_memory_data*> (addr_data); // Run the second process while it sleeps here. sleep(10); int res = pthread_cond_signal(&(p_sync->condition)); assert(res==0); // <--- !!!THIS ASSERT WILL FAIL ON LYNXOS!!! 
munmap(addr_sync, sizeof(shared_memory_sync)); shm_unlink(shm_name_sync.c_str()); munmap(addr_data, sizeof(shared_memory_data)); shm_unlink(shm_name_data.c_str()); } //Open the same 2 shared memory objects but in reverse order // - data // - sync void open() { sleep(2); int fd_data = shm_open(shm_name_data.c_str(), O_RDWR, S_IRUSR|S_IWUSR); void* addr_data = mmap(0, sizeof(shared_memory_data), PROT_READ|PROT_WRITE, MAP_SHARED, fd_data, 0); shared_memory_data* p_data = static_cast<shared_memory_data*> (addr_data); int fd_sync = shm_open(shm_name_sync.c_str(), O_RDWR, S_IRUSR|S_IWUSR); void* addr_sync = mmap(0, sizeof(shared_memory_sync), PROT_READ|PROT_WRITE, MAP_SHARED, fd_sync, 0); shared_memory_sync* p_sync = static_cast<shared_memory_sync*> (addr_sync); // Wait on the condvar pthread_mutex_lock(&(p_sync->mutex)); pthread_cond_wait(&(p_sync->condition), &(p_sync->mutex)); pthread_mutex_unlock(&(p_sync->mutex)); munmap(addr_sync, sizeof(shared_memory_sync)); munmap(addr_data, sizeof(shared_memory_data)); } int main(int argc, char** argv) { if(argc>1) { open(); } else { create(); } return (0); } Run this program with no args, then another copy with args, and the first one will fail at the assert checking the pthread_cond_signal(). But change the open() function to mmap() the "/sync" memory first and it will all work fine. This seems like a major bug in LynxOS but LynuxWorks claim that using mutex and condition variable in this way is not covered by the POSIX standard, so they are not interested. Can anyone determine if this code does violate POSIX? Or does anyone have any convincing documentation that it is POSIX compliant?
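    A small diagnostic tweak that may help the conformance argument either way (a sketch; note that pthread functions return the error code directly rather than setting errno, and this needs <stdio.h>, <string.h>, and <assert.h>):

        /* Sketch: report what pthread_cond_signal() actually returned before
           asserting; EINVAL vs. anything else narrows down whether LynxOS is
           rejecting the shared condition variable mapping itself. */
        int res = pthread_cond_signal(&(p_sync->condition));
        if (res != 0)
            fprintf(stderr, "pthread_cond_signal: %s (%d)\n", strerror(res), res);
        assert(res == 0);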

    Read the article

  • Paypal development. encrypt transactions. php p12

    - by ninchen
    When I take a look at the PayPal documentation, it says: "Note that the PayPal SDK for PHP does not require SSL encryption". https://developer.paypal.com/docs/classic/api/apiCredentials/#encrypting-your-certificate Does this statement mean that I don't have to create a p12 certificate when working with PHP, but can use the public_key.pem and paypal_public_key.pem instead? If yes: is it secure enough to create the encrypted form input elements without a p12 certificate? If no: what do they mean? :-) Before this question came up, I tested this little program: http://www.softarea51.com/blog/how-to-integrate-your-custom-shopping-cart-with-paypal-website-payments-standard-using-php/ There is a config file, paypal-wps-config.inc.php, where I can define the paths to my certificates:

    // tried to use
    // 'paypal_cert.p12';
    $config['private_key_path'] = '/home/folder/.cert/pp/prvkey.pem';
    // must match the one you set when you created the private key
    $config['private_key_password'] = ''; //'my_password';

    When I try to use the p12 certificate, openssl_error_string() returns "Could not sign data: error:0906D06C:PEM routines:PEM_read_bio:no start line" from openssl_pkcs7_sign. When I instead use prvkey.pem without a password, everything works fine. Here is the function which signs and encrypts the data:

    function signAndEncrypt($dataStr_, $ewpCertPath_, $ewpPrivateKeyPath_, $ewpPrivateKeyPwd_, $paypalCertPath_)
    {
        $dataStrFile = realpath(tempnam('/tmp', 'pp_'));
        $fd = fopen($dataStrFile, 'w');
        if (!$fd) {
            $error = "Could not open temporary file $dataStrFile.";
            return array("status" => false, "error_msg" => $error, "error_no" => 0);
        }
        fwrite($fd, $dataStr_);
        fclose($fd);

        $signedDataFile = realpath(tempnam('/tmp', 'pp_'));
        // here the error comes from
        if (!@openssl_pkcs7_sign(
                $dataStrFile,
                $signedDataFile,
                "file://$ewpCertPath_",
                array("file://$ewpPrivateKeyPath_", $ewpPrivateKeyPwd_),
                array(),
                PKCS7_BINARY)) {
            unlink($dataStrFile);
            unlink($signedDataFile);
            $error = "Could not sign data: ".openssl_error_string();
            return array("status" => false, "error_msg" => $error, "error_no" => 0);
        }
        unlink($dataStrFile);

        $signedData = file_get_contents($signedDataFile);
        $signedDataArray = explode("\n\n", $signedData);
        $signedData = $signedDataArray[1];
        $signedData = base64_decode($signedData);
        unlink($signedDataFile);

        $decodedSignedDataFile = realpath(tempnam('/tmp', 'pp_'));
        $fd = fopen($decodedSignedDataFile, 'w');
        if (!$fd) {
            $error = "Could not open temporary file $decodedSignedDataFile.";
            return array("status" => false, "error_msg" => $error, "error_no" => 0);
        }
        fwrite($fd, $signedData);
        fclose($fd);

        $encryptedDataFile = realpath(tempnam('/tmp', 'pp_'));
        if (!@openssl_pkcs7_encrypt(
                $decodedSignedDataFile,
                $encryptedDataFile,
                file_get_contents($paypalCertPath_),
                array(),
                PKCS7_BINARY)) {
            unlink($decodedSignedDataFile);
            unlink($encryptedDataFile);
            $error = "Could not encrypt data: ".openssl_error_string();
            return array("status" => false, "error_msg" => $error, "error_no" => 0);
        }
        unlink($decodedSignedDataFile);

        $encryptedData = file_get_contents($encryptedDataFile);
        if (!$encryptedData) {
            $error = "Encryption and signature of data failed.";
            return array("status" => false, "error_msg" => $error, "error_no" => 0);
        }
        unlink($encryptedDataFile);

        $encryptedDataArray = explode("\n\n", $encryptedData);
        $encryptedData = trim(str_replace("\n", '', $encryptedDataArray[1]));

        return array("status" => true, "encryptedData" => $encryptedData);
    } // signAndEncrypt
    } // PPCrypto

    The main questions:
    1. Is it possible to use a p12 certificate with PHP, or is it secure enough to work without it?
    2. Why do I get an error when using openssl_pkcs7_sign?
    Please help. Greetings, ninchen

    Read the article

  • Creating a file upload template in Doctrine ORM

    - by balupton
    Hey all. I'm using Doctrine 1.2 as my ORM for a Zend Framework project. I have defined the following model for a File:

    File:
      columns:
        id:
          primary: true
          type: integer(4)
          unsigned: true
        code:
          type: string(255)
          unique: true
          notblank: true
        path:
          type: string(255)
          notblank: true
        size:
          type: integer(4)
        type:
          type: enum
          values: [file,document,image,video,audio,web,application,archive]
          default: unknown
          notnull: true
        mimetype:
          type: string(20)
          notnull: true
        width:
          type: integer(2)
          unsigned: true
        height:
          type: integer(2)
          unsigned: true

    Now here is the File model PHP class (just skim through it for now):

    <?php
    /**
     * File
     *
     * This class has been auto-generated by the Doctrine ORM Framework
     *
     * @package    ##PACKAGE##
     * @subpackage ##SUBPACKAGE##
     * @author     ##NAME## <##EMAIL##>
     * @version    SVN: $Id: Builder.php 6365 2009-09-15 18:22:38Z jwage $
     */
    class File extends BaseFile
    {
        public function setUp()
        {
            $this->hasMutator('file', 'setFile');
            parent::setUp();
        }

        public function setFile($file)
        {
            global $Application;

            // Configuration
            $config = array();
            $config['bal'] = $Application->getOption('bal');

            // Check the file
            if (!empty($file['error'])) {
                $error = $file['error'];
                switch ($file['error']) {
                    case UPLOAD_ERR_INI_SIZE:   $error = 'ini_size';   break;
                    case UPLOAD_ERR_FORM_SIZE:  $error = 'form_size';  break;
                    case UPLOAD_ERR_PARTIAL:    $error = 'partial';    break;
                    case UPLOAD_ERR_NO_FILE:    $error = 'no_file';    break;
                    case UPLOAD_ERR_NO_TMP_DIR: $error = 'no_tmp_dir'; break;
                    case UPLOAD_ERR_CANT_WRITE: $error = 'cant_write'; break;
                    default:                    $error = 'unknown';    break;
                }
                throw new Doctrine_Exception('error-application-file-' . $error);
                return false;
            }
            if (empty($file['tmp_name']) || !is_uploaded_file($file['tmp_name'])) {
                throw new Doctrine_Exception('error-application-file-invalid');
                return false;
            }

            // Prepare config
            $file_upload_path = realpath($config['bal']['files']['upload_path']) . DIRECTORY_SEPARATOR;

            // Prepare file
            $filename = $file['name'];
            $file_old_path = $file['tmp_name'];
            $file_new_path = $file_upload_path . $filename;
            $exist_attempt = 0;
            while (file_exists($file_new_path)) {
                // File already exists; pump exist attempts
                ++$exist_attempt;
                // Add the attempt to the end of the file
                $file_new_path = $file_upload_path . get_filename($filename, false) . $exist_attempt . get_extension($filename);
            }

            // Move file
            $success = move_uploaded_file($file_old_path, $file_new_path);
            if (!$success) {
                throw new Doctrine_Exception('Unable to upload the file.');
                return false;
            }

            // Secure
            $file_path = realpath($file_new_path);
            $file_size = filesize($file_path);
            $file_mimetype = get_mime_type($file_path);
            $file_type = get_filetype($file_path);

            // Apply
            $this->path = $file_path;
            $this->size = $file_size;
            $this->mimetype = $file_mimetype;
            $this->type = $file_type;

            // Apply: Image
            if ($file_type === 'image') {
                $image_dimensions = image_dimensions($file_path);
                if (empty($image_dimensions)) { // note: the original tested !empty(), which inverted this check
                    // It is not an image we can modify
                    $this->width = 0;
                    $this->height = 0;
                } else {
                    $this->width = $image_dimensions['width'];
                    $this->height = $image_dimensions['height'];
                }
            }

            // Done
            return true;
        }

        /**
         * Download the File
         * @return
         */
        public function download()
        {
            global $Application;

            // Configuration (missing in the original, which read an undefined $config)
            $config = array();
            $config['bal'] = $Application->getOption('bal');

            // File path
            $file_upload_path = realpath($config['bal']['files']['upload_path']) . DIRECTORY_SEPARATOR;
            $file_path = $file_upload_path . $this->path; // the schema column is 'path'; the original read a nonexistent 'file_path'

            // Output result and download
            become_file_download($file_path, null, null);
            die();
        }

        public function postDelete($Event)
        {
            global $Application;

            // Prepare
            $Invoker = $Event->getInvoker();

            // Configuration
            $config = array();
            $config['bal'] = $Application->getOption('bal');

            // File path
            $file_upload_path = realpath($config['bal']['files']['upload_path']) . DIRECTORY_SEPARATOR;
            $file_path = $file_upload_path . $Invoker->path; // use the invoking record's 'path' column (original read $this->file_path)

            // Delete the file
            unlink($file_path);

            // Done
            return true;
        }
    }

    What I am hoping to accomplish is for the custom functionality above to be turned into a validator, template, or something along those lines, so that I can do something like:

    File:
      actAs:
        BalFile:
      columns:
        id:
          primary: true
          type: integer(4)
          unsigned: true
        code:
          type: string(255)
          unique: true
          notblank: true
        path:
          type: string(255)
          notblank: true
        size:
          type: integer(4)
        type:
          type: enum
          values: [file,document,image,video,audio,web,application,archive]
          default: unknown
          notnull: true
        mimetype:
          type: string(20)
          notnull: true
        width:
          type: integer(2)
          unsigned: true
        height:
          type: integer(2)
          unsigned: true

    I'm hoping for a validator so that, say, if I do $File->setFile($_FILES['uploaded_file']); it will provide a validation error. However, the Doctrine documentation has little on custom validators, especially in the context of "virtual" fields. So in summary, my question is: how on earth can I go about making a template/extension to port this functionality? I have tried before with templates but always gave up after a day :/ If you could take the time to port the above I would greatly appreciate it.

    Read the article

  • Creating ActionEvent object for CustomButton in Java

    - by Crystal
    For a hw assignment, we were supposed to create a custom button to get familiar with swing and responding to events. We were also to make this button an event source which confuses me. I have an ArrayList to keep track of listeners that would register to listen to my CustomButton. What I am getting confused on is how to notify the listeners. My teacher hinted at having a notify and overriding actionPerformed which I tried doing, but then I wasn't sure how to create an ActionEvent object looking at the constructor documentation. The source, id, string all confuses me. Any help would be appreciated. Thanks! code: import java.awt.*; import java.awt.event.*; import javax.swing.*; import java.util.List; import java.util.ArrayList; public class CustomButton { public static void main(String[] args) { EventQueue.invokeLater(new Runnable() { public void run() { CustomButtonFrame frame = new CustomButtonFrame(); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); frame.setVisible(true); } }); } public void addActionListener(ActionListener al) { listenerList.add(al); } public void removeActionListener(ActionListener al) { listenerList.remove(al); } public void actionPerformed(ActionEvent e) { System.out.println("Button Clicked!"); } private void notifyListeners() { ActionEvent event = new ActionEvent(CONFUSED HERE!!!!; for (ActionListener action : listenerList) { action.actionPerfomed(event); } } List<ActionListener> listenerList = new ArrayList<ActionListener>(); } class CustomButtonFrame extends JFrame { // constructor for CustomButtonFrame public CustomButtonFrame() { setTitle("Custom Button"); CustomButtonSetup buttonSetup = new CustomButtonSetup(); this.add(buttonSetup); this.pack(); } } class CustomButtonSetup extends JComponent { public CustomButtonSetup() { ButtonAction buttonClicked = new ButtonAction(); this.addMouseListener(buttonClicked); } // because frame includes borders and insets, use this method public Dimension getPreferredSize() { return new Dimension(200, 200); } public void paintComponent(Graphics g) { Graphics2D g2 = (Graphics2D) g; // first triangle coords int x[] = new int[TRIANGLE_SIDES]; int y[] = new int[TRIANGLE_SIDES]; x[0] = 0; y[0] = 0; x[1] = 200; y[1] = 0; x[2] = 0; y[2] = 200; Polygon firstTriangle = new Polygon(x, y, TRIANGLE_SIDES); // second triangle coords x[0] = 0; y[0] = 200; x[1] = 200; y[1] = 200; x[2] = 200; y[2] = 0; Polygon secondTriangle = new Polygon(x, y, TRIANGLE_SIDES); g2.drawPolygon(firstTriangle); g2.setColor(firstColor); g2.fillPolygon(firstTriangle); g2.drawPolygon(secondTriangle); g2.setColor(secondColor); g2.fillPolygon(secondTriangle); // draw rectangle 10 pixels off border int s1[] = new int[RECT_SIDES]; int s2[] = new int[RECT_SIDES]; s1[0] = 5; s2[0] = 5; s1[1] = 195; s2[1] = 5; s1[2] = 195; s2[2] = 195; s1[3] = 5; s2[3] = 195; Polygon rectangle = new Polygon(s1, s2, RECT_SIDES); g2.drawPolygon(rectangle); g2.setColor(thirdColor); g2.fillPolygon(rectangle); } private class ButtonAction implements MouseListener { public void mousePressed(MouseEvent e) { System.out.println("Click!"); firstColor = Color.GRAY; secondColor = Color.WHITE; repaint(); } public void mouseReleased(MouseEvent e) { System.out.println("Released!"); firstColor = Color.WHITE; secondColor = Color.GRAY; repaint(); } public void mouseEntered(MouseEvent e) {} public void mouseExited(MouseEvent e) {} public void mouseClicked(MouseEvent e) {} } public static final int TRIANGLE_SIDES = 3; public static final int RECT_SIDES = 4; private Color firstColor = Color.WHITE; 
private Color secondColor = Color.DARK_GRAY; private Color thirdColor = Color.LIGHT_GRAY; }

    Read the article

  • How can I generate XML application configuration using Zend_Tool?

    - by wimvds
    When creating a new project using zf create project myproject it will create a default project layout with an application.ini in the configs folder. Where can I change these default settings so that it generates (and uses) an XML file (application.xml)? I've looked at the documentation for Zend_Tool (http://framework.zend.com/manual/en/zend.tool.html), but there seems to be no information on how to do this. And suppose you'd like to use a different default folder layout (ie. use htdocs instead of public as your document root), is there a way to specify this as well? Any pointers to relevant information (btw I've looked at the Quickstart, nothing relevant is mentioned there unless I'm overlooking it)? edit I already tried creating a profile (stored in .zf/project/profiles), and used that to create a project (using zf create project myproject myprofile) but that doesn't change anything, even though the .zfproject.xml file in the root of the new project does contain the <applicationConfigFile type="xml"/> setting... The new project contains this (as you can see, it's just the default settings, only the type of applicationConfigFile has been changed) : <?xml version="1.0"?> <projectProfile type="default" version="1.10"> <projectDirectory> <projectProfileFile filesystemName=".zfproject.xml"/> <applicationDirectory classNamePrefix="Application_"> <apisDirectory enabled="false"/> <configsDirectory> <applicationConfigFile type="xml"/> </configsDirectory> <controllersDirectory> <controllerFile controllerName="Index"> <actionMethod actionName="index"/> </controllerFile> <controllerFile controllerName="Error"/> </controllersDirectory> <formsDirectory enabled="false"/> <layoutsDirectory enabled="false"/> <modelsDirectory/> <modulesDirectory enabled="false"/> <viewsDirectory> <viewScriptsDirectory> <viewControllerScriptsDirectory forControllerName="Index"> <viewScriptFile forActionName="index"/> </viewControllerScriptsDirectory> <viewControllerScriptsDirectory forControllerName="Error"> <viewScriptFile forActionName="error"/> </viewControllerScriptsDirectory> </viewScriptsDirectory> <viewHelpersDirectory/> <viewFiltersDirectory enabled="false"/> </viewsDirectory> <bootstrapFile filesystemName="Bootstrap.php"/> </applicationDirectory> <dataDirectory enabled="false"> <cacheDirectory enabled="false"/> <searchIndexesDirectory enabled="false"/> <localesDirectory enabled="false"/> <logsDirectory enabled="false"/> <sessionsDirectory enabled="false"/> <uploadsDirectory enabled="false"/> </dataDirectory> <docsDirectory> <file filesystemName="README.txt"/> </docsDirectory> <libraryDirectory> <zfStandardLibraryDirectory enabled="false"/> </libraryDirectory> <publicDirectory> <publicStylesheetsDirectory enabled="false"/> <publicScriptsDirectory enabled="false"/> <publicImagesDirectory enabled="false"/> <publicIndexFile filesystemName="index.php"/> <htaccessFile filesystemName=".htaccess"/> </publicDirectory> <projectProvidersDirectory enabled="false"/> <temporaryDirectory enabled="false"/> <testsDirectory> <testPHPUnitConfigFile filesystemName="phpunit.xml"/> <testApplicationDirectory> <testApplicationBootstrapFile filesystemName="bootstrap.php"/> </testApplicationDirectory> <testLibraryDirectory> <testLibraryBootstrapFile filesystemName="bootstrap.php"/> </testLibraryDirectory> </testsDirectory> </projectDirectory> </projectProfile>

    Read the article

  • Having trouble comparing a range of dates entered in a form with records in a mySQL database

    - by Andrew Fox
    I have a table called schedule and a column called Date where the column type is date. In that column I have a range of dates, which is currently from 2012-11-01 to 2012-11-30. I have a small form where the user can enter a range of dates (input names from and to) and I want to be able to compare the range of dates with the dates currently in the database. This is what I have: //////////////////////////////////////////////////////// //////First set the date range that we want to use////// //////////////////////////////////////////////////////// if(isset($_POST['from']) && ($_POST['from'] != NULL)) { $startDate = $_POST['from']; } else { //Default date is Today $startDate = date("Y-m-d"); } if(isset($_POST['to']) && ($_POST['to'] != NULL)) { $endDate = $_POST['to']; } else { //Default day is one month from today $endDate = date("Y-m-d", strtotime("+1 month")); } ////////////////////////////////////////////////////////////////////////////////////// //////Next calculate the total amount of days selected above to use as a limiter////// ////////////////////////////////////////////////////////////////////////////////////// $dayStart = strtotime($startDate); $dayEnd = strtotime($endDate); $total_days = abs($dayEnd - $dayStart) / 86400 +1; echo "Start Date: " . $startDate . "<br>End Date: " . $endDate . "<br>"; echo "Day Start: " . $dayStart . "<br>Day End: " . $dayEnd . "<br>"; echo "Total Days: " . $total_days . "<br>"; //////////////////////////////////////////////////////////////////////////////////// //////Then we're going to see if the dates selected are in the schedule table////// //////////////////////////////////////////////////////////////////////////////////// //Select all of the dates currently in the schedule table between the range selected. $sql = ("SELECT Date FROM schedule WHERE Date BETWEEN '$startDate' AND '$endDate' LIMIT $total_days"); //Run a check on the query to make sure it worked. If it failed then print the error. if(!$result_date_query = $mysqli->query($sql)) { die('There was an error getting the dates from the schedule table [' . $mysqli->error . ']'); } //Set the dates to an array for future use. // $current_dates = $result_date_query->fetch_assoc(); //Loop through the results while a result is being returned. while($row = $result_date_query->fetch_assoc()) { echo "Row: " . $row['Date'] . "<br>"; echo "Start day: " . date('Y-m-d', $dayStart) . "<br>"; //Set this loop to add 1 day to the Start Date until it reaches the End Date for($i = $dayStart; $i <= $dayEnd; $i = strtotime('+1 day', $i)) { $date = date('Y-m-d',$i); echo "Loop Start day: " . date('Y-m-d', $dayStart) . "<br>"; //Run a check to see if any of the dates selected are in the schedule table. if($row['Date'] != $date) { echo "Current Date: " . $row['Date'] . "<br>"; echo "Date: " . $date . "<br>"; echo "It appears as though you've selected some dates that are not in the schedule database.<br>Please correct the issue and try again."; return; } } } //Free the result so something else can use it. $result_date_query->free(); As you can see I've added in some echo statements so I can see what is being produced. From what I can see it looks like my $row['Date'] is not incrementing and staying at the same date. I originally had it set to a variable (currently commented out) but I thought that could be causing problems. I have created the table with dates ranging from 2012-11-01 to 2012-11-15 for testing and entered all of this php code onto phpfiddle.org but I can't get the username provided to connect. 
    Here is the link: PHP Fiddle. I'll be reading through the documentation to try to figure out the user connection problem in the meantime; I would really appreciate any direction or advice you can give me.

    Read the article

  • Backbone.js Model validation fails to prevent Model from saving

    - by Benjen
    I have defined a validate method for a Backbone.js Model. The problem is that even if validation fails (i.e. the Model.validate method returns a value) the post/put request is still sent to the server. This contradicts what is explained in the Backbone.js documentation. I cannot understand what I am doing wrong. The following is the Model definition: /** * Model - Contact */ var Contact = Backbone.Model.extend({ urlRoot: '/contacts.json', idAttribute: '_id', defaults: function() { return { surname: '', given_name: '', org: '', phone: new Array(), email: new Array(), address: new Array({ street: '', district: '', city: '', country: '', postcode: '' }) }; } validate: function(attributes) { if (typeof attributes.validationDisabled === 'undefined') { var errors = new Array(); // Validate surname. if (_.isEmpty(attributes.surname) === true) { errors.push({ type: 'form', attribute: 'surname', message: 'Please enter a surname.' }); } // Validate emails. if (_.isEmpty(attributes.email) === false) { var emailRegex = /^[a-z0-9._%+-]+@[a-z0-9.-]+\.[a-z]{2,6}$/i; // Stores indexes of email values which fail validation. var emailIndex = new Array(); _.each(attributes.email, function(email, index) { if (emailRegex.test(email.value) === false) { emailIndex.push(index); } }); // Create error message. if (emailIndex.length > 0) { errors.push({ type: 'form', attribute: 'email', index: emailIndex, message: 'Please enter valid email address.' }); } } if (errors.length > 0) { console.log('Form validation failed.'); return errors; } } } }); Here is the View which calls the Model.save() method (see: method saveContact() below). Note that other methods belonging to this View have not been included below for reasons of brevity. /** * View - Edit contact form */ var EditContactFormView = Backbone.View.extend({ initialize: function() { _.bindAll(this, 'createDialog', 'formError', 'render', 'saveContact', 'updateContact'); // Add templates. this._editFormTemplate = _.template($('#edit-contact-form-tpl').html()); this._emailFieldTemplate = _.template($('#email-field-tpl').html()); this._phoneFieldTemplate = _.template($('#phone-field-tpl').html()); // Get URI of current page. this.currentPageUri = this.options.currentPageUri; // Create array to hold references to all subviews. this.subViews = new Array(); // Set options for new or existing contact. this.model = this.options.model; // Bind with Model validation error event. this.model.on('error', this.formError); this.render(); } /** * Deals with form validation errors */ formError: function(model, error) { console.log(error); }, saveContact: function(event) { var self = this; // Prevent submit event trigger from firing. event.preventDefault(); // Trigger form submit event. eventAggregator.trigger('submit:contactEditForm'); // Update model with form values. this.updateContact(); // Enable validation for Model. Done by unsetting validationDisabled // attribute. This setting was formerly applied to prevent validation // on Model.fetch() events. See this.model.validate(). this.model.unset('validationDisabled'); // Save contact to database. this.model.save(this.model.attributes, { success: function(model, response) { if (typeof response.flash !== 'undefined') { Messenger.trigger('new:messages', response.flash); } }, error: function(model, response) { console.log(response); throw error = new Error('Error occured while trying to save contact.'); } }, { wait: true }); }, /** * Extract form values and update Contact. 
*/ updateContact: function() { this.model.set('surname', this.$('#surname-field').val()); this.model.set('given_name', this.$('#given-name-field').val()); this.model.set('org', this.$('#org-field').val()); // Extract address form values. var address = new Array({ street: this.$('input[name="street"]').val(), district: this.$('input[name="district"]').val(), city: this.$('input[name="city"]').val(), country: this.$('input[name="country"]').val(), postcode: this.$('input[name="postcode"]').val() }); this.model.set('address', address); } });

    Read the article

  • Integrating JavaScript Unit Tests with Visual Studio

    - by Stephen Walther
    Modern ASP.NET web applications take full advantage of client-side JavaScript to provide better interactivity and responsiveness. If you are building an ASP.NET application in the right way, you quickly end up with lots and lots of JavaScript code. When writing server code, you should be writing unit tests. One big advantage of unit tests is that they provide you with a safety net that enable you to safely modify your existing code – for example, fix bugs, add new features, and make performance enhancements -- without breaking your existing code. Every time you modify your code, you can execute your unit tests to verify that you have not broken anything. For the same reason that you should write unit tests for your server code, you should write unit tests for your client code. JavaScript is just as susceptible to bugs as C#. There is no shortage of unit testing frameworks for JavaScript. Each of the major JavaScript libraries has its own unit testing framework. For example, jQuery has QUnit, Prototype has UnitTestJS, YUI has YUI Test, and Dojo has Dojo Objective Harness (DOH). The challenge is integrating a JavaScript unit testing framework with Visual Studio. Visual Studio and Visual Studio ALM provide fantastic support for server-side unit tests. You can easily view the results of running your unit tests in the Visual Studio Test Results window. You can set up a check-in policy which requires that all unit tests pass before your source code can be committed to the source code repository. In addition, you can set up Team Build to execute your unit tests automatically. Unfortunately, Visual Studio does not provide “out-of-the-box” support for JavaScript unit tests. MS Test, the unit testing framework included in Visual Studio, does not support JavaScript unit tests. As soon as you leave the server world, you are left on your own. The goal of this blog entry is to describe one approach to integrating JavaScript unit tests with MS Test so that you can execute your JavaScript unit tests side-by-side with your C# unit tests. The goal is to enable you to execute JavaScript unit tests in exactly the same way as server-side unit tests. You can download the source code described by this project by scrolling to the end of this blog entry. Rejected Approach: Browser Launchers One popular approach to executing JavaScript unit tests is to use a browser as a test-driver. When you use a browser as a test-driver, you open up a browser window to execute and view the results of executing your JavaScript unit tests. For example, QUnit – the unit testing framework for jQuery – takes this approach. The following HTML page illustrates how you can use QUnit to create a unit test for a function named addNumbers(). 
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <title>Using QUnit</title> <link rel="stylesheet" href="http://github.com/jquery/qunit/raw/master/qunit/qunit.css" type="text/css" /> </head> <body> <h1 id="qunit-header">QUnit example</h1> <h2 id="qunit-banner"></h2> <div id="qunit-testrunner-toolbar"></div> <h2 id="qunit-userAgent"></h2> <ol id="qunit-tests"></ol> <div id="qunit-fixture">test markup, will be hidden</div> <script type="text/javascript" src="http://code.jquery.com/jquery-latest.js"></script> <script type="text/javascript" src="http://github.com/jquery/qunit/raw/master/qunit/qunit.js"></script> <script type="text/javascript"> // The function to test function addNumbers(a, b) { return a+b; } // The unit test test("Test of addNumbers", function () { equals(4, addNumbers(1,3), "1+3 should be 4"); }); </script> </body> </html> This test verifies that calling addNumbers(1,3) returns the expected value 4. When you open this page in a browser, you can see that this test does, in fact, pass. The idea is that you can quickly refresh this QUnit HTML JavaScript test driver page in your browser whenever you modify your JavaScript code. In other words, you can keep a browser window open and keep refreshing it over and over while you are developing your application. That way, you can know very quickly whenever you have broken your JavaScript code. While easy to setup, there are several big disadvantages to this approach to executing JavaScript unit tests: You must view your JavaScript unit test results in a different location than your server unit test results. The JavaScript unit test results appear in the browser and the server unit test results appear in the Visual Studio Test Results window. Because all of your unit test results don’t appear in a single location, you are more likely to introduce bugs into your code without noticing it. Because your unit tests are not integrated with Visual Studio – in particular, MS Test -- you cannot easily include your JavaScript unit tests when setting up check-in policies or when performing automated builds with Team Build. A more sophisticated approach to using a browser as a test-driver is to automate the web browser. Instead of launching the browser and loading the test code yourself, you use a framework to automate this process. There are several different testing frameworks that support this approach: · Selenium – Selenium is a very powerful framework for automating browser tests. You can create your tests by recording a Firefox session or by writing the test driver code in server code such as C#. You can learn more about Selenium at http://seleniumhq.org/. LTAF – The ASP.NET team uses the Lightweight Test Automation Framework to test JavaScript code in the ASP.NET framework. You can learn more about LTAF by visiting the project home at CodePlex: http://aspnet.codeplex.com/releases/view/35501 jsTestDriver – This framework uses Java to automate the browser. jsTestDriver creates a server which can be used to automate multiple browsers simultaneously. This project is located at http://code.google.com/p/js-test-driver/ TestSwam – This framework, created by John Resig, uses PHP to automate the browser. Like jsTestDriver, the framework creates a test server. You can open multiple browsers that are automated by the test server. 
Learn more about TestSwarm by visiting the following address: https://github.com/jeresig/testswarm/wiki Yeti – This is the framework introduced by Yahoo for automating browser tests. Yeti uses server-side JavaScript and depends on Node.js. Learn more about Yeti at http://www.yuiblog.com/blog/2010/08/25/introducing-yeti-the-yui-easy-testing-interface/ All of these frameworks are great for integration tests – however, they are not the best frameworks to use for unit tests. In one way or another, all of these frameworks depend on executing tests within the context of a “living and breathing” browser. If you create an ASP.NET Unit Test then Visual Studio will launch a web server before executing the unit test. Why is launching a web server so bad? It is not the worst thing in the world. However, it does introduce dependencies that prevent your code from being tested in isolation. One of the defining features of a unit test -- versus an integration test – is that a unit test tests code in isolation. Another problem with launching a web server when performing unit tests is that launching a web server can be slow. If you cannot execute your unit tests quickly, you are less likely to execute your unit tests each and every time you make a code change. You are much more likely to fall into the pit of failure. Launching a browser when performing a JavaScript unit test has all of the same disadvantages as launching a web server when performing an ASP.NET unit test. Instead of testing a unit of JavaScript code in isolation, you are testing JavaScript code within the context of a particular browser. Using the frameworks listed above for integration tests makes perfect sense. However, I want to consider a different approach for creating unit tests for JavaScript code. Using Server-Side JavaScript for JavaScript Unit Tests A completely different approach to executing JavaScript unit tests is to perform the tests outside of any browser. If you really want to test JavaScript then you should test JavaScript and leave the browser out of the testing process. There are several ways that you can execute JavaScript on the server outside the context of any browser: Rhino – Rhino is an implementation of JavaScript written in Java. The Rhino project is maintained by the Mozilla project. Learn more about Rhino at http://www.mozilla.org/rhino/ V8 – V8 is the open-source Google JavaScript engine written in C++. This is the JavaScript engine used by the Chrome web browser. You can download V8 and embed it in your project by visiting http://code.google.com/p/v8/ JScript – JScript is the JavaScript Script Engine used by Internet Explorer (up to but not including Internet Explorer 9), Windows Script Host, and Active Server Pages. Internet Explorer is still the most popular web browser. Therefore, I decided to focus on using the JScript Script Engine to execute JavaScript unit tests. Using the Microsoft Script Control There are two basic ways that you can pass JavaScript to the JScript Script Engine and execute the code: use the Microsoft Windows Script Interfaces or use the Microsoft Script Control. The difficult and proper way to execute JavaScript using the JScript Script Engine is to use the Microsoft Windows Script Interfaces. You can learn more about the Script Interfaces by visiting http://msdn.microsoft.com/en-us/library/t9d4xf28(VS.85).aspx The main disadvantage of using the Script Interfaces is that they are difficult to use from .NET. 
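    To make the Script Control approach concrete before walking through the helper class below, here is a minimal sketch of driving the JScript engine from C#. It assumes you have already added the Microsoft Script Control 1.0 COM reference described in the next section; the AddCode()/Run() usage mirrors the calls the helper class makes:

    using System;
    using MSScriptControl; // COM reference: Microsoft Script Control 1.0

    class ScriptControlSketch
    {
        static void Main()
        {
            // Create the script engine and select JScript
            var sc = new ScriptControl();
            sc.Language = "JScript";
            sc.AllowUI = false;

            // Load some JavaScript source into the engine
            sc.AddCode("function addNumbers(a, b) { return a + b; }");

            // Invoke a function by name and capture the result
            object result = sc.Run("addNumbers", new object[] { 1, 3 });
            Console.WriteLine(result); // prints 4
        }
    }

    If Run() throws, the engine's error details are available through the IScriptControl Error property, which is exactly what the JavaScriptTestHelper class below wraps up.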
There is a great series of articles on using the Script Interfaces from C# located at http://www.drdobbs.com/184406028. I picked the easier alternative and used the Microsoft Script Control. The Microsoft Script Control is an ActiveX control that provides a higher level abstraction over the Window Script Interfaces. You can download the Microsoft Script Control from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac After you download the Microsoft Script Control, you need to add a reference to it to your project. Select the Visual Studio menu option Project, Add Reference to open the Add Reference dialog. Select the COM tab and add the Microsoft Script Control 1.0. Using the Script Control is easy. You call the Script Control AddCode() method to add JavaScript code to the Script Engine. Next, you call the Script Control Run() method to run a particular JavaScript function. The reference documentation for the Microsoft Script Control is located at the MSDN website: http://msdn.microsoft.com/en-us/library/aa227633%28v=vs.60%29.aspx Creating the JavaScript Code to Test To keep things simple, let’s imagine that you want to test the following JavaScript function named addNumbers() which simply adds two numbers together: MvcApplication1\Scripts\Math.js function addNumbers(a, b) { return 5; } Notice that the addNumbers() method always returns the value 5. Right-now, it will not pass a good unit test. Create this file and save it in your project with the name Math.js in your MVC project’s Scripts folder (Save the file in your actual MVC application and not your MVC test application). Creating the JavaScript Test Helper Class To make it easier to use the Microsoft Script Control in unit tests, we can create a helper class. This class contains two methods: LoadFile() – Loads a JavaScript file. Use this method to load the JavaScript file being tested or the JavaScript file containing the unit tests. ExecuteTest() – Executes the JavaScript code. Use this method to execute a JavaScript unit test. Here’s the code for the JavaScriptTestHelper class: JavaScriptTestHelper.cs   using System; using System.IO; using Microsoft.VisualStudio.TestTools.UnitTesting; using MSScriptControl; namespace MvcApplication1.Tests { public class JavaScriptTestHelper : IDisposable { private ScriptControl _sc; private TestContext _context; /// <summary> /// You need to use this helper with Unit Tests and not /// Basic Unit Tests because you need a Test Context /// </summary> /// <param name="testContext">Unit Test Test Context</param> public JavaScriptTestHelper(TestContext testContext) { if (testContext == null) { throw new ArgumentNullException("TestContext"); } _context = testContext; _sc = new ScriptControl(); _sc.Language = "JScript"; _sc.AllowUI = false; } /// <summary> /// Load the contents of a JavaScript file into the /// Script Engine. /// </summary> /// <param name="path">Path to JavaScript file</param> public void LoadFile(string path) { var fileContents = File.ReadAllText(path); _sc.AddCode(fileContents); } /// <summary> /// Pass the path of the test that you want to execute. 
/// </summary> /// <param name="testMethodName">JavaScript function name</param> public void ExecuteTest(string testMethodName) { dynamic result = null; try { result = _sc.Run(testMethodName, new object[] { }); } catch { var error = ((IScriptControl)_sc).Error; if (error != null) { var description = error.Description; var line = error.Line; var column = error.Column; var text = error.Text; var source = error.Source; if (_context != null) { var details = String.Format("{0} \r\nLine: {1} Column: {2}", source, line, column); _context.WriteLine(details); } } throw new AssertFailedException(error.Description); } } public void Dispose() { _sc = null; } } }     Notice that the JavaScriptTestHelper class requires a Test Context to be instantiated. For this reason, you can use the JavaScriptTestHelper only with a Visual Studio Unit Test and not a Basic Unit Test (These are two different types of Visual Studio project items). Add the JavaScriptTestHelper file to your MVC test application (for example, MvcApplication1.Tests). Creating the JavaScript Unit Test Next, we need to create the JavaScript unit test function that we will use to test the addNumbers() function. Create a folder in your MVC test project named JavaScriptTests and add the following JavaScript file to this folder: MvcApplication1.Tests\JavaScriptTests\MathTest.js /// <reference path="JavaScriptUnitTestFramework.js"/> function testAddNumbers() { // Act var result = addNumbers(1, 3); // Assert assert.areEqual(4, result, "addNumbers did not return right value!"); }   The testAddNumbers() function takes advantage of another JavaScript library named JavaScriptUnitTestFramework.js. This library contains all of the code necessary to make assertions. Add the following JavaScriptnitTestFramework.js to the same folder as the MathTest.js file: MvcApplication1.Tests\JavaScriptTests\JavaScriptUnitTestFramework.js var assert = { areEqual: function (expected, actual, message) { if (expected !== actual) { throw new Error("Expected value " + expected + " is not equal to " + actual + ". " + message); } } }; There is only one type of assertion supported by this file: the areEqual() assertion. Most likely, you would want to add additional types of assertions to this file to make it easier to write your JavaScript unit tests. Deploying the JavaScript Test Files This step is non-intuitive. When you use Visual Studio to run unit tests, Visual Studio creates a new folder and executes a copy of the files in your project. After you run your unit tests, your Visual Studio Solution will contain a new folder named TestResults that includes a subfolder for each test run. You need to configure Visual Studio to deploy your JavaScript files to the test run folder or Visual Studio won’t be able to find your JavaScript files when you execute your unit tests. You will get an error that looks something like this when you attempt to execute your unit tests: You can configure Visual Studio to deploy your JavaScript files by adding a Test Settings file to your Visual Studio Solution. It is important to understand that you need to add this file to your Visual Studio Solution and not a particular Visual Studio project. Right-click your Solution in the Solution Explorer window and select the menu option Add, New Item. Select the Test Settings item and click the Add button. After you create a Test Settings file for your solution, you can indicate that you want a particular folder to be deployed whenever you perform a test run. 
Select the menu option Test, Edit Test Settings to edit your test configuration file. Select the Deployment tab and select your MVC test project’s JavaScriptTest folder to deploy. Click the Apply button and the Close button to save the changes and close the dialog. Creating the Visual Studio Unit Test The very last step is to create the Visual Studio unit test (the MS Test unit test). Add a new unit test to your MVC test project by selecting the menu option Add New Item and selecting the Unit Test project item (Do not select the Basic Unit Test project item): The difference between a Basic Unit Test and a Unit Test is that a Unit Test includes a Test Context. We need this Test Context to use the JavaScriptTestHelper class that we created earlier. Enter the following test method for the new unit test: [TestMethod] public void TestAddNumbers() { var jsHelper = new JavaScriptTestHelper(this.TestContext); // Load JavaScript files jsHelper.LoadFile("JavaScriptUnitTestFramework.js"); jsHelper.LoadFile(@"..\..\..\MvcApplication1\Scripts\Math.js"); jsHelper.LoadFile("MathTest.js"); // Execute JavaScript Test jsHelper.ExecuteTest("testAddNumbers"); } This code uses the JavaScriptTestHelper to load three files: JavaScripUnitTestFramework.js – Contains the assert functions. Math.js – Contains the addNumbers() function from your MVC application which is being tested. MathTest.js – Contains the JavaScript unit test function. Next, the test method calls the JavaScriptTestHelper ExecuteTest() method to execute the testAddNumbers() JavaScript function. Running the Visual Studio JavaScript Unit Test After you complete all of the steps described above, you can execute the JavaScript unit test just like any other unit test. You can use the keyboard combination CTRL-R, CTRL-A to run all of the tests in the current Visual Studio Solution. Alternatively, you can use the buttons in the Visual Studio toolbar to run the tests: (Unfortunately, the Run All Impacted Tests button won’t work correctly because Visual Studio won’t detect that your JavaScript code has changed. Therefore, you should use either the Run Tests in Current Context or Run All Tests in Solution options instead.) The results of running the JavaScript tests appear side-by-side with the results of running the server tests in the Test Results window. For example, if you Run All Tests in Solution then you will get the following results: Notice that the TestAddNumbers() JavaScript test has failed. That is good because our addNumbers() function is hard-coded to always return the value 5. If you double-click the failing JavaScript test, you can view additional details such as the JavaScript error message and the line number of the JavaScript code that failed: Summary The goal of this blog entry was to explain an approach to creating JavaScript unit tests that can be easily integrated with Visual Studio and Visual Studio ALM. I described how you can use the Microsoft Script Control to execute JavaScript on the server. By taking advantage of the Microsoft Script Control, we were able to execute our JavaScript unit tests side-by-side with all of our other unit tests and view the results in the standard Visual Studio Test Results window. 
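    An aside on the deployment step covered above: MSTest can also deploy files per test method via the [DeploymentItem] attribute, which may let you skip the solution-wide Test Settings file. This is a sketch under the assumption that your Visual Studio version honors the attribute (some versions still require deployment to be enabled in the test settings):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class MathJavaScriptTests
    {
        public TestContext TestContext { get; set; }

        [TestMethod]
        // Copy the individual JavaScript files into the test run's
        // deployment folder so LoadFile() can find them by name.
        [DeploymentItem(@"JavaScriptTests\MathTest.js")]
        [DeploymentItem(@"JavaScriptTests\JavaScriptUnitTestFramework.js")]
        public void TestAddNumbers()
        {
            using (var jsHelper = new JavaScriptTestHelper(this.TestContext))
            {
                jsHelper.LoadFile("JavaScriptUnitTestFramework.js");
                jsHelper.LoadFile(@"..\..\..\MvcApplication1\Scripts\Math.js");
                jsHelper.LoadFile("MathTest.js");
                jsHelper.ExecuteTest("testAddNumbers");
            }
        }
    }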
You can download the code discussed in this blog entry from here: http://StephenWalther.com/downloads/Blog/JavaScriptUnitTesting/JavaScriptUnitTests.zip Before running this code, you need to first install the Microsoft Script Control which you can download from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac

    Read the article

  • Loading jQuery Consistently in a .NET Web App

    - by Rick Strahl
    One thing that frequently comes up in discussions when using jQuery is how to best load the jQuery library (as well as other commonly used and updated libraries) in a Web application. Specifically the issue is the one of versioning and making sure that you can easily update and switch versions of script files with application wide settings in one place and having your script usage reflect those settings in the entire application on all pages that use the script. Although I use jQuery as an example here, the same concepts can be applied to any script library - for example in my Web libraries I use the same approach for jQuery.ui and my own internal jQuery support library. The concepts used here can be applied both in WebForms and MVC. Loading jQuery Properly From CDN Before we look at a generic way to load jQuery via some server logic, let me first point out my preferred way to embed jQuery into the page. I use the Google CDN to load jQuery and then use a fallback URL to handle the offline or no Internet connection scenario. Why use a CDN? CDN links tend to be loaded more quickly since they are very likely to be cached in user's browsers already as jQuery CDN is used by many, many sites on the Web. Using a CDN also removes load from your Web server and puts the load bearing on the CDN provider - in this case Google - rather than on your Web site. On the downside, CDN links gives the provider (Google, Microsoft) yet another way to track users through their Web usage. Here's how I use jQuery CDN plus a fallback link on my WebLog for example: <!DOCTYPE HTML> <html> <head> <script src="//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js"></script> <script> if (typeof (jQuery) == 'undefined') document.write(unescape("%3Cscript " + "src='/Weblog/wwSC.axd?r=Westwind.Web.Controls.Resources.jquery.js' %3E%3C/script%3E")); </script> <title>Rick Strahl's Web Log</title> ... </head>   You can see that the CDN is referenced first, followed by a small script block that checks to see whether jQuery was loaded (jQuery object exists). If it didn't load another script reference is added to the document dynamically pointing to a backup URL. In this case my backup URL points at a WebResource in my Westwind.Web  assembly, but the URL can also be local script like src="/scripts/jquery.min.js". Important: Use the proper Protocol/Scheme for  for CDN Urls [updated based on comments] If you're using a CDN to load an external script resource you should always make sure that the script is loaded with the same protocol as the parent page to avoid mixed content warnings by the browser. You don't want to load a script link to an http:// resource when you're on an https:// page. The easiest way to use this is by using a protocol relative URL: <script src="//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js"></script> which is an easy way to load resources from other domains. This URL syntax will automatically use the parent page's protocol (or more correctly scheme). As long as the remote domains support both http:// and https:// access this should work. BTW this also works in CSS (with some limitations) and links. BTW, I didn't know about this until it was pointed out in the comments. This is a very useful feature for many things - ah the benefits of my blog to myself :-) Version Numbers When you use a CDN you notice that you have to reference a specific version of jQuery. 
When using local files you may not have to do this as you can rename your private copy of jQuery.js, but for CDN the references are always versioned. The version number is of course very important to ensure you getting the version you have tested with, but it's also important to the provider because it ensures that cached content is always correct. If an existing file was updated the updates might take a very long time to get past the locally cached content and won't refresh properly. The version number ensures you get the right version and not some cached content that has been changed but not updated in your cache. On the other hand version numbers also mean that once you decide to use a new version of the script you now have to change all your script references in your pages. Depending on whether you use some sort of master/layout page or not this may or may not be easy in your application. Even if you do use master/layout pages, chances are that you probably have a few of them and at the very least all of those have to be updated for the scripts. If you use individual pages for all content this issue then spreads to all of your pages. Search and Replace in Files will do the trick, but it's still something that's easy to forget and worry about. Personaly I think it makes sense to have a single place where you can specify common script libraries that you want to load and more importantly which versions thereof and where they are loaded from. Loading Scripts via Server Code Script loading has always been important to me and as long as I can remember I've always built some custom script loading routines into my Web frameworks. WebForms makes this fairly easy because it has a reasonably useful script manager (ClientScriptManager and the ScriptManager) which allow injecting script into the page easily from anywhere in the Page cycle. What's nice about these components is that they allow scripts to be injected by controls so components can wrap up complex script/resource dependencies more easily without having to require long lists of CSS/Scripts/Image includes. In MVC or pure script driven applications like Razor WebPages  the process is more raw, requiring you to embed script references in the right place. But its also more immediate - it lets you know exactly which versions of scripts to use because you have to manually embed them. In WebForms with different controls loading resources this often can get confusing because it's quite possible to load multiple versions of the same script library into a page, the results of which are less than optimal… In this post I look a simple routine that embeds jQuery into the page based on a few application wide configuration settings. It returns only a string of the script tags that can be manually embedded into a Page template. It's a small function that merely a string of the script tags shown at the begging of this post along with some options on how that string is comprised. You'll be able to specify in one place which version loads and then all places where the help function is used will automatically reflect this selection. Options allow specification of the jQuery CDN Url, the fallback Url and where jQuery should be loaded from (script folder, Resource or CDN in my case). While this is specific to jQuery you can apply this to other resources as well. For example I use a similar approach with jQuery.ui as well using practically the same semantics. 
Providing Resources in ControlResources In my Westwind.Web Web utility library I have a class called ControlResources which is responsible for holding resource Urls, resource IDs and string contants that reference those resource IDs. The library also provides a few helper methods for loading common scriptscripts into a Web page. There are specific versions for WebForms which use the ClientScriptManager/ScriptManager and script link methods that can be used in any .NET technology that can embed an expression into the output template (or code for that matter). The ControlResources class contains mostly static content - references to resources mostly. But it also contains a few static properties that configure script loading: A Script LoadMode (CDN, Resource, or script url) A default CDN Url A fallback url They are  static properties in the ControlResources class: public class ControlResources { /// <summary> /// Determines what location jQuery is loaded from /// </summary> public static JQueryLoadModes jQueryLoadMode = JQueryLoadModes.ContentDeliveryNetwork; /// <summary> /// jQuery CDN Url on Google /// </summary> public static string jQueryCdnUrl = "//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js"; /// <summary> /// jQuery CDN Url on Google /// </summary> public static string jQueryUiCdnUrl = "//ajax.googleapis.com/ajax/libs/jqueryui/1.8.16/jquery-ui.min.js"; /// <summary> /// jQuery UI fallback Url if CDN is unavailable or WebResource is used /// Note: The file needs to exist and hold the minimized version of jQuery ui /// </summary> public static string jQueryUiLocalFallbackUrl = "~/scripts/jquery-ui.min.js"; } These static properties are fixed values that can be changed at application startup to reflect your preferences. Since they're static they are application wide settings and respected across the entire Web application running. It's best to set these default in Application_Init or similar startup code if you need to change them for your application: protected void Application_Start(object sender, EventArgs e) { // Force jQuery to be loaded off Google Content Network ControlResources.jQueryLoadMode = JQueryLoadModes.ContentDeliveryNetwork; // Allow overriding of the Cdn url ControlResources.jQueryCdnUrl = "http://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js"; // Route to our own internal handler App.OnApplicationStart(); } With these basic settings in place you can then embed expressions into a page easily. In WebForms use: <!DOCTYPE html> <html> <head runat="server"> <%= ControlResources.jQueryLink() %> <script src="scripts/ww.jquery.min.js"></script> </head> In Razor use: <!DOCTYPE html> <html> <head> @Html.Raw(ControlResources.jQueryLink()) <script src="scripts/ww.jquery.min.js"></script> </head> Note that in Razor you need to use @Html.Raw() to force the string NOT to escape. Razor by default escapes string results and this ensures that the HTML content is properly expanded as raw HTML text. 
Both the WebForms and Razor output produce: <!DOCTYPE html> <html> <head> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js" type="text/javascript"></script> <script type="text/javascript"> if (typeof (jQuery) == 'undefined') document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/WebResource.axd?d=-b6oWzgbpGb8uTaHDrCMv59VSmGhilZP5_T_B8anpGx7X-PmW_1eu1KoHDvox-XHqA1EEb-Tl2YAP3bBeebGN65tv-7-yAimtG4ZnoWH633pExpJor8Qp1aKbk-KQWSoNfRC7rQJHXVP4tC0reYzVw2&t=634535391996872492' type='text/javascript'%3E%3C/script%3E"));</script> <script src="scripts/ww.jquery.min.js"></script> </head> which produces the desired effect for both CDN load and fallback URL. The implementation of jQueryLink is pretty basic of course: /// <summary> /// Inserts a script link to load jQuery into the page based on the jQueryLoadModes settings /// of this class. Default load is by CDN plus WebResource fallback /// </summary> /// <param name="url"> /// An optional explicit URL to load jQuery from. Url is resolved. /// When specified no fallback is applied /// </param> /// <returns>full script tag and fallback script for jQuery to load</returns> public static string jQueryLink(JQueryLoadModes jQueryLoadMode = JQueryLoadModes.Default, string url = null) { string jQueryUrl = string.Empty; string fallbackScript = string.Empty; if (jQueryLoadMode == JQueryLoadModes.Default) jQueryLoadMode = ControlResources.jQueryLoadMode; if (!string.IsNullOrEmpty(url)) jQueryUrl = WebUtils.ResolveUrl(url); else if (jQueryLoadMode == JQueryLoadModes.WebResource) { Page page = new Page(); jQueryUrl = page.ClientScript.GetWebResourceUrl(typeof(ControlResources), ControlResources.JQUERY_SCRIPT_RESOURCE); } else if (jQueryLoadMode == JQueryLoadModes.ContentDeliveryNetwork) { jQueryUrl = ControlResources.jQueryCdnUrl; if (!string.IsNullOrEmpty(jQueryCdnUrl)) { // check if jquery loaded - if it didn't we're not online and use WebResource fallbackScript = @"<script type=""text/javascript"">if (typeof(jQuery) == 'undefined') document.write(unescape(""%3Cscript src='{0}' type='text/javascript'%3E%3C/script%3E""));</script>"; fallbackScript = string.Format(fallbackScript, WebUtils.ResolveUrl(ControlResources.jQueryCdnFallbackUrl)); } } string output = "<script src=\"" + jQueryUrl + "\" type=\"text/javascript\"></script>"; // add in the CDN fallback script code if (!string.IsNullOrEmpty(fallbackScript)) output += "\r\n" + fallbackScript + "\r\n"; return output; } There's one dependency here on WebUtils.ResolveUrl() which resolves Urls without access to a Page/Control (another one of those features that should be in the runtime, not in the WebForms or MVC engine). You can see there's only a little bit of logic in this code that deals with potentially different load modes. I can load scripts from a Url, WebResources or - my preferred way - from CDN. Based on the static settings the scripts to embed are composed to be returned as simple string <script> tag(s). I find this extremely useful especially when I'm not connected to the internet so that I can quickly swap in a local jQuery resource instead of loading from CDN. While CDN loading with the fallback works it can be a bit slow as the CDN is probed first before the fallback kicks in. Switching quickly in one place makes this trivial. It also makes it very easy once a new version of jQuery rolls around to move up to the new version and ensure that all pages are using the new version immediately. 
I'm not trying to make this out as 'the' definitive way to load your resources, but rather provide it here as a pointer so you can apply your own logic to determine where scripts come from and how they load. You could automate this further by using configuration settings (as sketched above) or by reading the locations/preferences out of some sort of data/metadata store that can be updated dynamically, rather than through recompilation. FWIW, I use a very similar approach for loading jQuery UI and my own ww.jquery library - the same concept can be applied to any kind of script you might be loading from different locations. Hopefully some of you find this a useful addition to your toolset.

Resources

Google CDN for jQuery
Full ControlResources Source Code
ControlResource Documentation
Westwind.Web NuGet

This method is part of the Westwind.Web library of the West Wind Web Toolkit, or you can grab the Web library from NuGet and add it to your Visual Studio project. This package includes a host of Web related utilities and script support features.

© Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, jQuery.

    Read the article

  • Using JSON.NET for dynamic JSON parsing

    - by Rick Strahl
With the release of ASP.NET Web API as part of .NET 4.5 and MVC 4.0, JSON.NET has effectively pushed out the .NET native serializers to become the default serializer for Web API. JSON.NET is vastly more flexible than the built-in DataContractJsonSerializer or the older JavaScript serializer. The DataContractJsonSerializer in particular has been very problematic in the past because it can't deal with untyped objects for serialization - like values of type object, or anonymous types, which are quite common these days. The JavaScriptSerializer that came before it actually does support non-typed objects for serialization, but it can't do anything with untyped data coming in from JavaScript, and its overall extensibility model was pretty limited (JavaScriptSerializer is what MVC uses for JSON responses). JSON.NET provides a robust JSON serializer that has both high level and low level components, supports binary JSON, JSON contracts, XML to JSON conversion, LINQ to JSON and many, many more features than either of the built-in serializers. ASP.NET Web API now uses JSON.NET as its default serializer and it is pulled in as a NuGet dependency into Web API projects, which is great.

Dynamic JSON Parsing

One of the features that I think is getting ever more important is the ability to serialize and deserialize arbitrary JSON content dynamically - that is, without mapping the JSON directly onto a .NET type as DataContractJsonSerializer or JavaScriptSerializer do. Sometimes it isn't possible to map types due to the differences in languages (think collections, dictionaries etc.), and other times you simply don't have the structures in place, or don't want to create them, just to import the data. If this topic sounds familiar - you're right! I wrote about dynamic JSON parsing a few months back, before JSON.NET was added to Web API and when Web API and the System.Net HttpClient libraries included the System.Json classes like JsonObject and JsonArray. With the inclusion of JSON.NET in Web API those classes are now obsolete and no longer ship with Web API or the client libraries. I re-linked my original post to this one. In this post I'll discuss JToken, JObject and JArray, which are the dynamic JSON objects that make it very easy to create and retrieve JSON content on the fly without underlying types.

Why Dynamic JSON?

So, why dynamic JSON parsing rather than strongly typed parsing? Since applications are interacting more and more with third party services, it becomes ever more important to have easy access to those services with easy JSON parsing. Sometimes it just makes a lot of sense to pull only a small amount of data out of a large JSON document received from a service, because most of the time the third party service isn't directly related to your application's logic - and it makes little sense to map the entire service structure in your application. For example, recently I worked with the Google Maps Places API to return information about businesses close to me (or rather the app's location). The Google API returns a ton of information that my application had no interest in - all I needed was a few values out of the data. Dynamic JSON parsing makes it possible to grab this data without having to map the entire API to a C# data structure. Instead I could pull out the three or four values I needed from the API and directly store them on the business entities that needed to receive the data - no need to map the entire Maps API structure.
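To make that concrete, here's a small sketch of the 'grab only what you need' approach. The JSON below only loosely mimics the shape of a Places-style response - the field names are illustrative, not the actual Google API contract:

using System;
using Newtonsoft.Json.Linq;

class PlacesSample
{
    static void Main()
    {
        // Imagine this string came back from a large third-party service
        string json = @"{
            ""results"": [
                { ""name"": ""Paia Fish Market"",
                  ""vicinity"": ""100 Hana Hwy"",
                  ""geometry"": { ""location"": { ""lat"": 20.9, ""lng"": -156.3 } } }
            ]
        }";

        dynamic response = JObject.Parse(json);

        // Pull out just the handful of values the app cares about
        foreach (dynamic place in response.results)
        {
            string name = place.name;
            double lat = place.geometry.location.lat;
            double lng = place.geometry.location.lng;
            Console.WriteLine("{0} at {1},{2}", name, lat, lng);
        }
    }
}

Everything else in the response is simply never touched - no classes, no mapping code.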
Getting JSON.NET

The easiest way to use JSON.NET is to grab it via NuGet and add it as a reference to your project. You can add it to your project with:

PM> Install-Package Newtonsoft.Json

from the Package Manager Console, or by using Manage NuGet Packages in your project's References. As mentioned, if you're using ASP.NET Web API or MVC 4, JSON.NET will be added to your project automatically. Alternately you can also go to the CodePlex site and download the latest version including source code: http://json.codeplex.com/

Creating JSON on the fly with JObject and JArray

Let's start with creating some JSON on the fly. It's super easy to create a dynamic object structure with any of the JToken-derived JSON.NET objects. The most common JToken-derived classes you are likely to use are JObject and JArray. JToken implements IDynamicMetaObjectProvider and so supports the dynamic keyword, which makes it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure, using JObject for the album and the individual songs and JArray for the song collection:

[TestMethod]
public void JObjectOutputTest()
{
    // strongly typed instance
    var jsonObject = new JObject();

    // you can explicitly add values here using the class interface
    jsonObject.Add("Entered", DateTime.Now);

    // or cast to dynamic to dynamically add/read properties
    dynamic album = jsonObject;

    album.AlbumName = "Dirty Deeds Done Dirt Cheap";
    album.Artist = "AC/DC";
    album.YearReleased = 1976;

    album.Songs = new JArray() as dynamic;

    dynamic song = new JObject();
    song.SongName = "Dirty Deeds Done Dirt Cheap";
    song.SongLength = "4:11";
    album.Songs.Add(song);

    song = new JObject();
    song.SongName = "Love at First Feel";
    song.SongLength = "3:10";
    album.Songs.Add(song);

    Console.WriteLine(album.ToString());
}

This produces a complete JSON structure:

{
  "Entered": "2012-08-18T13:26:37.7137482-10:00",
  "AlbumName": "Dirty Deeds Done Dirt Cheap",
  "Artist": "AC/DC",
  "YearReleased": 1976,
  "Songs": [
    {
      "SongName": "Dirty Deeds Done Dirt Cheap",
      "SongLength": "4:11"
    },
    {
      "SongName": "Love at First Feel",
      "SongLength": "3:10"
    }
  ]
}

Notice that JSON.NET does a nice job formatting the JSON, so it's easy to read and paste into blog posts :-). JSON.NET includes a bunch of configuration options that control how JSON is generated. Typically the defaults are just fine, but you can override them with the JsonSerializerSettings object for most operations. The important thing about this code is that there's no explicit type used for holding the values to serialize to JSON. Rather, the JSON.NET objects are the containers that receive the data as I build up my JSON structure dynamically, simply by adding properties. This means this code can be entirely driven at runtime, without the compile-time constraints of a structure for the JSON output. Here I use JObject to create an album 'object' and immediately cast it to dynamic. JObject() is similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JObject values are stored in pseudo collections of key/value pairs that are exposed as properties through the IDynamicMetaObjectProvider interface implemented by JSON.NET's JToken base class. For objects the syntax is very clean - you add simple typed values as properties. For nested objects and arrays you have to explicitly create a new JObject or JArray, cast it to dynamic, and then add properties and items to it.
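As an example of tweaking those options, the sketch below serializes the jsonObject built in the test above with a couple of common overrides. The settings used are standard JSON.NET options; the particular values are just illustrative choices:

using Newtonsoft.Json;

// Reusing the jsonObject instance from the test above
var settings = new JsonSerializerSettings
{
    NullValueHandling = NullValueHandling.Ignore,       // drop null properties
    DateFormatHandling = DateFormatHandling.IsoDateFormat
};

// This overload takes the formatting and the settings explicitly
string json = JsonConvert.SerializeObject(jsonObject, Formatting.Indented, settings);
Console.WriteLine(json);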
Always remember though that these values are dynamic - which means no IntelliSense and no compiler type checking. It's up to you to ensure that the names and values you create are accessed consistently and without typos in your code. Note that you can also access the JObject instance directly (not as dynamic) and get access to the underlying JObject type. This means you can assign properties by string, which can be useful for fully data-driven JSON generation from other structures. Below you can see both styles of access next to each other:

// strongly typed instance
var jsonObject = new JObject();

// you can explicitly add values here
jsonObject.Add("Entered", DateTime.Now);

// expando-style instance where you can just 'use' properties
dynamic album = jsonObject;
album.AlbumName = "Dirty Deeds Done Dirt Cheap";

JContainer (the base class for JObject and JArray) is a collection, so you can also iterate over the properties at runtime easily:

foreach (var item in jsonObject)
{
    Console.WriteLine(item.Key + " " + item.Value.ToString());
}

The functionality of the JSON objects is very similar to .NET's ExpandoObject, and if you've used that before, you're already familiar with how the dynamic interfaces to the JSON objects work.

Importing JSON with JObject.Parse() and JArray.Parse()

The JToken-derived classes support importing JSON via the Parse() and Load() methods, which read JSON data from a string or from various streams respectively. Essentially this is the core JSON parsing that turns a JSON string into a tree of JToken objects, which can then be referenced using familiar dynamic object syntax. Here's a simple example:

public void JValueParsingTest()
{
    var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"",
                        ""Entered"":""2012-03-16T00:03:33.245-10:00""}";

    dynamic json = JValue.Parse(jsonString);

    // values require casting
    string name = json.Name;
    string company = json.Company;
    DateTime entered = json.Entered;

    Assert.AreEqual(name, "Rick");
    Assert.AreEqual(company, "West Wind");
}

The JSON string represents an object with three properties, which is parsed into a JObject and cast to dynamic. Once cast to dynamic I can go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are actually of type JToken, and I have to cast them to their appropriate types first before I can do type comparisons, as in the Asserts at the end of the test method. This is required because of the way dynamic typing works: the target type can't be determined from the method signature of Assert.AreEqual(object, object). I have to either assign the dynamic value to a variable, as I did above, or explicitly cast it ((string)json.Name) in the actual method call. The JSON structure can be much more complex than this simple example.
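If you'd rather not use dynamic at all, the same values can be read through the JObject interface directly, where the generic Value<T>() method handles the JToken conversion. A quick sketch, using the same JSON string as the parsing test above:

// Parse without dynamic - everything stays JObject/JToken
JObject jalbum = JObject.Parse(jsonString);

// Value<T>() converts the underlying JToken to the requested CLR type
string name = jalbum.Value<string>("Name");
DateTime entered = jalbum.Value<DateTime>("Entered");

// the indexer returns a JToken, which you cast explicitly
string company = (string)jalbum["Company"];

This style trades the clean property syntax for compile-time visibility of the conversions you're performing.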
Here's another example of an array of albums serialized to JSON and then parsed with JArray.Parse():

[TestMethod]
public void JsonArrayParsingTest()
{
    var jsonString = @"[
    {
        ""Id"": ""b3ec4e5c"",
        ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"",
        ""Artist"": ""AC/DC"",
        ""YearReleased"": 1976,
        ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"",
        ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"",
        ""AmazonUrl"": ""http://www.amazon.com/gp/product/…ASIN=B00008BXJ4"",
        ""Songs"": [
            { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Dirty Deeds Done Dirt Cheap"", ""SongLength"": ""4:11"" },
            { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Love at First Feel"", ""SongLength"": ""3:10"" },
            { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Big Balls"", ""SongLength"": ""2:38"" }
        ]
    },
    {
        ""Id"": ""7b919432"",
        ""AlbumName"": ""End of the Silence"",
        ""Artist"": ""Henry Rollins Band"",
        ""YearReleased"": 1992,
        ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"",
        ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"",
        ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"",
        ""Songs"": [
            { ""AlbumId"": ""7b919432"", ""SongName"": ""Low Self Opinion"", ""SongLength"": ""5:24"" },
            { ""AlbumId"": ""7b919432"", ""SongName"": ""Grip"", ""SongLength"": ""4:51"" }
        ]
    }
]";

    JArray jsonVal = JArray.Parse(jsonString);
    dynamic albums = jsonVal;

    foreach (dynamic album in albums)
    {
        Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")");
        foreach (dynamic song in album.Songs)
        {
            Console.WriteLine("\t" + song.SongName);
        }
    }

    Console.WriteLine(albums[0].AlbumName);
    Console.WriteLine(albums[0].Songs[1].SongName);
}

JObject and JArray in ASP.NET Web API

Of course these types also work in ASP.NET Web API controller methods. If you want, you can accept parameters using these objects or return them back to the client. The following contrived example receives dynamic JSON input, then creates a new dynamic JSON object and returns it based on data from the first:

[HttpPost]
public JObject PostAlbumJObject(JObject jAlbum)
{
    // dynamic input from inbound JSON
    dynamic album = jAlbum;

    // create a new JSON object to write out
    dynamic newAlbum = new JObject();

    // create properties on the new instance
    // with values from the first
    newAlbum.AlbumName = album.AlbumName + " New";
    newAlbum.NewProperty = "something new";
    newAlbum.Songs = new JArray();

    foreach (dynamic song in album.Songs)
    {
        song.SongName = song.SongName + " New";
        newAlbum.Songs.Add(song);
    }

    return newAlbum;
}

The raw POST request to the server looks something like this:

POST http://localhost/aspnetwebapi/samples/PostAlbumJObject HTTP/1.1
User-Agent: Fiddler
Content-type: application/json
Host: localhost
Content-Length: 88

{AlbumName: "Dirty Deeds",Songs:[ { SongName: "Problem Child"},{ SongName: "Squealer"}]}

and the output that comes back looks like this:

{
  "AlbumName": "Dirty Deeds New",
  "NewProperty": "something new",
  "Songs": [
    {
      "SongName": "Problem Child New"
    },
    {
      "SongName": "Squealer New"
    }
  ]
}

The original values are echoed back with something extra appended to demonstrate that we're working with a new object.
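The strongly typed examples in the next sections map this JSON onto Album and Song types that the post doesn't actually show. Here's a minimal sketch of what those classes plausibly look like - the property names mirror the JSON above; everything beyond that is guesswork:

using System;
using System.Collections.Generic;

public class Album
{
    public string Id { get; set; }
    public string AlbumName { get; set; }
    public string Artist { get; set; }
    public int YearReleased { get; set; }
    public DateTime Entered { get; set; }
    public string AlbumImageUrl { get; set; }
    public string AmazonUrl { get; set; }
    public List<Song> Songs { get; set; }
}

public class Song
{
    public string AlbumId { get; set; }
    public string SongName { get; set; }
    public string SongLength { get; set; }
}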
When you receive or return a JObject, JValue, JToken or JArray instance in a Web API method, Web API ignores normal content negotiation and assumes your content is going to be received and returned as JSON - effectively the parameter and result types explicitly determine the input and output format, which is nice.

Dynamic to Strong Type Mapping

You can also map JObject and JArray instances to strongly typed objects, so you can mix dynamic and static typing in the same piece of code. Using the two-album jsonString shown earlier, the code below takes the array of albums, picks out a single album and casts it to a static Album instance:

[TestMethod]
public void JsonParseToStrongTypeTest()
{
    JArray albums = JArray.Parse(jsonString) as JArray;

    // pick out one album
    JObject jalbum = albums[0] as JObject;

    // copy to a static Album instance
    Album album = jalbum.ToObject<Album>();

    Assert.IsNotNull(album);
    Assert.AreEqual(album.AlbumName, jalbum.Value<string>("AlbumName"));
    Assert.IsTrue(album.Songs.Count > 0);
}

This is pretty damn useful for the scenario I mentioned earlier - you can read a large chunk of JSON, dynamically walk the property hierarchy down to the item you want to access, and then either access the specific item dynamically (as shown earlier) or map a part of the JSON to a strongly typed object. That's very powerful if you think about it - it leaves you in total control to decide what's dynamic and what's static.

Strongly typed JSON Parsing

With all this talk of dynamic, let's not forget that JSON.NET of course also does strongly typed serialization, which is drop dead easy. Here's a simple example of how to serialize and deserialize an object with JSON.NET:

[TestMethod]
public void StronglyTypedSerializationTest()
{
    // create an album to serialize and deserialize
    var album = new Album()
    {
        AlbumName = "Dirty Deeds Done Dirt Cheap",
        Artist = "AC/DC",
        Entered = DateTime.Now,
        YearReleased = 1976,
        Songs = new List<Song>()
        {
            new Song() { SongName = "Dirty Deeds Done Dirt Cheap", SongLength = "4:11" },
            new Song() { SongName = "Love at First Feel", SongLength = "3:10" }
        }
    };

    // serialize to string
    string json2 = JsonConvert.SerializeObject(album, Formatting.Indented);
    Console.WriteLine(json2);

    // make sure we can deserialize back
    var album2 = JsonConvert.DeserializeObject<Album>(json2);

    Assert.IsNotNull(album2);
    Assert.IsTrue(album2.AlbumName == "Dirty Deeds Done Dirt Cheap");
    Assert.IsTrue(album2.Songs.Count == 2);
}

JsonConvert is a high level static class that wraps lower level functionality, but you can also use the JsonSerializer class, which allows you to serialize/parse to and from streams. It's a little more work, but gives you a bit more control. The functionality available is easy to discover with IntelliSense, and that's good because there's not a lot in the way of documentation that's actually useful.

Summary

JSON.NET is a pretty complete JSON implementation with lots of different choices for JSON parsing, from dynamic parsing to static serialization to complex querying of JSON objects using LINQ. It's good to see this open source library getting integrated into .NET, pushing out the old and tired stock .NET parsers so that we finally have a bit more flexibility - and extensibility - in our JSON parsing. Good to go!
Resources

Sample Test Project
http://json.codeplex.com/

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET, Web API, AJAX.

    Read the article

  • Quick guide to Oracle IRM 11g: Configuring SSL

    - by Simon Thorpe
Quick guide to Oracle IRM 11g index

So far in this guide we have an IRM server up and running; however, I skipped over SSL configuration in the previous article because I wanted to cover it in more detail now. You can, if you wish, not bother with setting up SSL, but considering this is a security technology it is worthwhile doing.

Contents

Setting up a one way, self signed SSL certificate in WebLogic
Setting up an official SSL certificate in Apache 2.x
Configuring Apache to proxy traffic to the IRM server

There are two common scenarios in which an Oracle IRM server is configured. For a development or evaluation system, people usually communicate directly with the WebLogic server running the IRM service. However, in a production environment - and for some proof of concept evaluations that require a setup reflecting a production system - the traffic to the IRM server travels via a web server proxy, commonly Apache. In this guide we are building an Oracle Enterprise Linux based IRM service, and this article will go over the configuration of SSL in WebLogic and also in Apache. As in the past articles, we are going to use two host names in the configuration below:

irm.company.com will refer to the public Apache server
irm.company.internal will refer to the internal WebLogic IRM server

Setting up a one way, self signed SSL certificate in WebLogic

First let's look at creating a simple self signed SSL certificate to be used in WebLogic. This is a quick and easy way to get SSL working in your environment; however, the downside is that no browser is going to trust this certificate and you'll need to manually install it onto any machine communicating with the server. This is fine for development or when you have only a few users evaluating the system, but for any significant use it's usually better to have a fully trusted certificate in use, and I explain that in the next section. For now, let's go through creating, installing and testing a self signed certificate. We use a Java utility to create the certificates; open a console and run the following commands. Note you should choose your own secure passwords whenever you see password below.

[oracle@irm /] source /oracle/middleware/wlserver_10.3/server/bin/setWLSEnv.sh
[oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/
[oracle@irm /] java utils.CertGen -selfsigned -certfile MyOwnSelfCA.cer -keyfile MyOwnSelfKey.key -keyfilepass password -cn "irm.oracle.demo"
[oracle@irm /] java utils.ImportPrivateKey -keystore MyOwnIdentityStore.jks -storepass password -keypass password -alias trustself -certfile MyOwnSelfCA.cer.pem -keyfile MyOwnSelfKey.key.pem -keyfilepass password
[oracle@irm /] keytool -import -trustcacerts -alias trustself -keystore TrustMyOwnSelf.jks -file MyOwnSelfCA.cer.der

We now have two Java key stores, MyOwnIdentityStore.jks and TrustMyOwnSelf.jks. These contain the keys and certificates we will use in WebLogic Server. Now we need to tell the IRM server to use these stores when setting up SSL connections for incoming requests. Make sure the Admin server is running, log in to the WebLogic Console at http://irm.company.internal:7001/console and do the following:

In the menu on the left, select the + next to Environment to expose the submenu, then click on Servers. You will see two servers in the list, AdminServer(admin) and IRM_server1.
If the IRM server is running, shut it down, either by hitting CONTROL + C in the console window it was started from, or by switching to the CONTROL tab, selecting IRM_server1 and then selecting the Shutdown menu and Force Shutdown Now. In the Configuration tab select IRM_server1 and switch to the Keystores tab. By default WebLogic Server uses its own demo identity and trust; we are now going to switch to the self signed ones we've just created. Select the Change button, switch to Custom Identity and Custom Trust and hit save. Now we have to complete the resulting fields; the settings I've used in my evaluation server are below.

Identity
Custom Identity Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/MyOwnIdentityStore.jks
Custom Identity Keystore Type: JKS
Custom Identity Keystore Passphrase: password
Confirm Custom Identity Keystore Passphrase: password

Trust
Custom Trust Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/TrustMyOwnSelf.jks
Custom Trust Keystore Type: JKS
Custom Trust Keystore Passphrase: password
Confirm Custom Trust Keystore Passphrase: password

Now click on the SSL tab for IRM_server1 and enter the alias and passphrase; in my demo the details are:

Identity
Private Key Alias: trustself
Private Key Passphrase: password
Confirm Private Key Passphrase: password

And hit save. Now let's test a connection to the IRM server over HTTPS using SSL. Go back to a console window and start the IRM server; a quick reminder on how to do this is:

[oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/bin
[oracle@irm /] ./startManagedWeblogic IRM_server1

Once running, open a browser and head to the SSL port of the server. By default the IRM server will be listening on the URL https://irm.company.internal:16101/irm_rights. Note in the example image on the right the port is 7002, because it's a system that has the IRM services installed on the Admin server; this isn't typical (or advisable). Your system is going to have a separate managed server which will be listening on port 16101. Once you open this address you will notice that your browser complains that the server certificate is untrusted. The images on the right show how Firefox displays this error. You are going to be prompted every time you create a new SSL session with the server, both from the browser and, more annoyingly, from the IRM Desktop. If you plan on always using a self signed certificate, it is worth adding it to the Windows certificate store so that you do not keep being told the certificate is untrusted when accessing sealed content. Follow these instructions (which are for Internet Explorer 8; they may vary for your version of IE):

Start Internet Explorer and open the URL to your IRM server over SSL, e.g. https://irm.company.internal:16101/irm_rights.
IE will complain about the certificate; click on Continue to this website (not recommended).
From the IE Tools menu select Internet Options, and from the resulting dialog select Security, then click on Trusted Sites and then the Sites button.
Add to the list of trusted sites a URL which matches the server you are accessing, e.g. https://irm.company.internal/, and select OK.
Now refresh the page you were accessing; next to the URL you should see a red cross and the words Certificate Error. Click on this button and select View Certificates.
You will now see a dialog with the details of the self signed certificate, and the Install Certificate...
button should be enabled. Click on this to start the wizard. Click next and you'll be asked where the certificate should be installed. Change the option to Place all certificates in the following store, select Browse, choose the Trusted Root Certification Authorities location and hit OK. You'll then be prompted to install the certificate; answer yes. You also need to import the root signing certificate into the same location, so once again select the red Certificate Error option, and this time, when viewing the certificate, switch to the Certification Path tab, where you should see a CertGenCAB certificate. Select this, click on View Certificate, and go through the same process as above to import it into the store. Finally, close all instances of the IE browser and re-access the IRM server URL; this time you should not receive any errors.

Setting up an official SSL certificate in Apache 2.x

At this point we have an IRM server that you can communicate with over SSL. However, this certificate isn't trusted by any browser because its path of trust doesn't end in a recognized certificate authority (CA). Also, you are communicating directly with the WebLogic server over a non-standard SSL port, 16101. In a production environment it is common to have another device handle the initial public internet traffic and then proxy it to the WebLogic server. The diagram below shows a very simplified view of this type of deployment. What I'm going to walk through next is configuring Apache to proxy traffic to a WebLogic server and also to use a real SSL certificate from an official CA.

The first step is to configure Apache to handle incoming requests over SSL. In this guide I am configuring the IRM service on Oracle Enterprise Linux 5 update 3 with Apache 2.2.3, which came with the OpenSSL and mod_ssl components. Before I purchase an SSL certificate, I need to generate a certificate request from the server. Oracle.com uses Verisign; for my own personal needs I use cheaper certificates from GoDaddy. The following instructions are specific to Apache, but there are many references out there for other web servers. For Apache I have OpenSSL and the commands are:

[oracle@irm /] cd /usr/bin
[oracle@irm bin] openssl genrsa -des3 -out irm-apache-server.key 2048
Generating RSA private key, 2048 bit long modulus
............................+++
.........+++
e is 65537 (0x10001)
Enter pass phrase for irm-apache-server.key:
Verifying - Enter pass phrase for irm-apache-server.key:

[oracle@irm bin] openssl req -new -key irm-apache-server.key -out irm-apache-server.csr
Enter pass phrase for irm-apache-server.key:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [GB]:US
State or Province Name (full name) [Berkshire]:CA
Locality Name (eg, city) [Newbury]:San Francisco
Organization Name (eg, company) [My Company Ltd]:Oracle
Organizational Unit Name (eg, section) []:Security
Common Name (eg, your name or your server's hostname) []:irm.company.com
Email Address []:[email protected]

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:testing
An optional company name []:

Make sure you remember the pass phrase used in the initial key generation; you will need it when configuring Apache later. In the /usr/bin directory there are now two new files. irm-apache-server.csr contains the certificate request and is what you cut and paste, or upload, to your certificate authority when you purchase and validate your SSL certificate. In response you will typically get two files: your server certificate, and another certificate file that will likely contain a set of certificates from your CA which validate your certificate's trust. Next we need to configure Apache to use these files. Typically there is an ssl.conf file where all the SSL configuration is done; on my Oracle Enterprise Linux server this file is located at /etc/httpd/conf.d/ssl.conf, and I've added the following lines:

<VirtualHost irm.company.com>
    # Setup SSL for irm.company.com
    ServerName irm.company.com
    SSLEngine On
    SSLCertificateFile /oracle/secure/irm.company.com.crt
    SSLCertificateKeyFile /oracle/secure/irm.company.com.key
    SSLCertificateChainFile /oracle/secure/gd_bundle.crt
</VirtualHost>

After restarting Apache (apachectl restart) I can now attempt to connect to the Apache server in a web browser at https://irm.company.com/. If all is configured correctly I should see an Apache test page delivered over HTTPS.

Configuring Apache to proxy traffic to the IRM server

The final piece in setting up SSL is to have Apache proxy requests for the IRM server, but do so securely. The requests to Apache will be over HTTPS using a legitimate certificate, and we can also configure Apache to proxy those requests internally across to the IRM server using SSL with the self signed certificate we generated at the start of this article. To do this proxying we use the WebLogic Web Server plugin for Apache, which you can download here from Oracle. Download the zip file and extract it onto the server. The extraction reveals a set of zip files, each one specific to a supported web server. In my instance I am using Apache 2.2 32-bit on a 64-bit Oracle Enterprise Linux server. If you are not sure what version your Apache server is, run the command /usr/sbin/httpd -V and you'll see the version and whether it is 32 or 64 bit. Mine is a 32-bit server, so I need to extract the file WLSPlugin1.1-Apache2.2-linux32-x86.zip. Then from the resulting lib folder, copy the file mod_wl.so into /usr/lib/httpd/modules/.

First we want to test that the plug-in will work for regular HTTP traffic. Edit the httpd.conf for Apache and add the following section at the bottom.
LoadModule weblogic_module modules/mod_wl.so

<IfModule mod_weblogic.c>
   WebLogicHost irm.company.internal
   WebLogicPort 16100
   WLLogFile /tmp/wl-proxy.log
</IfModule>

<Location /irm_rights>
   SetHandler weblogic-handler
</Location>

<Location /irm_desktop>
   SetHandler weblogic-handler
</Location>

<Location /irm_sealing>
   SetHandler weblogic-handler
</Location>

<Location /irm_services>
   SetHandler weblogic-handler
</Location>

Now restart Apache again (apachectl restart) and open a browser to http://irm.company.com/irm_rights. Apache will proxy the HTTP traffic from port 80 of your Apache server to the IRM service listening on port 16100 of the WebLogic managed server. Note above I have included all four of the Locations you might wish to proxy: /irm_rights is the URL to the management website, /irm_desktop is the URL the IRM Desktop uses to communicate, /irm_sealing is for web services based document sealing and /irm_services is for the IRM server web services. The last two are typically only used when you have the IRM server integrated with another application, and it is unlikely you'd be accessing those resources from the public facing Apache server; however, just in case, I've mentioned them above.

Now let's enable SSL communication from Apache to WebLogic. In the ZIP file we extracted there are some more modules we need to copy into the Apache folder. Looking back in the lib folder that we extracted, copy the following files into /usr/lib/httpd/modules/:

libwlssl.so
libnnz11.so
libclntsh.so.11.1

The documentation states that this should be all you need to do, but I found that I also needed to create an environment variable called LD_LIBRARY_PATH pointing to the folder /usr/lib/httpd/modules/. If I didn't do this, starting Apache with the WebLogic module configured for SSL would throw the error:

[crit] (20014)Internal error: WL SSL Init failed for server: (null) on 0

So I had to edit the file /etc/profile and add the following lines at the bottom. You may already have the LD_LIBRARY_PATH variable defined; if so, simply add this path to it.

LD_LIBRARY_PATH=/usr/lib/httpd/modules/
export LD_LIBRARY_PATH

The WebLogic plug-in uses an Oracle Wallet to store the required certificates. You'll need to copy the self signed certificate from the IRM server over to the Apache server: copy MyOwnSelfCA.cer.der into the same folder where you are storing your public certificates, in my example /oracle/secure. It's worth mentioning that these files should ONLY be readable by root (the user Apache runs as). Now let's create an Oracle Wallet and import the self signed certificate from the IRM server. The orapki tool was included in the bin folder of the Apache 1.1 plugin zip you extracted.

orapki wallet create -wallet /oracle/secure/my-wallet -auto_login_only
orapki wallet add -wallet /oracle/secure/my-wallet -trusted_cert -cert MyOwnSelfCA.cer.der -auto_login_only

Finally, change the httpd.conf to reflect that we want the WebLogic Apache plug-in to use HTTPS/SSL and not just plain HTTP.

<IfModule mod_weblogic.c>
   WebLogicHost irm.company.internal
   WebLogicPort 16101
   SecureProxy ON
   WLSSLWallet /oracle/secure/my-wallet
   WLLogFile /tmp/wl-proxy.log
</IfModule>

Then restart Apache once more and go back to the browser to test the communication. Opening the URL https://irm.company.com/irm_rights will proxy your request to the WebLogic server at https://irm.company.internal:16101/irm_rights.
At this point you have a fully functional Oracle IRM service; the next step is to create a sealed document and test the entire system.

    Read the article

  • Differences Between NHibernate and Entity Framework

    - by Ricardo Peres
Introduction

NHibernate and Entity Framework are two of the most popular O/RM frameworks in the .NET world. Although they share some functionality, there are some aspects on which they are quite different. This post will describe those differences and will hopefully help you get started with the one you know less. Mind you, this is a personal selection of features to compare; it is by no means an exhaustive list.

History

First, a bit of history. NHibernate is an open-source project that was first ported from Java's venerable Hibernate framework, one of the first O/RM frameworks, but nowadays it is not tied to it; for example, it has .NET-specific features and has evolved in different ways from its Java counterpart. The current version is 3.3, with 3.4 on the horizon. It currently targets .NET 3.5, but can be used in .NET 4 as well; it just makes no use of any .NET 4-specific functionality. You can find its home page at NHForge. Entity Framework 1 came out with .NET 3.5 and is now on its second major version, despite being called version 4. Code First sits on top of it but came separately, and will also continue to be released out of band with major .NET distributions. It is currently on version 4.3.1, and version 5 will be released together with .NET Framework 4.5. All versions target the current version of .NET at the time of their release. Its home page is at MSDN.

Architecture

In NHibernate, there is a separation between the Unit of Work and the configuration and model instances. You start off by creating a Configuration object, where you specify all global NHibernate settings such as the database and dialect to use, the batch sizes, the mappings, etc., then you build an ISessionFactory from it. The ISessionFactory holds the model and metadata that are tied to a particular database and to the settings that came from the Configuration object, and there will typically be only one instance of each in a process. Finally, you create instances of ISession from the ISessionFactory; the ISession is the NHibernate representation of the Unit of Work and Identity Map. This is a lightweight object: it basically opens and closes a database connection as required and keeps track of the entities associated with it. ISession objects are cheap to create and dispose, because all of the model complexity is stored in the ISessionFactory and Configuration objects. As for Entity Framework, the ObjectContext/DbContext holds the configuration and model and acts as the Unit of Work, holding references to all of the known entity instances. This class is therefore not as lightweight as its NHibernate counterpart, and it is not uncommon to see examples where an instance is cached on a field.

Mappings

Both NHibernate and Entity Framework (Code First) support the use of POCOs to represent entities; no base classes are required (or even possible, in the case of NHibernate). As for mapping to and from the database, NHibernate supports three types of mappings:

- XML-based, which has the advantage of not tying the entity classes to a particular O/RM; the XML files can be deployed as files on the file system or as embedded resources in an assembly;
- Attribute-based, for keeping both the entities and the database details in the same place, at the expense of polluting the entity classes with NHibernate-specific attributes;
- Strongly-typed code-based, which allows dynamic creation of the model and strong typing of it, so that if, for example, a property name changes, the mapping is updated as well (a minimal sketch of this style follows below).
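To give a feel for that third option, here is a minimal sketch of NHibernate's mapping-by-code (available since NHibernate 3.2). The Product entity is invented purely for illustration:

using NHibernate.Mapping.ByCode;
using NHibernate.Mapping.ByCode.Conformist;

public class Product
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    public virtual decimal Price { get; set; }
}

// Strongly typed mapping: renaming Product.Name in code
// automatically updates this mapping as well
public class ProductMapping : ClassMapping<Product>
{
    public ProductMapping()
    {
        Table("Products");
        Id(x => x.Id, m => m.Generator(Generators.Identity));
        Property(x => x.Name, m => m.NotNullable(true));
        Property(x => x.Price);
    }
}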
Entity Framework can use:

- Attribute-based mappings (although attributes cannot express all of the available possibilities - for example, cascading);
- Strongly-typed code mappings.

Database Support

With NHibernate you can use mostly any database you want, including: SQL Server, SQL Server Compact, SQL Server Azure, Oracle, DB2, PostgreSQL, MySQL, Sybase Adaptive Server/SQL Anywhere, Firebird, SQLite, Informix, and any database reachable through OLE DB or ODBC. Out of the box, Entity Framework only supports SQL Server, but a number of providers exist, both free and commercial, for some of the most used databases, such as Oracle and MySQL. See a list here.

Inheritance Strategies

Both NHibernate and Entity Framework support the three canonical inheritance strategies: Table Per Class Hierarchy (Single Table Inheritance), Table Per Type (Class Table Inheritance) and Table Per Concrete Type (Concrete Table Inheritance).

Associations

Regarding associations, both support one to one, one to many and many to many. However, NHibernate offers far more collection types:

- Bags of entities or values: unordered, possibly with duplicates;
- Lists of entities or values: ordered, indexed by a number column;
- Maps of entities or values: indexed by either an entity or any value;
- Sets of entities or values: unordered, no duplicates;
- Arrays of entities or values: indexed, immutable.

Querying

NHibernate exposes several querying APIs (a sketch contrasting the main flavors follows at the end of this section):

- LINQ is probably the most used nowadays, and really does not need to be introduced;
- Hibernate Query Language (HQL) is a database-agnostic, object-oriented, SQL-like language that has existed since NHibernate's creation and still offers the most advanced querying possibilities; well suited for dynamic queries, even if built through string concatenation;
- The Criteria API is an implementation of the Query Object pattern, where you create a semi-abstract conceptual representation of the query you wish to execute by means of a class model; also a good choice for dynamic querying;
- QueryOver offers a similar API to Criteria, but using strongly-typed LINQ expressions instead of strings; for this reason, although more refactor-friendly than Criteria, it is also less suited for dynamic queries;
- SQL, including stored procedures, can also be used;
- Integration with the Lucene.NET indexer is available.

As for Entity Framework:

- LINQ to Entities is fully supported, and its implementation is considered very complete; it is the API of choice for most developers;
- Entity SQL, HQL's counterpart, is also an object-oriented, database-independent querying language that can be used for dynamic queries;
- SQL, of course, is also supported.

Caching

Both NHibernate and Entity Framework, of course, feature a first-level cache. NHibernate also supports a second-level cache that can be shared among multiple ISessionFactory instances, even in different processes/machines:

- Hashtable (in-memory);
- SysCache (uses ASP.NET as the cache provider);
- SysCache2 (same as above but with support for SQL Server SQL Dependencies);
- Prevalence;
- SharedCache;
- Memcached;
- Redis;
- NCache;
- AppFabric Caching.

Out of the box, Entity Framework does not have any second-level cache mechanism; however, there are some public samples that show how one can be added.
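Coming back to the querying APIs listed above, the sketch below shows the same query written with LINQ, HQL and QueryOver. It again uses the illustrative Product entity from the mapping sketch, so treat it as a shape of the APIs rather than production code:

using System.Linq;
using NHibernate;
using NHibernate.Linq;

public static class QueryFlavors
{
    public static void Run(ISession session)
    {
        // LINQ: strongly typed and refactor friendly
        var cheapLinq = session.Query<Product>()
                               .Where(p => p.Price < 10m)
                               .ToList();

        // HQL: database-agnostic, object-oriented SQL-like string
        var cheapHql = session.CreateQuery(
                "from Product p where p.Price < :price")
            .SetParameter("price", 10m)
            .List<Product>();

        // QueryOver: Criteria-style API driven by lambda expressions
        var cheapQueryOver = session.QueryOver<Product>()
                                    .Where(p => p.Price < 10m)
                                    .List();
    }
}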
ID Generators

NHibernate supports different ID generation strategies, coming from the database and otherwise:

- Identity (for SQL Server, MySQL, and other databases that support identity columns);
- Sequence (for Oracle, PostgreSQL, and others that support sequences);
- Trigger-based;
- HiLo;
- Sequence HiLo (for databases that support sequences);
- Several GUID flavors, in both GUID and string format;
- Increment (for single-user scenarios);
- Assigned (you must know what you're doing);
- Sequence-style (either uses an actual sequence or a single-column table);
- Table of ids;
- Pooled (similar to HiLo but stores high values in a table);
- Native (uses whatever mechanism the current database supports, identity or sequence).

Entity Framework only supports:

- Identity generation;
- GUIDs;
- Assigned values.

Properties

NHibernate supports properties of entity types (one to one or many to one), collections (one to many or many to many) as well as scalars and enumerations. It offers a mechanism for having complex property types generated from the database, which even includes support for querying. It also supports properties originated from SQL formulas. Entity Framework only supports scalars, entity types and collections. Enumeration support will come in the next version.

Events and Interception

NHibernate has a very rich event model that exposes more than 20 events, either for synchronous pre-execution or asynchronous post-execution, including:

- Pre/Post-Load;
- Pre/Post-Delete;
- Pre/Post-Insert;
- Pre/Post-Update;
- Pre/Post-Flush.

It also features interception of class instancing and SQL generation. As for Entity Framework, only two events exist:

- ObjectMaterialized (after loading an entity from the database);
- SavingChanges (before saving changes, which includes deleting, inserting and updating).

Tracking Changes

For NHibernate as well as Entity Framework, all changes are tracked by their respective Unit of Work implementation. Entities can be attached to and detached from it; Entity Framework does, however, also support self-tracking entities.

Optimistic Concurrency Control

NHibernate supports all of the imaginable scenarios:

- SQL Server's ROWVERSION;
- Oracle's ORA_ROWSCN;
- A column containing a date and time;
- A column containing a version number;
- All/dirty columns comparison.

Entity Framework is more focused on SQL Server, so it only supports:

- SQL Server's ROWVERSION;
- Comparing all/some columns.

Batching

NHibernate has full support for insertion batching, but only if the ID generator in use is not database-based (for example, it cannot be used with Identity), whereas Entity Framework has no batching at all.

Cascading

Both support cascading for collections and associations: when an entity is deleted, its conceptual children are also deleted. NHibernate also offers the possibility of setting the foreign key column on the children to NULL instead of removing them.

Flushing Changes

NHibernate's ISession has a FlushMode property that can have the following values (see the sketch below):

- Auto: changes are sent to the database when necessary, for example, if there are dirty instances of an entity type and a query is performed against that entity type, or if the ISession is being disposed;
- Commit: changes are sent when committing the current transaction;
- Never: changes are only sent when explicitly calling Flush().

As for Entity Framework, changes have to be explicitly sent through a call to AcceptAllChanges()/SaveChanges().
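A rough sketch of the flushing difference in code - the session factory wiring is elided, MyDbContext and its Products set are hypothetical, and Product is still the illustrative entity from earlier:

// NHibernate: the FlushMode decides when changes hit the database
using (ISession session = sessionFactory.OpenSession())
using (ITransaction tx = session.BeginTransaction())
{
    session.FlushMode = FlushMode.Commit;   // only flush on commit
    session.Save(new Product { Name = "Widget", Price = 5m });
    tx.Commit();                            // changes are flushed here
}

// Entity Framework: nothing is sent until SaveChanges() is called
using (var ctx = new MyDbContext())         // MyDbContext is hypothetical
{
    ctx.Products.Add(new Product { Name = "Widget", Price = 5m });
    ctx.SaveChanges();
}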
Lazy Loading

NHibernate supports lazy loading for:

- Associated entities (one to one, many to one);
- Collections (one to many, many to many);
- Scalar properties (think of BLOBs or CLOBs).

Entity Framework only supports lazy loading for:

- Associated entities;
- Collections.

Generating and Updating the Database

Both NHibernate and Entity Framework Code First (with the Migrations API) allow creating the database model from the mappings, and updating it if the mappings change.

Extensibility

As you can guess, NHibernate is far more extensible than Entity Framework. Basically, everything can be extended: ID generation, the LINQ-to-SQL transformation, HQL, native SQL support, custom column types, custom association collections, SQL generation, supported databases, and so on. With Entity Framework your options are more limited, not least because practically no information exists as to what can be extended/changed. It does feature a provider model that can be extended to support any database.

Integration With Other Microsoft APIs and Tools

When it comes to integration with Microsoft technologies, it will come as no surprise that Entity Framework offers the best support. For example, the following technologies are fully supported:

- ASP.NET (through the EntityDataSource);
- ASP.NET Dynamic Data;
- WCF Data Services;
- WCF RIA Services;
- Visual Studio (through the integrated designer).

Documentation

This is another point where Entity Framework is superior: NHibernate lacks, for starters, an up-to-date API reference synchronized with its current version. It does have a community mailing list, blogs and wikis, although they are not much used. Entity Framework has a number of resources on MSDN and, of course, several forums and discussion groups exist.

Conclusion

Like I said, this is a personal list. It may come as a surprise to some that Entity Framework is behind NHibernate in so many aspects, but it is true that NHibernate is much older and, due to its open-source nature, is not tied to product-specific timeframes and can thus evolve much more rapidly. I do like both, and I choose whichever is best for the job at hand. I am looking forward to the changes in EF5, which will add significant value to an already interesting product. So, what do you think? Did I forget anything important, or is there anything else worth talking about? Looking forward to your comments!

    Read the article

< Previous Page | 330 331 332 333 334 335 336 337 338 339 340 341  | Next Page >