Search Results

Search found 512 results on 21 pages for 'sean b durkin'.

Page 20 of 21

  • Parse.com REST API in Java (NOT Android)

    - by Orange Peel
    I am trying to use the Parse.com REST API in Java. I have gone through the four solutions listed at https://parse.com/docs/api_libraries and have selected Parse4J. I imported the source into NetBeans, along with the following libraries:

        org.slf4j:slf4j-api:jar:1.6.1
        org.apache.httpcomponents:httpclient:jar:4.3.2
        org.apache.httpcomponents:httpcore:jar:4.3.1
        org.json:json:jar:20131018
        commons-codec:commons-codec:jar:1.9
        junit:junit:jar:4.11
        ch.qos.logback:logback-classic:jar:0.9.28
        ch.qos.logback:logback-core:jar:0.9.28

    I ran the example code from https://github.com/thiagolocatelli/parse4j:

        Parse.initialize(APP_ID, APP_REST_API_ID); // I replaced these with mine
        ParseObject gameScore = new ParseObject("GameScore");
        gameScore.put("score", 1337);
        gameScore.put("playerName", "Sean Plott");
        gameScore.put("cheatMode", false);
        gameScore.save();

    It complained that org.apache.commons.logging was missing, so I downloaded and imported that too. When I ran the code again I got:

        Exception in thread "main" java.lang.NoSuchMethodError: org.slf4j.spi.LocationAwareLogger.log(Lorg/slf4j/Marker;Ljava/lang/String;ILjava/lang/String;Ljava/lang/Throwable;)V
        at org.apache.commons.logging.impl.SLF4JLocationAwareLog.debug(SLF4JLocationAwareLog.java:120)
        at org.apache.http.client.protocol.RequestAddCookies.process(RequestAddCookies.java:122)
        at org.apache.http.protocol.ImmutableHttpProcessor.process(ImmutableHttpProcessor.java:131)
        at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:193)
        at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:85)
        at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:108)
        at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:186)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
        at org.parse4j.command.ParseCommand.perform(ParseCommand.java:44)
        at org.parse4j.ParseObject.save(ParseObject.java:450)

    I could probably fix this with another import, but I suspect something else would then pop up. I tried the other libraries with similar results: a bunch of missing dependencies. Has anyone actually used the REST API successfully in Java? If so, I would be grateful if you shared which library or libraries you used and anything else required to get it going successfully. I am using NetBeans. Thanks.


  • Remove a hover class from a checkbox once clicked

    - by seangeraghty
    I am trying to remove a hover class applied to a checkbox via CSS once the box has been clicked. Does anyone know how to do this? JSFiddle: http://jsfiddle.net/sa9fe/

    The checkbox markup is:

        <div>
          <input type="checkbox" id="checkbox-1-1" class="regular-checkbox flaticon-boxing3" />
          <input type="checkbox" id="checkbox-1-2" class="regular-checkbox" />
          <input type="checkbox" id="checkbox-1-3" class="regular-checkbox" />
          <input type="checkbox" id="checkbox-1-4" class="regular-checkbox" />
        </div>

    And the CSS for the checkboxes is as follows:

        .regular-checkbox {
          vertical-align: middle;
          text-align: center;
          margin-left: auto;
          margin-right: auto;
          align: center;
          color: #39c;
          width: 140px;
          height: 140px;
          -webkit-border-radius: 6px;
          -moz-border-radius: 6px;
          border-radius: 6px;
          background-color: #fff;
          border: solid 1px #ccc;
          -webkit-appearance: none;
          background-color: #fff;
          display: inline-block;
          position: relative;
        }

        .regular-checkbox:checked {
          background-color: #39c;
          color: #fff !important;
        }

        .regular-checkbox:hover {
          background-color: #f0f7f9;
        }

        .regular-checkbox:checked:after {
          color: #fff;
          position: absolute;
          top: 0px;
          left: 3px;
          color: #99a1a7;
        }

    So any suggestions? Also, does anyone know how to change the highlight? At the moment it seems to highlight the edges of the box at a border radius of 3px, whereas the boxes I am using are 6px. Thanks, Sean (newbie to coding)


  • Problem with IE and jQuery qTip plugin

    - by user272899
    I am having problems with the jQuery qTip plugin. It works fine in Firefox (see here: http://movieo.no-ip.org/ and hover over the first image), but it doesn't work in IE. This is the code:

        $('.moviebox').each(function() {
          $(this).qtip({
            content: $(this).children('.info'),
            show: 'mouseover',
            hide: 'mouseout',
            style: { name: 'light' },
            position: {
              corner: {
                target: 'rightbottom',
                tooltip: 'bottomleft'
              }
            }
          });
        });

    And the HTML:

        <!--start moviebox-->
        <div class="moviebox">
          <a href="#">
            <img src="http://1.bp.blogspot.com/_mySxtRcQIag/S6deHcoChaI/AAAAAAAAObc/Z1Xg3aB_wkU/s200/rising_sun.jpg" />
          </a>
          <!--start infobox-->
          <div class="info">
            <span>Rising Sun (2006)</span>
            <div class="description"><strong>Description:</strong><br />
            test test test test test test test test test test test test test test test test</div>
            <img src="http://1.bp.blogspot.com/_mySxtRcQIag/S6deHcoChaI/AAAAAAAAObc/Z1Xg3aB_wkU/s200/rising_sun.jpg" />
            <div class="cast"><strong>Cast:</strong><br />
            Sean Connery</div>
            <div class="rating"><strong>Rating:</strong><br />5stars</div>
          </div>
          <!--end infobox-->
        </div>
        <!--end moviebox-->

    Why wouldn't that work in IE? Beats me. Check out movieo.no-ip.org for the whole source.


  • Visual Studio 2008 / ASP.NET 3.5 / C# -- issues with intellisense, references, and builds

    - by goober
    Hey all, hoping you can help me; the strangest thing seems to have happened with my VS install. System config: Windows 7 Pro x64, Visual Studio 2008 SP1, C#, ASP.NET 3.5. I have two web site projects in a solution. I am referencing NUnit / NHibernate (I did this by right-clicking on the project and selecting "Add Reference"; I've done this for several projects in the past). Things were working fine but recently stopped, and I can't figure out why.

    IntelliSense completely disappears for any files in my App_Code directory, and none of the references are recognized there (they are recognized by any file in the root directory of the web site project). Additionally, pretty simple commands like the following (in Page_Load) fail (assume TextBox1 is definitely an element on the page):

        if (Page.IsPostBack)
        {
            string test1;
            test1 = TextBox1.Text;
        }

    It says that all the page elements are null or that it can't access them. At first I thought it was me, but given the combination of issues, it seems to be Visual Studio itself. I've tried clearing the temp directories and rebuilding the solution. I've also checked Tools > Options > Text Editor to ensure IntelliSense is turned on. I'd appreciate any help you can give! Thanks, Sean


  • XML Schema: Can I make some of an attribute's values be required but still allow other values?

    - by scrotty
    (Note: I cannot change the structure of the XML I receive; I am only able to change how I validate it.)

    Let's say I can get XML like this:

        <Address Field="Street" Value="123 Main"/>
        <Address Field="StreetPartTwo" Value="Unit B"/>
        <Address Field="State" Value="CO"/>
        <Address Field="Zip" Value="80020"/>
        <Address Field="SomeOtherCrazyValue" Value="Foo"/>

    I need to create an XSD schema that validates that "Street", "State" and "Zip" must be present, but I don't care if "StreetPartTwo" or "SomeOtherCrazyValue" is present. If I knew that only the three I care about could be included, I could do this:

        <xs:element name="Address" type="addressType" maxOccurs="unbounded" minOccurs="3"/>

        <xs:complexType name="addressType">
          <xs:attribute name="Field" use="required">
            <xs:simpleType>
              <xs:restriction base="xs:string">
                <xs:enumeration value="Street"/>
                <xs:enumeration value="State"/>
                <xs:enumeration value="Zip"/>
              </xs:restriction>
            </xs:simpleType>
          </xs:attribute>
        </xs:complexType>

    But this won't work in my case because I may also receive those other Address elements (which also have "Field" attributes) that I don't care about. Any ideas how I can ensure the stuff I care about is present but let the other stuff in too? TIA! Sean
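
    The excerpt ends with the question still open. For context, this is a co-occurrence constraint, which XSD 1.0 enumerations alone cannot express (XSD 1.1 assertions or Schematron can). One common workaround, sketched below in C# with LINQ to XML, is to relax the schema and check the required field values in code. This is one possible approach rather than the thread's accepted answer; the file name is a placeholder, and the snippet assumes the Address elements sit under a single root element.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Xml.Linq;

        class AddressFieldCheck
        {
            static void Main()
            {
                // Path is a placeholder for the received document.
                XDocument doc = XDocument.Load("addresses.xml");

                // Field values that must each appear at least once.
                string[] required = { "Street", "State", "Zip" };

                // Collect every Field value actually present.
                var present = new HashSet<string>(
                    doc.Descendants("Address")
                       .Select(a => (string)a.Attribute("Field")));

                var missing = required.Where(f => !present.Contains(f)).ToList();
                Console.WriteLine(missing.Any()
                    ? "Missing required fields: " + string.Join(", ", missing)
                    : "All required address fields are present.");
            }
        }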


  • SharePoint Saturday Michigan 2010 Recap, Slides, and Photos

    - by Brian Jackett
    This past weekend I attended SharePoint Saturday Michigan (SPSMI) in Ann Arbor, Michigan.  For those unfamiliar, SharePoint Saturday is a community-driven event where various speakers gather to present at a FREE conference on all topics related to SharePoint.  This was the third SharePoint Saturday I've attended and the second I've spoken at.  I believe it was announced today that about 210 people total attended the event.  I was very happy with the turnout, especially the ratio of male to female attendees.  Typically with computer-related conferences the ratio leans towards more males attending, but both Peter Serzo (one of the conference organizers) and I commented to each other that at the end of the day it appeared to be close to 40% women in the crowd.  So here's my recap of the weekend.

    Arrival

    Friday afternoon I drove up from Columbus, OH to Ann Arbor, MI and arrived around 4pm, attempting to avoid the rush hour traffic and construction backups.  That turned out to be a good idea, because other speakers coming up Friday got stuck on a highway which literally closed down in both directions due to a bad accident.  I was talking my friend Sean McDonough through the highway closing, and this was the first time I had seen a solid black traffic line on Google Maps.  Most of us are familiar with green, yellow, and red, but this line was black, if that tells you how bad it got.

    Speaker "Dinner"

    Fast forward a few hours and it was time for the speaker "dinner."  I put "dinner" in quotes because with this night alone SPSMI set a new bar for the nicest and most extravagant speaker appreciation event for a SharePoint Saturday.  By tapping into some very influential contacts, the conference organizers were able to provide a truck limo (yep, you heard right) with refreshments, access to an underground suite at the Palace of Auburn Hills, and courtside tickets to see the Detroit Pistons play that night.  Being a Michigan native I have to say that I was absolutely floored by this experience and very thankful to our conference organizers Peter, Sebastian, and Jesse, along with Trillium Teamologies.

    Sessions

    The actual conference started Saturday morning at 9am with the keynote by Rob Collie, the Microsoft program manager for PowerPivot.  The day continued and I attended the following sessions:

    Mike Watson (@mikewat) – "SharePoint 2010 Fight Night: Devs vs. Admins"
    Karl Swedeberg (@kswedberg) – "A Walk on the Client Side with jQuery"
    [my session] Brian Jackett (@briantjackett) – "Real World Deployment of SharePoint 2007 Solutions"
    Jeff Willinger (@jwillie) – "Social Computing and Collaboration Inside and Outside the 4 Walls"
    Paul Schaeflein (@paulschaeflein) – "PowerShell for the SharePoint Developer"

    My Presentation

    I had a great time presenting my session on Deploying SharePoint 2007 Solutions, but it wasn't without its fair share of technical issues.  As my session was right after lunch, I came into my room 10 minutes early to set up my laptop, slides, and demos.  As a quick background note: a few months ago I got an upgraded laptop from my company Sogeti and have been dual-booting it between XP (factory installed) and Windows Server 2008 R2 with Hyper-V.  As such, I had prepared all of my demo virtual machines to run under Hyper-V.  About 3 minutes before my session was scheduled to start, though, it became apparent that I did not have the correct display drivers to connect Windows Server 2008 R2 to the projector…

    As you can imagine, this was a slight cause for concern, as I was potentially going to be unable to give my presentation.  Luckily, I usually prepare for such unforeseen issues and had my presentation and some spare VMs that would run on XP on my external hard drive.  Knowing this, I rebooted my machine into XP and began my presentation without slides until about 5 minutes into the session, when everything was up and running on XP.  Despite this being the first time I gave this presentation, I have to say it was one of my favorites I've given so far.  The audience was very engaged in the session and I received some great, positive feedback afterwards.  Thanks to all who attended my session; I appreciate it very much.

    Link to Presentation Files

    For those of you who attended my session and would like my slides or demo PowerShell scripts, they can be found on my SkyDrive at the link below.  Also, if you have a few minutes and wouldn't mind rating my session, I have this session posted on SpeakerRate.  As speakers we always appreciate any and all feedback attendees offer, so thank you if you are able to provide any.

    SkyDrive folder with session files
    Rate my SharePoint 2007 Solutions session

    Picture Albums

    For everyone else, here are my pictures from the weekend.  The first link is to my Facebook album, which has tagging (I recommend this one).  The second is to my Live album if you care for higher-resolution images.

    http://www.facebook.com/album.php?aid=2154482&id=21905041&l=a3fb72ee8c
    View Full Album

    Conclusion

    A big thank you goes out to all of the organizers, speakers, sponsors, and attendees of SPSMI.  As I've said so many times, without each and every one of you these events wouldn't be possible.  I thoroughly enjoyed this trip back to my home state and presenting a new session.  For those interested in my upcoming schedule: I will be giving two sessions on PowerShell at SharePoint Saturday Charlotte in April, helping plan Stir Trek: Iron Man Edition in May, and submitting sessions to Day of .Net Ann Arbor in May as well.  Beyond that I haven't planned out any travels.  Thanks for reading my recap.  Look forward to more technical posts now that I have a short break in conferences.

    -Frog Out


  • Measuring Usability with Common Industry Format (CIF) Usability Tests

    - by Applications User Experience
    Sean Rice, Manager, Applications User Experience

    A User-Centered Research and Design Process

    The Oracle Fusion Applications user experience was five years in the making. The development of this suite included an extensive and comprehensive user experience design process: ethnographic research, low-fidelity workflow prototyping, high-fidelity user interface (UI) prototyping, iterative formative usability testing, development feedback and iteration, and sales and customer evaluation throughout the design cycle. However, this process does not stop when our products are released. We conduct summative usability testing using the ISO 25062 Common Industry Format (CIF) for usability test reports as an organizational framework. CIF tests allow us to measure the overall usability of our released products. These studies provide benchmarks that allow for comparisons of a specific product release against previous versions of our product and against other products in the marketplace.

    What Is a CIF Usability Test?

    CIF refers to the internationally standardized method for reporting usability test findings used by the software industry. The CIF is based on a formal, lab-based test that is used to benchmark the usability of a product in terms of human performance and subjective data. The CIF was developed and is endorsed by more than 375 software customer and vendor organizations, led by the National Institute for Standards and Technology (NIST), a US government entity. NIST sponsored the CIF through the American National Standards Institute (ANSI) and International Organization for Standardization (ISO) standards-making processes. Oracle played a key role in developing the CIF. The CIF report format and metrics are consistent with the ISO 9241-11 definition of usability: "The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use." Our goal in conducting CIF tests is to measure performance and satisfaction of a representative sample of users on a set of core tasks and to help predict how usable a product will be with the larger population of customers.

    Why Do We Perform CIF Testing?

    The overarching purpose of the CIF for usability test reports is to promote the incorporation of usability into the procurement decision-making process for interactive products. CIF provides a common format for vendors to report the methods and results of usability tests to customer organizations, and enables customers to compare the usability of our software to that of other suppliers. CIF also enables us to compare our current software with previous versions of our software.

    CIF Testing for Fusion Applications

    Oracle Fusion Applications comprises more than 100 modules in seven different product families. These modules encompass more than 400 task flows and 400 user roles. Due to resource constraints, we cannot perform comprehensive CIF testing across the entire product suite. Therefore, we had to develop meaningful inclusion criteria and work with other stakeholders across the applications development organization to prioritize product areas for testing. Ultimately, we want to test the product areas for which customers might be most interested in seeing CIF data. We also want to build credibility with customers; we need to be able to make the case to current and prospective customers that the product areas tested are representative of the product suite as a whole.

    Our goal is to test the top use cases for each product. The primary activity in the scoping process was to work with the individual product teams to identify the key products and business process task flows in each product to test. We prioritized these products and flows through a series of negotiations among the user experience managers, product strategy, and product management directors for each of the primary product families within the Oracle Fusion Applications suite (Human Capital Management, Supply Chain Management, Customer Relationship Management, Financials, Projects, and Procurement). The end result of the scoping exercise was a list of 47 proposed CIF tests for the Fusion Applications product suite.

    Figure 1. A participant completes tasks during a usability test in Oracle's Usability Labs

    Fusion Supplier Portal CIF Test

    The first Fusion CIF test was completed on the Supplier Portal application in July of 2011. Fusion Supplier Portal is part of an integrated suite of Procurement applications that helps supplier companies manage orders, schedules, shipments, invoices, negotiations, and payments. The user roles targeted for the usability study were Supplier Account Receivables Specialists and Supplier Sales Representatives, including both experienced and inexperienced users across a wide demographic range. The test specifically focused on the following functionality and features:

    Manage payments – view payments
    Manage invoices – view invoice status and create invoices
    Manage account information – create new contact, review bank account information
    Manage agreements – find and view agreement, upload agreement lines, confirm status of agreement lines upload
    Manage purchase orders (PO) – view history of PO, request change to PO, find orders
    Manage negotiations – respond to request for a quote, check the status of a negotiation response

    These product areas were selected to represent the most important subset of features and functionality of the flow, in terms of frequency and criticality of use by customers. A total of 20 users participated in the usability study. The results of the Supplier Portal evaluation were favorable and exceeded our expectations.

    Figure 2. Fusion Supplier Portal

    Next Studies

    We plan to conduct two Fusion CIF usability studies per product family over the next nine months. The next product to be tested will be Self-service Procurement. End users are currently being recruited to participate in this usability study, and the test sessions are scheduled to begin during the last week of November.


  • Atmospheric Scattering

    - by Lawrence Kok
    I'm trying to implement atmospheric scattering based on Sean O'Neil's algorithm that was published in GPU Gems 2, but I am having trouble getting the shader to work. My latest attempts resulted in: http://img253.imageshack.us/g/scattering01.png/ I downloaded O'Neil's sample code from http://http.download.nvidia.com/developer/GPU_Gems_2/CD/Index.html and made minor adjustments to the shader 'SkyFromAtmosphere' so that it would run in AMD RenderMonkey. In the images you can see that a form of banding occurs with a bluish tone, but it is only applied to one half of the sphere; the other half is completely black. The banding also appears at the zenith instead of the horizon, and for some reason I managed to get a Pac-Man shape. I would appreciate it if somebody could show me what I'm doing wrong.

    Vertex shader:

        uniform mat4 matView;
        uniform vec4 view_position;
        uniform vec3 v3LightPos;

        const int nSamples = 3;
        const float fSamples = 3.0;
        const vec3 Wavelength = vec3(0.650,0.570,0.475);
        const vec3 v3InvWavelength = 1.0f / vec3(
            Wavelength.x * Wavelength.x * Wavelength.x * Wavelength.x,
            Wavelength.y * Wavelength.y * Wavelength.y * Wavelength.y,
            Wavelength.z * Wavelength.z * Wavelength.z * Wavelength.z);
        const float fInnerRadius = 10;
        const float fOuterRadius = fInnerRadius * 1.025;
        const float fInnerRadius2 = fInnerRadius * fInnerRadius;
        const float fOuterRadius2 = fOuterRadius * fOuterRadius;
        const float fScale = 1.0 / (fOuterRadius - fInnerRadius);
        const float fScaleDepth = 0.25;
        const float fScaleOverScaleDepth = fScale / fScaleDepth;
        const vec3 v3CameraPos = vec3(0.0, fInnerRadius * 1.015, 0.0);
        const float fCameraHeight = length(v3CameraPos);
        const float fCameraHeight2 = fCameraHeight * fCameraHeight;
        const float fm_ESun = 150.0;
        const float fm_Kr = 0.0025;
        const float fm_Km = 0.0010;
        const float fKrESun = fm_Kr * fm_ESun;
        const float fKmESun = fm_Km * fm_ESun;
        const float fKr4PI = fm_Kr * 4 * 3.141592653;
        const float fKm4PI = fm_Km * 4 * 3.141592653;

        varying vec3 v3Direction;
        varying vec4 c0, c1;

        float scale(float fCos)
        {
            float x = 1.0 - fCos;
            return fScaleDepth * exp(-0.00287 + x*(0.459 + x*(3.83 + x*(-6.80 + x*5.25))));
        }

        void main( void )
        {
            // Get the ray from the camera to the vertex, and its length (which is the far point of the ray passing through the atmosphere)
            vec3 v3FrontColor = vec3(0.0, 0.0, 0.0);
            vec3 v3Pos = normalize(gl_Vertex.xyz) * fOuterRadius;
            vec3 v3Ray = v3CameraPos - v3Pos;
            float fFar = length(v3Ray);
            v3Ray = normalize(v3Ray);

            // Calculate the ray's starting position, then calculate its scattering offset
            vec3 v3Start = v3CameraPos;
            float fHeight = length(v3Start);
            float fDepth = exp(fScaleOverScaleDepth * (fInnerRadius - fCameraHeight));
            float fStartAngle = dot(v3Ray, v3Start) / fHeight;
            float fStartOffset = fDepth*scale(fStartAngle);

            // Initialize the scattering loop variables
            float fSampleLength = fFar / fSamples;
            float fScaledLength = fSampleLength * fScale;
            vec3 v3SampleRay = v3Ray * fSampleLength;
            vec3 v3SamplePoint = v3Start + v3SampleRay * 0.5;

            // Now loop through the sample rays
            for(int i=0; i<nSamples; i++)
            {
                float fHeight = length(v3SamplePoint);
                float fDepth = exp(fScaleOverScaleDepth * (fInnerRadius - fHeight));
                float fLightAngle = dot(normalize(v3LightPos), v3SamplePoint) / fHeight;
                float fCameraAngle = dot(normalize(v3Ray), v3SamplePoint) / fHeight;
                float fScatter = (-fStartOffset + fDepth*( scale(fLightAngle) - scale(fCameraAngle)))/* 0.25f*/;
                vec3 v3Attenuate = exp(-fScatter * (v3InvWavelength * fKr4PI + fKm4PI));
                v3FrontColor += v3Attenuate * (fDepth * fScaledLength);
                v3SamplePoint += v3SampleRay;
            }

            // Finally, scale the Mie and Rayleigh colors and set up the varying variables for the pixel shader
            vec4 newPos = vec4( (gl_Vertex.xyz + view_position.xyz), 1.0);
            gl_Position = gl_ModelViewProjectionMatrix * vec4(newPos.xyz, 1.0);
            gl_Position.z = gl_Position.w * 0.99999;
            c1 = vec4(v3FrontColor * fKmESun, 1.0);
            c0 = vec4(v3FrontColor * (v3InvWavelength * fKrESun), 1.0);
            v3Direction = v3CameraPos - v3Pos;
        }

    Fragment shader:

        uniform vec3 v3LightPos;
        varying vec3 v3Direction;
        varying vec4 c0;
        varying vec4 c1;

        const float g = -0.90f;
        const float g2 = g * g;
        const float Exposure = 2;

        void main(void)
        {
            float fCos = dot(normalize(v3LightPos), v3Direction) / length(v3Direction);
            float fMiePhase = 1.5 * ((1.0 - g2) / (2.0 + g2)) * (1.0 + fCos*fCos) / pow(1.0 + g2 - 2.0*g*fCos, 1.5);
            gl_FragColor = c0 + fMiePhase * c1;
            gl_FragColor.a = 1.0;
        }


  • GLSL Atmospheric Scattering Issue

    - by mtf1200
    I am attempting to use Sean O'Neil's shaders to accomplish atmospheric scattering. For now I am just using SkyFromSpace and GroundFromSpace. The atmosphere works fine, but the planet itself is just a giant dark sphere with a white blotch that follows the camera. I think the problem might rest in the "v3Attenuation" variable, as when this is removed the sphere is shown (albeit without scattering). Here is the vertex shader. Thanks for the time!

        uniform mat4 g_WorldViewProjectionMatrix;
        uniform mat4 g_WorldMatrix;
        uniform vec3 m_v3CameraPos;           // The camera's current position
        uniform vec3 m_v3LightPos;            // The direction vector to the light source
        uniform vec3 m_v3InvWavelength;       // 1 / pow(wavelength, 4) for the red, green, and blue channels
        uniform float m_fCameraHeight;        // The camera's current height
        uniform float m_fCameraHeight2;       // fCameraHeight^2
        uniform float m_fOuterRadius;         // The outer (atmosphere) radius
        uniform float m_fOuterRadius2;        // fOuterRadius^2
        uniform float m_fInnerRadius;         // The inner (planetary) radius
        uniform float m_fInnerRadius2;        // fInnerRadius^2
        uniform float m_fKrESun;              // Kr * ESun
        uniform float m_fKmESun;              // Km * ESun
        uniform float m_fKr4PI;               // Kr * 4 * PI
        uniform float m_fKm4PI;               // Km * 4 * PI
        uniform float m_fScale;               // 1 / (fOuterRadius - fInnerRadius)
        uniform float m_fScaleDepth;          // The scale depth (i.e. the altitude at which the atmosphere's average density is found)
        uniform float m_fScaleOverScaleDepth; // fScale / fScaleDepth

        attribute vec4 inPosition;

        vec3 v3ELightPos = vec3(g_WorldMatrix * vec4(m_v3LightPos, 1.0));
        vec3 v3ECameraPos = vec3(g_WorldMatrix * vec4(m_v3CameraPos, 1.0));

        const int nSamples = 2;
        const float fSamples = 2.0;

        varying vec4 color;

        float scale(float fCos)
        {
            float x = 1.0 - fCos;
            return m_fScaleDepth * exp(-0.00287 + x*(0.459 + x*(3.83 + x*(-6.80 + x*5.25))));
        }

        void main(void)
        {
            gl_Position = g_WorldViewProjectionMatrix * inPosition;

            // Get the ray from the camera to the vertex and its length (which is the far point of the ray passing through the atmosphere)
            vec3 v3Pos = vec3(g_WorldMatrix * inPosition);
            vec3 v3Ray = v3Pos - v3ECameraPos;
            float fFar = length(v3Ray);
            v3Ray /= fFar;

            // Calculate the closest intersection of the ray with the outer atmosphere (which is the near point of the ray passing through the atmosphere)
            float B = 2.0 * dot(m_v3CameraPos, v3Ray);
            float C = m_fCameraHeight2 - m_fOuterRadius2;
            float fDet = max(0.0, B*B - 4.0 * C);
            float fNear = 0.5 * (-B - sqrt(fDet));

            // Calculate the ray's starting position, then calculate its scattering offset
            vec3 v3Start = m_v3CameraPos + v3Ray * fNear;
            fFar -= fNear;
            float fDepth = exp((m_fInnerRadius - m_fOuterRadius) / m_fScaleDepth);
            float fCameraAngle = dot(-v3Ray, v3Pos) / fFar;
            float fLightAngle = dot(v3ELightPos, v3Pos) / fFar;
            float fCameraScale = scale(fCameraAngle);
            float fLightScale = scale(fLightAngle);
            float fCameraOffset = fDepth*fCameraScale;
            float fTemp = (fLightScale + fCameraScale);

            // Initialize the scattering loop variables
            float fSampleLength = fFar / fSamples;
            float fScaledLength = fSampleLength * m_fScale;
            vec3 v3SampleRay = v3Ray * fSampleLength;
            vec3 v3SamplePoint = v3Start + v3SampleRay * 0.5;

            // Now loop through the sample rays
            vec3 v3FrontColor = vec3(0.0, 0.0, 0.0);
            vec3 v3Attenuate;
            for(int i=0; i<nSamples; i++)
            {
                float fHeight = length(v3SamplePoint);
                float fDepth = exp(m_fScaleOverScaleDepth * (m_fInnerRadius - fHeight));
                float fScatter = fDepth*fTemp - fCameraOffset;
                v3Attenuate = exp(-fScatter * (m_v3InvWavelength * m_fKr4PI + m_fKm4PI));
                v3FrontColor += v3Attenuate * (fDepth * fScaledLength);
                v3SamplePoint += v3SampleRay;
            }

            vec3 first = v3FrontColor * (m_v3InvWavelength * m_fKrESun + m_fKmESun);
            vec3 secondary = v3Attenuate;
            color = vec4((first + vec3(0.25,0.25,0.25) * secondary), 1.0);
            // ^^ that color is passed to the frag shader and is used as the gl_FragColor
        }

    Here is also an image of the problem.


  • Spring AOP pointcut that matches annotation on interface

    - by seanizer
    Hello, this is my first post here, so I apologize in advance for any stupidity on my side. I have a service class implemented in Java 6 / Spring 3 that needs an annotation to restrict access by role. I have defined an annotation called RequiredPermission whose value attribute takes one or more values from an enum called OperationType:

        public @interface RequiredPermission {
            /**
             * One or more {@link OperationType}s that map to the permissions required
             * to execute this method.
             *
             * @return
             */
            OperationType[] value();
        }

        public enum OperationType {
            TYPE1, TYPE2;
        }

        package com.mycompany.myservice;

        public interface MyService {
            @RequiredPermission(OperationType.TYPE1)
            void myMethod(MyParameterObject obj);
        }

        package com.mycompany.myserviceimpl;

        public class MyServiceImpl implements MyService {
            public void myMethod(MyParameterObject obj) {
                // do stuff here
            }
        }

    I also have the following aspect definition:

        /**
         * Security advice around methods that are annotated with
         * {@link RequiredPermission}.
         *
         * @param pjp
         * @param param
         * @param requiredPermission
         * @return
         * @throws Throwable
         */
        @Around(value = "execution(public *" + " com.mycompany.myserviceimpl.*(..))"
                + " && args(param)" // parameter object
                + " && @annotation( requiredPermission )" // permission annotation
                , argNames = "param,requiredPermission")
        public Object processRequest(final ProceedingJoinPoint pjp,
                final MyParameterObject param,
                final RequiredPermission requiredPermission) throws Throwable {
            if (userService.userHasRoles(param.getUsername(), requiredPermission.value())) {
                return pjp.proceed();
            } else {
                throw new SorryButYouAreNotAllowedToDoThatException(
                        param.getUsername(), requiredPermission.value());
            }
        }

    The parameter object contains a user name, and I want to look up the required role for the user before allowing access to the method. When I put the annotation on the method in MyServiceImpl, everything works just fine: the pointcut is matched and the aspect kicks in. However, I believe the annotation is part of the service contract and should be published with the interface in a separate API package, and obviously I would not like to put the annotation on both the service definition and the implementation (DRY). I know there are cases in Spring AOP where aspects are triggered by annotations on interface methods (e.g. @Transactional). Is there a special syntax here, or is it just plain impossible out of the box?

    PS: I have not posted my full Spring config, as it seems to be working just fine. And no, those are neither my original class nor method names.

    Thanks in advance, Sean

    PPS: Actually, here is the relevant part of my Spring config:

        <aop:aspectj-autoproxy proxy-target-class="false" />

        <bean class="com.mycompany.aspect.MyAspect">
            <property name="userService" ref="userService" />
        </bean>


  • I am using a constructor for my array, but why is it saying I need a return type?

    - by JB
        using System;
        using System.IO;
        using System.Data;
        using System.Text;
        using System.Drawing;
        using System.Data.OleDb;
        using System.Collections;
        using System.ComponentModel;
        using System.Windows.Forms;
        using System.Drawing.Printing;
        using System.Collections.Generic;

        namespace Eagle_Eye_Class_Finder
        {
            public class GetSchedule
            {
                public class IDnumber
                {
                    public string Name { get; set; }
                    public string ID { get; set; }
                    public string year { get; set; }
                    public string class1 { get; set; }
                    public string class2 { get; set; }
                    public string class3 { get; set; }
                    public string class4 { get; set; }

                    IDnumber[] IDnumbers = new IDnumber[3];

                    public GetSchedule() // here i get an error saying method must have a return type???
                    {
                        IDnumbers[0] = new IDnumber() { Name = "Joshua Banks", ID = "900456317", year = "Senior", class1 = "TEET 4090", class2 = "TEET 3020", class3 = "TEET 3090", class4 = "TEET 4290" };
                        IDnumbers[1] = new IDnumber() { Name = "Sean Ward", ID = "900456318", year = "Junior", class1 = "ENGNR 4090", class2 = "ENGNR 3020", class3 = "ENGNR 3090", class4 = "ENGNR 4290" };
                        IDnumbers[2] = new IDnumber() { Name = "Terrell Johnson", ID = "900456319", year = "Sophomore", class1 = "BUS 4090", class2 = "BUS 3020", class3 = "BUS 3090", class4 = "BUS 4290" };
                    }

                    static void ProcessNumber(IDnumber myNum)
                    {
                        StringBuilder myData = new StringBuilder();
                        myData.AppendLine(IDnumber.Name);
                        myData.AppendLine(": ");
                        myData.AppendLine(IDnumber.ID);
                        myData.AppendLine(IDnumber.year);
                        myData.AppendLine(IDnumber.class1);
                        myData.AppendLine(IDnumber.class2);
                        myData.AppendLine(IDnumber.class3);
                        myData.AppendLine(IDnumber.class4);
                        MessageBox.Show(myData);
                    }

                    public string GetDataFromNumber(string ID)
                    {
                        foreach (IDnumber idCandidateMatch in IDnumbers)
                        {
                            if (IDCandidateMatch.ID == ID)
                            {
                                StringBuilder myData = new StringBuilder();
                                myData.AppendLine(IDnumber.Name);
                                myData.AppendLine(": ");
                                myData.AppendLine(IDnumber.ID);
                                myData.AppendLine(IDnumber.year);
                                myData.AppendLine(IDnumber.class1);
                                myData.AppendLine(IDnumber.class2);
                                myData.AppendLine(IDnumber.class3);
                                myData.AppendLine(IDnumber.class4);
                                return myData;
                            }
                        }
                        return "";
                    }
                }
            }
        }
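
    The excerpt ends at the question, but the compiler message itself points to the cause: as the braces above show, the GetSchedule() constructor (along with the IDnumbers field and the two methods) sits inside the nested IDnumber class, so the compiler treats GetSchedule as an ordinary method of IDnumber that is missing a return type. A minimal sketch of the repair, reconstructed from the code above, is to close the nested class before the members that belong to the outer class:

        public class GetSchedule
        {
            public class IDnumber
            {
                public string Name { get; set; }
                public string ID { get; set; }
                // ... remaining properties unchanged ...
            } // close the nested class here

            // These members now belong to GetSchedule, so the constructor
            // name matches its enclosing class and needs no return type.
            IDnumber[] IDnumbers = new IDnumber[3];

            public GetSchedule()
            {
                IDnumbers[0] = new IDnumber() { Name = "Joshua Banks", ID = "900456317" /* ... */ };
                // ... remaining initialization unchanged ...
            }
        }

    (The casing mismatch between idCandidateMatch and IDCandidateMatch in GetDataFromNumber, and returning a StringBuilder where a string is declared, would be the next compile errors to fix.)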


  • No image displayed in Fancybox

    - by seansean11
    I am having trouble getting Fancybox to display its corresponding images on a website that I'm building: http://www.nomadicdrift.com/test/kaniwa#events. It's a custom one-page portfolio theme that I set up on the WordPress platform. If you follow the link to the events section, you will see one figure item in a gallery-like position. I have this image set up to work as a Fancybox gallery, but when you click on it, it opens the Fancybox interface without placing an image in the frame, even though it should. So this is the problem: the images do not show up in Fancybox, and instead I see just the frame. Here is the HTML being displayed:

        <figure>
          <a class="fancybox" data-fancybox-type="Fashion Show de Paris, France" href="http://www.nomadicdrift.com/test/kaniwa/wp-content/uploads/2012/07/NM2.jpg">
            <img class="attachment-evento wp-post-image" width="231" height="191" title="NM" alt="NM" src="http://www.nomadicdrift.com/test/kaniwa/wp-content/uploads/2012/07/NM2-231x191.jpg">
          </a>
          <figcaption>
            <a class="fancybox" data-fancybox-type="Fashion Show de Paris, France" href="http://www.nomadicdrift.com/test/kaniwa/wp-content/uploads/2012/07/CEDESAN.jpg">
            </a>
            <h4>Fashion Show de Paris, France</h4>
          </figcaption>
        </figure>

    I'm not going to bore you with the PHP I used to get that output, because I think the problem lies elsewhere. I have also tried to set up a simple, standard Fancybox gallery on the site, but it gives me the same problem, leading me to believe that the problem is deeper than the HTML markup. I have also successfully used this same markup for a one-thumbnail Fancybox gallery on another site. I thought maybe it was due to some conflict in the .js files I'm using; I tried uninstalling all of my plugins/addons (which aren't too many) one by one and still had the same result. I keep all of my personal JavaScript in the functions.js file, which is where I call the Fancybox plugin using the standard $("a.fancybox").fancybox();. I have installed this plugin before on other sites and have searched extensively for an answer, so any help is greatly appreciated. Thanks, Sean


  • SELF-SOLVED: AutoHotkey function GetMouseTaskButton needs to be adapted for a 64-bit OS

    - by auntyEEK
    SOLVED VIA SELF-HELP, HAIR-PULLING, AND TEETH-GRINDING. THANKS ANYWAY...

    I'm using the GetMouseTaskButton function from this thread on the AHK forum: http://www.autohotkey.com/forum/topic22763.html&highlight=getmousetaskbutton

        ; Gets the index+1 of the taskbar button which the mouse is hovering over.
        ; Returns an empty string if the mouse is not over the taskbar's task toolbar.
        ;
        ; Some code and inspiration from Sean's TaskButton.ahk
        GetMouseTaskButton(ByRef hwnd)
        {
            MouseGetPos, x, y, win, ctl, 2
            ; Check if hovering over taskbar.
            WinGetClass, cl, ahk_id %win%
            if (cl != "Shell_TrayWnd")
                return
            ; Check if hovering over a Toolbar.
            WinGetClass, cl, ahk_id %ctl%
            if (cl != "ToolbarWindow32")
                return
            ; Check if hovering over task-switching buttons (specific toolbar).
            hParent := DllCall("GetParent", "Uint", ctl)
            WinGetClass, cl, ahk_id %hParent%
            if (cl != "MSTaskSwWClass")
                return
            WinGet, pidTaskbar, PID, ahk_class Shell_TrayWnd
            hProc := DllCall("OpenProcess", "Uint", 0x38, "int", 0, "Uint", pidTaskbar)
            pRB := DllCall("VirtualAllocEx", "Uint", hProc, "Uint", 0, "Uint", 20, "Uint", 0x1000, "Uint", 0x4)
            VarSetCapacity(pt, 8, 0)
            NumPut(x, pt, 0, "int")
            NumPut(y, pt, 4, "int")
            ; Convert screen coords to toolbar-client-area coords.
            DllCall("ScreenToClient", "uint", ctl, "uint", &pt)
            ; Write POINT into explorer.exe.
            DllCall("WriteProcessMemory", "uint", hProc, "uint", pRB+0, "uint", &pt, "uint", 8, "uint", 0)
            ; SendMessage, 0x447,,,, ahk_id %ctl%  ; TB_GETHOTITEM
            SendMessage, 0x445, 0, pRB,, ahk_id %ctl%  ; TB_HITTEST
            btn_index := ErrorLevel
            ; Convert btn_index to a signed int, since result may be -1 if no 'hot' item.
            if btn_index > 0x7FFFFFFF
                btn_index := -(~btn_index) - 1
            if (btn_index > -1)
            {
                ; Get button info.
                SendMessage, 0x417, btn_index, pRB,, ahk_id %ctl%  ; TB_GETBUTTON
                VarSetCapacity(btn, 20)
                DllCall("ReadProcessMemory", "Uint", hProc, "Uint", pRB, "Uint", &btn, "Uint", 20, "Uint", 0)
                state := NumGet(btn, 8, "UChar")  ; fsState
                pdata := NumGet(btn, 12, "UInt")  ; dwData
                ret := DllCall("ReadProcessMemory", "Uint", hProc, "Uint", pdata, "UintP", hwnd, "Uint", 4, "Uint", 0)
            }
            else
                hwnd = 0
            DllCall("VirtualFreeEx", "Uint", hProc, "Uint", pRB, "Uint", 0, "Uint", 0x8000)
            DllCall("CloseHandle", "Uint", hProc)
            ; Negative values indicate separator items. (abs(btn_index) is the index)
            return btn_index > -1 ? btn_index+1 : 0
        }

    It identifies the owner of the hovered taskbar button. I'm using it in a routine to auto-activate a window by hovering over its taskbar button, and in another routine to close an inactive window by middle-clicking its taskbar button. It works great on my XP machine. The author stated that the function works in Vista, but it refuses to work for me on 64-bit Vista, so apparently it is only valid on 32-bit. I am very new to AHK and don't know how to adapt it. Unfortunately, my queries at the site sank without a trace. Does anyone have advice for me? I will be most grateful. Thanks.


  • Windows 7 Phone Database – Querying with Views and Filters

    - by SeanMcAlinden
    I've just added a feature to Rapid Repository to greatly improve how the Windows 7 Phone database is queried for performance (this is in the trunk, not in Release V1.0). The main concept behind it is to create a View Model class containing only the minimum data you need for a page. This View Model is then stored and retrieved rather than the whole list of entities. Another feature of the views is that they can be pre-filtered to improve query performance even further. You can download the source from the Microsoft CodePlex site: http://rapidrepository.codeplex.com/

    Setting up a view

    Let's say you have an entity that stores lots of data about a game result, for example:

    GameScore entity

        public class GameScore : IRapidEntity
        {
            public Guid Id { get; set; }
            public string GamerId { get; set; }
            public string Name { get; set; }
            public Double Score { get; set; }
            public Byte[] ThumbnailAvatar { get; set; }
            public DateTime DateAdded { get; set; }
        }

    On your page you want to display a list of scores, but you only want to display the score and the date added, so you create a View Model for displaying just those properties.

    GameScoreView

        public class GameScoreView : IRapidView
        {
            public Guid Id { get; set; }
            public Double Score { get; set; }
            public DateTime DateAdded { get; set; }
        }

    Now that you have the view model, the first thing to do is set up the view at application start-up. This is done using the following syntax.

    View Setup

        public MainPage()
        {
            RapidRepository<GameScore>.AddView<GameScoreView>(
                x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score });
        }

    As you can see, using a little bit of lambda syntax, you put in the code for constructing a single view; this is used internally for mapping an entity to a view. *Note:* you do not need to map the Id property; this is done automatically. A view model's id will always be the same as its corresponding entity's.

    Adding Filters

    One of the cool features of the view is that you can add filters to limit the amount of data stored in the view, which will dramatically improve performance. You can add multiple filters using the fluent syntax if required. In this example, let's say that you will only ever show the scores for the last 10 days; you could add a filter like the following:

    Add single filter

        public MainPage()
        {
            RapidRepository<GameScore>.AddView<GameScoreView>(
                x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })
                .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10));
        }

    If you wanted to further limit the data, you could also say only scores above 100:

    Add multiple filters

        public MainPage()
        {
            RapidRepository<GameScore>.AddView<GameScoreView>(
                x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })
                .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10))
                .AddFilter(x => x.Score > 100);
        }

    Querying the view model

    So the important part is how to query the data. This is done using the repository; there is a method called Query which accepts the type of view as a generic parameter (you can have multiple View Model types per entity type). You can either use the result of the query method directly or perform further querying on the result if required.

    Querying the View

        public void DisplayScores()
        {
            RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
            List<GameScoreView> scores = repository.Query<GameScoreView>();

            // display logic
        }

    Further Filtering

        public void TodaysScores()
        {
            RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
            List<GameScoreView> todaysScores = repository.Query<GameScoreView>()
                .Where(x => x.DateAdded > DateTime.Now.AddDays(-1)).ToList();

            // display logic
        }

    Retrieving the actual entity

    Retrieving the actual entity can be done easily by using the GetById method on the repository. Say, for example, you allow the user to click on a specific score to get further information; you can take the Id populated in the returned GameScoreView and use it directly on the repository to retrieve the full entity.

    Get Full Entity

        public void GetFullEntity(Guid gameScoreViewId)
        {
            RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
            GameScore fullEntity = repository.GetById(gameScoreViewId);

            // display logic
        }

    Synchronising the View

    If you are upgrading from Rapid Repository V1.0 and are likely to have data in the repository already, you will need to perform a synchronisation to ensure the views and entities are fully in sync. You can either do this as a one-off during the application upgrade or, if you are a little more cautious, run it at each application start-up.

    Synchronise the view

        public void MyUpgradeTasks()
        {
            RapidRepository<GameScore>.SynchroniseView<GameScoreView>();
        }

    It's worth noting that in normal operation the view keeps itself in sync with the entities, so this is only really required if you are upgrading from V1.0 to V2.0 when it gets released shortly.

    Summary

    I really hope you like this feature. It will be great for performance, and I believe it supports good practice by promoting the use of View Models for specific pages. I'm hoping to produce a beta for this over the next few days; I just want to add some more tests and hopefully iron out any bugs. I would really appreciate any thoughts on this feature and would really love to know of any bugs you find. You can download the source from: http://rapidrepository.codeplex.com/

    Kind Regards,
    Sean McAlinden.
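
    To tie the steps above together, here is a brief end-to-end sketch. It assumes only the Rapid Repository members shown in this post (AddView, AddFilter, Query, GetById); the page class and the ordering logic are invented for illustration.

        // Sketch: register the filtered view once at start-up,
        // query the lightweight view, then drill into the full entity.
        public class ScoresPage
        {
            public ScoresPage()
            {
                RapidRepository<GameScore>.AddView<GameScoreView>(
                    x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })
                    .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10))
                    .AddFilter(x => x.Score > 100);
            }

            public void ShowTopScore()
            {
                RapidRepository<GameScore> repository = new RapidRepository<GameScore>();

                // Query the view rather than the full entities.
                List<GameScoreView> scores = repository.Query<GameScoreView>();
                GameScoreView top = scores.OrderByDescending(s => s.Score).FirstOrDefault();

                if (top != null)
                {
                    // The view's Id matches the entity's Id, so the full
                    // entity is fetched only when it is actually needed.
                    GameScore fullEntity = repository.GetById(top.Id);
                    // display logic
                }
            }
        }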


  • Creating a dynamic proxy generator with c# – Part 2 – Interceptor Design

    - by SeanMcAlinden
    Creating a dynamic proxy generator – Part 1 – Creating the Assembly builder, Module builder and caching mechanism

    For the latest code go to http://rapidioc.codeplex.com/

    Before getting too involved in generating the proxy, I thought it would be worthwhile going through the intended design. This is important, as the next step is to start creating the constructors for the proxy.

    Each proxy derives from a specified type
    The proxy has a corresponding constructor for each of the base type constructors
    The proxy has overrides for all methods and properties marked as virtual on the base type
    For each overridden method, there is also a private method whose sole job is to call the base method
    For each overridden method, a delegate is created whose sole job is to call the private method that calls the base method

    The following class diagram shows the main classes and interfaces involved in the interception process. I'll go through each of them to explain their place in the overall proxy.

    IProxy Interface

    The proxy implements the IProxy interface for the sole purpose of adding custom interceptors. This allows the created proxy to be cast as an IProxy, and interceptors can then be added by calling its AddInterceptor method. This is done internally within the proxy-building process, so the consumer of the API doesn't need knowledge of it.

    IInterceptor Interface

    The IInterceptor interface has one method: Handle. The Handle method accepts an IMethodInvocation parameter, which contains methods and data for handling method interception. Multiple classes that implement this interface can be added to the proxy. Each method override in the proxy calls the Handle method rather than simply calling the base method. How the proxy fully works will be explained in the next section, MethodInvocation.

    IMethodInvocation Interface & MethodInvocation class

    The MethodInvocation will contain one main method and multiple helper properties.

    Continue Method

    The method Continue() has two functions hidden away from the consumer. When Continue is called, if there are multiple interceptors, the next interceptor's Handle method is called. Once all interceptors' Handle methods have been called, the Continue method calls the base class method.

    Properties

    The MethodInvocation will contain multiple helper properties, including at least the following:

    Method Name (read only)
    Method Arguments (read and write)
    Method Argument Types (read only)
    Method Result (read and write) – this property remains null if the method return type is void
    Target Object (read only)
    Return Type (read only)

    DefaultInterceptor class

    The DefaultInterceptor class is a simple class that implements the IInterceptor interface. Here is the code:

    DefaultInterceptor

        namespace Rapid.DynamicProxy.Interception
        {
            /// <summary>
            /// Default interceptor for the proxy.
            /// </summary>
            /// <typeparam name="TBase">The base type.</typeparam>
            public class DefaultInterceptor<TBase> : IInterceptor<TBase> where TBase : class
            {
                /// <summary>
                /// Handles the specified method invocation.
                /// </summary>
                /// <param name="methodInvocation">The method invocation.</param>
                public void Handle(IMethodInvocation<TBase> methodInvocation)
                {
                    methodInvocation.Continue();
                }
            }
        }

    This is automatically created in the proxy and is the first interceptor that each method override calls. Its sole function is to ensure that if no interceptors have been added, the base method is still called.

    Custom Interceptor Example

    A consumer of the Rapid.DynamicProxy API could create an interceptor for logging when the FirstName property of the User class is set. Just for illustration, I have also wrapped a transaction around the methodInvocation.Continue() method. This means that any overridden methods within the User class will run within a transaction scope.

    MyInterceptor

        public class MyInterceptor : IInterceptor<User<int, IRepository>>
        {
            public void Handle(IMethodInvocation<User<int, IRepository>> methodInvocation)
            {
                if (methodInvocation.Name == "set_FirstName")
                {
                    Logger.Log("First name setting to: " + methodInvocation.Arguments[0]);
                }

                using (TransactionScope scope = new TransactionScope())
                {
                    methodInvocation.Continue();
                }

                if (methodInvocation.Name == "set_FirstName")
                {
                    Logger.Log("First name has been set to: " + methodInvocation.Arguments[0]);
                }
            }
        }

    Overridden Method Example

    To give a taste of what the overridden methods on the proxy will look like, the setter method for the property FirstName used in the above example would look something similar to the following (this is not real code, but it will look similar):

    set_FirstName

        public override void set_FirstName(string value)
        {
            set_FirstNameBaseMethodDelegate callBase =
                new set_FirstNameBaseMethodDelegate(this.set_FirstNameProxyGetBaseMethod);
            object[] arguments = new object[] { value };
            IMethodInvocation<User<IRepository>> methodInvocation =
                new MethodInvocation<User<IRepository>>(this, callBase, "set_FirstName", arguments, interceptors);

            this.Interceptors[0].Handle(methodInvocation);
        }

    As you can see, a delegate instance is created which calls a private method on the class; the private method calls the base method and would look like the following:

    calls base setter

        private void set_FirstNameProxyGetBaseMethod(string value)
        {
            base.set_FirstName(value);
        }

    The delegate is invoked when methodInvocation.Continue() is called within an interceptor. The set_FirstName parameters are loaded into an object array. The current instance, delegate, method name, and method arguments are passed into the MethodInvocation constructor (there will be more data, not illustrated here, passed in at creation, including method info, return types, argument types, etc.). The DefaultInterceptor's Handle method is called with the methodInvocation instance as its parameter. Obviously methods can have return values, ref and out parameters, etc.; in those cases the generated method override body will be slightly different from the above. I'll go into more detail on these aspects as we build them.

    Conclusion

    I hope this has been useful. I can't guarantee that the proxy will look exactly like the above, but at the moment this is pretty much what I intend to do. It's always worth downloading the code at http://rapidioc.codeplex.com/ to see the latest. There will also be some tests that you can debug through to help see what's going on.

    Cheers,
    Sean.
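
    As a quick illustration of how a consumer would wire the above together: the sketch below is hypothetical, since this part of the series doesn't show the proxy-creation API (RapidProxy.Create is an invented stand-in), but the IProxy cast and the AddInterceptor call follow the design described in this post.

        // Create a proxy for the base type (factory name is a placeholder).
        User<int, IRepository> user = RapidProxy.Create<User<int, IRepository>>();

        // The generated proxy implements IProxy, so custom interceptors
        // can be attached after creation.
        IProxy proxy = (IProxy)user;
        proxy.AddInterceptor(new MyInterceptor());

        // Virtual member calls now flow through the interceptor chain:
        // DefaultInterceptor -> MyInterceptor -> private base-call delegate.
        user.FirstName = "Sean";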


  • 2 Days of Share & Point

    - by Mark Rackley
    Groovy, man… SharePoint Saturday Ozarks is back for 2010, bigger and better than before. Join us for a far-out time and learn more about SharePoint in one day than you could in a year from the man…

    Yes! SharePoint Saturday Ozarks is back! SharePoint Saturday Ozarks is the largest SharePoint conference in Arkansas, southern Missouri, and the very northeast tip of Oklahoma. Last year we had a great turnout with 20 speakers, 5 MVPs, and attendees coming from Arkansas, Texas, Oklahoma, Missouri, Kansas, Nebraska, Indiana, Ohio, Alabama, Michigan, and Washington.

    Hey man… what's SharePoint Saturday anyway? Sounds like a conspiracy, man…

    Not to worry, SharePoint Saturday is not an arm of the government bent on mind control or any attempt whatsoever to bring you down, man. SharePoint Saturday is a grass-roots effort started by Michael Lotter (http://www.sharepointsaturday.org/pages/about.aspx). It is a FREE one-day event where the best SharePoint speakers gather to present their love, hatred, and frustrations of SharePoint to those lucky individuals who attend. Lessons are learned, contacts are made, prizes are won, food is eaten, and assorted beverages are consumed until the wee hours of the morning.

    SharePoint Saturday started with just a few sporadic one-day events here and there. However, over the past year SharePoint Saturday has exploded, and it's hard to find a weekend where there is NOT a SharePoint Saturday event happening in some corner of the globe. There are even occasions where there are two SharePoint Saturdays on the same day! Many people are pleasantly surprised at the caliber of speakers at these SharePoint Saturday events. For the most part, these speakers are more eloquent, practiced, and practical than the speakers you find at the major multi-day conferences. These guys aren't even paid to speak… they do it out of love, man…

    SharePoint Saturday Ozarks 2009 Alumni

    We had a star-studded cast last year, with many returning this year! Just check out the fun that they had (photo captions from last year):

    John Ferringer – Admin rockstar… I can still sense the awesomeness
    SharePoint poster children Mike Watson & Laura Rogers
    Lori Gowin spreading the SharePoint love
    Eric Shupps is a little bit country and a little bit rock and roll
    Cathy Dew, Sean McDonough, and JD Wade relaxing between gigs

    Actually, you can see real photos from last year's SharePoint Saturday Ozarks here: picasaweb.google.com/mrackley/SharePointSaturdayOzarks#

    What's new for SharePoint Saturday Ozarks 2010?

    SharePoint Saturday Ozarks 2010 will totally blow your mind, man. We're getting the band back together with many returning speakers and a few new faces. Joel Oleson will be speaking this year; maybe he'll grace us with his song stylings. Sadly, once again, Andrew Connell will not be able to attend SharePoint Saturday Ozarks; however, he did feel the need to show his support in his own way. Prizes this year currently include books, software, a Zune HD, and much more!

    Wait, man… You said 2 days? I thought it was a one-day event?

    Correct you are, my herbal-smelling friend… SharePoint Saturday Ozarks 2010 will spread the love an additional day this year. The first day will be all about the SharePoint love; on day 2 we will be taking a leisurely float down the Buffalo National River for those interested in a truly unique experience (no banjos allowed, please).

    Here are the details:

    WHAT
    4-5 hour float down the Buffalo National River

    WHEN & WHERE
    Sunday, June 13th. We will be leaving at 10am from the parking lot of:

    Gordon's Motel & Canoe Rental
    Old Highway 7
    Jasper, AR 72641
    (870) 446-5252

    Jasper is about 30 minutes south of Harrison, AR on Highway 7 South. You are responsible for bumming a ride to/from Gordon's Motel, but they will be shuttling us to/from the river and providing canoes and a boxed lunch.

    WHAT ELSE?
    The float trip is dependent on the weather, of course; we won't be floating down the river in a thunderstorm. However, I planned SPS Ozarks around a time of year ideal for floating. We aren't talking class 5 rapids here; you don't need any real skill, but you need to be okay with possibly tipping your canoe over once or twice. You can bring your own assorted beverages with you, but glass containers are not allowed on the river. I suggest a small cooler with extra snacks and drinks. Also bring clothing you can get wet in (these SharePoint people can get ornery).

    HOW DO I SIGN UP?
    When you register for SharePoint Saturday Ozarks, you will have the option to also sign up for the float trip. Seats are limited, though! If you do not intend to go, please do not take someone else's place. The cost for the float trip will be about $35 per person (which you are responsible for unless we find a sponsor). The price includes the shuttle to/from the river, canoes, life jackets, paddles, and a boxed lunch.

    Far out, man… how do I register???

    You can register for SharePoint Saturday Ozarks by going to http://spsozarks.eventbrite.com/. We are limited to 200 people for the conference and 50 people for the float trip, so register today before we are sold out. Lodging for SharePoint Saturday Ozarks will once again take place at the Hotel Seville: annex suites are available for $103.20.

    This is so groovy… How can I help?

    I'm glad you asked! We are still looking for a few sponsors and one or two more speakers. If you are interested, please let me know! You can find out more information at http://www.sharepointsaturday.org/ozarks

    Hey… wait a minute… what exactly IS SharePoint, man???

    Come to SharePoint Saturday Ozarks and find out!! See you guys there!

    Read the article

  • Introduction to ENUM (E.164 Number Mapping)

    - by raul.goycoolea
    E.164 Number Mapping (ENUM or Enum) was designed to solve the question of how Internet services can be found through a telephone number, that is, how telephones, which only have 12 keys, can be used to access Internet services. At its most basic, ENUM is therefore about the convergence of the PSTN and IP networks; ENUM makes it possible to map a telephone number to an Internet identifier. In short, Enum is a set of protocols for converting E.164 numbers into URIs, and vice versa, so that the E.164 numbering system maps onto URI addresses on the Internet. This function is necessary because a telephone number has no meaning in the IP world, just as an IP address has no meaning in telephone networks. By means of this technique, communications whose destination is dialed as an E.164 number can terminate at the correct identifier (an E.164 number if they terminate on the PSTN, or a URI if they terminate on IP networks).

The technical solution of looking up the destination identifier in a database has very interesting consequences, such as allowing the call to terminate wherever the called subscriber wishes. This is one of the features ENUM offers: the specific destination, the terminating terminal or terminals, is decided not by whoever initiates the call or sends the message, but by the person being called or receiving the message, who has recorded their preferences in a database. In other words, the recipient of the call decides how they want to be contacted, whether what is communicated is an email, an SMS, a fax, or a voice call. When someone wants to call you, all the caller has to do is select your name (that of the called party) in the terminal's address book or dial your ENUM number. A software application then obtains from a database the contact and availability details you have chosen, and the message is delivered to you exactly as you specified in that database. This is something new: it allows you, as the called party, to define your termination preferences for any type of content. For example, you may want all emails to be delivered to you as SMS, or voice messages to be forwarded to you as emails; communications no longer depend on where you are or what type of terminal you use (telephone, PDA, Internet). In addition, with ENUM you can manage the portability of your fixed and mobile numbers.

ENUM uses an indirect lookup in a database holding NAPTR records ("Naming Authority Pointer Resource Records", as defined in RFC 2915), with the Enum telephone number as the search key, to obtain the URIs that correspond to each telephone number. The database storing these records is DNS-based. While one of its various uses is to facilitate calls between VoIP users across traditional PSTN networks and IP networks, keep in mind that ENUM is not a VoIP function but a mechanism for converting between numbers and identifiers; it should not be confused with the normal routing of VoIP calls via the SIP and H.323 protocols. ENUM can be very useful for organizations that want a standardized way for their applications to access each user's communication data. 
Fundamentals

For the convergence between the Public Switched Telephone Network (PSTN) and Internet Telephony or Voice over IP (VoIP), and for the development of new multimedia services to face fewer obstacles, it is essential that users can place their calls just as they are accustomed to: by dialing numbers. For that, there must be a universal system of mapping numbers to IP addresses (and vice versa), and the different networks must be able to interconnect. There are several schemes that allow a telephone number to be used to establish communication with multiple services. One of them is the Electronic Number Mapping System (ENUM), standardized by the Internet Engineering Task Force (IETF) and the subject of this article, which uses E.164 numbering and the telephone protocols and infrastructure to access different services indirectly. A service is thus reached through a universal numeric identifier: a traditional telephone number. ENUM connects the addresses of the IP world with those of the telephone world, and vice versa, without problems.

Before going into greater depth, a brief outline of how the mapping between numbers and URIs is organized. Imagine a call initiated from the traditional telephone service to an Enum number. In Public ENUM, the Enum subscriber or user to whom the call is destined will have decided to include in the Enum database one or more URIs or E.164 numbers, forming a list of their preferences for terminating the call. The system, as explained further below, chooses the appropriate number or URI for that termination. As a result of the query to the Enum database, there is therefore always a one-to-one relationship between the dialed Enum number and the termination identifier, in accordance with the wishes of the called party.

Varieties of ENUM

A possible source of confusion when discussing ENUM is the variety of solutions or systems that carry this label. Usually, a reference to ENUM concerns one of the following cases:

- Public ENUM: the original vision of ENUM as a public, directory-like database, where the subscriber opts in to be included in the database, which is managed under the e164.arpa domain, delegating the management of the database and the numbering to each country. Also known as User ENUM.
- Carrier ENUM, or Infrastructure ENUM, or Operator ENUM: when groups of operators providing electronic communications services agree to share subscriber information via ENUM through private agreements. In this case it is the operators who control the subscriber information, rather than the subscribers themselves opting in. Carrier ENUM is being standardized by the IETF for VoIP interconnection (through peering agreements). As explained in the corresponding section, it can also be used for number portability.
- Private ENUM: a telephony or VoIP operator, an ISP, or a large user can apply ENUM techniques within its own networks and those of its customers without using public DNS, relying on private or internal DNS instead. 
It is easy to imagine how this technique can be used so that multinational companies, banks, or travel agencies can have very coherent and effective numbering plans.

How ENUM works

To learn how Enum works, see the page on Public ENUM, since that variety of Enum is the typical one, the one that gave rise to all the IETF procedures and standards. More details on:

- Public ENUM (that page explains in some detail how Enum works)
- Carrier ENUM or Operator ENUM
- Private ENUM

Technical standards:

- RFC 2915: NAPTR RR. The Naming Authority Pointer (NAPTR) DNS Resource Record
- RFC 3761: ENUM Protocol. The E.164 to Uniform Resource Identifiers (URI) Dynamic Delegation Discovery System (DDDS) Application (ENUM). (obsoletes RFC 2916)
- RFC 3762: Usage of H323 addresses in ENUM Protocol
- RFC 3764: Usage of SIP addresses in ENUM Protocol
- RFC 3824: Using E.164 numbers with SIP
- RFC 4769: IANA Registration for an Enumservice Containing Public Switched Telephone Network (PSTN) Signaling Information
- RFC 3026: Berlin Liaison Statement
- RFC 3953: Telephone Number Mapping (ENUM) Service Registration for Presence Services
- RFC 2870: Root Name Server Operational Requirements
- RFC 3482: Number Portability in the Global Switched Telephone Network (GSTN): An Overview
- RFC 2168: Resolution of Uniform Resource Identifiers using the Domain Name System

Organizations related to ENUM:

- RIPE - administrator of ENUM level 0, e164.arpa
- ITU-T TSB - International Telecommunication Union
- ETSI - European Telecommunications Standards Institute
- VisionNG - administrator of the ENUM range 878-10
- IETF ENUM Chapter
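To make the lookup mechanics concrete, here is a small illustrative Java sketch of a Public ENUM query using the open-source dnsjava library (org.xbill.DNS). The telephone number is fictional and the class name is an assumption for illustration; per RFC 3761, the digits are reversed, separated by dots, and appended to e164.arpa, and the resulting domain name is queried for NAPTR records:

import org.xbill.DNS.Lookup;
import org.xbill.DNS.NAPTRRecord;
import org.xbill.DNS.Record;
import org.xbill.DNS.Type;

public class EnumLookup {
    public static void main(String[] args) throws Exception {
        String e164 = "+441632960123"; // fictional E.164 number

        // RFC 3761 transformation: keep digits only, reverse them,
        // separate with dots, and append the Public ENUM apex e164.arpa.
        String digits = e164.replaceAll("[^0-9]", "");
        StringBuilder domain = new StringBuilder();
        for (int i = digits.length() - 1; i >= 0; i--) {
            domain.append(digits.charAt(i)).append('.');
        }
        domain.append("e164.arpa.");

        // Query the DNS for NAPTR records at the derived domain name.
        Record[] records = new Lookup(domain.toString(), Type.NAPTR).run();
        if (records == null) {
            System.out.println("No NAPTR records found for " + domain);
            return;
        }
        for (Record r : records) {
            NAPTRRecord naptr = (NAPTRRecord) r;
            // A record might carry service "E2U+sip" and a regexp that
            // rewrites the number into something like sip:user@example.com
            System.out.println(naptr.getOrder() + " " + naptr.getPreference()
                    + " " + naptr.getService() + " " + naptr.getRegexp());
        }
    }
}

The order and preference fields let the called party rank several termination identifiers (SIP, mailto, tel, and so on), which is exactly the preference list described above.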

    Read the article

  • Visiting the Emtel Data Centre

    Back in February at the first event of the Emtel Knowledge Series (EKS) I spoke to various people at Emtel about their data centre here on the island. I was trying to see whether it would be possible to arrange a meeting over there for a selected group of our community members. Well, let's say it like this... My first approach wasn't that promising and far from successful, but during the following months there were more and more occasions to get in touch with the "right" contact persons at Emtel to make it happen... Setting up an appointment and pre-requisites The major improvement came during a Boot Camp for Windows Phone 8.1 App development organised by Microsoft Indian Ocean Islands in cooperation with Emtel at the Emtel World, Ebene. Apart from learning bits and pieces regarding Universal Apps I took the opportunity to get in touch with Arvin Lockee, Sales Executive - Data, during our lunch break. And this really kicked off the whole procedure. Prior to getting access to the Emtel data centre it is requested that you provide the full name and National ID of anyone going to visit. Also, it should be noted that there was only a limited number of seats available. Anyway, packed with this information I posted through the usual social media channels. Responses came in very quickly, and on a first-come, first-served (FCFS) basis I noted down the details and forwarded them to Emtel in order to fix a date and time for the visit. In preparation on our side, all attendees exchanged contact details and we organised transport options to go to the data centre in Arsenal. The day before and on the day of our meeting, Arvin sent me a reminder to check whether everything was still confirmed and ready to go... Of course, it was! Arriving at the Emtel Data Centre As I'm coming from Flic En Flac towards the North, we agreed that I would pick up a couple of young fellows near the old post office in Port Louis. All went well, except that Sean eventually might be living in another time zone compared to the rest of us. Anyway, after an extended stop we were complete and arrived just in time in Arsenal to meet and greet with Ish and Veer. Again, Emtel takes access procedures to their data centre very seriously, and the gate stayed closed until all our IDs had been noted and compared to the list of registered attendees. Despite having a good laugh at the mixture of old and new ID cards, the process was straightforward. The guard was very helpful and guided us to the waiting area at the entrance section of the building. Shortly after, we were welcomed by Kamlesh Bokhoree, the Data Centre Officer. He gave us a brief introduction to the rules and regulations during our visit, like no photography allowed, not touching the buttons, and following his instructions throughout the whole visit. Of course! Inside the data centre Next, he explained to us the multi-factor authentication system using a combination of biometric data, like a fingerprint reader, and a "classic" PIN panel. The Emtel data centre provides multiple services, and next to co-location for your own hardware they also offer storage options for your backup and archive data in their massive, fire-resistant vault. It was very impressive to learn about the considerations that went into choosing the right location and setting up the whole premises. It should also be noted that there is 24/7 CCTV surveillance inside and outside the buildings. Strengths of the Emtel TIER 3 Data Centre, Mauritius Finally, we were guided into the first server room. 
And wow, the whole setup is cleverly planned and laid out in the architecture: from the false floors and ceilings that provide optimum air flow, to the separation of cold and hot aisles between the full-size server racks, and of course the monitored air conditions to analyse and track changes in temperature, smoke detection, and other parameters. Not surprisingly, everything has been implemented in two independent circuits. There is a standardised classification for the construction and operation of data centres world-wide, and Emtel's has been designed as a TIER 4 building, but due to the lack of an alternative power supplier on the island it is officially registered as a TIER 3 compliant data centre. Maybe in the long run there might be a second supplier of energy next to CEB... time will tell. Luckily, the data centre is integrated into the National Fibre Optic Gigabit Ring, and Emtel already connects internationally through diverse undersea cable routes like SAFE & LION/LION2 out of Mauritius and through several other providers for onwards connectivity. Meanwhile, Arvin managed to join our little group of geeks, and he supported Kamlesh in answering our technical questions regarding the capacities and general operation of the data centre. Visiting the NOC and its dedicated team of IT professionals was surely one of the visual highlights. Seeing their wall of screens monitoring any kind of activity on the data lines, the managed servers, and the area in and around the building was great. Even though I've been using a multi-head setup for years, I cannot keep up with that setup... ;-) But I got a couple of ideas on how to improve my work spaces here at the office. Clear advantages of hosting your e-commerce and mobile backends locally After the completely isolated NOC area we continued our Q&A session with Kamlesh and Arvin in the second server room, which is dedicated to shared environments. First of all, it should be noted that there is plenty of space for full-sized racks and therefore co-location of your own hardware. Actually, given the feedback that there will be upcoming changes in prices, the facilities at the Emtel data centre are getting more and more competitive and interesting for local companies, especially small and medium enterprises. After seeing this world-class infrastructure available on the island, I'm already considering moving one of my root servers from abroad to be co-located here on the island. This would provide an improved user experience in terms of site performance and latency, especially for upcoming e-commerce solutions for two of my local clients. Later on, we actually started the conversation about additional services that could be a catalyst for the local market in order to attract more small and medium companies to take the data centre into their evaluations regarding online activities. Until today Emtel does not provide virtualised server environments, but there might be plans to cover this field in the future. Emtel is a mobile operator and internet connectivity provider in the first place; entering a market of managed and virtualised server infrastructures, including capacities in terms of cloud storage and computing, is rather new territory, and there is a continuous learning curve at Emtel, too. You cannot just jump into a new market and see how it works out... 
And I appreciate Emtel's approach of building a solid foundation first and then adding new services on top of it: Emtel as a future one-stop shop for all your internet and telecommunications needs. Emtel's promotional video about their TIER 3 data centre in Arsenal, Mauritius More details are thoroughly described in Emtel's data centre brochure; check out their PDF document here. Thanks for this opportunity Visiting and walking through the Emtel data centre for more than 2 hours was a great experience. As a representative of the Mauritius Software Craftsmanship Community (MSCC) I would like to thank everyone at Emtel involved in the process of making it happen, and especially Arvin Lockee and Kamlesh Bokhoree for their time and patience in explaining the infrastructure and answering all the endless questions from our members. Thank You!

    Read the article

  • Small adventure game

    - by Nick Rosencrantz
    I'm making a small adventure game where the player can walk through dungeons and meet scary characters. The whole thing is 20 Java classes, and I'm making it a standalone frame. While it could very well be an applet, I don't want to make another applet, since I might want to recode this in C/C++ if the game or game engine turns out to be a success. The engine is the most interesting part of the game; it controls players and computer-controlled characters such as Zombies, Reptile Warriors, Trolls, Necromancers, and other Persons. These persons can sleep or walk around in the game and also pick up and move things. I didn't add many things, so I suppose the next step is to add things that can be used, now that I have already added many different types of walking persons. What do you think I should add and do with things in the game? The things I have so far are:

package adventure;

/**
 * The data type for things. Subclasses will be created that take part in the story.
 */
public class Thing {
    /**
     * The name of the Thing.
     */
    public String name;

    /**
     * @param name The name of the Thing.
     */
    Thing(String name) {
        this.name = name;
    }
}

public class Scroll extends Thing {
    Scroll(String name) {
        super(name);
    }
}

class Key extends Thing {
    Key(String name) {
        super(name);
    }
}

The key is the way to win the game, if you figure out that you should give it to a certain person, and the scroll can protect you from necromancers and trolls. If I make this game more Dungeons and Dragons-inspired, do you think it will be any good? Any other ideas that you think I could use here? The thread which steps time forward and wakes up persons is called Simulation. Do you think I could do something more advanced with this class?

package adventure;

class Simulation extends Thread {
    private PriorityQueue Eventqueue;

    Simulation() {
        Eventqueue = new PriorityQueue();
        start();
    }

    public void wakeMeAfter(Wakeable SleepingObject, double time) {
        Eventqueue.enqueue(SleepingObject, System.currentTimeMillis() + time);
    }

    public void run() {
        while (true) {
            try {
                sleep(5); // check the event queue every 5 ms
                if (Eventqueue.getFirstTime() <= System.currentTimeMillis()) {
                    ((Wakeable) Eventqueue.getFirst()).wakeup();
                    Eventqueue.dequeue();
                }
            } catch (InterruptedException e) {
            }
        }
    }
}

And here is the class that makes up the actual world:

package adventure;

import java.awt.*;
import java.net.URL;

/**
 * Subclass of World that builds up the Dungeon World.
 */
public class DungeonWorld extends World {

    /**
     * @param a Reference to the adventure game. 
     */
    public DungeonWorld(Adventure a) {
        super(a);

        // Create all places
        createPlace("Himlen");
        createPlace("Stairs3");
        createPlace("IPLab");
        createPlace("Dungeon3");
        createPlace("Stairs5");
        createPlace("C2M2");
        createPlace("SANS");
        createPlace("Macsal");
        createPlace("Stairs4");
        createPlace("Dungeon2");
        createPlace("Datorsalen");
        createPlace("Dungeon"); //, "Ljushallen.gif" );
        createPlace("Cola-automaten", "ColaAutomat.gif");
        createPlace("Stairs2");
        createPlace("Fable1");
        createPlace("Dungeon1");
        createPlace("Kulverten");

        // Create all connections between places
        connect("Stairs3", "Stairs5", "Down", "Up");
        connect("Dungeon3", "SANS", "Down", "Up");
        connect("Dungeon3", "IPLab", "West", "East");
        connect("IPLab", "Stairs3", "West", "East");
        connect("Stairs5", "Stairs4", "Down", "Up");
        connect("Macsal", "Stairs5", "South", "North");
        connect("C2M2", "Stairs5", "West", "East");
        connect("SANS", "C2M2", "West", "East");
        connect("Stairs4", "Dungeon", "Down", "Up");
        connect("Datorsalen", "Stairs4", "South", "North");
        connect("Dungeon2", "Stairs4", "West", "East");
        connect("Dungeon", "Stairs2", "Down", "Up");
        connect("Dungeon", "Cola-automaten", "South", "North");
        connect("Stairs2", "Kulverten", "Down", "Up");
        connect("Stairs2", "Fable1", "East", "West");
        connect("Fable1", "Dungeon1", "South", "North");

        // Add things
        // --- Add new things here ---
        getPlace("Cola-automaten").addThing(new CocaCola("Ljummen cola"));
        getPlace("Cola-automaten").addThing(new CocaCola("Avslagen Cola"));
        getPlace("Cola-automaten").addThing(new CocaCola("Iskall Cola"));
        getPlace("Cola-automaten").addThing(new CocaCola("Cola Light"));
        getPlace("Cola-automaten").addThing(new CocaCola("Cuba Cola"));
        getPlace("Stairs4").addThing(new Scroll("Scroll"));
        getPlace("Dungeon3").addThing(new Key("Key"));

        Simulation sim = new Simulation();

        // Load images to be used as appearance parameters for persons
        Image studAppearance = owner.loadPicture("Person.gif");
        Image asseAppearance = owner.loadPicture("Asse.gif");
        Image trollAppearance = owner.loadPicture("Loke.gif");
        Image necromancerAppearance = owner.loadPicture("Necromancer.gif");
        Image skeletonAppearance = owner.loadPicture("Reptilewarrior.gif");
        Image reptileAppearance = owner.loadPicture("Skeleton.gif");
        Image zombieAppearance = owner.loadPicture("Zombie.gif");

        // --- Add new persons here ---
        new WalkingPerson(sim, this, "Peter", studAppearance);
        new WalkingPerson(sim, this, "Zombie", zombieAppearance);
        new WalkingPerson(sim, this, "Zombie", zombieAppearance);
        new WalkingPerson(sim, this, "Skeleton", skeletonAppearance);
        new WalkingPerson(sim, this, "John", studAppearance);
        new WalkingPerson(sim, this, "Skeleton", skeletonAppearance);
        new WalkingPerson(sim, this, "Skeleton", skeletonAppearance);
        new WalkingPerson(sim, this, "Skeleton", skeletonAppearance);
        new WalkingPerson(sim, this, "Sean", studAppearance);
        new WalkingPerson(sim, this, "Reptile", reptileAppearance);
        new LabAssistant(sim, this, "Kate", asseAppearance);
        new LabAssistant(sim, this, "Jenna", asseAppearance);
        new Troll(sim, this, "Troll", trollAppearance);
        new Necromancer(sim, this, "Necromancer", necromancerAppearance);
    }

    /**
     * The place where persons are placed by default.
     *
     * @return The default place. 
     */
    public Place defaultPlace() {
        return getPlace("Datorsalen");
    }

    private void connect(String p1, String p2, String door1, String door2) {
        Place place1 = getPlace(p1);
        Place place2 = getPlace(p2);
        place1.addExit(door1, place2);
        place2.addExit(door2, place1);
    }
}

Thanks
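A side note for anyone trying to compile the post as-is: the Simulation class above references a custom PriorityQueue class and a Wakeable interface that aren't shown. Here is a minimal sketch of what they might look like, purely an assumption to make the scheduler self-contained; the project's real versions may well differ:

// Hypothetical reconstructions -- the project's real versions may differ.
interface Wakeable {
    void wakeup();
}

// A tiny time-ordered event queue: the lowest wake-up time comes first.
// Note: events scheduled for exactly the same time would overwrite each
// other in this simplified sketch.
class PriorityQueue {
    private final java.util.TreeMap<Double, Object> events =
            new java.util.TreeMap<Double, Object>();

    public synchronized void enqueue(Object obj, double time) {
        events.put(time, obj);
    }

    public synchronized double getFirstTime() {
        // An empty queue reports "infinitely far away", so the
        // Simulation loop never dequeues from an empty queue.
        return events.isEmpty() ? Double.MAX_VALUE : events.firstKey();
    }

    public synchronized Object getFirst() {
        return events.firstEntry().getValue();
    }

    public synchronized void dequeue() {
        events.remove(events.firstKey());
    }
}

With those in place, a character can register itself via sim.wakeMeAfter(this, 1000) in its constructor and re-register inside wakeup() to keep acting on a timer.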

    Read the article

  • Are Chromebooks the New Netbooks, and What Does That Mean?

    - by Chris Hoffman
    Netbooks — small, cheap, slow laptops — were once very popular. They fell out of favor — people bought them because they seemed cheap and portable, but the actual experience was lackluster. Most netbooks now sit unused. Windows netbooks have vanished from stores today, but there’s a new super-cheap laptop — the Chromebook. Chromebook sales numbers are impressive, but their usage statistics tell a different story. Are Chromebooks just the new netbook? The Problem With Netbooks Netbooks seemed appealing, especially in an age before tablets and lightweight ultrabooks. You could buy a netbook for $200 or so and have a portable device that let you get on the Internet. The name “netbook” spelled that out — it was a portable device for getting on the ‘net. They weren’t really that great. The original netbook was a lightweight Asus Eee PC that ran Linux alone and had a small amount of fast flash storage. Netbooks eventually ran heavier Windows XP operating systems — Windows Vista was out, but it was just too bloated to run on netbooks. Manufacturers added slow magnetic hard drives, bloatware, and even DVD drives! They couldn’t run most Windows software very well. The build quality was poor and their keyboards were tiny and cramped. People liked the idea of a lightweight device that let them get on the Internet and loved the cheap price, but the actual experience wasn’t great. Chromebook Sales Chromebook sales numbers seem surprisingly high. NPD reported that Chromebooks were 21% of all notebooks sold in the US in 2013. If you combine laptop and tablet sales into a single statistic, Chromebooks were 9.6% of all those devices sold. That’s 2/3 as many Chromebooks sold as iPads in the US! Of Amazon’s best-selling laptop computers, two of the top three are Chromebooks. These definitely look like successful products. Unlike netbooks, Chromebooks are taking off in a big way in the education market. Many schools are buying Chromebooks for their students instead of more expensive Windows laptops. They’re easier to manage and lock down than Windows laptops, but — more importantly for cash-strapped schools — they’re very cheap. Netbooks never had this sort of momentum in schools. Chromebook Usage Statistics Here’s where the rosy picture of Chromebooks starts to become more realistic. StatCounter’s browser usage statistics show how widely used different operating systems are. For example, Windows 7 has the highest share with 35.71% of web activity in April, 2014. The chart doesn’t even show Chrome OS at all, although there is an “Other” number near the bottom. Click the Download Data link to download a CSV file and we can view more detailed information. Chrome OS only accounted for 0.38% of web usage in April, 2014. Desktop Linux, which people often shrug at, accounted for 1.52% in the same month. To its credit, Chrome OS usage has increased. Chromebooks were widely mocked back in November, 2013 when the sales numbers came out. After all, they only accounted for 0.11% of web usage globally in November, 2013! But Chrome OS numbers have been improving: Nov, 2013: 0.11% Dec, 2013: 0.22% Jan, 2014: 0.31% Feb, 2014: 0.35% Mar, 2014: 0.36% Apr, 2014: 0.38% Chrome OS is climbing, but it’s definitely still in the “Other” category. It isn’t as high as we’d expect to see it with those types of sales numbers. Chromebooks vs. Netbooks Chromebooks are more limited devices than traditional PCs. You can do quite a few things, but you have to do it all using Chrome or Chrome apps. 
Most people won’t be enabling developer mode and installing a Linux desktop. You don’t have access to the powerful desktop software available for Windows and even Mac OS X. On the other hand, these Chromebooks are less compromised than netbooks in many ways. They come with a lightweight operating system designed for portable, mobile devices. They don’t come packed with any bloatware, like the bloatware you’ll find on competing Windows PCs and the original netbooks. They’re cheaper because the manufacturer doesn’t have to pay for a Windows license. There’s no need for antivirus software weighing the operating system down. They’re larger than the original netbooks, with many of them being 11.6-inches instead of the original 8-inch bodies many older netbooks came with. They have larger, more comfortable keyboards and fast solid-state storage. Really, Chromebooks are what netbooks wanted to be. People didn’t buy netbooks to use typical Windows software — they just wanted a lightweight PC. Of course, for many people, the real successor to netbooks is tablets. If all you want is a portable device to throw in a bag so you can get online, maybe a tablet is better. Where Does This Leave Chromebooks? So, are Chromebooks the new netbooks? It’s a bit early to answer that question. Chromebooks are definitely not out of the competition — their sales look good and their usage share is increasing. On the other hand, Chrome OS is still pretty far behind. They’re not catching fire like tablets did. Maybe netbooks were just before their time and Chromebooks were what they were always meant to be. Just as Microsoft’s Windows XP tablets failed, Windows XP netbooks also failed. Tablets took off with a more refined operating system on better hardware years later. “Netbooks” — or Chromebooks — are now taking off with a more purpose-built operating system on better hardware, too. It’s hard to count Chromebooks out because they provide a much better experience than netbooks ever did. If you’re one of the people who wants to use old Windows desktop apps on your portable laptop, you may think netbooks were better — but most people don’t want that. But maybe people either want a full desktop PC experience or a full mobile tablet experience. Is there a place for a laptop with a keyboard that can only view websites? We’ll have to wait and see. Image Credit: Kevin Jarret on Flickr, Clive Darra on Flickr, Sean Freese on Flickr

    Read the article

  • Creating a dynamic proxy generator with c# – Part 4 – Calling the base method

    - by SeanMcAlinden
    Creating a dynamic proxy generator with c# – Part 1 – Creating the Assembly builder, Module builder and caching mechanism
Creating a dynamic proxy generator with c# – Part 2 – Interceptor Design
Creating a dynamic proxy generator with c# – Part 3 – Creating the constructors

The plan for calling the base methods from the proxy is to create a private method for each overridden proxy method; this will allow the proxy to use a delegate to simply invoke the private method when required. Quite a few helper classes have been created to make this possible, so as usual I would suggest downloading or viewing the code at http://rapidioc.codeplex.com/. In this post I'm just going to cover the main points for creating methods.

Getting the methods to override

The first two notable methods are for getting the methods.

private static MethodInfo[] GetMethodsToOverride<TBase>() where TBase : class
{
    return typeof(TBase).GetMethods().Where(x =>
        !methodsToIgnore.Contains(x.Name) &&
        (x.Attributes & MethodAttributes.Final) == 0)
        .ToArray();
}

private static StringCollection GetMethodsToIgnore()
{
    return new StringCollection()
    {
        "ToString",
        "GetHashCode",
        "Equals",
        "GetType"
    };
}

The GetMethodsToIgnore string collection contains the methods that I don't want to override. In the GetMethodsToOverride method, you'll notice a binary AND which is basically saying not to include any methods marked final, i.e. not virtual.

Creating the MethodInfo for calling the base method

This method should hopefully be fairly easy to follow; its only function is to create a MethodInfo which points to the correct base method, with the correct parameters.

private static MethodInfo CreateCallBaseMethodInfo<TBase>(MethodInfo method) where TBase : class
{
    Type[] baseMethodParameterTypes = ParameterHelper.GetParameterTypes(method, method.GetParameters());

    return typeof(TBase).GetMethod(
        method.Name,
        BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic,
        null,
        baseMethodParameterTypes,
        null
    );
}

/// <summary>
/// Get the parameter types.
/// </summary>
/// <param name="method">The method.</param>
/// <param name="parameters">The parameters.</param>
public static Type[] GetParameterTypes(MethodInfo method, ParameterInfo[] parameters)
{
    Type[] parameterTypesList = Type.EmptyTypes;

    if (parameters.Length > 0)
    {
        parameterTypesList = CreateParametersList(parameters);
    }
    return parameterTypesList;
}

Creating the new private methods for calling the base method

The following methods outline how I've created the private methods for calling the base class method. 
private static MethodBuilder CreateCallBaseMethodBuilder(TypeBuilder typeBuilder, MethodInfo method)
{
    string callBaseSuffix = "GetBaseMethod";

    if (method.IsGenericMethod || method.IsGenericMethodDefinition)
    {
        return MethodHelper.SetUpGenericMethod
        (
            typeBuilder,
            method,
            method.Name + callBaseSuffix,
            MethodAttributes.Private | MethodAttributes.HideBySig
        );
    }
    else
    {
        return MethodHelper.SetupNonGenericMethod
        (
            typeBuilder,
            method,
            method.Name + callBaseSuffix,
            MethodAttributes.Private | MethodAttributes.HideBySig
        );
    }
}

CreateCallBaseMethodBuilder is the entry point for creating the call-base method. I've added a suffix to the base class's method name to keep it unique.

Non Generic Methods

Creating a non generic method is fairly simple:

public static MethodBuilder SetupNonGenericMethod(
    TypeBuilder typeBuilder,
    MethodInfo method,
    string methodName,
    MethodAttributes methodAttributes)
{
    ParameterInfo[] parameters = method.GetParameters();

    Type[] parameterTypes = ParameterHelper.GetParameterTypes(method, parameters);

    Type returnType = method.ReturnType;

    MethodBuilder methodBuilder = CreateMethodBuilder
        (
            typeBuilder,
            method,
            methodName,
            methodAttributes,
            parameterTypes,
            returnType
        );

    ParameterHelper.SetUpParameters(parameterTypes, parameters, methodBuilder);

    return methodBuilder;
}

private static MethodBuilder CreateMethodBuilder
(
    TypeBuilder typeBuilder,
    MethodInfo method,
    string methodName,
    MethodAttributes methodAttributes,
    Type[] parameterTypes,
    Type returnType
)
{
    MethodBuilder methodBuilder = typeBuilder.DefineMethod(methodName, methodAttributes, returnType, parameterTypes);
    return methodBuilder;
}

As you can see, you simply have to declare a method builder, get the parameter types, and set the method attributes you want.

Generic Methods

Creating generic methods takes a little bit more work.

/// <summary>
/// Sets up a generic method. 
/// </summary>
/// <param name="typeBuilder">The type builder.</param>
/// <param name="method">The method.</param>
/// <param name="methodName">Name of the method.</param>
/// <param name="methodAttributes">The method attributes.</param>
public static MethodBuilder SetUpGenericMethod
    (
        TypeBuilder typeBuilder,
        MethodInfo method,
        string methodName,
        MethodAttributes methodAttributes
    )
{
    ParameterInfo[] parameters = method.GetParameters();

    Type[] parameterTypes = ParameterHelper.GetParameterTypes(method, parameters);

    MethodBuilder methodBuilder = typeBuilder.DefineMethod(methodName,
        methodAttributes);

    Type[] genericArguments = method.GetGenericArguments();

    GenericTypeParameterBuilder[] genericTypeParameters =
        GetGenericTypeParameters(methodBuilder, genericArguments);

    ParameterHelper.SetUpParameterConstraints(parameterTypes, genericTypeParameters);

    SetUpReturnType(method, methodBuilder, genericTypeParameters);

    if (method.IsGenericMethod)
    {
        methodBuilder.MakeGenericMethod(genericArguments);
    }

    ParameterHelper.SetUpParameters(parameterTypes, parameters, methodBuilder);

    return methodBuilder;
}

private static GenericTypeParameterBuilder[] GetGenericTypeParameters
    (
        MethodBuilder methodBuilder,
        Type[] genericArguments
    )
{
    return methodBuilder.DefineGenericParameters(GenericsHelper.GetArgumentNames(genericArguments));
}

private static void SetUpReturnType(MethodInfo method, MethodBuilder methodBuilder, GenericTypeParameterBuilder[] genericTypeParameters)
{
    if (method.IsGenericMethodDefinition)
    {
        SetUpGenericDefinitionReturnType(method, methodBuilder, genericTypeParameters);
    }
    else
    {
        methodBuilder.SetReturnType(method.ReturnType);
    }
}

private static void SetUpGenericDefinitionReturnType(MethodInfo method, MethodBuilder methodBuilder, GenericTypeParameterBuilder[] genericTypeParameters)
{
    if (method.ReturnType == null)
    {
        methodBuilder.SetReturnType(typeof(void));
    }
    else if (method.ReturnType.IsGenericType)
    {
        methodBuilder.SetReturnType(genericTypeParameters.Where
            (x => x.Name == method.ReturnType.Name).First());
    }
    else
    {
        methodBuilder.SetReturnType(method.ReturnType);
    }
}

Ok, there are a few helper methods missing; there is way too much code to put in this post, so take a look at the code at http://rapidioc.codeplex.com/ to follow it through completely. Basically though, when dealing with generics there is extra work to do in terms of:

- getting the generic argument types
- setting up any generic parameter constraints
- setting up the return type
- setting up the method as a generic method

All of the information is easy to get via reflection from the MethodInfo.

Emitting the new private method

Emitting the new private method is relatively simple, as its only function is calling the base method and returning a result if the return type is not void. 
ILGenerator il = privateMethodBuilder.GetILGenerator();

EmitCallBaseMethod(method, callBaseMethod, il);

private static void EmitCallBaseMethod(MethodInfo method, MethodInfo callBaseMethod, ILGenerator il)
{
    int privateParameterCount = method.GetParameters().Length;

    il.Emit(OpCodes.Ldarg_0);

    if (privateParameterCount > 0)
    {
        for (int arg = 0; arg < privateParameterCount; arg++)
        {
            il.Emit(OpCodes.Ldarg_S, arg + 1);
        }
    }

    il.Emit(OpCodes.Call, callBaseMethod);

    il.Emit(OpCodes.Ret);
}

So in the main method-building method, an ILGenerator is created from the method builder. The ILGenerator performs the following actions:

- Load the class (this) onto the stack using the hidden argument Ldarg_0.
- Create an argument on the stack for each of the method parameters (starting at 1, because 0 is the hidden argument).
- Call the base method using the OpCodes.Call opcode and the MethodInfo we created earlier.
- Call return on the method.

Conclusion

Now that we have the private methods prepared for calling the base method, we have reached the end of the relatively easy part of the proxy building. Hopefully it hasn't been too hard to follow so far; there is a lot of code, so I haven't been able to post it all here. Please check it out at http://rapidioc.codeplex.com/. The next section should be up fairly soon; it's going to cover creating the delegates for calling the private methods created in this post.

Kind Regards, Sean.

    Read the article

  • top tweets WebLogic Partner Community – November 2011

    - by JuergenKress
    Send us your tweets @wlscommunity #WebLogicCommunity and follow us on twitter http://twitter.com/wlscommunity glassfish GlassFish Marek’s JAX-RS 2.0 content from Devoxx 2011 – bit.ly/sp2NJO chriscmuir chriscmuir New blog post: ADF bug: missing af:column borders in af:table for IE7 – t.co/81np2jug chriscmuir chriscmuir Reading: Oracle’s ADF Rich Client User Interface (RCUI) Guidelines – oracle.com/webfolder/ux/m… netbeans NetBeans Team Bottlenecks be gone! #Java Performance Tuning workshop in Munich w Kirk Pepperdine, Nov 29-Dec 2: ow.ly/7Akh5 OracleBlogs OracleBlogs Creating ADF Faces Comamnd Button at Runtime ow.ly/1fM9dE alexismp Alexis MP blogged "GlassFish Back from Devoxx 2011, Mature Java EE 6 and EE 7 well on its way" – bit.ly/rP8LV0 JDeveloper JDeveloper & ADF Usage of jQuery in ADF dlvr.it/x3t84 20 hours ago Favorite Retweet Reply OTNArchBeat OTNArchBeat Webcast: Introducing Oracle WebLogic Server 12c: Developer Deep Dive – Dec 1 – 11am PT / 2pm ET bit.ly/t61W4G oraclepartners ORCL PartnerNetwork Brand new Oracle WebLogic 12c will launch on December 1, 10AM PT with a global Webcast highlighting salient… t.co/aflQQ3IX OracleBlogs OracleBlogs JDeveloper and ADF at UKOUG t.co/2CQTiB9n fnimphiu Frank Nimphius Attending UKOUG? All ADF sessions at a glance: t.co/TcMNTMXp 21 Nov Favorite Retweet Reply JDeveloper JDeveloper & ADF Free Webinar ‘ADF Task Flows for Beginners’, information and registration t.co/66jXnGgo via javafx4you javafx4you Java Developer Workshop #2 – Dec 1, 2011 @ Oracle Aoyama center in Tokyo t.co/8p9q3W2B AMIS_Services AMIS Services #vacature #Oracle #ADF ontwikkelaars. bit.ly/AMISADF Gun jezelf een nieuwe uitdaging? Meer op: dld.bz/azZ5N OracleBlogs OracleBlogs Launch Invitation: Introducing Oracle WebLogic Server 12c t.co/bRxCKwAk fnimphiu Frank Nimphius The brand new WebLogic 12c will be released on December 1st 2011 !!! Register for online launch event t.co/pPScg4Xh glassfish GlassFish Announcing Oracle WebLogic 12c – t.co/qh8TdFEl AdamBien Adam Bien Sun Coding Conventions–The Only Standard (Stop Inventing): Code written according to the Sun Coding Conventions… t.co/qaUWp5Mz wlscommunity WebLogic Community Launch Invitation: Introducing Oracle WebLogic Server 12c wp.me/p1LMIb-4y andrejusb Andrejus Baranovskis Andrejus Baranovskis’s Blog: Custom Exception Registration for ADF BC EO Attribute fb.me/1m6nXQD52 MNEMONIC01 Michel Schildmeijer Blog by Michel Schildmeijer: "Oracle WebLogic 12c has been announced" bit.ly/vk6WQL glassfish GlassFish Tab Sweep – Coherence, SBT for GlassFish, OSGi in question, Java EE plugins, … t.co/tVIL95lj OracleBlogs OracleBlogs JavaFX 2.0 at Devoxx 2011 ow.ly/1fJ5iT JDeveloper JDeveloper & ADF Experimenting with ADF BC Application Module Pool Tuning dlvr.it/wjLC1 OracleWebLogic Oracle WebLogic Brand New #WebLogic 12c Launch Event, Dec 1 10am PT. Hasan Rizvi, SVP Fusion Middleware. Developer session. bit.ly/weblogic12clau… JDeveloper JDeveloper & ADF PopUp and Esc/Cancel operations. ADF 11g dlvr.it/whrmC JDeveloper JDeveloper & ADF BPM Workspace: issue loading ADF task flows t.co/vk1gKPx5 OpenJDK OpenJDK Kelly O’Hair — OpenJDK B24 Available : t.co/1bFws6Nw JDeveloper JDeveloper & ADF Oracle ADF setting Task flow to use same page definition file of caller page t.co/9k6UIoYZ JDeveloper JDeveloper & ADF Master Detail Data presentation and CRUD Operations. Detail records in an Editable Popup. 
ADF 11g t.co/H8uudR0Y JDeveloper JDeveloper & ADF Entity Attribute Validation Rule (Business Rule) based on Master View Object Attribute Example ADF 11g t.co/1agxEQcZ oracletechnet Justin Kestelyn Webcast: Oracle WebLogic Server 12c Launch/Developer Deep-Dive (Dec. 1) t.co/OVBdGKzC JDeveloper JDeveloper & ADF How to render different node icons for different tree levels dlvr.it/wY2jL JDeveloper JDeveloper & ADF Query Component with ‘dynamic’ view criteria dlvr.it/wXlF1 JDeveloper JDeveloper & ADF How to play Flash .swf file in Oracle ADF application t.co/zaSONWAH Devoxx Devoxx Duke at the #Devoxx 2011 Noxx Party! pic.twitter.com/bVJWyu1Z brhubart Bob Rhubart Adam Leftik: JavaEE adoption continues to increase, reaching 40+ million downloads this year. #qconsf11 JDeveloper JDeveloper & ADF Free #ODTUG Seminar – #ADF Task Flows for Beginners – sign up today. www3.gotomeeting.com/register/13372… java Java New Project: OpenJFX j.mp/tI4k3s #javafx #openjdk #devoxx << JavaFX is open source! /via frankmunz Frank Munz WebLogic 12c launch event Dec 1st. t.co/jQKinBqN brhubart Bob Rhubart Spring to Java EE Migration | David Heffelfinger feedly.com/k/td8ccG odtug ODTUG Mark your calendars and register for our upcoming webinars: bit.ly/dWKG1C ADF Task Flows & Measuring Scalability & Performance w/TCP myfear Markus Eisele Anybody willing to take this question? Using #JavaMail with #Weblogic Server bit.ly/stJOET AMIS_Services AMIS Services 20-22 december #training #Oracle JHeadstart #11g, productief ontwikkelen met ADF. Schrijf je in op: amis.nl/trainingen/ora… AdamBien Adam Bien Stress Testing Java EE 6 Applications – Free Article In Free Java Magazine: In the November / December 2011 issu… bit.ly/vmzKkc java Java New Tech Article: Spring to #JavaEE Migration t.co/0EvdHNxb OracleBlogs OracleBlogs WebLogic Java record SPARC T4-4 Servers Set World Record on SPECjEnterprise2010 t.co/Eu1b6ZE0 OracleBlogs OracleBlogs What Is JavaFX? ow.ly/1frb6I OTNArchBeat OTNArchBeat The openJDK Windows Binary Download | Adam Bien ow.ly/7fRiG wlscommunity WebLogic Community WebLogic – Java record – SPARC T4-4 Servers Set World Record on SPECjEnterprise2010 glassfish GlassFish "youtube.com/java" blogs.oracle.com/theaquarium/en… OTNArchBeat OTNArchBeat Beta Testing Concludes: 1Z1-102 – "Oracle WebLogic Server 11g: System Administration I" (Oracle Certification) ow.ly/7fJCl wlscommunity WebLogic Community A deep dive in Oracle WebLogic! @ Contribute – November 29th, 2011 Kontich Belgium wp.me/p1LMIb-4u glassfish GlassFish Gartner’s Latest Enterprise Application Server Magic Quadrant – Oracle’s leadership t.co/aYDqipD8 OpenJDK OpenJDK Terrence Barr – Open sourcing of JavaFX: OpenJFX Project proposed – bit.ly/uKVnEl OpenJDK OpenJDK Maurizio Cimadamore – Testing overload resolution: bit.ly/vgXAbQ java Java Java User Groups Roundup, November 2011 : t.co/hea6vVnk /via @robilad << in German JavaSpotlight The Java Spotlight Java Spotlight Episode 54: Stuart Marks on the Coinification of JDK7 goo.gl/fb/3UXoM OTNArchBeat OTNArchBeat Article Series: Migrating Spring to Java EE 6 | Arun Gupta bit.ly/twUJtz glassfish GlassFish New Java EE 6 Hands-On lab, Devoxx-approved! bit.ly/vup5uE java Java Brian Goetz’s enthusiasm for Java is palpable! #devoxx interview adf_emg ADF EMG "ADF testing with a mock framework" – what is a mock framework? Visit the forum and see: groups.google.com/forum/#!topic/… java Java Taping a bunch of interviews today with Java experts at #devoxx. View on Parleys.com tomorrow. 
glassfish GlassFish New screencast to configure and run a cross-machine cluster using GlassFish 3.1.1 in < 7 mins faissalb.blogspot.com/2011/11/glassf… (via @bfaissal) glassfish GlassFish Oracle Contributor Agreements – New Home! bit.ly/tD2eLo OTNArchBeat OTNArchBeat Java Magazine – by and for the Java Community- inaugural issue bit.ly/tTv8UD OTNArchBeat OTNArchBeat The Heroes of Java: Michael Hüttermann | @MyFear bit.ly/rYYOFe javafx4you javafx4you Development with #JavaFX on #Linux j.mp/uOpe69 #not_for_the_faint_of_heart java Java Contribute Technical Questions for Java Experts at #devoxx bit.ly/up2cN0 netbeans NetBeans Team A simple REST service using #NetBeans 7, #Java Servlet, and #JAXB: t.co/pKkufsD8 AdamBien Adam Bien The most beautiful, and portable slide of the whole #jaxcon for "Die Hard Java EE 6"session checked-in: kenai.com/projects/javae… jaxlondon JAX London Mark Little’s (@nmcl) excellent keynote from #jaxlondon ‘Middleware Everywhere…’ is available in full – t.co/8vBmtDJ1 AdamBien Adam Bien Calculator sample from "Die Hard Java EE 6" #jaxcon session checked-in: t.co/0UqaULfg OTNArchBeat OTNArchBeat ADF Faces – a logic bomb in the order of bean instantiations | @ChrisCMuir bit.ly/vjqRaZ OracleBlogs OracleBlogs ODI 11g y JMS Queue de Weblogic ow.ly/1fzfQJ frankmunz Frank Munz Which WebLogic book do you recommend? Review of S. Alapati’s WebLogic 11g Administration Handbook. bit.ly/rP0RtW JDeveloper JDeveloper & ADF PageFlowScope with Unbounded Task Flows: the magic sauce for multi-browser-tab support in JDeveloper ADF applications dlvr.it/vNFgn OracleBlogs OracleBlogs 3 New ADF Insider Essential training videos published. ow.ly/1fz94q OracleBlogs OracleBlogs Weblogic Server 11gR1 PS2: Administration Essentials book and eBook t.co/ykzwIaqs OracleBlogs OracleBlogs Specialized Partners Only! New Service to Promote Your Events t.co/qTgyEpY4 wlscommunity WebLogic Community Oracle Weblogic Server 11gR1 PS2: Administration Essentials book and eBook andrejusb Andrejus Baranovskis Andrejus Baranovskis’s Blog: Stress Testing Oracle ADF BC Applications – Intern… andrejusb.blogspot.com/2011/11/stress… OracleBlogs OracleBlogs Frank Nimphius presenting a full day of Oracle ADF in Switzerland ow.ly/1fxU78 java Java #JavaEE and #GlassFish: #JavaOne11 Slides, Demos, Replays, Hands-on Labs t.co/tLM0ehrD OracleBlogs OracleBlogs weblogic.security.SecurityInitializationException: Authentication for user weblogic denied ow.ly/1fxmiu glassfish GlassFish The Last Migration – GlassFish Wiki : t.co/Dc5FT1SJ OTNArchBeat OTNArchBeat A Successful Year of @MiddlewareMagic t.co/amcGGTTk OracleWebLogic Oracle WebLogic Unbeatable Performance for your Cloud Applications with Exalogic, #OracleCoherence and #WebLogic. ow.ly/7lYKm OTNArchBeat OTNArchBeat Stress Testing Oracle ADF BC Applications – Passivation and Activation | @AndrejusB bit.ly/sASssL OTNArchBeat OTNArchBeat Review: "Oracle Weblogic Server 11gR1 PS2: Administration Essentials" by Michel Schildmeijer | @MyFear t.co/ll6ra0J9 OTNArchBeat OTNArchBeat GlassFish 3.1.2 themes and features | The Aquarium bit.ly/vVqr9r Andre_van_Dalen Andre van Dalen Masterclass: Advanced Oracle ADF 11g lnkd.in/M_45Pi AdamBien Adam Bien The "lunch" edition of RentACar is pushed into: kenai.com/projects/javae… #wjax AdamBien Adam Bien In munich, room munich at #wjax. Welcome to #javaee workshop. Gather your questions. 
15 minutes to go lucasjellema Lucas Jellema Review by Markus of Michel’s book: t.co/41U9wvOb In short: valuable for novice WLS users, maybe not so much for die-hard WLS admin. biemond Edwin Biemond “@myfear: [blog] #Review: "#Weblogic Server 11gR1 PS2: Administration Essentials" t.co/LsODcb3e” got the same conclusion on amazon glassfish GlassFish Practical advice for deploying Lift apps to GlassFish: bit.ly/t3KUml glassfish GlassFish The unbearable lightness of GlassFish t.co/v9307SEJ javafx4you javafx4you Building Java EE applications in JavaFX: JavaFX 2.0, FXML and Spring j.mp/tiMDUh andrejusb Andrejus Baranovskis Andrejus Baranovskis’s Blog: Stress Testing Oracle ADF BC Applications – Passiv… andrejusb.blogspot.com/2011/11/stress… wlscommunity WebLogic Community “@AMIS_Services: Follow @amis_services To Win a copy of SOA Suite 11g Handbook by @lucasjellema dld.bz/axD22 pls RT” excellent book! glassfish GlassFish GlassFish 3.1.2 themes and features bit.ly/uEc6uZ biemond Edwin Biemond Weblogic pre-sales exam was hard, you really need to know the versions , upgrade path and have a score above 80% monkchips James Governor The Rise and Fall and Rise of Java. JAX 2011 london keynote. how big data and the web are floating the boat. slidesha.re/u3Kzlo glassfish GlassFish Tab Sweep – Jersey, Hudson, GlassFish Hosting, GC’s compared, Spring to JavaEE, Modularity, … bit.ly/u9Cc30 oracletechnet Justin Kestelyn Oracle Tuxedo: A renewed acquaintance t.co/gp0mmf20 OTNArchBeat OTNArchBeat Oracle Enterprise Pack for Eclipse, OEPE 11.1.1.8 bit.ly/tC3eKp OracleBlogs OracleBlogs NetBeans HTML Editor and Groovy Editor in a Multiview Component (Part 2) ow.ly/1ftCeI myfear Markus Eisele [blog] #Oracle 2008 – 2011 in Gartners Magic Quadrant for Enterprise Application Servers t.co/2Bs1vgMZ myfear Markus Eisele [blog] #EclipseCon Europe – Java 7 in the Enterprise goo.gl/fb/r80df #ece2011 #java7 javafx4you javafx4you JavaFX 2.0 for Mac build b07 (developer preview) is available for download j.mp/vSwmBP Enjoy! #JavaFX #Mac OracleBlogs OracleBlogs A deep dive in Oracle WebLogic! @ Contribute November 29th, 2011 Kontich Belgium ow.ly/1fsEZs arungupta Arun Gupta #JavaEE7 slides from #jaxlondon and #jfall11 now available: slidesha.re/sh4iFq AdamBien Adam Bien Just checked-in the results of the #jaxlondon community night (somehow beer related): kenai.com/projects/javae… glassfish GlassFish GlassFish Podcast Episode #080 – User Stories, Part 3: Adam Bien and Sean Comerford (ESPN) blogs.oracle.com/glassfishpodca… glassfish GlassFish Story: t.co/jQPqihJb using GlassFish blogs.oracle.com/stories/entry/… "3000+ requests/sec" and more enterprisejava Java EE Mentions New blog post WebLogic deployment status checks for CI wp.me/pOOSs-F #weblogic #continuousintegration /vi… bit.ly/uZz0fk To become a member of the WebLogic Partner Community, please first log in at http://partner.oracle.com and then visit: http://www.oracle.com/partners/goto/wls-emea Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: twitter,WebLogic,WebLogic Community,OPN,Oracle,Jürgen Kress

    Read the article

  • Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution

    - by Mark Hornick
    The plyr package contains a set of tools for partitioning a problem into smaller sub-problems that can be more easily processed. One function within {plyr} is ddply, which allows you to specify subsets of a data.frame and then apply a function to each subset. The result is gathered into a single data.frame. Such a capability is very convenient. The function ddply also has a parallel option that, if TRUE, will apply the function in parallel, using the backend provided by foreach. This type of functionality is available through Oracle R Enterprise using the ore.groupApply function. In this blog post, we show a few examples from Sean Anderson's "A quick introduction to plyr" to illustrate the corresponding functionality using ore.groupApply. To get started, we'll create a demo data set and load the plyr package.

set.seed(1)
d <- data.frame(year = rep(2000:2014, each = 3),
                count = round(runif(45, 0, 20)))
dim(d)
library(plyr)

This first example takes the data frame, partitions it by year, and calculates the coefficient of variation of the count, returning a data frame.

# Example 1
res <- ddply(d, "year", function(x) {
  mean.count <- mean(x$count)
  sd.count <- sd(x$count)
  cv <- sd.count/mean.count
  data.frame(cv.count = cv)
})

To illustrate the equivalent functionality in Oracle R Enterprise, using embedded R execution, we use the ore.groupApply function on the same data, but pushed to the database, creating an ore.frame. The function ore.push creates a temporary table in the database, returning a proxy object, the ore.frame.

D <- ore.push(d)
res <- ore.groupApply(D, D$year, function(x) {
  mean.count <- mean(x$count)
  sd.count <- sd(x$count)
  cv <- sd.count/mean.count
  data.frame(year = x$year[1], cv.count = cv)
}, FUN.VALUE = data.frame(year = 1, cv.count = 1))

You'll notice the similarities in the first three arguments. With ore.groupApply, we augment the function to return the specific data.frame we want. We also specify the argument FUN.VALUE, which describes the resulting data.frame. From our previous blog posts, you may recall that by default, ore.groupApply returns an ore.list containing the results of each function invocation. To get a data.frame, we specify the structure of the result. The results in both cases are the same; however, the ore.groupApply result is an ore.frame. In this case the data stays in the database until it's actually required. This can result in significant memory and time savings when data is large.

R> class(res)
[1] "ore.frame"
attr(,"package")
[1] "OREbase"
R> head(res)
   year  cv.count
1  2000 0.3984848
2  2001 0.6062178
3  2002 0.2309401
4  2003 0.5773503
5  2004 0.3069680
6  2005 0.3431743

To make ore.groupApply execute in parallel, you can specify the argument parallel with either TRUE, to use default database parallelism, or a specific number, which serves as a hint to the database as to how many parallel R engines should be used. The next ddply example uses the summarise function, which creates a new data.frame. In ore.groupApply, the year column is passed in with the data. Since no automatic creation of columns takes place, we explicitly set the year column in the data.frame result to the value of the first row, since all rows received by the function have the same year. 
# Example 2
ddply(d, "year", summarise, mean.count = mean(count))

res <- ore.groupApply(D, D$year, function(x) {
  mean.count <- mean(x$count)
  data.frame(year = x$year[1], mean.count = mean.count)
}, FUN.VALUE = data.frame(year = 1, mean.count = 1))

R> head(res)
   year mean.count
1  2000   7.666667
2  2001  13.333333
3  2002  15.000000
4  2003   3.000000
5  2004  12.333333
6  2005  14.666667

Example 3 uses the transform function with ddply, which modifies the existing data.frame. With ore.groupApply, we again construct the data.frame explicitly, which is returned as an ore.frame.

# Example 3
ddply(d, "year", transform, total.count = sum(count))

res <- ore.groupApply(D, D$year, function(x) {
  total.count <- sum(x$count)
  data.frame(year = x$year[1], count = x$count, total.count = total.count)
}, FUN.VALUE = data.frame(year = 1, count = 1, total.count = 1))

R> head(res)
   year count total.count
1  2000     5          23
2  2000     7          23
3  2000    11          23
4  2001    18          40
5  2001     4          40
6  2001    18          40

In Example 4, the mutate function with ddply enables you to define new columns that build on columns just defined. Since the construction of the data.frame using ore.groupApply is explicit, you always have complete control over when and how to use columns.

# Example 4
ddply(d, "year", mutate, mu = mean(count), sigma = sd(count), cv = sigma/mu)

res <- ore.groupApply(D, D$year, function(x) {
  mu <- mean(x$count)
  sigma <- sd(x$count)
  cv <- sigma/mu
  data.frame(year = x$year[1], count = x$count, mu = mu, sigma = sigma, cv = cv)
}, FUN.VALUE = data.frame(year = 1, count = 1, mu = 1, sigma = 1, cv = 1))

R> head(res)
   year count        mu    sigma        cv
1  2000     5  7.666667 3.055050 0.3984848
2  2000     7  7.666667 3.055050 0.3984848
3  2000    11  7.666667 3.055050 0.3984848
4  2001    18 13.333333 8.082904 0.6062178
5  2001     4 13.333333 8.082904 0.6062178
6  2001    18 13.333333 8.082904 0.6062178

In Example 5, ddply is used to partition data on multiple columns before constructing the result. Realizing this with ore.groupApply involves creating an index column out of the concatenation of the columns used for partitioning. This example also allows us to illustrate using the ORE transparency layer to subset the data.

# Example 5
baseball.dat <- subset(baseball, year > 2000) # data from the plyr package
x <- ddply(baseball.dat, c("year", "team"), summarize,
           homeruns = sum(hr))

We first push the data set to the database to get an ore.frame. We then add the composite column and perform the subset, using the transparency layer. Since the results from database execution are unordered, we will explicitly sort these results and view the first 6 rows.

BB.DAT <- ore.push(baseball)
BB.DAT$index <- with(BB.DAT, paste(year, team, sep = "+"))
BB.DAT2 <- subset(BB.DAT, year > 2000)
X <- ore.groupApply(BB.DAT2, BB.DAT2$index, function(x) {
  data.frame(year = x$year[1], team = x$team[1], homeruns = sum(x$hr))
}, FUN.VALUE = data.frame(year = 1, team = "A", homeruns = 1), parallel = FALSE)
res <- ore.sort(X, by = c("year", "team"))

R> head(res)
   year team homeruns
1  2001  ANA        4
2  2001  ARI      155
3  2001  ATL       63
4  2001  BAL       58
5  2001  BOS       77
6  2001  CHA       63

Our next example is derived from the ggplot function documentation and illustrates the use of ddply with the ggplot2 package. We first create a data.frame with demo data and use ddply to create some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database, and invoke it on the database server. The graph will be returned to the client window, as depicted below. 
    Our next example is derived from the ggplot function documentation and illustrates the use of ddply in combination with the ggplot2 package. We first create a data.frame with demo data and use ddply to create some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database, and invoke it on the database server. The graph will be returned to the client window.

    # Example 6 with ggplot2
    library(ggplot2)
    df <- data.frame(gp = factor(rep(letters[1:3], each = 10)),
                     y = rnorm(30))
    # Compute sample mean and standard deviation in each group
    library(plyr)
    ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
    # Set up a skeleton ggplot object and add layers:
    ggplot() +
      geom_point(data = df, aes(x = gp, y = y)) +
      geom_point(data = ds, aes(x = gp, y = mean),
                 colour = 'red', size = 3) +
      geom_errorbar(data = ds, aes(x = gp, y = mean,
                                   ymin = mean - sd, ymax = mean + sd),
                    colour = 'red', width = 0.4)

    DF <- ore.push(df)
    ore.tableApply(DF, function(df) {
      library(ggplot2)
      library(plyr)
      ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
      ggplot() +
        geom_point(data = df, aes(x = gp, y = y)) +
        geom_point(data = ds, aes(x = gp, y = mean),
                   colour = 'red', size = 3) +
        geom_errorbar(data = ds, aes(x = gp, y = mean,
                                     ymin = mean - sd, ymax = mean + sd),
                      colour = 'red', width = 0.4)
    })

    But let's take this one step further. Suppose we wanted to produce multiple graphs, partitioned on some index column. We replicate the data three times and add some noise to the y values, just to make the graphs a little different. We also create an index column to form our three partitions. Note that we've also specified that this should be executed in parallel, allowing Oracle Database to control and manage the server-side R engines. The result of ore.groupApply is an ore.list that contains the three graphs. Each graph can be viewed by printing the list element.

    df2 <- rbind(df, df, df)
    df2$y <- df2$y + rnorm(nrow(df2))
    df2$index <- c(rep(1, nrow(df)), rep(2, nrow(df)), rep(3, nrow(df)))

    DF2 <- ore.push(df2)
    res <- ore.groupApply(DF2, DF2$index, function(df) {
      df <- df[, 1:2]
      library(ggplot2)
      library(plyr)
      ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
      ggplot() +
        geom_point(data = df, aes(x = gp, y = y)) +
        geom_point(data = ds, aes(x = gp, y = mean),
                   colour = 'red', size = 3) +
        geom_errorbar(data = ds, aes(x = gp, y = mean,
                                     ymin = mean - sd, ymax = mean + sd),
                      colour = 'red', width = 0.4)
    }, parallel = TRUE)
    res[[1]]
    res[[2]]
    res[[3]]

    To recap, we've illustrated how various uses of ddply from the plyr package can be realized with ore.groupApply, which affords the user explicit control over the contents of the data.frame result in a straightforward manner. We've also highlighted how ddply can be used within an ore.groupApply call.
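    As a closing aside, here is a minimal sketch, reusing D from the first example, of the default return type mentioned earlier: if FUN.VALUE is omitted, ore.groupApply returns an ore.list, whose elements can be examined individually, just as with the graphs above.

    # Sketch: omitting FUN.VALUE yields an ore.list rather than an ore.frame
    res.list <- ore.groupApply(D, D$year, function(x) {
      data.frame(year = x$year[1], mean.count = mean(x$count))
    })
    class(res.list)   # expected: "ore.list"
    res.list[[1]]     # result for the first group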

    Read the article

  • Android 1.5 - 2.1 Search Activity affects Parent Lifecycle

    - by pacoder
    Behavior seems consistent in Android 1.5 to 2.1.

    The short version: when my search activity (using the Android search facility) is fired from the Android Quick Search Box, due to either a suggestion or a search, then UNLESS my search activity in turn fires off a VISIBLE activity that is not the parent of the search, the search parent's lifecycle changes. It will NOT fire onDestroy until I launch a visible activity from it. If I do, onDestroy will fire fine. I need a way to get around this behavior...

    The long version: We have implemented a SearchSuggestion provider and a Search activity in our application. The very odd thing is that if the SearchManager passes control to our custom Search activity, AND that activity does not create a visible Activity, the Activity which parented the search does not destroy (onDestroy doesn't run), and it will not until we call a visible Activity from the parent activity. As long as our Search Activity fires off another Activity that gets focus, the parent activity will fire onDestroy when I back out of it. The trick is that the Activity must have a visual component. I tried to fake it out with a 'pass through' Activity so that my Search Activity could fire off another Intent and bail out, but that didn't work either. I have tried setting our SearchActivity to launch singleTop, I also tried setting its noHistory attribute to true, tried setResult(RESULT_OK) in SearchActivity prior to finish(), and a bunch of other things; nothing is working.

    This is the chunk of code in our Search Activity's onCreate. A couple of notes about it:

    If the Intent is ACTION_SEARCH (the user typed in their own search and didn't pick a suggestion), we display a list of results, as our Search Activity is a ListActivity. In this case, when an item is picked, the Search Activity closes and our parent Activity does fire onDestroy() when we back out.

    If the Intent is ACTION_VIEW (the user picked a suggestion) with type "action", we fire off an Intent that creates a new visible Activity. In this case, same thing: when we leave that new activity and return to the parent activity, the back key does cause the parent activity to fire onDestroy when leaving.

    If the Intent is ACTION_VIEW (the user picked a suggestion) with type "pitem", that is where the problem lies. It works fine (the method call focuses an item on the parent activity), but when the back button is hit on the parent activity, onDestroy is NOT called. IF, after this executes, I pick an option in the parent activity that fires off another activity, then return to the parent and back out, it will fire onDestroy() in the parent activity. Note that the "action" intent ends up running the exact same method call as "pitem"; it just brings up a new visual Activity first. Also, I can take the method call out of "pitem" and just finish(), and the behavior is the same: the parent activity doesn't fire onDestroy() when backed out of.
    if (Intent.ACTION_SEARCH.equals(queryAction)) {
        this.setContentView(_layoutId);
        String searchKeywords = queryIntent.getStringExtra(SearchManager.QUERY);
        init(searchKeywords);
    } else if (Intent.ACTION_VIEW.equals(queryAction)) {
        Bundle bundle = queryIntent.getExtras();
        String key = queryIntent.getDataString();
        String userQuery = bundle.getString(SearchManager.USER_QUERY);
        String[] keyValues = key.split("-");
        if (keyValues.length == 2) {
            String type = keyValues[0];
            String value = keyValues[1];
            if (type.equals("action")) {
                Intent intent = new Intent(this, EventInfoActivity.class);
                Long longKey = Long.parseLong(value);
                intent.putExtra("vo_id", longKey);
                startActivity(intent);
                finish();
            } else if (type.equals("pitem")) {
                Integer id = Integer.parseInt(value);
                _application._servicesManager._mapHandlerSelector.selectInfoItem(id);
                finish();
            }
        }
    }

    It just seems like something is being held onto and I can't figure out what it is; in all cases the Search Activity fires onDestroy() when finish() is called, so it is definitely going away. If anyone has any suggestions I'd be most appreciative. Thanks, Sean Overby

    Read the article

  • ASP.Net MVC 2 Auto Complete Textbox With Custom View Model Attribute & EditorTemplate

    - by SeanMcAlinden
    In this post I’m going to show how to create a generic, ajax-driven AutoComplete text box using the new MVC 2 templates and the jQuery UI library. The template will be automatically displayed when a property is decorated with a custom attribute within the view model.

    The first thing to do is visit my previous blog post to put the custom model metadata provider in place; this is necessary when using custom attributes on the view model. http://weblogs.asp.net/seanmcalinden/archive/2010/06/11/custom-asp-net-mvc-2-modelmetadataprovider-for-using-custom-view-model-attributes.aspx

    Once this is in place, make sure you visit the jQuery UI site and download the latest stable release – in this example I’m using version 1.8.2. Add the jQuery scripts and CSS theme to your project and add references to them in your master page. It should look something like the following:

    Site.Master

    <head runat="server">
        <title><asp:ContentPlaceHolder ID="TitleContent" runat="server" /></title>
        <link href="../../Content/Site.css" rel="stylesheet" type="text/css" />
        <link href="../../css/ui-lightness/jquery-ui-1.8.2.custom.css" rel="stylesheet" type="text/css" />
        <script src="../../Scripts/jquery-1.4.2.min.js" type="text/javascript"></script>
        <script src="../../Scripts/jquery-ui-1.8.2.custom.min.js" type="text/javascript"></script>
    </head>

    Once this is in place we can get started.

    Creating the AutoComplete Custom Attribute

    The auto complete attribute will derive from the abstract MetadataAttribute created in my previous post. It will look like the following:

    AutoCompleteAttribute

    using System.Collections.Generic;
    using System.Web.Mvc;
    using System.Web.Routing;

    namespace Mvc2Templates.Attributes
    {
        public class AutoCompleteAttribute : MetadataAttribute
        {
            public RouteValueDictionary RouteValueDictionary;

            public AutoCompleteAttribute(string controller, string action, string parameterName)
            {
                this.RouteValueDictionary = new RouteValueDictionary();
                this.RouteValueDictionary.Add("Controller", controller);
                this.RouteValueDictionary.Add("Action", action);
                this.RouteValueDictionary.Add(parameterName, string.Empty);
            }

            public override void Process(ModelMetadata modelMetaData)
            {
                modelMetaData.AdditionalValues.Add("AutoCompleteUrlData", this.RouteValueDictionary);
                modelMetaData.TemplateHint = "AutoComplete";
            }
        }
    }

    As you can see, the constructor takes in strings for the controller, action and parameter name. The parameter name will be used for passing the search text within the auto complete text box. The constructor then creates a new RouteValueDictionary, which we will use later to construct the url for getting the auto complete results via ajax.

    The main method of interest is the override called Process. Within the Process method, the route value dictionary is added to the modelMetaData AdditionalValues collection. The TemplateHint is also set to AutoComplete; this means that when the view model is parsed for display, the MVC 2 framework will look for a view user control template called AutoComplete, and if it finds one, it uses that template to display the property.

    The View Model

    To show you how the attribute will look, this is the view model I have used in my example, which can be downloaded at the end of this post.
    View Model

    using System.ComponentModel;
    using Mvc2Templates.Attributes;

    namespace Mvc2Templates.Models
    {
        public class TemplateDemoViewModel
        {
            [AutoComplete("Home", "AutoCompleteResult", "searchText")]
            [DisplayName("European Country Search")]
            public string SearchText { get; set; }
        }
    }

    As you can see, the auto complete attribute is called with the controller name, action name and the name of the action parameter that the search text will be passed into.

    The AutoComplete Template

    Now all of this is in place, it’s time to create the AutoComplete template. Create a ViewUserControl called AutoComplete.ascx at the following location within your application – Views/Shared/EditorTemplates/AutoComplete.ascx – and add the following code:

    AutoComplete.ascx

    <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
    <%
        var propertyName = ViewData.ModelMetadata.PropertyName;
        var propertyValue = ViewData.ModelMetadata.Model;
        var id = Guid.NewGuid().ToString();
        RouteValueDictionary urlData =
            (RouteValueDictionary)ViewData.ModelMetadata.AdditionalValues.Where(x => x.Key == "AutoCompleteUrlData").Single().Value;
        var url = Mvc2Templates.Views.Shared.Helpers.RouteHelper.GetUrl(this.ViewContext.RequestContext, urlData);
    %>
    <input type="text" name="<%= propertyName %>" value="<%= propertyValue %>" id="<%= id %>" class="autoComplete" />
    <script type="text/javascript">
        $(function () {
            $("#<%= id %>").autocomplete({
                source: function (request, response) {
                    $.ajax({
                        url: "<%= url %>" + request.term,
                        dataType: "json",
                        success: function (data) {
                            response(data);
                        }
                    });
                },
                minLength: 2
            });
        });
    </script>

    There is a lot going on in here, but when you break it down it’s quite simple. First, the property name and property value are retrieved through the model metadata. These are required to ensure that the text box input has the correct name and data to allow for model binding; you can see them being used in the creation of the input element. The interesting part is where the RouteValueDictionary we added into the model metadata via the custom attribute is retrieved and then used to create the url. In order to do this I created a quick helper class, which looks like the code below titled RouteHelper. The last bit of script initialises the jQuery UI AutoComplete control with the correct url for calling back to our controller action.
    RouteHelper

    using System.Web.Mvc;
    using System.Web.Routing;

    namespace Mvc2Templates.Views.Shared.Helpers
    {
        public static class RouteHelper
        {
            const string Controller = "Controller";
            const string Action = "Action";
            const string ReplaceFormatString = "REPLACE{0}";

            public static string GetUrl(RequestContext requestContext, RouteValueDictionary routeValueDictionary)
            {
                RouteValueDictionary urlData = new RouteValueDictionary();
                UrlHelper urlHelper = new UrlHelper(requestContext);

                int i = 0;
                foreach (var item in routeValueDictionary)
                {
                    if (item.Value == string.Empty)
                    {
                        i++;
                        urlData.Add(item.Key, string.Format(ReplaceFormatString, i.ToString()));
                    }
                    else
                    {
                        urlData.Add(item.Key, item.Value);
                    }
                }

                var url = urlHelper.RouteUrl(urlData);

                for (int index = 1; index <= i; index++)
                {
                    url = url.Replace(string.Format(ReplaceFormatString, index.ToString()), string.Empty);
                }

                return url;
            }
        }
    }

    See it in action

    All you need to do to see it in action is pass a view model with the new AutoComplete attribute attached from your controller and call the following within your view:

    <%= this.Html.EditorForModel() %>

    NOTE: The jQuery UI auto complete control expects a JSON string returned from your controller action method… as you can’t use the JsonResult to perform GET requests, use a normal action result, convert your data into json and return it as a string via a ContentResult. If you download the solution it will be very clear how to handle the controller and action for this demo.

    The full source code for this post can be downloaded here. It has been developed using MVC 2 and Visual Studio 2010. As always, I hope this has been interesting/useful.

    Kind Regards,

    Sean McAlinden.

    Read the article
