Search Results

Search found 8824 results on 353 pages for 'cloud virtualization vmware density scalability'.


  • Good examples of MapServer / OpenLayers

    - by MarkJ
    I want to convince some clients to use MapServer and OpenLayers. Please can anyone suggest attractive websites to show off the possibilities? The clients will be impressed by:
    - A density map (otherwise known as a heat map, colour-shaded grid coverage, contour plot...).
    - The ability for the user to download the underlying data for the density map, restricted to the area being viewed, in some format such as netCDF.
    - Standard OpenLayers features: zooming, panning, scale bar, overview map...
    - Different base layers, which could be WMS, Google, Bing...
    - Searching for a placename, with the map panned to display the place.
    MapServer.org seems to be down right now :( but from memory their examples didn't have the "wow" factor. The OpenLayers examples demonstrate only one or two features each - I want something that wows the clients by showing all the capabilities in one example. PS: If you have good examples that use some other open source tools, post them by all means - but JavaScript only, please: the customer says no rich client.

    Read the article

  • Top tweets SOA Partner Community – October 2012

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity

    SOA Community: Deploying Fusion Order Demo on 11.1.1.6 by Antony Reynolds http://wp.me/p10C8u-vA
    leonsmiers: Can't wait to test it >> RT @OracleSOA: Case Management patterns, session coverage from #OOW #OracleBPM #ACM #BPM http://bit.ly/OdcZL6
    Danilo Schmiedel: Bye bye San Francisco. #oow was a great conference in a wonderful city! Thanks! @soacommunity pic.twitter.com/lcYSe9xC
    OPITZ CONSULTING: The Journey towards #Oracle #BPM @OpenWorld 2012 - Slides by @t_winterberg & H. Normann: http://ow.ly/edkWE #oow
    demed: Full house at the SOA Customer Advisory Board! #oow12 http://instagr.am/p/QX9B8eLMLS/
    Danilo Schmiedel: "@whitehorsesnl: Had some great talks with the BPM guys at the DEMOgrounds. It is one of the best things at #oow" -> I agree!! @soacommunity
    Mark Simpson: Fusion Middleware Global Innovation Awards: nice to pick up a soa and bpm with our customer. #oow
    Mark Simpson: RT @SOASimone: #oraclesoa #oow hands on lab fully booked pic.twitter.com/pwI94Ew7 <-- quick, provision some more compute power on the cloud!
    Oracle SOA: Join us for BPM and Analytics: Process Dashboards, BAM, and Intelligent Optimization. Moscone South - 308 #OracleBPM #OOW
    Oracle SOA: Real-time public safety demo! License plate recognition and processing in London via Oracle Event Processing. #oow pic.twitter.com/WufesDBq
    Marc: Nice session on customer success stories on #SOA11g with @SOASimone. Pros and cons and architectural overview. #oow pic.twitter.com/bzuhsujm
    Lucas Jellema: Full length keynote on Middleware #oow: http://medianetwork.oracle.com/video/player/1873556035001 #oow_amis
    OracleBlogs: Why Fusion Middleware matters to Oracle Applications and Fusion Applications customers http://ow.ly/2stVQ0
    OracleBlogs: Open World Session - BPM, SOA and ADF Combined: Patterns learned from Fusion Applications http://ow.ly/2suhzf
    Ronald Luttikhuizen: VENNSTER BLOG | Presentations at OpenWorld 2012 | http://blog.vennster.nl/2012/10/presentations-at-openworld-2012.html
    Andrejus Baranovskis: @dschmied @soacommunity next OOW for sure, and maybe a SOA Community event! @soacommunity
    Danilo Schmiedel: @andrejusb Thanks Andrejus - I really enjoyed having a session with you at #oow. When is next time :-) ? @soacommunity
    Lionel Dubreuil: @soacommunity #oow12 Today - 1:15pm - Marriott Marquis Salon 7 - Jump-starting Integration with Oracle Foundation Pack http://bit.ly/QKKJzF
    Ronald Luttikhuizen: Impression from our fault handling session in OSB and SOA Suite from the audience @soacommunity @gschmutz #oow pic.twitter.com/WSg1Z89E
    Marc: Nice session on Oracle Virtual Assembly for #SOA11g, @soacommunity. Works with #exalogic but not required.
    SOA Community: Send your #soacommunity #oow pictures and blog posts @soacommunity or http://www.facebook.com/soacommunity Enjoy OOW ;-)
    Jon Petter Hjulstad: Oracle BPM - big leap forward in 11.1.1.7!
    Whitehorses: Common BPM Use Cases from Oracle #bpm #oow pic.twitter.com/ofOv04EF
    Whitehorses: Oracle BPM 11.1.1.7 top new features. Interesting #oow #oowbenelux pic.twitter.com/HY9QN5un
    SOA Community: Industrialized SOA - topic of Business Technology Magazine http://wp.me/p10C8u-vi
    orclateamsoa: A-Team Blog #ateam: The curious case of SOA Human tasks' automatic completion http://ow.ly/1mq6YU
    Simone Geib: Look for this sign #oow #oraclesoa pic.twitter.com/MJsPV4PO
    Lucas Jellema: My summary of Larry Ellison's keynote at #oow on the AMIS Blog: http://technology.amis.nl/2012/10/01/oow-2012-larry-ellisons-keynote-announcements-exa-cloud-database/ #oow_amis
    gschmutz: Join my #oow session "Five Cool Use Cases for the Spring Component" to see the power of Spring and SOA Suite combined! Moscone 310 - 3:15 PM
    Ronald Luttikhuizen: Thanks to @soacommunity for a great SOA/BPM dinner event yesterday night! #oow pic.twitter.com/v7x3i0DC
    OracleBlogs: OSB, Service Callouts and OQL http://ow.ly/2sq6B2
    OracleBlogs: Cloud and On-Premises Applications Integration using Oracle Integration Adapters http://ow.ly/2sqiDy
    OracleBlogs: Adapters, SOA Suite and More @Openworld 2012 http://ow.ly/2srdTg
    Eric Elzinga: OSB, Service Callouts and OQL - Part 3, http://see.sc/JodzEx #oracleservicebus
    Donatas Valys: interesting articles about SOA industrialization to read #soa #industrialization http://it-republik.de/business-technology/bt-magazin-ausgaben/Industrialized-SOA-000516.html
    gschmutz: "@techsymp: 2012 Symposium Presentation Download Page Now Available! 75% of presentations published. http://www.servicetechsymposium.com" find mine there..
    Oracle BPM: Customer Experience and BPM - From Efficiency to Engagement #bpm #oraclebpm #processmanagement #socialbpm http://pub.vitrue.com/Tahi
    SOA Community: SOA Community Newsletter September 2012 http://wp.me/p10C8u-wa
    SOA Community: again again again.... it is Oracle Open World 2012 http://wp.me/p10C8u-wk
    OracleBlogs: SOA Proactive support http://ow.ly/2smrSJ
    demed: @gschmutz on NoSQL at @techsymp http://lockerz.com/s/247601661
    demed: Just finished "#BigData and its impact on #SOA" talk @techsymp. Really enjoyed getting off the beaten path. #london #oep http://lockerz.com/s/247636974
    OTNArchBeat: Need help selling SOA to business stakeholders? Give them this free eBook. #soasuite http://pub.vitrue.com/hsQY
    SOA Community: Top tweets SOA Partner Community - September 2012 http://wp.me/p10C8u-vc
    SOA Community: Move data into the grid for scalable, predictable response times http://wp.me/p10C8u-vv
    ServiceTechSymposium: The September issue of the Service Technology Magazine is now published with six new items! Read them at http://www.servicetechmag.com
    Marc: Reviewed @Packt_OracleFMW new book on SOA11g administration! Very good! http://tinyurl.com/8pzd5ww
    SOA Community: BPM Solution Catalogue - promote your process templates http://wp.me/p10C8u-vt
    OTNArchBeat: BPM ADF Task forms: Checking whether the current user is in a BPM Swimlane | @ChrisKarlChan http://pub.vitrue.com/aPMG
    OTNArchBeat: Cloud, automation drive new growth in SOA governance market | @JoeMcKendrick http://pub.vitrue.com/hNPv
    Simon Haslam: Looking for "oak style"(!) advanced content but you're a middleware specialist? See #ukoug2012 #middlewaresunday http://2012.ukoug.org/default.asp?p=9355
    Simon Haslam: The #ukoug2012 agenda is "go, go, go!" (as Murray would say!) http://2012.ukoug.org/agendagrid
    Germán Gazzoni: SOA Spezial II available - Industrialized SOA: the revised and expanded new edition of the SOA Spezial special issue... http://bit.ly/PAWwN9
    Oracle SOA: Flip through the new interactive "Oracle SOA Suite eBook - In the Customers' Words" #middleware #soa #oraclesoa http://pub.vitrue.com/NzFZ
    SOA Community: Follow SOA Community on Facebook http://www.facebook.com/soacommunity #soacommunity #opn

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Blog | Twitter | LinkedIn | Mix | Forum

    Technorati Tags: SOA Community twitter, SOA Community, Oracle SOA, Oracle BPM, BPM Community, OPN, Jürgen Kress

    Read the article

  • Fitting Gaussian KDE in numpy/scipy in Python

    - by user248237
    I am fitting a Gaussian kernel density estimator to a variable that is the difference of two vectors, called "diff", as follows: gaussian_kde_covfact(diff, smoothing_param), where gaussian_kde_covfact is defined as:

        class gaussian_kde_covfact(stats.gaussian_kde):
            def __init__(self, dataset, covfact='scotts'):
                self.covfact = covfact
                scipy.stats.gaussian_kde.__init__(self, dataset)

            def _compute_covariance_(self):
                '''not used'''
                self.inv_cov = np.linalg.inv(self.covariance)
                self._norm_factor = sqrt(np.linalg.det(2*np.pi*self.covariance)) * self.n

            def covariance_factor(self):
                if self.covfact in ['sc', 'scotts']:
                    return self.scotts_factor()
                if self.covfact in ['si', 'silverman']:
                    return self.silverman_factor()
                elif self.covfact:
                    return float(self.covfact)
                else:
                    raise ValueError, \
                        'covariance factor has to be scotts, silverman or a number'

            def reset_covfact(self, covfact):
                self.covfact = covfact
                self.covariance_factor()
                self._compute_covariance()

    This works, but there is an edge case where diff is a vector of all 0s. In that case, I get the error:

        File "/srv/pkg/python/python-packages/python26/scipy/scipy-0.7.1/lib/python2.6/site-packages/scipy/stats/kde.py", line 334, in _compute_covariance
          self.inv_cov = linalg.inv(self.covariance)
        File "/srv/pkg/python/python-packages/python26/scipy/scipy-0.7.1/lib/python2.6/site-packages/scipy/linalg/basic.py", line 382, in inv
          if info>0: raise LinAlgError, "singular matrix"
        numpy.linalg.linalg.LinAlgError: singular matrix

    What's a way to get around this? In this case, I'd like it to return a density that's essentially peaked completely at a difference of 0, with no mass everywhere else. Thanks.
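
    A zero-spread sample defeats any covariance-based KDE, so the usual workaround is to special-case it before constructing the estimator. Below is a minimal sketch of that idea; the safe_kde wrapper and its point-mass convention (np.inf at the spike, 0.0 elsewhere) are illustrative choices, not part of scipy - another common option is to add a tiny jitter to the data instead.

        import numpy as np

        def safe_kde(data, covfact='scotts'):
            """Return gaussian_kde_covfact(data), unless the data have zero
            spread, in which case return a point-mass-like density."""
            data = np.asarray(data, dtype=float)
            if np.ptp(data) == 0:  # all values identical -> singular covariance
                point = data.flat[0]
                def density(x):
                    x = np.asarray(x, dtype=float)
                    # spike at `point`, no mass anywhere else
                    return np.where(np.isclose(x, point), np.inf, 0.0)
                return density
            return gaussian_kde_covfact(data, covfact)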

    Read the article

  • AndEngine Physics Editor loading level

    - by Khawar Raza
    I have created a .pes file using PhysicsEditor, exported it as XML, and added it to my project. When I parse it and create the bodies, the result shows strange behavior: the mapping of the bodies is totally different from what I created in PhysicsEditor - the shapes I drew render differently in my app. Here are my XML file and the code that parses it and adds the bodies to the scene.

    PhysicsEditor XML file:

        <?xml version="1.0" encoding="UTF-8"?>
        <!-- created with http://www.physicseditor.de -->
        <bodydef version="1.0">
          <bodies numBodies="1">
            <body name="car_path" dynamic="false" numFixtures="1">
              <fixture density="2" friction="1" restitution="0" filter_categoryBits="1" filter_groupIndex="0" filter_maskBits="65535" isSensor="false" type="POLYGON" numPolygons="20">
                <polygon numVertexes="6"><vertex x="277.0000" y="152.0000"/><vertex x="356.0000" y="172.0000"/><vertex x="413.0000" y="194.0000"/><vertex x="476.0000" y="223.0000"/><vertex x="173.0000" y="232.0000"/><vertex x="174.0000" y="148.0000"/></polygon>
                <polygon numVertexes="4"><vertex x="1556.0000" y="221.0000"/><vertex x="1142.0000" y="94.0000"/><vertex x="1255.0000" y="-15.0000"/><vertex x="1554.0000" y="-14.0000"/></polygon>
                <polygon numVertexes="3"><vertex x="-192.0000" y="177.0000"/><vertex x="-888.0000" y="139.0000"/><vertex x="-549.0000" y="-125.0000"/></polygon>
                <polygon numVertexes="6"><vertex x="1762.0000" y="24.0000"/><vertex x="1862.0000" y="27.0000"/><vertex x="1927.0000" y="68.0000"/><vertex x="2078.0000" y="222.0000"/><vertex x="1643.0000" y="212.0000"/><vertex x="1642.0000" y="38.0000"/></polygon>
                <polygon numVertexes="3"><vertex x="-1150.0000" y="146.0000"/><vertex x="-1776.0000" y="140.0000"/><vertex x="-1476.0000" y="-25.0000"/></polygon>
                <polygon numVertexes="4"><vertex x="-2799.0000" y="103.0000"/><vertex x="-2684.0000" y="223.0000"/><vertex x="-3112.0000" y="256.0000"/><vertex x="-3108.0000" y="98.0000"/></polygon>
                <polygon numVertexes="3"><vertex x="3112.0000" y="255.0000"/><vertex x="2422.0000" y="222.0000"/><vertex x="3120.0000" y="-71.0000"/></polygon>
                <polygon numVertexes="4"><vertex x="1142.0000" y="94.0000"/><vertex x="1556.0000" y="221.0000"/><vertex x="709.0000" y="226.0000"/><vertex x="911.0000" y="93.0000"/></polygon>
                <polygon numVertexes="6"><vertex x="-2111.0000" y="89.0000"/><vertex x="-2067.0000" y="94.0000"/><vertex x="-2002.0000" y="139.0000"/><vertex x="-2344.0000" y="223.0000"/><vertex x="-2196.0000" y="112.0000"/><vertex x="-2153.0000" y="91.0000"/></polygon>
                <polygon numVertexes="4"><vertex x="105.0000" y="233.0000"/><vertex x="-94.0000" y="178.0000"/><vertex x="69.0000" y="106.0000"/><vertex x="91.0000" y="104.0000"/></polygon>
                <polygon numVertexes="3"><vertex x="-2002.0000" y="139.0000"/><vertex x="-2067.0000" y="94.0000"/><vertex x="-2032.0000" y="110.0000"/></polygon>
                <polygon numVertexes="4"><vertex x="-1150.0000" y="146.0000"/><vertex x="105.0000" y="233.0000"/><vertex x="-2344.0000" y="223.0000"/><vertex x="-2002.0000" y="139.0000"/></polygon>
                <polygon numVertexes="3"><vertex x="413.0000" y="194.0000"/><vertex x="356.0000" y="172.0000"/><vertex x="376.0000" y="176.0000"/></polygon>
                <polygon numVertexes="3"><vertex x="105.0000" y="233.0000"/><vertex x="-192.0000" y="177.0000"/><vertex x="-94.0000" y="178.0000"/></polygon>
                <polygon numVertexes="4"><vertex x="105.0000" y="233.0000"/><vertex x="-1150.0000" y="146.0000"/><vertex x="-888.0000" y="139.0000"/><vertex x="-192.0000" y="177.0000"/></polygon>
                <polygon numVertexes="3"><vertex x="3112.0000" y="255.0000"/><vertex x="-3112.0000" y="256.0000"/><vertex x="-2684.0000" y="223.0000"/></polygon>
                <polygon numVertexes="3"><vertex x="3112.0000" y="255.0000"/><vertex x="1556.0000" y="221.0000"/><vertex x="1643.0000" y="212.0000"/></polygon>
                <polygon numVertexes="3"><vertex x="709.0000" y="226.0000"/><vertex x="173.0000" y="232.0000"/><vertex x="476.0000" y="223.0000"/></polygon>
                <polygon numVertexes="3"><vertex x="3112.0000" y="255.0000"/><vertex x="2078.0000" y="222.0000"/><vertex x="2422.0000" y="222.0000"/></polygon>
                <polygon numVertexes="3"><vertex x="3112.0000" y="255.0000"/><vertex x="105.0000" y="233.0000"/><vertex x="173.0000" y="232.0000"/></polygon>
              </fixture>
            </body>
          </bodies>
          <metadata>
            <format>1</format>
            <ptm_ratio></ptm_ratio>
          </metadata>
        </bodydef>

    And here is my code:

        private void loadLevel() {
            AssetManager assetManager = getAssets();
            try {
                InputStream stream = assetManager.open("tmx/path1.xml");
                if (stream != null) {
                    try {
                        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
                        dbf.setValidating(false);
                        dbf.setIgnoringComments(false);
                        dbf.setIgnoringElementContentWhitespace(true);
                        dbf.setNamespaceAware(true);
                        DocumentBuilder db = dbf.newDocumentBuilder();
                        Document document = db.parse(stream);
                        Element root = document.getDocumentElement();
                        NodeList bodiesNodeList = root.getElementsByTagName("bodies");
                        for (int i = 0; i < bodiesNodeList.getLength(); i++) {
                            BodyDef bodyDef = new BodyDef();
                            bodyDef.type = BodyType.StaticBody;
                            bodyDef.fixedRotation = true;
                            Element bodiesElement = (Element) bodiesNodeList.item(i);
                            NodeList bodyList = bodiesElement.getElementsByTagName("body");
                            for (int j = 0; j < bodyList.getLength(); j++) {
                                Element bodyElement = (Element) bodyList.item(j);
                                Body body = mPhysicsWorld.createBody(bodyDef);
                                NodeList fixtureList = bodyElement.getElementsByTagName("fixture");
                                for (int k = 0; k < fixtureList.getLength(); k++) {
                                    Element fixtureElement = (Element) fixtureList.item(k);
                                    FixtureDef fixtureDef = new FixtureDef();
                                    if (fixtureElement != null) {
                                        String density = fixtureElement.getAttribute("density");
                                        String friction = fixtureElement.getAttribute("friction");
                                        String restitution = fixtureElement.getAttribute("restitution");
                                        fixtureDef = PhysicsFactory.createFixtureDef(
                                                Float.parseFloat(density),
                                                Float.parseFloat(friction),
                                                Float.parseFloat(restitution));
                                    }
                                    NodeList polygonList = fixtureElement.getElementsByTagName("polygon");
                                    if (polygonList != null && polygonList.getLength() > 0) {
                                        for (int m = 0; m < polygonList.getLength(); m++) {
                                            PolygonShape polyShape = new PolygonShape();
                                            Element polygonElement = (Element) polygonList.item(m);
                                            NodeList vertexList = polygonElement.getElementsByTagName("vertex");
                                            if (vertexList != null && vertexList.getLength() > 0) {
                                                Vector2[] vectors = new Vector2[vertexList.getLength()];
                                                for (int n = 0; n < vertexList.getLength(); n++) {
                                                    Element vertexElement = (Element) vertexList.item(n);
                                                    if (vertexElement != null) {
                                                        float x = Float.parseFloat(vertexElement.getAttribute("x"));
                                                        float y = Float.parseFloat(vertexElement.getAttribute("y"));
                                                        vectors[n] = new Vector2(
                                                                x / PIXEL_TO_METER_RATIO_DEFAULT,
                                                                y / PIXEL_TO_METER_RATIO_DEFAULT);
                                                    }
                                                }
                                                polyShape.set(vectors);
                                                fixtureDef.shape = polyShape;
                                            }
                                            body.createFixture(fixtureDef);
                                        }
                                    }
                                }
                                mScene.attachChild(bgSprite);
                                mPhysicsWorld.registerPhysicsConnector(new PhysicsConnector(bgSprite, body, false, false));
                            }
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

    Any idea where I am going wrong?

    Read the article

  • Advanced Continuous Delivery to Azure from TFS, Part 1: Good Enough Is Not Great

    - by jasont
    The folks over on the TFS / Visual Studio team have been working hard at releasing a steady stream of new features for their new hosted Team Foundation Service in the cloud. One of the most significant features released was simple continuous delivery of your solution into your Azure deployments. The original announcement from Brian Harry can be found here.

    Team Foundation Service is a great platform for .Net developers who are used to working with TFS on-premises. I've been using it since it became available at the //BUILD conference in 2011, and when I recently came to work at Stackify, it was one of the first changes I made. Managing work items is much easier than the tool we were using previously, although there are some limitations (more on that in another blog post). However, when continuous deployment was made available, it blew my mind. It was the killer feature I didn't know I needed. Not to say that I wasn't previously an advocate for continuous delivery; just that it was always a pain to set up and configure. Having it hosted - and a one-click setup - well, that's just the best thing since sliced bread.

    It made perfect sense: my source code is in the cloud, and my deployment is in the cloud. Great! I can queue up a build from my iPad or phone and just let it go! I quickly tore through the quick setup and saw it all work... sort of. This will be the first in a three part series on how to take the building block of Team Foundation Service continuous delivery and build a CD model that will actually work for any team deploying something more advanced than a "Hello World" example.
    - Part 1: Good Enough Is Not Great
    - Part 2: A Model That Works: Branching and Multiple Deployment Environments
    - Part 3: Other Considerations: SQL, Custom Tasks, Etc.

    Good Enough Is Not Great
    There. I've said it. I certainly hope no one on the TFS team is offended, but it's the truth. Let's take a look under the hood and understand how it works, and then why it's not enough to handle real world CD as-is.

    How it works
    (Note that I've skipped a couple of steps; I already have my accounts set up and something deployed to Azure.) The first step is to establish some oAuth magic between your Azure management portal and your TFS instance. You do this via the management portal. Once it's done, you have a new build process template in your TFS instance. (Image lifted from the documentation.) From here, you'll get the usual prompts for security, allowing access, etc. But you'll also get to pick which solution in your source control to build. Here's what the bulk of the build definition looks like: all I've had to do is add in the solution to build (notice that mine is from a specific branch - Release - more on that later) and I've changed the configuration. I trigger the build, and voila! I have an Azure deployment a few minutes later. The beauty of this is that it's all in the cloud and I'm not waiting for my machine to compile and upload the package. (I also had to enable the build definition first - by default it is created in a disabled state, probably a good thing since it will trigger on every.single.checkin by default.) I get to see a history of deployments from the Azure portal, and can link into TFS to see the associated changesets and work items. You'll notice also that this build definition automatically put my code in the Staging slot of my Azure deployment - more on this soon. For now, I can VIP swap and be in production. (P.S. I hate VIP swap and "production" and "staging" in Azure. More on that later too.) That's it. That's the default out-of-box experience. Easy, right? But it's full of room for improvement, so let's get into that.

    The Problems
    Nothing is perfect (except my code - it's always perfect), and neither is Continuous Deployment without a bit of work to help it fit your dev team's process. So what are the issues?

    Issue 1: Staging vs QA vs Prod vs whatever other environments your team may have. This, for me, is the big hairy one. Remember how this automatically deployed to staging rather than prod for us? There are a couple of issues with this model: If I want to deliver to prod, it requires intervention on my part after deployment (via a VIP swap). If I truly want to promote between environments (i.e. Nightly Build -> Stable QA -> Production) I likely have configuration changes between each environment, such as database connection strings, and this process (and the VIP swap) doesn't account for this. Yet.

    Issue 2: Branching and delivering on every check-in. As I mentioned above, I have set this up to target a specific branch - Release - of my code. For the purposes of this example, I have adopted the "basic" branching strategy as defined by the ALM Rangers. This basically establishes a "Main" trunk where you branch off Dev and Release branches. Granted, the Release branch is usually the only thing you will deploy to production, but you certainly don't want to roll to production automatically when you merge to the Release branch and check in (unless you like the thrill of it, and in that case, I like your style, cowboy...). Rather, you have nightly build and QA environments, or if you've adopted the feature-branch model you have environments for those. Those are the environments you want to continuously deploy to. But that takes us back to Issue 1: we currently have a 1:1 solution-to-Azure-deployment-target model.

    Issue 3: SQL and other custom tasks. Let's be honest and address the elephant in the room: I need to get some sleep because I see an elephant in the room. But seriously, I can't think of an application I have touched in the last 10 years that doesn't need to consider SQL changes when deploying code and upgrading an environment. Microsoft seems perfectly content to ignore this elephant for now: yes, they've added Data Tier Applications. But let's be honest with ourselves again: no one really uses it, and it's not suitable for anything more complex than a Hello World sample project database. Why? Because it doesn't fit well into a great source control story. Developers make stored procedure and table changes all day long while coding complex applications, and if someone forgets to go update the DACPAC before the automated deployment, you have a broken build until it's completed. Developers - not just DBAs - also like to work with SQL in SQL tools, not in Visual Studio. I'm really picking on SQL because that's generally the biggest concern that I hear. But we need to account for any custom tasks as well in the build process.

    The Solutions...?
    We've taken a look at how this all works, and addressed the shortcomings. In my next post (which I promise will be very, very soon), I will detail how I've overcome these shortcomings and used this foundation to create a mature, flexible model for deploying my app - any version, any time, to any environment.

    Read the article

  • How to set the position of a sprite within a box2d body?

    - by Frank
    Basically I have 2 polygons for my body. When I add a sprite for userData, the position of the texture isn't where I want it to be. What I want to do is adjust the position of the texture within the body. Here's the code sample of where I am setting this:

        CCSpriteSheet *sheet = (CCSpriteSheet *) [self getChildByTag:kTagSpriteSheet];
        CCSprite *pigeonSprite = [CCSprite spriteWithSpriteSheet:sheet rect:CGRectMake(0, 0, 40, 32)];
        [sheet addChild:pigeonSprite z:0 tag:kPigeonSprite];
        pigeonSprite.position = ccp(p.x, p.y);

        bodyDef.position.Set(p.x / PTM_RATIO, p.y / PTM_RATIO);
        bodyDef.userData = pigeonSprite;  // note: `pigeonSprite` is the only sprite in scope here
        b2Body *body = world->CreateBody(&bodyDef);

        b2CircleShape dynamicCircle;
        dynamicCircle.m_radius = .25f;
        dynamicCircle.m_p.Set(0.0f, 1.0f);

        // Define the dynamic body fixture.
        b2FixtureDef circleDef;
        circleDef.shape = &dynamicCircle;
        circleDef.density = 1.0f;
        circleDef.friction = 0.3f;
        body->CreateFixture(&circleDef);

        b2Vec2 vertices[3];
        vertices[0].Set(-0.5f, 0.0f);
        vertices[1].Set(0.5f, 0.0f);
        vertices[2].Set(0.0f, 1.0f);

        b2PolygonShape triangle;
        triangle.Set(vertices, 3);

        b2FixtureDef triangleDef1;
        triangleDef1.shape = &triangle;
        triangleDef1.density = 1.0f;
        triangleDef1.friction = 0.3f;
        body->CreateFixture(&triangleDef1);

    Read the article

  • Announcing: Improvements to the Windows Azure Portal

    - by ScottGu
    Earlier today we released a number of enhancements to the new Windows Azure Management Portal. These new capabilities include:
    - Service Bus Management and Monitoring
    - Support for Managing Co-administrators
    - Import/Export Support for SQL Databases
    - Virtual Machine Experience Enhancements
    - Improved Cloud Service Status Notifications
    - Media Services Monitoring Support
    - Storage Container Creation and Access Control Support
    All of these improvements are now live in production and available to start using immediately. Below are more details on them.

    Service Bus Management and Monitoring
    The new Windows Azure Management Portal now supports Service Bus management and monitoring. Service Bus provides rich messaging infrastructure that can sit between applications (or between cloud and on-premises environments) and allow them to communicate in a loosely coupled way for improved scale and resiliency. With the new Service Bus experience, you can now create and manage Service Bus Namespaces, Queues, Topics, Relays and Subscriptions. You can also get rich monitoring for Service Bus Queues, Topics and Subscriptions. To create a Service Bus namespace, you can now select the "Service Bus" tab in the Windows Azure portal and then simply select the CREATE command. Doing so will bring up a new "Create a Namespace" dialog that allows you to name and create a new Service Bus Namespace. Once created, you can obtain security credentials associated with the Namespace via the ACCESS KEY command. This gives you the ability to obtain the connection string associated with the service namespace; you can copy and paste these values into any application that requires these credentials. It is also now easy to create Service Bus Queues and Topics via the NEW experience in the portal drawer: simply click the NEW command and navigate to the "App Services" category to create a new Service Bus entity. Once you provision a new Queue or Topic it can be managed in the portal. Clicking on a namespace will display all queues and topics within it, and clicking on an item in the list will allow you to drill down into a dashboard view that allows you to monitor the activity and traffic within it, as well as perform operations on it. For example, a view of an "orders" queue now surfaces both the incoming and outgoing message flow rate, as well as the total queue length and queue size. To monitor pub/sub subscriptions you can use the ADD METRICS command within a topic and select a specific subscription to monitor.

    Support for Managing Co-Administrators
    You can now add co-administrators for your Windows Azure subscription using the new Windows Azure Portal. This allows you to share management of your Windows Azure services with other users. Subscription co-administrators share the same administrative rights and permissions that the service administrator has, except that a co-administrator cannot change or view billing details about the account, nor remove the service administrator from a subscription. In the SETTINGS section, click on the ADMINISTRATORS tab, and select the ADD button to add a co-administrator to your subscription. To add a co-administrator, you specify the email address for a Microsoft account (formerly Windows Live ID) or an organizational account, and choose the subscription you want to add them to. You can later update the subscriptions that the co-administrator has access to by clicking on the EDIT button, and then selecting or deselecting the subscriptions to which they belong.

    Import/Export Support for SQL Databases
    The Windows Azure administration portal now supports importing and exporting SQL Databases to/from Blob Storage. Databases can be imported/exported to blob storage using the same BACPAC file format that is supported with SQL Server 2012. Among other benefits, this makes it easy to copy and migrate databases between on-premises and cloud environments. SQL Databases now have an EXPORT command in the bottom drawer that, when pressed, will prompt you to save your database to a Windows Azure storage container. The UI allows you to choose an existing storage account or create a new one, as well as the name of the BACPAC file to persist in blob storage. You can also now import and create a new SQL Database by using the NEW command; this will prompt you to select the storage container and file to import the database from. The Windows Azure Portal enables you to monitor the progress of import and export operations. If you choose to log out of the portal, you can come back later and check on the status of all of the operations in the new history tab of the SQL Database server - this shows your entire import and export history and the status (success/fail) of each.

    Enhancements to the Virtual Machine Experience
    One of the common pain-points we have heard from customers using the preview of our new Virtual Machine support has been the inability to delete the associated VHDs when a VM instance (or VM drive) gets deleted. Prior to today's release the VHDs would continue to sit in your storage account and accumulate storage charges. You can now navigate to the Disks tab within the Virtual Machine extension, select a VM disk to delete, and click the DELETE DISK command. When you click the DELETE DISK button you have the option to delete the disk plus the associated .VHD file (completely clearing it from storage); alternatively, you can delete the disk but still retain a .VHD copy of it in storage.

    Improved Cloud Service Status Notifications
    The Windows Azure portal now exposes more information about the health status of role instances. If any of the instances are in a non-running state, the status at the top of the dashboard will summarize the status (and update automatically as the role health changes). Clicking the instance hyperlink within this status summary view will navigate you to a detailed role instance view, and allow you to get more detailed health status of each of the instances. The portal has been updated to provide more specific status information within this detailed view - giving you better visibility into the health of your app.

    Monitoring Support for Media Services
    Windows Azure Media Services allows you to create media processing jobs (for example: encoding media files) in your Windows Azure Media Services account. In the Windows Azure Portal, you can now monitor the number of encoding jobs that are queued up for processing as well as active, failed and queued tasks for encoding jobs. On your media services account dashboard, you can visualize the monitoring data for the last 6 hours, 24 hours or 7 days.

    Storage Container Creation and Access Control Support
    You can now create Windows Azure Storage containers from within the Windows Azure Portal. After selecting a storage account, you can navigate to the CONTAINERS tab and click the ADD CONTAINER command. This will display a dialog that lets you name the new container and control access to it. You can also update the access settings as well as container metadata of existing containers by selecting one and then using the new EDIT CONTAINER command; this will then bring up the edit container dialog that allows you to change and save its settings. In addition to creating and editing containers, you can click on them within the portal to drill in and view blobs within them.

    Summary
    The above features are all now live in production and available to use immediately. If you don't already have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center to learn more about how to build apps with it. We'll have even more new features and enhancements coming later this month - including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5, and support .NET 4.5 with Websites). Keep an eye out on my blog for details as these new features become available.

    Hope this helps,
    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx]

    I'm wrapping up a bit of the work we've been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I'd share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.

    Summary: for those who don't like to read detailed posts or don't have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains: upwards of 10x-24x, and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards).

    Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results (we have performed intra-cloud tests and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes; this will be detailed separately).

    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.

    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps yield a sufficiently balanced set of results.

    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following; less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal).

    We then tested the effects of blocking and parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size, including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn't worth the trouble, and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here.

    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a "negative optimization" due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as supported by the raw data provided in the linked worksheet), the charts and discussion below ignore source file sizes less than 1MB.

    The first chart illustrates some interesting points about the results:
    - When the block size is smaller than the source file, performance increases; but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
    - For some of the moderately-sized source files, small blocks (256KB) are best.
    - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, the increased number of individual transfer requests, and reassembly/committal costs).
    - Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant.
    - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.

    The second chart is another view of the same data, with the axes swapped (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but also highlights the benefits of some of the other block sizes at different source file sizes. The last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than that this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

    Summary
    What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions. (A short sketch of this split/upload/commit pattern appears after the resource list below.)

    Related Resources
    - Source code for upload test application
    - Source code for random file generator
    - OData feed of raw data from non-optimized transfer tests (Experiment Metadata; Experiment Datasets; 2KB, 32KB, 64KB, 128KB, 256KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB and 1GB Uploads; Raw Data)
    - OData feeds of raw data from blocked/parallelized transfer tests (Experiment Metadata; Experiment Datasets; Raw Data; 256KB, 512KB, 1MB, 2MB and 4MB Blocks)
    - Excel worksheet showing summarizations and comparisons
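
    To make the blocked-and-parallel pattern concrete, here is a minimal sketch in Python rather than the post's C# harness. The put_block and put_block_list callables stand in for the storage client's PutBlock/PutBlockList operations; they are placeholders for illustration, not a real SDK signature.

        import hashlib
        from concurrent.futures import ThreadPoolExecutor

        BLOCK_SIZE = 1024 * 1024  # 1MB: the post's best fixed block size

        def upload_blocked(path, put_block, put_block_list, workers=8):
            """Split a file into blocks, upload the blocks in parallel, then commit."""
            blocks = []
            with open(path, 'rb') as f:
                while True:
                    data = f.read(BLOCK_SIZE)
                    if not data:
                        break
                    blocks.append(data)
            ids = ['blk%06d' % i for i in range(len(blocks))]

            def send(i):
                # per-block MD5 mirrors the post's integrity check
                put_block(ids[i], blocks[i], hashlib.md5(blocks[i]).digest())

            with ThreadPoolExecutor(max_workers=workers) as pool:
                list(pool.map(send, range(len(blocks))))
            put_block_list(ids)  # commit the block list in order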

    Read the article

  • Why should the "prime-based" hashcode implementation be used instead of the "naive" one?

    - by Wilhelm
    I have seen that a prime number implementation of the GetHashCode function is recommended, for example here. However, using the following code (in VB, sorry), it seems as if that implementation gives the same hash density as a "naive" XOR implementation. If the density is the same, I would suppose there is the same probability of collision in both implementations. Am I missing anything on why the prime approach is preferred? I am supposing that if the hash code is a byte I do not lose generality for the integer case.

        Sub Main()
            Dim XorHashes(255) As Integer
            Dim PrimeHashes(255) As Integer

            For i = 0 To 255
                For j = 0 To 255
                    For k = 0 To 255
                        XorHashes(GetXorHash(i, j, k)) += 1
                        PrimeHashes(GetPrimeHash(i, j, k)) += 1
                    Next
                Next
            Next

            For i = 0 To 255
                Console.WriteLine("{0}: {1}, {2}", i, XorHashes(i), PrimeHashes(i))
            Next
            Console.ReadKey()
        End Sub

        Public Function GetXorHash(ByVal valueOne As Integer, ByVal valueTwo As Integer, ByVal valueThree As Integer) As Byte
            Return CByte((valueOne Xor valueTwo Xor valueThree) Mod 256)
        End Function

        Public Function GetPrimeHash(ByVal valueOne As Integer, ByVal valueTwo As Integer, ByVal valueThree As Integer) As Byte
            Dim TempHash = 17
            TempHash = 31 * TempHash + valueOne
            TempHash = 31 * TempHash + valueTwo
            TempHash = 31 * TempHash + valueThree
            Return CByte(TempHash Mod 256)
        End Function
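
    One thing the bucket-count experiment cannot reveal: with inputs drawn uniformly from the full range, almost any reduction modulo 256 looks uniform. The practical difference shows up on related inputs. XOR is commutative and self-cancelling, so every permutation of a triple collides (and two equal fields cancel out entirely), while the prime scheme is order-sensitive. A small Python sketch of the same two hashes, as an illustration rather than a translation of the VB harness:

        from itertools import permutations

        def xor_hash(a, b, c):
            # mirrors GetXorHash: XOR the fields, reduce to a byte
            return (a ^ b ^ c) % 256

        def prime_hash(a, b, c):
            # mirrors GetPrimeHash: h = 17, then h = 31*h + field
            h = 17
            for v in (a, b, c):
                h = 31 * h + v
            return h % 256

        triple = (1, 2, 3)
        print({xor_hash(*p) for p in permutations(triple)})    # one bucket: {0}
        print({prime_hash(*p) for p in permutations(triple)})  # several distinct buckets

    This order-sensitivity on structured, real-world inputs - not the marginal density over all possible triples - is the usual argument for the 17/31 pattern.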

    Read the article

  • CodePlex Daily Summary for Thursday, September 06, 2012

    Popular Releases
    - menu4web: menu4web 0.4.1 - javascript menu for web sites: This release is for those who believe that global variables are evil. menu4web has been wrapped into the m4w singleton object. Added "Vertical Tabs" example which illustrates object notation.
    - WinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.1: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For the compiled version use NuGet; you can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit. Features: AsyncUI extensions, controls and control extensions, converters, debugging helpers, imaging, IO helpers, VisualTree helpers, samples. Recent changes NOTE: Namespace changes DebugConsol...
    - iPDC - Free Phasor Data Concentrator: iPDC-v1.3.1: iPDC suite version 1.3.1, modifications and bugs fixed (from v1.3.0). New user manual for iPDC-v1.3.1 available on websites. Bug resolved: PMU Simulator TCP connection error and hung connection for the client (PDC). Now the PMU Simulator (server) can communicate with more than one PDC (client) over TCP and UDP in parallel. The PMU Simulator now sends the exact data frames at the data rate specified by the user. The PMU Simulator data rate has been verified by iPDC database entries and PMU Connection Tes...
    - Microsoft SQL Server Product Samples: Database: AdventureWorks OData Feed: The AdventureWorks OData service exposes resources based on specific SQL views. The SQL views are a limited subset of the AdventureWorks database that results in several consuming scenarios: CompanySales, Documents, ManufacturingInstructions, ProductCatalog, TerritorySalesDrilldown, WorkOrderRouting. How to install the sample: you can consume the AdventureWorks OData feed from http://services.odata.org/AdventureWorksV3/AdventureWorks.svc. You can also consume the AdventureWorks OData fe...
    - Desktop Google Reader: 1.4.6: Sorting feeds alphabetically is now optional (see preferences window).
    - DotNetNuke® Community Edition CMS: 06.02.03: Major highlights: Fixed issue where mailto: links were not working when sending bulk email. Fixed issue where users did not see friendship relationships (problem is in 6.2, which does not show in the Versions Affected list above). Fixed the issue with cascade deletes in comments in CoreMessaging_Notification. Fixed UI issue when using a date field as a required profile property during user registration. Fixed error when running the product in debug mode. Fixed visibility issue when...
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.65: Fixed null-reference error in the build task constructor.
    - Active Forums for DotNetNuke CMS: Active Forums 5.0.0 RC: RC release of Active Forums 5.0.
    - Droid Explorer: Droid Explorer 0.8.8.7 Beta: Bug in the display icon for apk's, will fix with next release. Added fallback icon if unable to get the image/icon from the Cloud Service. Removed some stale plugins that were either outdated or incomplete. Added handler for *.ab files for restoring backups. Added plugin to create device backups; backups stored in %USERPROFILE%\Android Backups\%DEVICE_ID%\. Added custom folder icon for the Android backups directory. Better error handling for installing an apk. Bug fixes for the Runn...
    - BI System Monitor: v2.1: Data Audits report and supporting SQL and SSIS package. Environment Overview report enhancements, improving the appearance, addition of data audit finding indicators. Note: SQL 2012 version coming soon.
    - Hidden Capture (HC): Hidden Capture 1.1: Hidden Capture 1.1 by Mohsen E.Dawatgar http://Hidden-Capture.blogfa.com
    - Ext Spec: Ext Spec 0.2.1: Refined examples and improved distribution options.
    - The Visual Guide for Building Team Foundation Server 2012 Environments: Version 1
    - Nearforums - ASP.NET MVC forum engine: Nearforums v8.5: Version 8.5 of Nearforums, the ASP.NET MVC Forum Engine. New features include: built-in search engine using Lucene.NET; flood control improvements; notifications improvements (sync option and mail body); view Roadmap for more details. webdeploy package sha1 checksum: 961aff884a9187b6e8a86d68913cdd31f8deaf83
    - WiX Toolset: WiX Toolset v3.6: WiX Toolset v3.6 introduces the Burn bootstrapper/chaining engine and support for Visual Studio 2012 and .NET Framework 4.5. Other minor functionality includes: WixDependencyExtension supports dependency checking among MSI packages. WixFirewallExtension supports more features of Windows Firewall. WixTagExtension supports Software Id Tagging. WixUtilExtension now supports recursive directory deletion. Melt simplifies pure-WiX patching by extracting .msi package content and updating .w...
    - Iveely Search Engine: Iveely Search Engine (0.2.0)
    - GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Add QQ Mail; bugfix.
    - Smart Data Access layer: Smart Data Access Layer Ver 3: In this version, support for executing inline queries is added. Check the Documentation section for details.
    - DotNetNuke® Form and List: 06.00.04: DotNetNuke Form and List 06.00.04. Don't forget to back up your installation before upgrading. Changes in 06.00.04 - Fix: SQL scripts for 6.0.03 missed object qualifiers within stored procedures; Fix: added missing resource "cmdCancel.Text" in form.ascx.resx. Changes in 06.00.03 - Fix: MakeThumbnail was broken if the application pool was configured to .NET 4; Change: data is now stored in nvarchar(max) instead of ntext. Changes in 06.00.02 - The scripts are now compatible with SQL Azure, tested in a ne...
    - Coevery - Free CRM: Coevery 1.0.0.24: Add a sample database, and installation instructions.

    New Projects
    - Any-Service: AnyService is a .NET 4.0 Windows service shell. It hosts any Windows application in non-GUI mode to run as a service.
    - BabyCloudDrives - the multi cloud drive desktop's application: WPF
    - BLACK ORANGE: Download the HPAD TEXT EDITOR and use it wisely.
    - CodePlex New Release Checker: CodePlex New Release Checker is a small library that makes it easy to add "New Version Available!" functionality to your CodePlex project.
    - Collect
    - CSVManager
    - Exam Project: My Exam Project. Computer Vision, C and OpenCV
    - -FTP: Hey guys, thanks for checking out my ftp!
    - Haushaltsbuch
    - ModMaker.Lua: ModMaker.Lua is an open source .NET library that parses and executes Lua code.
    - MyJabbr: MyJabbr
    - netduinoscope: Design shield and software to use netduino as an oscilloscope.
    - NetSurveillance Web Application: Net Surveillance Web Application
    - NiconicoApiHelper
    - OStega: A simple library for encrypting text into a bmp or png image.
    - OURORM: orm
    - TFS Cloud Deployment Toolkit: The TFS Cloud Deployment Toolkit is a set of tools that integrate with TFS 2010 to help manage configuration and deployment to various remote environments.
    - The Visual Guide for Building Team Foundation Server 2012 Environments: A step-by-step guide for building Team Foundation Server 2012 environments that include SharePoint Server 2010, SQL Server 2012, Windows Server 2012 and more!
    - WinRT LineChart: An attempt at creating a usable LineChart for everyone to use in his/her own Windows 8 apps.

    Read the article

  • Sparse (Pseudo) Infinite Grid Data Structure for Web Game

    - by Ming
    I'm considering trying to make a game that takes place on an essentially infinite grid. The grid is very sparse:
    - Certain small regions have relatively high density.
    - There are relatively few isolated nonempty cells.
    - The amount of the grid in use is too large to implement naively, but probably smallish by "big data" standards (I'm not trying to map the Internet or anything like that).
    - This needs to be easy to persist.

    Here are the operations I may want to perform (reasonably efficiently) on this grid:
    - Ask for some small rectangular region of cells and all their contents (a player's current neighborhood).
    - Set individual cells or blit small regions (the player is making a move).
    - Ask for the rough shape or outline/silhouette of some larger rectangular regions (a world map or region preview).
    - Find some regions with approximately a given density (player spawning location).
    - Approximate shortest path through gaps of at most some small constant number of empty spaces per hop (it's OK to be a bad approximation often, but not OK to keep heading the wrong direction while searching).
    - Approximate convex hull for a region.

    Here's the catch: I want to do this in a web app. That is, I would prefer to use existing data storage (perhaps in the form of a relational database) and relatively few external dependencies (preferably avoiding the need for a persistent process).

    Guys, what advice can you give me on actually implementing this? How would you do this if the web-app restrictions weren't in place? How would you modify that if they were? Thanks a lot, everyone!
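
    The constraints above (sparse, persistable to a relational store, no resident process) usually point at a chunked design: cut the plane into fixed-size tiles, store only nonempty tiles, and key each tile by its tile coordinates so a tile maps naturally to one database row. A minimal in-memory sketch in Python - the CHUNK size and the class/method names are illustrative, not a prescription:

        from collections import defaultdict

        CHUNK = 16  # chunk edge length in cells; tune to typical query sizes

        class SparseGrid:
            """Sparse unbounded grid stored as chunks keyed by chunk coords.

            Each chunk is a dict {(local_x, local_y): value}, so empty space
            costs nothing. A chunk key maps to one relational row such as
            (chunk_x, chunk_y, serialized_cells), which fits the web-app
            constraint of plain database storage."""

            def __init__(self):
                self.chunks = defaultdict(dict)

            def _split(self, x, y):
                # floor division keeps negative coordinates consistent
                return (x // CHUNK, y // CHUNK), (x % CHUNK, y % CHUNK)

            def set(self, x, y, value):
                ck, cell = self._split(x, y)
                self.chunks[ck][cell] = value

            def get(self, x, y, default=None):
                ck, cell = self._split(x, y)
                return self.chunks.get(ck, {}).get(cell, default)

            def region(self, x0, y0, x1, y1):
                """Yield (x, y, value) for nonempty cells in the rectangle."""
                for cx in range(x0 // CHUNK, x1 // CHUNK + 1):
                    for cy in range(y0 // CHUNK, y1 // CHUNK + 1):
                        for (lx, ly), v in self.chunks.get((cx, cy), {}).items():
                            x, y = cx * CHUNK + lx, cy * CHUNK + ly
                            if x0 <= x <= x1 and y0 <= y <= y1:
                                yield x, y, v

        g = SparseGrid()
        g.set(-3, 40, 'rock')
        print(list(g.region(-10, 30, 10, 50)))  # [(-3, 40, 'rock')]

    Per-chunk summaries (nonempty count, bounding box) stored alongside each row would cover the outline, density and preview queries without scanning individual cells.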

    Read the article

  • Add gridlines to an output plot in Mathematica

    - by xslittlegrass
    I'm trying to add gridlines to an output density plot in Mathematica. The plot is generated by a long calculation in Mathematica, and when I made the plot I forgot to add the Mesh -> True option. I don't want to redo the whole calculation and generate the plot again, since it takes a long time. Is it possible to add the gridlines or mesh lines to the plot using ONLY the output plot at hand? For example, if I have a plot p, is it possible to add the mesh lines by manipulating ONLY p?

    In an ordinary one-dimensional plot, this works:

        p1 = Plot[Sin[x], {x, -3, 3}];
        Insert[p1, GridLines -> Automatic, -1]

    But when I try the density plot, it seems the gridlines are always under the plot and can be seen only in the image margin area:

        p2 = DensityPlot[Sin[x + y^2], {x, -3, 3}, {y, -2, 2}, PlotRangePadding -> 0.2];
        Insert[p2, GridLines -> Automatic, -1]

    Update: The Mesh option on the output plot will not work, because Mesh is not an option of a Graphics object:

        Show[p2, Mesh -> True]

    gives the message "an unrecognized option name (Mesh) was encountered while rendering a Graphics". Thanks.

    Read the article

  • Does the "Supporting Multiple Screens" document contradict itself?

    - by Neil Traft
    In the Supporting Multiple Screens document in the Android Dev Guide, some example screen configurations are given. One of them states that the small-ldpi designation is given to QVGA (240x320) screens with a physical size of 2.6"-3.0". According to this DPI calculator, a 2.8" QVGA display equates to 143 dpi. However, further down the page the document explicitly states that all screens over 140 dpi are considered "medium" density. So which is it, ldpi or mdpi? Is this a mistake? Does anyone know what the HTC Tattoo or similar device actually reports? I don't have access to any devices like this. Also, with the recent publishing of this document, I'm glad to see we finally have an explicit statement of the exact DPI ranges of the three density categories. But why haven't we been given the same for the small, medium, and large screen size categories? I'd like to know the exact ranges for all these. Thanks in advance for your help!
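
    The dpi arithmetic behind the question is easy to reproduce: density is the diagonal resolution in pixels divided by the physical diagonal in inches. A quick Python check (the function name is just for illustration):

        import math

        def screen_dpi(width_px, height_px, diagonal_inches):
            """Pixels along the diagonal divided by the physical diagonal length."""
            return math.hypot(width_px, height_px) / diagonal_inches

        print(round(screen_dpi(240, 320, 2.8)))  # 143

    So a 2.8" QVGA panel really does land at about 143 dpi, just past the documented 140 dpi boundary for "medium" - which is exactly the contradiction the question points at.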

    Read the article

  • Choropleth mapping issue in R

    - by chasec
    I am trying to follow the tutorial described here: http://www.thisisthegreenroom.com/2009/choropleths-in-r/

    The code below executes, but it is either not matching my dataset with the maps package's county data properly, or it isn't plotting in the order I would expect. For example, the resulting areas for the greater NYC area show no density, while random counties in PA show the highest density. The general format of my data table is:

        county       state         count
        fairfield    connecticut   17
        hartford     connecticut   6
        litchfield   connecticut   3
        new haven    connecticut   12
        ...
        westchester  new york      70
        yates        new york      1
        luzerne      pennsylvania  1

    Note this data is in order by state and then county, and includes data for CT, NJ, NY, & PA.

    First, I read in my data set:

        library(maps)
        library(RColorBrewer)

        d <- read.table("gissum.txt", sep="\t", header=TRUE)

        # Concatenate state and county info to match the maps library
        d$stcon <- paste(d$state, d$county, sep=",")

        # Color bins
        colors = brewer.pal(5, "PuBu")
        d$colorBuckets <- as.factor(as.numeric(cut(d$count, c(0,10,20,30,40,50,300))))

    Here is my matching:

        mapnames <- map("county", plot=FALSE)[4]$names
        colorsmatched <- d$colorBuckets[na.omit(match(mapnames, d$stcon))]

    Plotting:

        map("county", c("new york", "new jersey", "connecticut", "pennsylvania"),
            col = colors[d$colorBuckets[na.omit(match(mapnames, d$stcon))]],
            fill = TRUE, resolution = 0, lty = 0, lwd = 0.5)
        map("state", c("new york", "new jersey", "connecticut", "pennsylvania"),
            col = "black", fill = FALSE, add = TRUE, lty = 1, lwd = 2)
        map("county", c("new york", "new jersey", "connecticut", "pennsylvania"),
            col = "black", fill = FALSE, add = TRUE, lty = 1, lwd = .5)
        title(main = "Respondent Home ZIP Codes by County")

    I am sure I am missing something basic re: the order in which the maps function plots items, but I can't seem to figure it out. Thanks for the help. Please let me know if you need any more information.

    Read the article

  • Which Android hardware devices should I test on? [closed]

    - by Tchami
    Possible Duplicate: What hardware devices do you test your Android apps on? I'm trying to compile a list of Android hardware devices that it would make sense to buy and test against if you want to target as broad an audience as possible, while still not buying every single Android device out there. I know there's a lot of information regarding screen sizes and Android versions available elsewhere, but: when developing for Android it's not terribly useful to know if the screen size of a device is 480x800 or 320x240, unless you feel like doing the math to convert that into Android "units" (i.e. small, normal, large or xlarge screens, and ldpi, mdpi, hdpi or xhdpi densities). Even knowing the dimensions of a device, you cannot be sure of the actual Android units as there's some overlap; see Range of screens supported in the Android documentation. Taking into account the distribution of Platform versions and Screen Sizes and Densities, below is my current list, based on information from the Wikipedia article on Comparison of Android devices. I'm fairly sure the information in this list is correct, but I'd welcome any suggestions/changes.

    Phones

        | Model                   | Android Version | Screen Size | Density |
        | HTC Wildfire            | 2.1/2.2         | Normal      | mdpi    |
        | HTC Tattoo              | 1.6             | Normal      | mdpi    |
        | HTC Hero                | 2.1             | Normal      | mdpi    |
        | HTC Legend              | 2.1             | Normal      | mdpi    |
        | Sony Ericsson Xperia X8 | 1.6/2.1         | Normal      | mdpi    |
        | Motorola Droid          | 2.0-2.2         | Normal      | hdpi    |
        | Samsung Galaxy S II     | 2.3             | Normal      | hdpi    |
        | Samsung Galaxy Nexus    | 4.0             | Normal      | xhdpi   |
        | Samsung Galaxy S III    | 4.0             | Normal      | xhdpi   |

    Tablets

        | Model                   | Android Version | Screen Size | Density |
        | Samsung Galaxy Tab 7"   | 2.2             | Large       | hdpi    |
        | Samsung Galaxy Tab 10"  | 3.0             | X-Large     | mdpi    |
        | Asus Transformer Prime  | 4.0             | X-Large     | mdpi    |
        | Motorola Xoom           | 3.1/4.0         | X-Large     | mdpi    |

    N.B.: I have seen (and read) other posts on SO on this subject, e.g. Which Android devices should I test against? and What hardware devices do you test your Android apps on?, but they don't seem very canonical. Maybe this should be marked community wiki?

    Read the article

  • The Oracle Enterprise Linux Software and Hardware Ecosystem

    - by sergio.leunissen
    It's been nearly four years since we launched the Unbreakable Linux support program and with it the free Oracle Enterprise Linux software. Since then, we've built up an extensive ecosystem of hardware and software partners. Oracle works directly with these vendors to ensure joint customers can run Oracle Enterprise Linux. As Oracle Enterprise Linux is fully compatible, both source and binary, with Red Hat Enterprise Linux (RHEL), there is minimal work involved for software and hardware vendors to test their products with it. We develop our software on Oracle Enterprise Linux and perform full certification testing on Oracle Enterprise Linux as well. Due to the compatibility between Oracle Enterprise Linux and RHEL, Oracle also certifies its software for use on RHEL, without any additional testing. Oracle Enterprise Linux tracks RHEL by publishing freely downloadable installation media on edelivery.oracle.com/linux and updates, bug fixes and security errata on Unbreakable Linux Network (ULN). At the same time, Oracle's Linux kernel team is shaping the future of enterprise Linux distributions by developing technologies and features that matter to customers who deploy Linux in the data center, including file systems, memory management, high performance computing, data integrity and virtualization. All this work is contributed to the Linux and Xen communities. The list below is a sample of the partners who have certified their products with Oracle Enterprise Linux. If you're interested in certifying your software or hardware with Oracle Enterprise Linux, please contact us via [email protected].

    Chip Manufacturers: Intel, Intel Enabled Server Acceleration Alliance, AMD
    Server Vendors: Cisco Unified Computing System, Dawning, Dell, Egenera, Fujitsu, HP, Huawei, IBM, NEC, Sun/Oracle
    Storage Systems, Volume Management and File Systems: 3Par, Compellent, EMC VPLEX, FalconStor, Fusion-io, Hitachi Data Systems, HP Storage Array Systems, Lustre, Network Appliance, OCFS2, PillarData, Symantec Veritas Storage Foundation
    Networking (Switches, Host Bus Adapters (HBAs), Converged Network Adapters (CNAs), InfiniBand): Brocade, Emulex, Mellanox, QLogic, Voltaire
    SOA and Middleware: ActiveState ActivePerl and ActivePython, Tibco, Zend
    Backup, Recovery & Replication: Arkeia Network Backup Suite; BakBone NetVault; CommVault Simpana 8; EMC Networker and Replication Manager; FalconStor Continuous Data Protector; HP Data Protector; NetApp Snapmanager; Quest LiteSpeed Engine; Steeleye Data Replication and Disaster Recovery; Symantec NetBackup, Veritas Volume Replicator and Symantec Backup Exec; Zmanda Amanda Enterprise
    Data Center Automation: BMC; CA Unicenter; HP Server Automation (formerly Opsware) and System Management Homepage; Oracle Enterprise Manager Ops Center; Quest Vizioncore vFoglight Pro; TeamQuest Manager
    Clustering & High Availability: FUJITSU x10sure, NEC Express Cluster X, Steeleye Lifekeeper, Symantec Cluster Server, Univa UniCluster
    Virtualization Platforms and Cloud Providers: Amazon EC2, Citrix XenServer, Rackspace Cloud, VirtualBox, VMware ESX
    Security Management: ArcSight Enterprise Security Manager and Logger; CA Access Control; Centrify Suite; Ecora Auditor; FoxT Manager; Likewise Unix Account Management; Lumension Endpoint Management and Security Suite; QualysGuard Suite; Quest Privilege Manager; McAfee Application Control, Change Control, Integrity Monitor, Integrity Control and PCI Pro; Solidcore S3; Symantec Enterprise Security Manager (ESM); Tripwire; Trusted Computer Solutions

    Read the article

  • Geek Fun: Virtualized Old School Windows – Windows 95

    - by Matthew Guay
    Last week we enjoyed looking at Windows 3.1 running in VMware Player on Windows 7. Today, let's upgrade our 3.1 to 95, and get a look at how most of us remember Windows from the '90s. In this demo, we're running the first release of Windows 95 (version 4.00.950) in VMware Player 3.0 running on Windows 7 x64. For fun, we ran the 95 upgrade on the 3.1 virtual machine we built last week. Windows 95: So let's get started. Here's the first setup screen. For the record, Windows 95 installed in about 15 minutes or less in VMware in our test. Strangely, Windows 95 offered several installation choices. They actually let you choose what extra parts of Windows to install if you wished. Oh, and who wants to run Windows 95 on your "Portable Computer"? Most smartphones today are more powerful than the "portable computers" of 95. Your productivity may vastly increase if you run Windows 95. Anyone want to switch? No, I don't want to restart … I want to use my computer! Welcome to Windows 95! Hey, did you know you can launch programs from the Start button? Our quick spin around Windows 95 reminded us why Windows got such a bad reputation in the '90s for being unstable. We didn't even get our test copy fully booted after installation before we saw our first error screen. Windows in space … was that the most popular screensaver in Windows 95, or was it just me? Hello Windows 3.1! The UI was still outdated in some spots. Ah, yes, Media Player before it got 101 features to compete with iTunes. But you couldn't even play CDs in Media Player. Actually, CD Player was one program I used almost daily in Windows 95 back in the day. Want some new programs? This help file about new programs designed for Windows 95 lists a lot of outdated names in tech. And you really may want some programs: the first edition of Windows 95 didn't even ship with Internet Explorer. We've still got Minesweeper, though! My Computer had really limited functionality, and by default opened everything in a new window. Double click on C:, and it opens in a new window. Ugh. But Explorer is a bit more like more modern versions. Hey, look, Start menu search! If only it found the files you were looking for… Now I'm feeling old … this shutdown screen brought back so many memories … of shutdowns that wouldn't shut down! But you still have to turn off your computer. I wonder how many old monitors had these words burned into them? So there's yet another trip down Windows memory lane. Most of us can remember using Windows 95, so let us know your favorite (or worst) memory of it! At least we can all be thankful for our modern computers and operating systems today, right?

    Read the article

  • Slides of my HOL on MySQL Cluster

    - by user13819847
    Hi! Thanks to everyone who attended my hands-on lab on MySQL Cluster at MySQL Connect last Saturday. The following are the links for the slides, the HOL instructions, and the code examples. I'll try to summarize my HOL below. The aim of the HOL was to help attendees familiarize themselves with MySQL Cluster. In particular, by learning: the basics of MySQL Cluster architecture; the basics of MySQL Cluster configuration and administration; and how to start a new Cluster for evaluation purposes and how to connect to it. We started by introducing MySQL Cluster. MySQL Cluster is a proven technology that today is successfully servicing the most performance-intensive workloads. MySQL Cluster is deployed across telecom networks and is powering mission-critical web applications. Without trading off use of commodity hardware, transactional consistency, or use of complex queries, MySQL Cluster provides: web scalability (web-scale performance on both reads and writes), carrier-grade availability (99.999%), and developer agility (freedom to use SQL or NoSQL access methods). MySQL Cluster implements an auto-sharding, multi-master, shared-nothing architecture, where independent nodes can scale horizontally on commodity hardware with no shared disks, no shared memory, and no single point of failure. In the architecture of MySQL Cluster it is possible to find three types of nodes: management nodes, responsible for reading the configuration files, maintaining logs, and providing an interface to the administration of the entire cluster; data nodes, where data and indexes are stored; and API nodes, which provide the external connectivity (e.g. the NDB engine of the MySQL Server, APIs, Connectors). MySQL Cluster is recommended in situations where: it is crucial to reduce service downtime, because downtime produces a heavy impact on business; sharding the database to scale write performance highly impacts development of the application (in MySQL Cluster the sharding is automatic and transparent to the application); there are real-time needs; there are unpredictable scalability demands; or it is important to have data-access flexibility (SQL & NoSQL). MySQL Cluster is available in two editions: the Community Edition (open source, freely downloadable from mysql.com) and the Carrier Grade Edition (commercial edition, which can be downloaded from eDelivery for evaluation purposes). MySQL Carrier Grade Edition adds, on top of the Community Edition, commercial extensions (MySQL Cluster Manager, MySQL Enterprise Monitor, MySQL Cluster Installer) and Oracle's Premium Support Services (largest team of MySQL experts backed by MySQL developers, forward-compatible hot fixes, multi-language support, and more). We concluded by talking about the MySQL Cluster vision: MySQL Cluster is the default database for anyone deploying rapidly evolving, real-time transactional services at web scale, where downtime is simply not an option. From a practical point of view the HOL's steps were: MySQL Cluster installation; start & monitoring of the MySQL Cluster processes; client connection to the management server and to an SQL node; and connection using the NoSQL NDB API and Connector/J, as sketched below. In the hope that this blog post can help you get started with MySQL Cluster, I take the opportunity to thank you for the questions you asked both during the HOL and at the MySQL Cluster booth. The slides are also on SlideShare: Santo Leto - MySQL Connect 2012 - Getting Started with MySQL Cluster. Happy Clustering!
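    For readers who want to reproduce the evaluation steps, here is a minimal single-host sketch. The hostnames, paths, and node counts are illustrative assumptions, not the HOL's exact material:

        # config.ini -- 1 management node, 2 data nodes, 1 SQL node
        [ndbd default]
        NoOfReplicas=2

        [ndb_mgmd]
        HostName=localhost
        DataDir=/var/lib/mysql-cluster

        [ndbd]
        HostName=localhost

        [ndbd]
        HostName=localhost

        [mysqld]

    Then start the processes and check the cluster from the management client:

        ndb_mgmd -f /var/lib/mysql-cluster/config.ini    # management node
        ndbd --ndb-connectstring=localhost:1186          # run once per data node
        mysqld_safe --ndbcluster --ndb-connectstring=localhost:1186 &
        ndb_mgm -e show                                  # nodes should show as connected
        mysql -h 127.0.0.1 -u root                       # connect to the SQL node

    Tables created from the SQL node with ENGINE=NDBCLUSTER are then stored in the data nodes.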

    Read the article

  • Another Questionable Article Online…

    - by Jonathan Kehayias
    At the beginning of the month I blogged about my thoughts on the virtualization feedback provided by SSWUG's newsletter, and Rich responded with some information on how the incorrect information led him to draw incorrect conclusions. It seems like every couple of weeks an article, tip, newsletter, or whatever is posted by or on a major site that has questionable, if not outright incorrect, material in it. Last week MSSQLTips posted SQL Server tempdb one or multiple data files in which...(read more)

    Read the article

  • Oracle Data Integration Solutions and the Oracle EXADATA Database Machine

    - by João Vilanova
    Oracle's data integration solutions provide a complete, open and integrated solution for building, deploying, and managing real-time data-centric architectures in operational and analytical environments. Fully integrated with and optimized for the Oracle Exadata Database Machine, Oracle's data integration solutions take data integration to the next level and deliver extreme performance and scalability for all enterprise data movement and transformation needs. Easy to use, open, and standards-based, Oracle's data integration solutions dramatically improve productivity, provide unparalleled efficiency, and lower the cost of ownership. You can watch a video about this subject by clicking the link below. DIS for EXADATA Video

    Read the article

  • Next Phase of ECM 11g Now Available - New UCM & URM 11g, & Updated I/PM & IRM 11g

    - by michelle.huff
    We're excited to announce that Oracle Enterprise Content Management Suite 11g is now available! Today, Oracle announced ECM Suite 11g, part of the Fusion Middleware 11gR1 Patchset 2 release, which builds upon the Imaging and Process Management (I/PM) and Information Rights Management (IRM) 11g release earlier this year. Universal Content Management (UCM) and Universal Records Management (URM) 11g are now available with many new features and enhancements. All ECM products are localized into 27 languages, use a single repository, a single installer, and centralized administration, and all run on the same Fusion Middleware tech stack. Oracle ECM Suite 11g is better integrated to fit the way you work, with extreme performance and extreme scalability.

    Universal Content Management
    One-click Web content management - brings Web content management authoring, design and presentation capabilities directly into how organizations design sites, portals, and custom Web applications. Simply take in the right amount of WCM that meets your needs, all without having to rewrite the application or port it over to a new technology stack or framework.
    Greater business user empowerment - with next generation desktop integrations and "smart productivity folders", a new Web site "design mode" for business users, and enhanced rich media support enabling users to better work with photography, graphics, videos & podcasts created today, as well as contribute content within Flash files directly from the Web.
    Advanced manageability with extreme performance & scalability - centralized system monitoring, installation, logging, performance metrics & diagnostics, with new built-in "fast check-in" features and a redesigned component management interface, all running on Fusion Middleware infrastructure.

    Universal Records Management
    Enhanced user experience: Oracle URM 11g makes records management easier for both business users and records administrators. Simplifications in the end user experience allow the creation of bookmarks into often-used parts of the file plan, easy copying of categories and dispositions, and integrated folder and records search. The records management dashboard provides a consolidated view into records administrator tasks and system performance.
    DoD 5015.02 v3: Oracle URM is fully certified against all parts of the US Department of Defense records management standard - baseline, classified, and Freedom of Information and Privacy Act. This enables federal, state, and local governments and public agencies, as well as private companies, to maintain regulated compliance.
    Expanded functionality through Oracle integrations: Oracle URM 11g allows for an expanded set of functionality through integration capabilities with other Oracle products. This includes configurable records definition capabilities directly within a UCM instance. An out-of-the-box integration with Oracle BI Publisher provides easily configured and robust reporting. Additionally, 11g offers an out-of-the-box Oracle Secure Enterprise Search integration enabling real-time full text discovery across disparate systems in an organization.

    Read the Press Release | Watch the 3 Minute ECM 11g Video | Get Up to Speed with the What's New in ECM Suite Datasheet | Learn More on OTN with new tutorials, downloads and whitepapers

    Read the article

  • Systems Solutions at COLLABORATE12

    - by ferhat
    Want to connect with fellow Oracle users and learn more about how to maximize your Oracle software environments with Oracle Systems? Pack your bags for Las Vegas: COLLABORATE 12 is right around the corner! The COLLABORATE 12 Conference will be held at the Mandalay Bay in Las Vegas, NV, 22-26 April, 2012. This is an event designed and delivered by users just like you, with sessions, interactive panel discussions and hands-on learning opportunities packed with first-hand experiences, case studies and practical "how-to" content. This year's event includes a number of educational sessions and demos for users interested in learning from the experts how to use Oracle Optimized Solutions to get the most out of their Oracle technology and application software. Oracle Optimized Solutions are proven blueprints that eliminate integration guesswork by combining best-in-class hardware and software components to deliver complete system architectures that are fully tested and include documented best practices that reduce integration risks and deliver better application performance. And because they are highly flexible by design, Oracle Optimized Solutions can be implemented as an end-to-end solution or easily adapted into existing environments. Follow Oracle Infrared at Twitter, Facebook, Google+, and LinkedIn to catch the latest news, developments, announcements, and inside views from Oracle Optimized Solutions. Please come by our Exhibition Booth #1273 to see the demos and meet 1-1 with the experts behind a number of Oracle Optimized Solutions, including those for JD Edwards EnterpriseOne, E-Business Suite, PeopleSoft HCM, Oracle WebCenter, and Oracle Database.

    Exhibitor Showcase Booth #1273

        | Day                | Time               | Title                                       |
        | Monday April 23    | 6:00 pm - 8:00 pm  | Welcome Reception in the Exhibitor Showcase |
        | Tuesday April 24   | 10:15 am - 4:00 pm | Exhibitor Showcase Open                     |
        | Tuesday April 24   | 1:00 pm - 2:00 pm  | Dedicated Exhibitor Showcase Time           |
        | Tuesday April 24   | 5:30 pm - 7:00 pm  | Exhibitor Showcase Happy Hour               |
        | Wednesday April 25 | 10:30 am - 3:00 pm | Exhibitor Showcase Open                     |
        | Wednesday April 25 | 2:15 pm - 3:00 pm  | Afternoon Break in Exhibitor Showcase       |

    There are also a number of deep-dive, educational sessions covering deployment best practices using Oracle's engineered systems and best-in-class hardware, operating system and virtualization technologies.

    Education Sessions

        | Day                | Time               | Title                                                                        | Location        |
        | Monday April 23    | 9:45 am - 10:45 am | Architecting and Implementing Backup and Recovery Solutions                 | Surf E          |
        | Tuesday April 24   | 2:00 pm - 3:00 pm  | Oracle's High Performance Systems for JD Edwards EnterpriseOne              | Mandalay Bay GH |
        | Tuesday April 24   | 4:30 pm - 5:30 pm  | Virtualization Boot Camp: What's New with Oracle VM Server for x86          | Mandalay Bay C  |
        | Tuesday April 24   | 9:30 am - 10:30 am | Oracle on Oracle VM - Expert Panel                                          | Mandalay Bay L  |
        | Wednesday April 25 | 9:30 am - 10:30 am | Cloud Computing Directions: Part II Understanding Oracle's Cloud Directions | South Seas E    |

    And don't forget the keynotes and software roadmap sessions!

    Keynotes and Roadmap Sessions

        | Day              | Time                | Title                                                                                 | Location                |
        | Sunday April 22  | 3:20 pm - 4:20 pm   | Oracle's Cloud Computing Strategy                                                     | Breakers B              |
        | Monday April 23  | 11:00 am - 12:00 pm | JD Edwards - Vision, Promises and Execution: IT'S THE WAY WE ROLL and Why it Matters! | Mandalay Bay A          |
        | Monday April 23  | 11:00 am - 12:00 pm | PeopleSoft Executive Update and Roadmap                                               | Mandalay Bay J          |
        | Monday April 23  | 1:15 pm - 2:15 pm   | Oracle Database - Engineered for Innovation                                           | Mandalay Bay L          |
        | Monday April 23  | 2:30 pm - 3:30 pm   | Oracle E-Business Suite Applications Strategy and General Manager Update              | Mandalay Bay D          |
        | Tuesday April 24 | 9:15 am - 10:15 am  | IT at Oracle: The Art of IT Transformation to Enable Business Growth                  | Mandalay Bay Ballroom H |

    Read the article

  • Today's Links (6/29/2011)

    - by Bob Rhubart
    Event-Driven SOA: Events meet Services | Guido Schmutz - Oracle ACE Director Guido Schmutz shows you how to achieve extreme loose coupling within a Service-Oriented Architecture by using event-driven interactions.
    Misconceptions About Software Architecture | Sanjeev Kumar - A concise, to-the-point, and informative article by Sanjeev Kumar.
    Good Leaders Acknowledge What Can't Be Done | Jeffrey Pfeffer, Harvard Business Review - "None of us likes to admit to bad decisions," says Jeffrey Pfeffer. "But imagine how much harder that is for someone who has been chosen to lead a large organization precisely because he or she is thought to have the power to see the future more clearly and chart a wise course."
    Suboptimal Thinking within Enterprise Architecture | James McGovern - McGovern says: "We need to remember that enterprises live and thrive beyond just the current person at the helm."
    Boundaryless Information Flow | Richard Veryard - "If all the boundaries are removed or porous, then the (extended) enterprise or ecosystem becomes like a giant sponge, in which all information permeates the whole," Veryard says. "Some people may think that's a good idea, but it's not what I'd call loose coupling."
    Coming to a City Near You: Oracle Business Analytics Summits | Rob Reynolds - This series of events includes a Technology and Architecture track.
    New Date for Implementation of Sun Hands-On Course Requirement (Oracle Certification) - As announced on the Oracle Certification website, the Java Architect, Java Developer, Solaris System Administrator and Solaris Security Administrator certification tracks will include a new mandatory course attendance requirement.
    VirtualBox 4.0.10 is now available for download | Bob Netherton - Netherton shares information on the new release.
    Updated Technical Best Practices whitepaper | Anthony Shorten - The Technical Best Practices whitepaper has been updated with the latest advice. "New advice includes new installation advice, advanced settings, new security settings and advice for both Oracle WebLogic and IBM WebSphere installations," says Shorten.
    Kscope 11 ADF, AIA and Business Rules | Peter Paul van de Beek - Whitehorses Solution Architect Peter Paul van de Beek shares his impressions of KScope11 presentations by Markus Eisele, Sten Vesterli, and Edwin Biemond.
    Amazon AWS for the learning experience | Andrej Koelewijn - "Using AWS changes your expectations of how your internal data center should operate," says Koelewijn.
    BPMN is dead, long live BPEL! (SOA Partner Community Blog) - Jürgen Kress shares information, including a long list of speakers, for the SOA & BPM Integration Days 2011 conference, October 12th & 13th 2011 in Düsseldorf.
    InfoQ: HTML5 and the Dawn of Rich Mobile Web Applications - James Pearce introduces cross-platform web app development using HTML5 and web frameworks, such as jQTouch, jQuery Mobile, Sencha Touch, and PhoneGap, outlining what makes a good framework.
    InfoQ: Interview and Book Excerpt: CMMI for Development - "Frameworks like TOGAF are used to define an architecture that aligns IT assets and resources to support key business needs and processes of key stakeholders," says SEI's Mike Konrad. "But the individual application systems, capabilities, services, networks, and other IT assets and infrastructure still need to be acquired, developed, or sustained."
    InfoQ: Architecting a Cloud-Scale Identity Fabric | Eric Olden - "The most cited reason for not moving to the cloud is concern about security," says Olden. "In particular, managing user identity and access in the cloud is a tough problem to solve and a big security concern for organizations."

    Read the article

  • Sep 10 Week - I'll be on the West Coast Speaking in Irvine and San Fran

    - by RickHeiges
    In my role as a Solutions Architect for Scalability Experts, I often get to present to customers about the work that we performed. Unfortunately, this is often on short notice, and I can't coordinate a trip to participate in a User Group Meeting. Next week, though, I was able to arrange my West Coast trip so that I can present. I am heading to Irvine to speak at the MTC on September 11 and to San Francisco at the MSFT offices on September 13 to speak to customers who want to learn more about SQL Server 2012. To register for...(read more)

    Read the article

< Previous Page | 201 202 203 204 205 206 207 208 209 210 211 212  | Next Page >