Search Results

Search found 3118 results on 125 pages for 'fragment caching'.


  • JavaOne Latin America 2012 is a wrap!

    - by arungupta
    Third JavaOne in Latin America (2010, 2011) is now a wrap! Like last year, the event started with a Geek Bike Ride. I could not attend the bike ride because of pre-planned activities but heard lots of good comments about it afterwards. This is a great way to engage with JavaOne attendees in an informal setting. I highly recommend joining next time! The JavaOne Blog provides great coverage of the opening keynotes. I talked about the great set of functionality that is coming in the Java EE 7 Platform, and also shared details on how the Java EE 7 JSRs are willing to take help from the Adopt-a-JSR program. glassfish.org/adoptajsr bridges the gap between JUGs willing to participate and looking for areas where they can help. The different specification leads have identified the areas where they are looking for feedback. So if your JUG is interested in picking up a JSR, I recommend taking a look at glassfish.org/adoptajsr and jumping on the bandwagon.

    The main attraction for the Tuesday evening was the GlassFish Party. The party was packed with Latin American JUG leaders, execs from Oracle, and local community members. Free-flowing food and beer/caipirinhas acted as a great lubricant for great conversations. Some of them were considering the migration from Spring -> Java EE 6 and replacing their primary app server with GlassFish. Locaweb, a local hosting provider, sponsored a round of beer at the party as well. They are planning to offer Java EE hosting next year, and GlassFish would be a logical choice for them ;) I heard lots of positive feedback about the party afterwards. Many thanks to Bruno Borges for organizing a great party! Check out some more fun pictures of the party!

    Next day, I gave a presentation on "The Java EE 7 Platform: Productivity and HTML 5" and the slides are now available. There is so much new content coming in the platform: Java Caching API (JSR 107), Concurrency Utilities for Java EE (JSR 236), Batch Applications for the Java Platform (JSR 352), Java API for JSON (JSR 353), and Java API for WebSocket (JSR 356). With JAX-RS 2.0 (JSR 339) and JMS 2.0 (JSR 343) also getting major updates, there was definitely a lot of excitement evident amongst the attendees. The talk was delivered in the biggest hall and had about 200 attendees. I also spent a lot of time talking to folks at the OTN Lounge. The JUG leaders appreciation dinner in the evening had its usual share of fun.

    Day 3 started with a session on "Building HTML5 WebSocket Apps in Java"; the slides are now available. The room was packed with about 150 attendees and there was good interaction in the room as well. A collaborative whiteboard built using WebSocket was very well received (a minimal endpoint sketch appears at the end of this post). The following tweets made it more worthwhile: "A WebSocket speek, by @ArunGupta, was worth every hour lost in transit. #JavaOneBrasil2012, #JavaOneBr" and "@arungupta awesome presentation about WebSockets :)". The session was immediately followed by the hands-on lab "Developing JAX-RS Web Applications Utilizing Server-Sent Events and WebSocket". The lab covers JAX-RS 2.0, Jersey-specific features such as Server-Sent Events, and a WebSocket endpoint using JSR 356. The complete self-paced lab guide can be downloaded from here. The lab was planned for 2 hours but several folks finished the entire exercise in about 75 mins. The wonderfully written lab material and the added incentive of a Java EE 6 Pocket Guide did the trick ;-)

    I also spoke at "The Java Community Process: How You Can Make a Positive Difference". It was really great to see several JUG leaders talking about the Adopt-a-JSR program and other activities that attendees can take on to participate in the JCP. I shared details about adopting a Java EE 7 JSR as well. The community keynote in the evening looked like fun, but I had to leave partway through to get through the peak Sao Paulo traffic :) Enjoy the complete set of pictures in the album.
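    For readers who have not seen JSR 356 code yet, here is a minimal sketch of an annotated server endpoint of the kind demonstrated in the WebSocket session; the class name and path are illustrative, not taken from the actual demo.

        import javax.websocket.OnMessage;
        import javax.websocket.server.ServerEndpoint;

        // Minimal JSR 356 server endpoint: echoes every text message back to the sender.
        @ServerEndpoint("/echo")
        public class EchoEndpoint {

            @OnMessage
            public String onMessage(String message) {
                return message; // the returned string is sent back over the WebSocket
            }
        }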

    Read the article

  • Oracle Announces Oracle Exadata X3 Database In-Memory Machine

    - by jgelhaus
    Fourth Generation Exadata X3 Systems are Ideal for High-End OLTP, Large Data Warehouses, and Database Clouds; Eighth-Rack Configuration Offers New Low-Cost Entry Point ORACLE OPENWORLD, SAN FRANCISCO – October 1, 2012 News Facts During his opening keynote address at Oracle OpenWorld, Oracle CEO, Larry Ellison announced the Oracle Exadata X3 Database In-Memory Machine - the latest generation of its Oracle Exadata Database Machines. The Oracle Exadata X3 Database In-Memory Machine is a key component of the Oracle Cloud. Oracle Exadata X3-2 Database In-Memory Machine and Oracle Exadata X3-8 Database In-Memory Machine can store up to hundreds of Terabytes of compressed user data in Flash and RAM memory, virtually eliminating the performance overhead of reads and writes to slow disk drives, making Exadata X3 systems the ideal database platforms for the varied and unpredictable workloads of cloud computing. In order to realize the highest performance at the lowest cost, the Oracle Exadata X3 Database In-Memory Machine implements a mass memory hierarchy that automatically moves all active data into Flash and RAM memory, while keeping less active data on low-cost disks. With a new Eighth-Rack configuration, the Oracle Exadata X3-2 Database In-Memory Machine delivers a cost-effective entry point for smaller workloads, testing, development and disaster recovery systems, and is a fully redundant system that can be used with mission critical applications. Next-Generation Technologies Deliver Dramatic Performance Improvements Oracle Exadata X3 Database In-Memory Machines use a combination of scale-out servers and storage, InfiniBand networking, smart storage, PCI Flash, smart memory caching, and Hybrid Columnar Compression to deliver extreme performance and availability for all Oracle Database Workloads. Oracle Exadata X3 Database In-Memory Machine systems leverage next-generation technologies to deliver significant performance enhancements, including: Four times the Flash memory capacity of the previous generation; with up to 40 percent faster response times and 100 GB/second data scan rates. Combined with Exadata’s unique Hybrid Columnar Compression capabilities, hundreds of Terabytes of user data can now be managed entirely within Flash; 20 times more capacity for database writes through updated Exadata Smart Flash Cache software. The new Exadata Smart Flash Cache software also runs on previous generation Exadata systems, increasing their capacity for writes tenfold; 33 percent more database CPU cores in the Oracle Exadata X3-2 Database In-Memory Machine, using the latest 8-core Intel® Xeon E5-2600 series of processors; Expanded 10Gb Ethernet connectivity to the data center in the Oracle Exadata X3-2 provides 40 10Gb network ports per rack for connecting users and moving data; Up to 30 percent reduction in power and cooling. Configured for Your Business, Available Today Oracle Exadata X3-2 Database In-Memory Machine systems are available in a Full-Rack, Half-Rack, Quarter-Rack, and the new low-cost Eighth-Rack configuration to satisfy the widest range of applications. Oracle Exadata X3-8 Database In-Memory Machine systems are available in a Full-Rack configuration, and both X3 systems enable multi-rack configurations for virtually unlimited scalability. Oracle Exadata X3-2 and X3-8 Database In-Memory Machines are fully compatible with prior Exadata generations and existing systems can also be upgraded with Oracle Exadata X3-2 servers. 
Oracle Exadata X3 Database In-Memory Machine systems can be used immediately with any application certified with Oracle Database 11g R2 and Oracle Real Application Clusters, including SAP, Oracle Fusion Applications, Oracle’s PeopleSoft, Oracle’s Siebel CRM, the Oracle E-Business Suite, and thousands of other applications. Supporting Quotes “Forward-looking enterprises are moving towards Cloud Computing architectures,” said Andrew Mendelsohn, senior vice president, Oracle Database Server Technologies. “Oracle Exadata’s unique ability to run any database application on a fully scale-out architecture using a combination of massive memory for extreme performance and low-cost disk for high capacity delivers the ideal solution for Cloud-based database deployments today.” Supporting Resources Oracle Press Release Oracle Exadata Database Machine Oracle Exadata X3-2 Database In-Memory Machine Oracle Exadata X3-8 Database In-Memory Machine Oracle Database 11g Follow Oracle Database via Blog, Facebook and Twitter Oracle OpenWorld 2012 Oracle OpenWorld 2012 Keynotes Like Oracle OpenWorld on Facebook Follow Oracle OpenWorld on Twitter Oracle OpenWorld Blog Oracle OpenWorld on LinkedIn Mark Hurd's keynote with Andy Mendelsohn and Juan Loaiza - - watch for the replay to be available soon at http://www.youtube.com/user/Oracle or http://www.oracle.com/openworld/live/on-demand/index.html

    Read the article

  • Travelling MVP #1: Visit to SharePoint User Group Finland

    - by DigiMortal
    My first self-organized trip this autumn was a visit to the SharePoint User Group Finland community evening. Active community leaders make events like these possible and are worth mentioning; on the spug.fi side it was Jussi Roine who invited me. Here is a short review of my trip to Helsinki.

    User group meeting

    As Helsinki is near Tallinn, I went there by ship. It was easy to get from the sea port to the venue, and I also had a few minutes to visit the academic book store. The community evening was held on the ground floor of a city-center hotel, in a room conveniently located near the hotel bar and restaurant. Here is the meeting schedule:

    - Welcome (Jussi Roine)
    - OpenText application archiving and governance for SharePoint (Bernd Hennicke, OpenText)
    - Using advanced C# features in SharePoint development (Alexey Sadomov, NED Consulting)
    - Optimizing public-facing internet sites for SharePoint (Gunnar Peipman)

    After the meeting, of course, the local dudes didn't walk away but continued with some beers and discussion.

    Sessions

    After welcome words by Jussi there was a session by Bernd Hennicke, who spoke about OpenText. His session covered OpenText's history and where the company is today. After this introduction he spoke about OpenText products for SharePoint and gave the audience a good overview of where their SharePoint extensions fit in the big picture. I usually don't like vendor sessions, but this one was good. I mean, the vendor dudes were not aggressively selling something. They were way different – kind people who introduced their stuff and later answered questions. They acted like good guests.

    The second speaker was Alexey Sadomov, who works on SharePoint development projects. He introduced some ways to get around certain limitations of SharePoint. I won't go deeply into his session here, but it is worth mentioning that it was a strong one. It is not a rare case that developers have to make nasty hacks to SharePoint. I mean really nasty hacks. Often these hacks are long blocks of code that use terrible techniques to achieve the result. Alexey introduced some far more civilized ways to apply hacks.

    Alexey Sadomov, SharePoint MVP, speaking about SharePoint coding tips and tricks in C#

    I spoke about how I optimized caching for the Estonian Microsoft community portal, which runs on SharePoint Server and uses the publishing infrastructure. I did no actual demos on SharePoint because I wanted to focus on the optimization process and share some experiences about how to get caches optimized and how to measure them.

    Networking

    After the official part there was time to talk and discuss with people. Finns are cool – they have beers and they are glad. It was not a big community event, but people were like one good family. Developers there often work for big companies, and it was very interesting to hear about their experiences with SharePoint. One thing was a little surprising to me – SharePoint guys in Finland are also actively talking about Office 365 and online SharePoint. That doesn't happen often here in Estonia. I had to leave a little after 21:00 to get to my ship back to Tallinn. I am sure the spug.fi dudes continued the nice evening and had at least as good a time as I did. Do I want to go back to Finland and meet these guys again? Yes, sure, let's do it again! :)

    Read the article

  • C# Proposal: Compile Time Static Checking Of Dynamic Objects

    - by Paulo Morgado
    C# 4.0 introduces a new type: dynamic. dynamic is a static type that bypasses static type checking. This new type comes in very handy when working with:

    - The new languages from the dynamic language runtime.
    - The HTML Document Object Model (DOM).
    - COM objects.
    - Duck typing …

    Because static type checking is bypassed, this:

        dynamic dynamicValue = GetValue();
        dynamicValue.Method();

    is equivalent to this:

        object objectValue = GetValue();
        objectValue
            .GetType()
            .InvokeMember(
                "Method",
                BindingFlags.InvokeMethod,
                null,
                objectValue,
                null);

    Apart from caching the call site behind the scenes and some dynamic resolution, dynamic only looks better. Any typing error will only be caught at run time. In fact, if I'm writing the code, I know the contract of what I'm calling. Wouldn't it be nice to have the compiler do some static type checking on the interactions with these dynamic objects?

    Imagine that the dynamic object that I'm retrieving from the GetValue method, besides the parameterless method Method, also has a read-only string property named Property. This means that, from the point of view of the code I'm writing, the contract that the dynamic object returned by GetValue implements is:

        string Property { get; }
        void Method();

    Since it's a well-defined contract, I could write an interface to represent it:

        interface IValue
        {
            string Property { get; }
            void Method();
        }

    If dynamic allowed specifying the contract in the form of dynamic(contract), I could write this:

        dynamic(IValue) dynamicValue = GetValue();
        dynamicValue.Method();

    This doesn't mean that the value returned by GetValue has to implement the IValue interface. It just enables the compiler to verify that dynamicValue.Method() is a valid use of dynamicValue and dynamicValue.OtherMethod() isn't. If the IValue interface already existed for any other reason, this would be fine. But having a type added to an assembly just for compile-time usage doesn't seem right. So, dynamic could be another type construct. Something like this:

        dynamic DValue
        {
            string Property { get; }
            void Method();
        }

    The code could now be written like this:

        DValue dynamicValue = GetValue();
        dynamicValue.Method();

    The compiler would never generate any IL or metadata for this new type construct. It would only be used for compile-time static checking of dynamic objects. As a consequence, it makes no sense to have public accessibility, so it would not be allowed. Once again, if the IValue interface (or any other type definition) already exists, it can be used in the dynamic type definition:

        dynamic DValue : IValue, IEnumerable, SomeClass
        {
            string Property { get; }
            void Method();
        }

    Another added benefit would be IntelliSense. I've been getting mixed reactions to this proposal. What do you think? Would this be useful?

    Read the article

  • POP Forums v9 Beta 1 for ASP.NET MVC 3 posted to CodePlex!

    - by Jeff
    As promised, I posted a beta build of my forum app for ASP.NET MVC 3. Get the new goodies here: http://popforums.codeplex.com/releases/view/58228 This is the first beta for the ASP.NET MVC 3 version of POP Forums. It is nearly feature complete, and ready for testing and feedback. For previous release notes, look here, here and here. Check out the live preview: http://preview.popforums.com/Forums Setup instructions are on the home page of this project. The new hotness in the beta, or what has been done since the last preview:

    - All views converted to use Razor
    - E-mail subscription/notification of new posts
    - New post indicators/mark read buttons
    - Permalinks to posts
    - Jump to newest post (from new post indicators)
    - Recent topics
    - Favorite topics
    - Moderator functions for topics (pin/close/delete, plus move and rename)
    - Search, ported from v8. Not a ton of optimization here, or new unit testing, but the old version worked pretty well
    - User posts (topics the user posted in)
    - Forgot password
    - Vanity items (signatures and avatars)
    - Hide vanity items per user preference
    - Some minor data caching where appropriate
    - A little bit of UI refinement
    - Lots-o-bug fixes
    - Lots-o-unit tests

    What's next? The plan between now and the next beta is as follows:

    - Continue working through features/tasks, and fix bugs as they're reported
    - Integrate the forum into a real, production site
    - Refine the UI
    - Refactor as much as possible... the code organization is not entirely logical in some places

    After the second beta, a release candidate will follow, with a real "final" release after that. Subsequent releases should come relatively frequently and without a lot of risk. The trick in building this thing has been that it mostly tossed the previous WebForms version, which was all full of crusties. The time table for this is a little harder to pin down, as day jobs and families will have their effect.

    Other notes

    Refactoring will be a priority. As the features of MVC have evolved, so have my desires to use it in a fashion that makes things clear and easy to follow. I don't even know if anyone will ever start mucking around in the code, but on the off chance they do, I'd like what they find to not suck. Other nice-to-haves are builds to target Windows Azure and SQL CE. A nice setup UI would be super too. I think the ASP.NET MVC world has gone long enough without a decent forum. The biggest challenge that I've found is making the forum something that can be dropped in any app. While it does rope its views into an area, areas are mostly just routing details. I haven't thought of a clever way yet to limit dependency injection, for example, to just the forum bits. I mean, everyone should be using Ninject, but how realistic is that? ;) How much time and effort should you spend on POP Forums in its current state? Change is inevitable, but at this point I'm reasonably committed to not changing the database schema. I really think it will stay as-is. All bets are off for the various interfaces throughout the app, but the data should generally resist change. It's not even that different from v8, which was one of the original goals because I didn't want to rewrite SQL or introduce a new ORM or whatever. My point is that if you wanted to build a site around this today, even though it's not entirely functional, I think it's low risk in terms of data loss. I can't vouch for whether or not you know what you're doing. I've been having some chats with people lately about quoting posts, and honestly there has to be something better and more straightforward. That continues to be a holy grail of mine, and some day, I hope to find it. Enjoy... it's starting to feel more real every day!

    Read the article

  • Markus Zirn, "Big Data with CEP and SOA" @ SOA, Cloud &amp; Service Technology Symposium 2012

    - by JuergenKress
    ORACLE PROMOTIONAL DISCOUNT FOR EXCLUSIVE ORACLE DISCOUNT, ENTER PROMO CODE: DJMXZ370 Early-Bird Registration is Now Open with Special Pricing! Register before July 1, 2012 to qualify for discounts. Visit the Registration page for details. The International SOA, Cloud + Service Technology Symposium is a yearly event that features the top experts and authors from around the world, providing a series of keynotes, talks, demonstrations, and panels, as well as training and certification workshops - all dedicated to empowering IT professionals to realize modern service technologies and practices in the real world. Click here for a two-page printable conference overview (PDF). Big Data with CEP and SOA - September 25, 2012 - 14:15 Speaker: Markus Zirn, Oracle and Baz Kuthi, Avocent The "Big Data" trend is driving new kinds of IT projects that process machine-generated data. Such projects store and mine using Hadoop/ Map Reduce, but they also analyze streaming data via event-driven patterns, which can be called "Fast Data" complementary to "Big Data". This session highlights how "Big Data" and "Fast Data" design patterns can be combined with SOA design principles into modern, event-driven architectures. We will describe specific architectures that combines CEP, Distributed Caching, Event-driven Network, SOA Composites, Application Development Framework, as well as Hadoop. Architecture patterns include pre-processing and filtering event streams as close as possible to the event source, in memory master data for event pattern matching, event-driven user interfaces as well as distributed event processing. Focus is on how "Fast Data" requirements are elegantly integrated into a traditional SOA architecture. Markus Zirn is Vice President of Product Management covering Oracle SOA Suite, SOA Governance, Application Integration Architecture, BPM, BPM Solutions, Complex Event Processing and UPK, an end user learning solution. He is the author of “The BPEL Cookbook” (rated best book on Services Oriented Architecture in 2007) as well as “Fusion Middleware Patterns”. Previously, he was a management consultant with Booz Allen & Hamilton’s High Tech practice in Duesseldorf as well as San Francisco and Vice President of Product Marketing at QUIQ. Mr. Zirn holds a Masters of Electrical Engineering from the University of Karlsruhe and is an alumnus of the Tripartite program, a joint European degree from the University of Karlsruhe, Germany, the University of Southampton, UK, and ESIEE, France. KEYNOTES & SPEAKERS More than 80 international subject matter experts will be speaking at the Symposium. Below are confirmed keynotes and speakers so far. Over 50% of the agenda has not yet been finalized. Many more speakers to come. View the partial program calendars on the Conference Agenda page. CONFERENCE THEMES & TRACKS Cloud Computing Architecture & Patterns New SOA & Service-Orientation Practices & Models Emerging Service Technology Innovation Service Modeling & Analysis Techniques Service Infrastructure & Virtualization Cloud-based Enterprise Architecture Business Planning for Cloud Computing Projects Real World Case Studies Semantic Web Technologies (with & without the Cloud) Governance Frameworks for SOA and/or Cloud Computing Projects Service Engineering & Service Programming Techniques Interactive Services & the Human Factor New REST & Web Services Tools & Techniques Oracle Specialized SOA & BPM Partners Oracle Specialized partners have proven their skills by certifications and customer references. 
    To find a local Specialized partner please visit http://solutions.oracle.com. For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • HTML Presence Controls for Communications Server 14 CodePlex Project

    Showing Presence on the Web

    If you're running Office Communicator Server 2007 R2, you know that your only out-of-the-box option for showing presence on the web is to use the NameControl ActiveX control that ships as part of Office. Being an ActiveX control, this obviously means that you're limited to Internet Explorer. Also, nobody likes ActiveX controls. What if you want to show the presence of users in a pure ASP.NET or HTML application and can't assume that the user has Communicator installed? You need an ASP.NET or HTML presence control.

    HTML Presence Controls for Microsoft Communications Server 14

    We recently worked with the UC team at Microsoft on a keynote demo for TechEd 2010 in New Orleans. The demo was for a fictitious airline, Fabrikam Airlines, that wanted to show the presence of customer service and reservations agents on its website. Customers could also start an instant message conversation with the agents using a Silverlight web chat window that used WCF to communicate with the backend UCMA application. We built HTML Presence Controls that use AJAX to poll a REST-based WCF service running in IIS and hosting a UCMA 3.0 presence subscription application. Microsoft has graciously allowed us to publish these on CodePlex so that the development community can benefit from them: http://htmlpresencecontrols.codeplex.com/ We will be maintaining the CodePlex project as new builds of UCMA 3.0 become available. Check out the project home page on CodePlex for some more in-depth details on how the controls are implemented.

    ASP.NET Server Control Implementation

    We're providing an ASP.NET Server Control implementation that you can use stand-alone or in a GridView or Repeater (or other layout control). The control has properties that allow you to control its appearance, e.g. you can choose whether or not to show the contact's name or availability text. You can also use the server control in a layout control such as a GridView by putting it in a TemplateColumn and binding to the Sip Uri in the data source.

    Disclaimer

    Once we started working on these, we realized why Microsoft hasn't shipped such controls as part of the product. There are some tradeoffs you have to be aware of when using these controls; here's the high level.

    Privacy

    The backend UCMA 3.0 application that subscribes to presence of contacts runs as a trusted application and can thus retrieve the presence of any user in the organization. There's currently no good way in UCMA to apply any privacy rules to ensure that the consumer of the presence controls has permission to see the presence of the contacts that the controls are bound to. Just to be absolutely crystal clear: these controls provide a way to query the presence of any user in the organization, regardless of the privacy relationship between the person consuming the controls and the contacts whose presence is being displayed. We're exploring options for a design pattern that would allow you to inject some privacy controls. Keep in mind though that you would most likely be responsible for implementing this logic, as there is currently no functionality in UCMA that allows you to do that.

    Polling the WCF REST Service

    The controls poll the backend WCF service to retrieve the presence of contacts - you can control the refresh interval so that they poll less often. We implemented a caching layer so that the WCF service is always communicating with a presence cache; it never communicates directly with Communications Server.
    For example, if your web page is showing the presence of sip:[email protected] and 500 people have the page open, the presence cache only contains one instance of the subscription; Communications Server is not being polled 500 times for the presence of that contact. Once the presence of a contact changes, it is updated in the cache. There are some server-based push mechanisms that would work nicely here, such as the one that Outlook Web Access 2010 uses. Unfortunately we didn't have time to explore these options.

    Community Contribution

    Take a look at the project Issue Tracker; there are a couple of things we can use some help with. Shoot me a note if you're interested in contributing to the project.
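    The caching layer described under "Polling the WCF REST Service" is a pattern rather than anything stack-specific. Purely as an illustration (the real controls are built on .NET and UCMA, so this Java sketch only models the idea, and every name in it is made up), a single cached subscription per SIP URI can serve any number of pollers:

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // Model of the presence-cache pattern: web pollers only ever read the cache,
        // and one backend subscription per contact updates it when presence changes.
        public class PresenceCache {
            private final Map<String, String> presenceBySipUri = new ConcurrentHashMap<>();

            // Called by the subscription layer when the server pushes a presence change.
            public void update(String sipUri, String presence) {
                presenceBySipUri.put(sipUri, presence);
            }

            // Called by the REST/polling layer; never talks to the presence server directly.
            public String lookup(String sipUri) {
                return presenceBySipUri.getOrDefault(sipUri, "Unknown");
            }
        }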

    Read the article

  • SQL Server – SafePeak “Logon Trigger” Feature for Managing Data Access

    - by pinaldave
    Lately I received an interesting question about the abilities of SafePeak for SQL Server acceleration software: Q: “I would like to use SafePeak to make my CRM application faster. It is an application we bought from some vendor, after a while it became slow and we can’t reprogram it. SafePeak automated caching sounds like an easy and good solution for us. But, in my application there are many servers and different other applications services that address its main database, and some even change data, and I feel that there is a chance that some servers that during the connection process we may miss some. Is there a way to ensure that SafePeak will be aware of all connections to the SQL Server, so its cache will remain intact?” Interesting question, as I remember that SafePeak (http://www.safepeak.com/Product/SafePeak-Overview) likes that all traffic to the database will go thru it. I decided to check out the features of SafePeak latest version (2.1) and seek for an answer there. A: Indeed I found SafePeak has a feature they call “Logon Trigger” and is designed for that purpose. It is located in the user interface, under: Settings -> SQL instances management  ->  [your instance]  ->  [Logon Trigger] tab. From here you activate / deactivate it and control a white-list of enabled server IPs and Login names that SafePeak will ignore them. Click to Enlarge After activation of the “logon trigger” Safepeak server is notified by the SQL Server itself on each new opened connection. Safepeak monitors those connections and decides if there is something to do with them or not. On a typical installation SafePeak likes all application and users connections to go via SafePeak – this way it knows about data and schema updates immediately (real time). With activation of the safepeak “logon trigger”  a special CLR trigger is deployed on the SQL server and notifies Safepeak on any connection that has not arrived via SafePeak. In such cases Safepeak can act to clear and lock the cache or to ignore it. This feature enables to make sure SafePeak will be aware of all connections so SafePeak cache will maintain exactly correct all times. So even if a user, like a DBA will connect to the SQL Server not via SafePeak, SafePeak will know about it and take actions. The notification does not impact the work of that connection, the user or application still continue to do whatever they planned to do. Note: I found that activation of logon trigger in SafePeak requires that SafePeak SQL login will have the next permissions: 1) CONTROL SERVER; 2) VIEW SERVER STATE; 3) And the SQL Server instance is CLR enabled; Seeing SafePeak in action, I can say SafePeak brings fantastic resource for those who seek to get performance for SQL Server critical apps. SafePeak promises to accelerate SQL Server applications in just several hours of installation, automatic learning and some optimization configuration (no code changes!!!). If better application and database performance means better business to you – I suggest you to download and try SafePeak. The solution of SafePeak is indeed unique, and the questions I receive are very interesting. Have any more questions on SafePeak? Please leave your question as a comment and I will try to get an answer for you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Dude, what’s up with POP Forums vNext?

    - by Jeff
    Yeah, it has been awhile. I posted v9.2 back in January, about five months ago. That’s a real change from the release pace I had there for awhile. Let me explain what’s going on. First off, in the interim, I re-launched CoasterBuzz, which required a lot of my attention for about two of those months. That’s a good thing though, because that site is just about the best test bed I could ask for. The other thing is that I committed to make the next version use ASP.NET MVC 4, which is now at the RC stage. I didn’t think much about when they’d hit their RTW point, but RC is good enough for me. To that end, there is enough change in the next version that I recently decided to make it a major version upgrade, and finish up the loose ends and science projects to make it whole. Here’s what’s in store… Mobile views: I sat on this or a long time. Originally, I was going to use jQuery Mobile, and waited and waited for a new release, but in the end, decided against using it. Sometimes buttons would unexplainably not work, I felt like I was fighting it at times, and the CSS just felt too heavy. I rolled my own mobile sugar at a fraction of the size, and I think you’ll find it easy to modify. And it’s Metro-y, of course! Re-do of background services: A number of things run in the background, and I did quite a bit of “reimagining” of that code. It’s the weirdness of running services in a Web site context, because so many folks can’t run a bona fide service on their host’s box. The biggest change here is that these service no longer start up by default. You’ll need to call a new method from global.asax called PopForumsActivation.StartServices(). This is also a precursor to running the app in a Web farm (new data layer and caching is the second part of that). I learned about this the hard way when I had three apps using the forum library code but only one was actually the forum. The services were all running three times as often with race conditions and hits on the same data. That was particularly bad for e-mail. CSS clean up: It’s still not ideal, but it’s getting better. That’s one of those things that comes with integrating to a real site… you discover all of the dumb things you did. The mobile CSS is particularly easier to live with. Bug fixes: There are a whole lot of them. Most were minor, but it’s feeling pretty solid now. So that’s where I am. I’m going to call it v10.0, and I’m going to really put forth some effort toward finishing the mobile experience and getting through the remaining bugs. The roadmap beyond that will likely not be feature oriented, but rather work on some other things, like making it run in Azure, perhaps using SQL CE, a better install experience, etc. As usual, I’ll post the latest here. Stay tuned!

    Read the article

  • Database – Beginning with Cloud Database As A Service

    - by Pinal Dave
    I love my weekend projects. Everybody does different activities in their weekend – like traveling, reading or just nothing. Every weekend I try to do something creative and different in the database world. The goal is I learn something new and if I enjoy my learning experience I share with the world. This weekend, I decided to explore Cloud Database As A Service – Morpheus. In my career I have managed many databases in the cloud and I have good experience in managing them. I should highlight that today’s applications use multiple databases from SQL for transactions and analytics, NoSQL for documents, In-Memory for caching to Indexing for search.  Provisioning and deploying these databases often require extensive expertise and time.  Often these databases are also not deployed on the same infrastructure and can create unnecessary latency between the application layer and the databases.  Not to mention the different quality of service based on the infrastructure and the service provider where they are deployed. Moreover, there are additional problems that I have experienced with traditional database setup when hosted in the cloud: Database provisioning & orchestration Slow speed due to hardware issues Poor Monitoring Tools High network latency Now if you have a great software and expert network engineer, you can continuously work on above problems and overcome them. However, not every organization have the luxury to have top notch experts in the field. Now above issues are related to infrastructure, but there are a few more problems which are related to software/application as well. Here are the top three things which can be problems if you do not have application expert: Replication and Clustering Simple provisioning of the hard drive space Automatic Sharding Well, Morpheus looks like a product build by experts who have faced similar situation in the past. The product pretty much addresses all the pain points of developers and database administrators. What is different about Morpheus is that it offers a variety of databases from MySQL, MongoDB, ElasticSearch to Reddis as a service.  Thus users can pick and chose any combination of these databases.  All of them can be provisioned in a matter of minutes with a simple and intuitive point and click user interface.  The Morpheus cloud is built on Solid State Drives (SSD) and is designed for high-speed database transactions.  In addition it offers a direct link to Amazon Web Services to minimize latency between the application layer and the databases. Here are the few steps on how one can get started with Morpheus. Follow along with me.  First go to http://www.gomorpheus.com and register for a new and free account. Step 1: Signup It is very simple to signup for Morpheus. Step 2: Select your database   I use MySQL for my daily routine, so I have selected MySQL. Upon clicking on the big red button to add Instance, it prompted a dialogue of creating a new instance.   Step 3: Create User Now we just have to create a user in our portal which we will use to connect to a database hosted at Morpheus. Click on your database instance and it will bring you to User Screen. Over here you will notice once again a big red button to create a new user. I created a user with my first name.   Step 4: Configure your MySQL client I used MySQL workbench and connected to MySQL instance, which I had created with an IP address and user.   That’s it! You are connecting to MySQL instance. Now you can create your objects just like you would create on your local box. 
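    If you later connect from application code rather than MySQL Workbench, the same instance details from Step 4 work with a plain JDBC URL. Here is a minimal sketch, assuming MySQL Connector/J is on the classpath; the host, database, user, and password values are placeholders for whatever your Morpheus portal shows:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class MorpheusConnect {
            public static void main(String[] args) throws Exception {
                // Placeholder instance details; copy the real ones from the portal.
                String url = "jdbc:mysql://203.0.113.10:3306/mydb";
                try (Connection conn = DriverManager.getConnection(url, "pinal", "secret");
                     Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery("SELECT VERSION()")) {
                    while (rs.next()) {
                        System.out.println("Connected to MySQL version: " + rs.getString(1));
                    }
                }
            }
        }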
You will have all the features of the Morpheus when you are working with your database. Dashboard While working with Morpheus, I was most impressed with its dashboard. In future blog posts, I will write more about this feature.  Also with Morpheus you use the same process for provisioning and connecting with other databases: MongoDB, ElasticSearch and Reddis. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Big Data – Operational Databases Supporting Big Data – Key-Value Pair Databases and Document Databases – Day 13 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of the relational database and the NoSQL database in the Big Data story. In this article we will understand the role of Key-Value Pair databases and Document databases in supporting the Big Data story. Now we will see a few examples of the operational databases:

    - Relational Databases (Yesterday’s post)
    - NoSQL Databases (Yesterday’s post)
    - Key-Value Pair Databases (This post)
    - Document Databases (This post)
    - Columnar Databases (Tomorrow’s post)
    - Graph Databases (Tomorrow’s post)
    - Spatial Databases (Tomorrow’s post)

    Key Value Pair Databases

    Key Value Pair databases are also known as KVP databases. A key is a field name, an attribute, an identifier. The content of that field is its value, the data that is being identified and stored. They are a very simple implementation of NoSQL database concepts. They have no schema, hence they are very flexible as well as scalable. The disadvantage of Key Value Pair (KVP) databases is that they do not follow ACID (Atomicity, Consistency, Isolation, Durability) properties. Additionally, they require data architects to plan for data placement, replication, and high availability. In KVP databases the data is stored as strings. Here is a simple example of what a Key Value database looks like:

        Key      Value
        -------  -----------
        Name     Pinal Dave
        Color    Blue
        Twitter  @pinaldave
        Name     Nupur Dave
        Movie    The Hero

    As the number of users grows, it starts getting difficult to manage the entire Key Value Pair database. As there is no specific schema or set of rules associated with the database, there is a chance that the database grows exponentially as well. It is very crucial to select the right Key Value Pair database, one which offers an additional set of tools to manage the data and provides finer control over its various business aspects.

    Riak

    Riak is one of the most popular Key Value databases. It is known for its scalability and performance with high volume and high velocity data. Additionally, it implements a mechanism for collecting keys and values which further helps to build a manageable system. We will discuss Riak further in future blog posts. Key Value databases are a good choice for social media, communities, and caching layers for connecting other databases. In simpler words, whenever we require flexibility of data storage while keeping scalability in mind, KVP databases are good options to consider.

    Document Database

    There are two different kinds of document databases: 1) full document content (web pages, Word docs, etc.) and 2) storage of document components. It is the second type of document database that we are talking about here. These databases use JavaScript Object Notation (JSON) and Binary JSON for the structure of the documents. JSON is a very easy to understand language and it is very easy to write for applications. There are two major JSON structures used for document databases: 1) name-value pairs and 2) ordered lists. MongoDB and CouchDB are two of the most popular open source non-relational document databases.

    MongoDB

    MongoDB databases are made up of collections. Each collection is built of documents and each document is composed of fields. MongoDB collections can be indexed for optimal performance. The MongoDB ecosystem is highly available and supports query services as well as MapReduce. It is often used in high volume content management systems.

    CouchDB

    CouchDB databases are composed of documents which consist of fields and attachments (known as the description). It supports ACID properties.
The main attraction points of CouchDB are that it will continue to operate even though network connectivity is sketchy. Due to this nature CouchDB prefers local data storage. Document Database is a good choice of the database when users have to generate dynamic reports from elements which are changing very frequently. A good example of document usages is in real time analytics in social networking or content management system. Tomorrow In tomorrow’s blog post we will discuss about various other Operational Databases supporting Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Url rewrite subfolder to root and forbid accessing subfolder

    - by Alessandro Pezzato
    I have drupal installed in a subfolder drupal, but I want to access pages as it is in root folder: http://www.example.com instead of http://www.example.com/drupal I'm able to have this working, but it's also working with url containing subfolder, so I have http://www.example.com and a clone site in http://www.example.com/drupal What is the rule to forbid access to subfolder? I want all url starting with http://www.example.com/drupal being forbidden. This is .htaccess in / directory: Options -Indexes Options +FollowSymLinks <IfModule mod_rewrite.c> RewriteEngine on RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC] RewriteRule ^ http://%1%{REQUEST_URI} [L,R=301] RewriteRule ^(.*+)$ drupal/$1 [L,QSA] </IfModule> And this is drupal .htaccess in /drupal/ directory: Options -Indexes Options +FollowSymLinks ErrorDocument 404 index.php DirectoryIndex index.php index.html index.htm # Override PHP settings that cannot be changed at runtime. See # sites/default/default.settings.php and drupal_initialize_variables() in # includes/bootstrap.inc for settings that can be changed at runtime. # PHP 5, Apache 1 and 2. <IfModule mod_php5.c> php_flag magic_quotes_gpc off php_flag magic_quotes_sybase off php_flag register_globals off php_flag session.auto_start off php_value mbstring.http_input pass php_value mbstring.http_output pass php_flag mbstring.encoding_translation off </IfModule> # Requires mod_expires to be enabled. <IfModule mod_expires.c> # Enable expirations. ExpiresActive On # Cache all files for 2 weeks after access (A). ExpiresDefault A1209600 <FilesMatch \.php$> # Do not allow PHP scripts to be cached unless they explicitly send cache # headers themselves. Otherwise all scripts would have to overwrite the # headers set by mod_expires if they want another caching behavior. This may # fail if an error occurs early in the bootstrap process, and it may cause # problems if a non-Drupal PHP file is installed in a subdirectory. ExpiresActive Off </FilesMatch> </IfModule> # Various rewrite rules. <IfModule mod_rewrite.c> RewriteEngine on # Block access to "hidden" directories whose names begin with a period. This # includes directories used by version control systems such as Subversion or # Git to store control files. Files whose names begin with a period, as well # as the control files used by CVS, are protected by the FilesMatch directive # above. RewriteRule "(^|/)\." - [F] # To redirect all users to access the site WITH the 'www.' prefix, # (http://example.com/... will be redirected to http://www.example.com/...) # uncomment the following: # RewriteCond %{HTTP_HOST} !^www\. [NC] # RewriteRule ^ http://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301] # # To redirect all users to access the site WITHOUT the 'www.' prefix, # (http://www.example.com/... will be redirected to http://example.com/...) # uncomment the following: RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC] RewriteRule ^ http://%1%{REQUEST_URI} [L,R=301] RewriteBase /drupal # Pass all requests not referring directly to files in the filesystem to # index.php. Clean URLs are handled in drupal_environment_initialize(). RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_URI} !=/favicon.ico #RewriteRule ^ index.php [L] RewriteRule ^(.*)$ index.php?q=$1 [L,QSA] # Rules to correctly serve gzip compressed CSS and JS files. # Requires both mod_rewrite and mod_headers to be enabled. <IfModule mod_headers.c> # Serve gzip compressed CSS files if they exist and the client accepts gzip. 
RewriteCond %{HTTP:Accept-encoding} gzip RewriteCond %{REQUEST_FILENAME}\.gz -s RewriteRule ^(.*)\.css $1\.css\.gz [QSA] # Serve gzip compressed JS files if they exist and the client accepts gzip. RewriteCond %{HTTP:Accept-encoding} gzip RewriteCond %{REQUEST_FILENAME}\.gz -s RewriteRule ^(.*)\.js $1\.js\.gz [QSA] # Serve correct content types, and prevent mod_deflate double gzip. RewriteRule \.css\.gz$ - [T=text/css,E=no-gzip:1] RewriteRule \.js\.gz$ - [T=text/javascript,E=no-gzip:1] <FilesMatch "(\.js\.gz|\.css\.gz)$"> # Serve correct encoding type. Header append Content-Encoding gzip # Force proxies to cache gzipped & non-gzipped css/js files separately. Header append Vary Accept-Encoding </FilesMatch> </IfModule> </IfModule>

    Read the article

  • Recent improvements in Console Performance

    - by loren.konkus
    Recently, the WebLogic Server development and support organizations have worked with a number of customers to quantify and improve the performance of the Administration Console in large, distributed configurations where there is significant latency in the communications between the administration server and managed servers. These improvements fall into two categories: Constraining the amount of time that the Console stalls waiting for communication Reducing and streamlining the amount of data required for an update A few releases ago, we added support for a configurable domain-wide mbean "Invocation Timeout" value on the Console's configuration: general, advanced section for a domain. The default value for this setting is 0, which means wait indefinitely and was chosen for compatibility with the behavior of previous releases. This configuration setting applies to all mbean communications between the admin server and managed servers, and is the first line of defense against being blocked by a stalled or completely overloaded managed server. Each site should choose an appropriate timeout value for their environment and network latency. In the next release of WebLogic Server, we've added an additional console preference, "Management Operation Timeout", to the Console's shared preference page. This setting further constrains how long certain console pages will wait for slowly responding servers before returning partial results. While not all Console pages support this yet, key pages such as the Servers Configuration and Control table pages and the Deployments Control pages have been updated to support this. For example, if a user requests a Servers Table page and a Management Operation Timeout occurs, the table is displayed with both local configuration and remote runtime information from the responding managed servers and only local configuration information for servers that did not yet respond. This means that a troublesome managed server does not impede your ability to manage your domain using the Console. To support these changes, these Console pages have been re-written to use the Work Management feature of WebLogic Server to interact with each server or deployment concurrently, which further improves the responsiveness of these pages. The basic algorithm for these pages is: For each configuration mbean (ie, Servers) populate rows with configuration attributes from the fast, local mbean server Find a WorkManager For each server, Create a Work instance to obtain runtime mbean attributes for the server Schedule Work instance in the WorkManager Call WorkManager.waitForAll to wait WorkItems to finish, constrained by Management Operation Timeout For each WorkItem, if the runtime information obtained was not complete, add a message indicating which server has incomplete data Display collected data in table In addition to these changes to constrain how long the console waits for communication, a number of other changes have been made to reduce the amount and scope of managed server interactions for key pages. For example, in previous releases the Deployments Control table looked at the status of a deployment on every managed server, even those servers that the deployment was not currently targeted on. (This was done to handle an edge case where a deployment's target configuration was changed while it remained running on previously targeted servers.) 
We decided supporting that edge case did not warrant the performance impact for all, and instead only look at the status of a deployment on the servers it is targeted to. Comprehensive status continues to be available if a user clicks on the 'status' field for a deployment. Finally, changes have been made to the System Status portlet to reduce its impact on Console page display times. Obtaining health information for this display requires several mbean interactions with managed servers. In previous releases, this mbean interaction occurred with every display, and any delay or impediment in these interactions was reflected in the display time for every page. To reduce this impact, we've made several changes in this portlet: Using Work Management to obtain health concurrently Applying the operation timeout configuration to constrain how long we will wait Caching health information to reduce the cost during rapid navigation from page to page and only obtaining new health information if the previous information is over 30 seconds old. Eliminating heath collection if this portlet is minimized. Together, these Console changes have resulted in significant performance improvements for the customers with large configurations and high latency that we have worked with during their development, and some lesser performance improvements for those with small configurations and very fast networks. These changes will be included in the 11g Rel 1 patch set 2 (10.3.3.0) release of WebLogic Server.
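    As a rough sketch of the fan-out-with-timeout pattern the algorithm above describes (an illustration only: the Console itself uses WebLogic's Work Management feature rather than java.util.concurrent, and the class and method names below are invented), each per-server lookup becomes a task, all tasks run concurrently, and the wait is bounded by the operation timeout; servers that miss the deadline simply show up as having incomplete data:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.*;

        public class RuntimeCollector {
            // Collect runtime attributes from each managed server concurrently, bounded by a timeout.
            public List<String> collect(List<String> servers, long timeoutMillis) throws InterruptedException {
                ExecutorService pool = Executors.newFixedThreadPool(Math.max(1, Math.min(servers.size(), 8)));
                List<Callable<String>> tasks = new ArrayList<>();
                for (String server : servers) {
                    tasks.add(() -> queryRuntimeMBeans(server)); // hypothetical per-server lookup
                }
                // invokeAll waits up to the timeout; tasks that have not finished are cancelled.
                List<Future<String>> results = pool.invokeAll(tasks, timeoutMillis, TimeUnit.MILLISECONDS);
                List<String> rows = new ArrayList<>();
                for (int i = 0; i < results.size(); i++) {
                    try {
                        rows.add(results.get(i).get()); // completed in time
                    } catch (CancellationException | ExecutionException e) {
                        rows.add(servers.get(i) + ": runtime data not available (timed out or failed)");
                    }
                }
                pool.shutdown();
                return rows;
            }

            private String queryRuntimeMBeans(String server) {
                // Placeholder for the remote MBean call against one managed server.
                return server + ": OK";
            }
        }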

    Read the article

  • Get Pop Up Notifications for Your RSS Feeds with Feed Notifier

    - by DigitalGeekery
    Are you looking for a way to get updates from your favorite websites right to your desktop? If so, you'll want to check out Feed Notifier. This free Windows application runs in the system tray and delivers pop-up notifications to your desktop when your subscribed RSS feeds are updated. Download and install Feed Notifier. (Download link below) When you are finished installing, the Feed Notifier Preferences window will open. Click on the Add… button to add an RSS feed. Copy and paste the feed URL into the text box and click Next. Choose your polling interval. This is how often your feed will be checked for new items. You can set your polling interval in days, hours, minutes, or even seconds. Click Finish. At your configured interval, Feed Notifier will check your feeds for new items. If new items are present, they will pop up above your system tray. You'll get an intro portion of the article. Simply click the headline in the feed pop-up to open the full article in your default browser.

    Setting Preferences

    Open the preferences of Feed Notifier by going to Start > All Programs > Feed Notifier, or by right-clicking on the system tray icon and selecting Preferences. On the Pop-ups tab you can configure the duration in seconds that each article stays displayed on your screen. The default is five seconds. You can also change the size of the display, the theme, and the amount of content displayed. The Options tab offers additional configuration like article caching and using a proxy server. The Filters tab allows you to filter in or out certain content. To add a filter, click Add…, then type in the filter rule. You can even choose to apply it to only certain feeds. Click OK. Feed Notifier will display on the Filters tab the number of times the filter is applied. Click OK when finished. You can scroll through the articles by using the forward and back buttons at the lower left, or use the play/pause buttons to move through the articles in a slideshow-type fashion. Feed Notifier is a nice way to get your updated feeds directly to your desktop in a timely fashion. It supports all RSS and Atom feeds and features a clean look and feel with plenty of customizable options. Download Feed Notifier

    Read the article

  • Stir Trek 2: Iron Man Edition

    Next month (7 May 2010) I'll be presenting at the second annual Stir Trek event in Columbus, Ohio. Stir Trek (so named because last year its themes mixed MIX and the opening of the Star Trek movie) is a very cool local event. It's a lot of fun to present at and to attend, because of its unique venue: a movie theater. And what's more, the cost of admission includes a private showing of a new movie (this year: Iron Man 2). The sessions cover a variety of topics (not just Microsoft), similar to CodeMash. The event recently sold out, so I'm not telling you all of this so that you can go and sign up (though I believe you can still get on the waitlist). Rather, this is pretty much just an excuse for me to talk about my session as a way to organize my thoughts. I'm actually speaking on the same topic as I did last year, but the key difference is that last year the subject of my session was nowhere close to being released, and this year it's RTM (as of last week). That's right, the topic is "What's New in ASP.NET 4". How did you guess?

    What's New in ASP.NET 4

    So, just what *is* new in ASP.NET 4? Hasn't Microsoft been spending all of their time on Silverlight and MVC the last few years? Well, actually, no. There are some pretty cool things that are now available out of the box in ASP.NET 4. There's a nice summary of the new features on MSDN. Here is my super-brief summary:

    - Extensible Output Caching - use providers like a distributed cache or file system cache
    - Preload Web Applications - IIS 7.5 only; avoid the startup tax for your site by preloading it
    - Permanent (301) Redirects - finally supported by the framework in one line of code, not two
    - Session State Compression - can speed up session access in a web farm environment; test it to see
    - Web Forms features, several of which mirror ASP.NET MVC advantages (viewstate, control ids):
      - Set Meta Keywords and Description easily
      - Granular and inheritable control over ViewState
      - Support for more recent browsers and devices
      - Routing (introduced in 3.5 SP1) - some new features and zero web.config changes required
      - Client ID control makes client manipulation of DOM elements much simpler
      - Row Selection in Data Controls fixed (id based, not row index based)
      - FormView and ListView enhancements (less markup, more CSS compliant)
      - New QueryExtender control makes filtering data from other Data Source Controls easy
      - More CSS and Accessibility support
      - Reduction of Tables and more control over output for other template controls
    - Dynamic Data enhancements - more control templates, support for inheritance in the Data Model, new Attributes
    - ASP.NET Chart Control
    - Lots of IDE enhancements
    - Web Deploy tool

    My session will cover many but not all of these features. There's only an hour (3pm-4pm), and it's right before the prize giveaway and movie showing, so I'll be moving quickly and most likely answering questions off-line via email after the talk. Hope to see you there!

    Read the article

  • Slick2D Rendering Lots of Polygons

    - by Hazzard
    I'm writing a little isometric game using Slick. The world terrain is made up of lots of quadrilaterals. In a small world that is 128 by 128 squares, over 16,000 quadrilaterals need to be rendered. This puts my pretty powerful computer down to 30 fps. I've thought about caching "chunks" of the world so only single chunks would ever need updating at a time, but I don't know how to do this, and I am sure there are other ways to optimize it besides that. Maybe I'm doing the whole thing wrong; surely fancy 3D games that run fine on my machine are more intensive than this. My question is: how can I improve the FPS, and am I doing something wrong? Or does it actually take that much power to render those polygons?
    Here is the source code for the render method in my game state. It iterates through a 2D array of heights and draws polygons based on the height.

        public void render(GameContainer container, StateBasedGame game, Graphics gfx) throws SlickException {
            gfx.translate(offsetX * d + container.getWidth() / 2, offsetY * d + container.getHeight() / 2);
            gfx.scale(d, d);
            for (int y = 0; y < placeholder.length; y++) {          // x & y are the isometric diagonals
                for (int x = 0; x < placeholder[0].length; x++) {
                    Polygon poly;
                    int hor = TestState.TILE_WIDTH * (x - y);       // hor and ver are orthogonal
                    int W = TestState.TILE_HEIGHT * (x + y) - 1 * heights[y + 1][x]; // points to go off of
                    int S = TestState.TILE_HEIGHT * (x + y) - 1 * heights[y + 1][x + 1];
                    int E = TestState.TILE_HEIGHT * (x + y) - 1 * heights[y][x + 1];
                    int N = TestState.TILE_HEIGHT * (x + y) - 1 * heights[y][x];
                    if (placeholder[y][x] == null) {
                        poly = new Polygon();                       // create actual surface polygon
                        poly.addPoint(-TestState.TILE_WIDTH + hor, W);
                        poly.addPoint(hor, S + TestState.TILE_HEIGHT);
                        poly.addPoint(TestState.TILE_WIDTH + hor, E);
                        poly.addPoint(hor, N - TestState.TILE_HEIGHT);
                        float z = ((float) heights[y][x + 1] - heights[y + 1][x]) / 32 + 0.5f;
                        placeholder[y][x] = new Tile(poly, new Color(z, z, z));
                        //ShapeRenderer.fill(placeholder[y][x]);
                    }
                    if (true) {                                     // ONLY draw tile if it's on screen
                        gfx.setColor(placeholder[y][x].getColor());
                        ShapeRenderer.fill(placeholder[y][x]);
                        //gfx.fill(placeholder[y][x]);
                        // DRAW EDGES
                        if (y + 1 == placeholder.length) {          // draw South foundation edges
                            gfx.setColor(Color.gray);
                            Polygon found = new Polygon();
                            found.addPoint(-TestState.TILE_WIDTH + hor, W);
                            found.addPoint(hor, S + TestState.TILE_HEIGHT);
                            found.addPoint(hor, TestState.TILE_HEIGHT * (x + y + 1));
                            found.addPoint(-TestState.TILE_WIDTH + hor, TestState.TILE_HEIGHT * (x + y));
                            gfx.fill(found);
                        }
                        if (x + 1 == placeholder[0].length) {       // north
                            gfx.setColor(Color.darkGray);
                            Polygon found = new Polygon();
                            found.addPoint(TestState.TILE_WIDTH + hor, E);
                            found.addPoint(hor, S + TestState.TILE_HEIGHT);
                            found.addPoint(hor, TestState.TILE_HEIGHT * (x + y + 1));
                            found.addPoint(TestState.TILE_WIDTH + hor, TestState.TILE_HEIGHT * (x + y));
                            gfx.fill(found);
                        }//*/
                    }
                }
            }
        }
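    One way to try the chunk-caching idea the question mentions (a sketch only, not tested against the code above) is to draw a block of tiles once into an off-screen Slick2D Image and then draw that single cached image every frame, instead of re-filling thousands of polygons. This relies on Image.getGraphics(), which needs render-to-texture support on the graphics card; CHUNK_PIXELS and the calling convention are made-up placeholders.

        import org.newdawn.slick.Color;
        import org.newdawn.slick.Graphics;
        import org.newdawn.slick.Image;
        import org.newdawn.slick.SlickException;
        import org.newdawn.slick.geom.Polygon;

        // Minimal chunk cache: pay the polygon-filling cost once, then redraw one image per frame.
        public class ChunkCache {
            private static final int CHUNK_PIXELS = 512; // assumed chunk size in pixels
            private Image cached;

            public Image getChunk(Polygon[] polys, Color[] colors) throws SlickException {
                if (cached == null) {
                    cached = new Image(CHUNK_PIXELS, CHUNK_PIXELS);
                    Graphics g = cached.getGraphics();   // off-screen render target
                    for (int i = 0; i < polys.length; i++) {
                        g.setColor(colors[i]);
                        g.fill(polys[i]);                // expensive work happens only here
                    }
                    g.flush();                           // push the drawing into the texture
                }
                return cached;                           // per-frame cost: one image draw
            }

            public void invalidate() {                   // call when a tile in this chunk changes
                cached = null;
            }
        }

    With something like this, the loop over tiles runs only when a chunk is invalidated, and the per-frame work drops to one Image.draw(x, y) call per visible chunk.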

    Read the article

  • Why do I need two Instances in Windows Azure?

    - by BuckWoody
    Windows Azure as a Platform as a Service (PaaS) means that there are various components you can use in it to solve a problem:
    - Compute "Roles" - computers running an OS and optionally IIS; you can have more than one "Instance" of a given Role
    - Storage - Blobs, Tables and Queues for storage
    - Other Services - things like the Service Bus, Azure Connection Services, SQL Azure and Caching
    It's important to understand that some of these services are Stateless and others maintain State. Stateless means (at least in this case) that a system might disappear from one physical location and appear elsewhere. You can think of this as a cashier at the front of a store. If you're in line, a cashier might take his break, and another person might replace him. As long as the order proceeds, you as the customer aren't really affected except for the few seconds it takes to change them out. The cashier function in this example is stateless. The Compute Role Instances in Windows Azure are Stateless. To upgrade hardware, because of a fault or many other reasons, a Compute Role's Instance might stop on one physical server, and another will pick it up. This is done through the controlling fabric that Windows Azure uses to manage the systems.
    It's important to note that storage in Azure does maintain State. Your data will not simply disappear - it is maintained - in fact, it's maintained three times in a single datacenter and all those copies are replicated to another for safety. Going back to our example, storage is similar to the cash register itself. Even though a cashier leaves, the record of your payment is maintained.
    So if a Compute Role Instance can disappear and re-appear, the things running on that first Instance would stop working. If you wrote your code in a Stateless way, then another Role Instance simply re-starts that transaction and keeps working, just like the other cashier in the example. But if you only have one Instance of a Role, then when the Role Instance is re-started, or when you need to upgrade your own code, you can face downtime, since there's only one. That means you should deploy at least two of each Role Instance, not only for scale to handle load, but so that the first "cashier" has someone to replace them when they disappear. It's not just a good idea - to gain the Service Level Agreement (SLA) for uptime in Azure it's a requirement. We point this out right in the Management Portal when you deploy the application.
    When you deploy a Role Instance you can also set the "Upgrade Domain". Placing Roles on separate Upgrade Domains means that you have a continuous service whenever you upgrade (more on upgrades in another post) - the process looks like this for two Roles. This example covers the scenario for upgrade, so you have four roles total - one Web and one Worker running the "older" code, and one of each running the new code. In all those Roles you want at least two instances, and this example shows that you're covered for High Availability and upgrade paths.
    The take-away is this - always plan for forward-facing Roles to have at least two copies. For Worker Roles that do background processing, there are ways to architect around this number, but it does affect the SLA if you have only one.
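    As a concrete illustration of the "at least two instances" guidance, the instance count for a Windows Azure hosted service lives in the ServiceConfiguration.cscfg file. The sketch below is hypothetical - the service name, role names and empty settings are placeholders - but it shows where the count is set:

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceConfiguration serviceName="MyCloudService"
            xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
          <Role name="WebRole1">
            <!-- Two instances: one can be recycled or upgraded while the other keeps serving. -->
            <Instances count="2" />
            <ConfigurationSettings />
          </Role>
          <Role name="WorkerRole1">
            <Instances count="2" />
            <ConfigurationSettings />
          </Role>
        </ServiceConfiguration>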

    Read the article

  • JavaOne 2012: Nashorn Edition

    As with my JavaOne 2012: OpenJDK Edition post a while back (now updated to reflect the schedule of the talks), I find it convenient to have my JavaOne schedule ordered by subjects of interest. Besides OpenJDK in all its flavors, another subject I find very exciting is Nashorn. I blogged about the various material on Nashorn in the past, and we interviewed Jim Laskey, the Project Lead on Project Nashorn, in the Java Spotlight podcast. So without further ado, here are the JavaOne 2012 talks and BOFs with Nashorn in their title or abstract:
    CON5390 - Nashorn: Optimizing JavaScript and Dynamic Language Execution on the JVM - Monday, Oct 1, 8:30 AM - 9:30 AM
    There are many implementations of JavaScript, meant to run either on the JVM or standalone as native code. Both approaches have their respective pros and cons. The Oracle Nashorn JavaScript project is based on the former approach. This presentation goes through the performance work that has gone on in Oracle's Nashorn JavaScript project to date in order to make JavaScript-to-bytecode generation for execution on the JVM feasible. It shows that the new invokedynamic bytecode gets us part of the way there but may not quite be enough. What other tricks did the Nashorn project use? The presentation also discusses future directions for increased performance for dynamic languages on the JVM, covering proposed enhancements to both the JVM itself and to the bytecode compiler.
    CON4082 - Nashorn: JavaScript on the JVM - Monday, Oct 1, 3:00 PM - 4:00 PM
    The JavaScript programming language has been experiencing a renaissance of late, driven by the interest in HTML5. Nashorn is a JavaScript engine implemented fully in Java on the JVM. It is based on the Da Vinci Machine (JSR 292) and will be available with JDK 8. This session describes the goals of Project Nashorn, gives a top-level view of how it all works, provides the current status, and demonstrates examples of JavaScript and Java working together.
    BOF4763 - Meet the Nashorn JavaScript Team - Tuesday, Oct 2, 4:30 PM - 5:15 PM
    Come to this session to meet the Oracle JavaScript (Project Nashorn) language team.
    BOF6661 - Nashorn, Node, and Java Persistence - Tuesday, Oct 2, 5:30 PM - 6:15 PM
    With Project Nashorn, developers will have a full and modern JavaScript engine available on the JVM. In addition, they will have support for running Node applications with Node.jar. This unique combination of capabilities opens the door for best-of-breed applications combining Node with Java SE and Java EE. In this session, you'll learn about Node.jar and how it can be combined with Java EE components such as EclipseLink JPA for rich Java persistence. You'll also hear about all of Node.jar's mapping, caching, querying, performance, and scaling features.
    CON10657 - The Polyglot Java VM and Java Middleware - Thursday, Oct 4, 12:30 PM - 1:30 PM
    In this session, Red Hat and Oracle discuss the impact of polyglot programming from their own unique perspectives, examining non-Java languages that utilize Oracle's Java HotSpot VM. You'll hear a discussion of topics relating to Ruby, Lisp, and Clojure and the intersection of other languages where they may touch upon individual frameworks and projects, and you'll get perspectives on JavaScript via the Nashorn Project, an upcoming JavaScript engine, developed fully in Java.
    CON5251 - Putting the Metaobject Protocol to Work: Nashorn's Java Bindings - Thursday, Oct 4, 2:00 PM - 3:00 PM
    Project Nashorn is Oracle's new JavaScript runtime in Java 8. Being a JavaScript runtime running on the JVM, it provides integration with the underlying runtime by enabling JavaScript objects to manipulate Java objects, implement Java interfaces, and extend Java classes. Nashorn is invokedynamic-based, and for its Java integration, it does away with the concept of wrapper objects in favor of direct virtual machine linking to Java objects' methods provided by a metaobject protocol, providing much higher performance than what could be expected from a scripting runtime. This session looks at the details of the integration, a topic of interest to other language implementers on the JVM and a wider audience of developers who want to understand how Nashorn works.
    That's 6 sessions tooting the Nashorn this year at JavaOne, up from 2 last year.
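    For anyone who wants to poke at Nashorn before the sessions, a minimal sketch of embedding it through the standard javax.script API looks like the following. The engine name "nashorn" is the one JDK 8 registers; the snippet itself is just an illustration and is not taken from any of the talks.

        import javax.script.ScriptEngine;
        import javax.script.ScriptEngineManager;
        import javax.script.ScriptException;

        public class NashornHello {
            public static void main(String[] args) throws ScriptException {
                // Look up the Nashorn engine that ships with JDK 8 by name.
                ScriptEngine nashorn = new ScriptEngineManager().getEngineByName("nashorn");

                // Evaluate a small JavaScript snippet that calls straight into Java collections.
                nashorn.eval("var list = new java.util.ArrayList();"
                           + "list.add('JavaScript');"
                           + "list.add('on the JVM');"
                           + "print(list);");
            }
        }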

    Read the article

  • Fast Data - Big Data's Achilles heel

    - by thegreeneman
    At OOW 2013, in Mark Hurd and Thomas Kurian's keynote, they discussed Oracle's Fast Data software solution stack and a number of customers deploying Oracle's Big Data / Fast Data solutions, in particular Oracle's NoSQL Database. Since that time, there have been a large number of requests seeking clarification on how the Fast Data software stack works together to deliver on the promise of real-time Big Data solutions.
    Fast Data is a software solution stack that deals with one aspect of Big Data: high velocity. The software in the Fast Data solution stack involves 3 key pieces and their integration: Oracle Event Processing, Oracle Coherence, and Oracle NoSQL Database. All three of these technologies address a high throughput, low latency data management requirement.
    Oracle Event Processing enables continuous query to filter the Big Data fire hose, enables intelligent chained events for real-time service invocation, and augments the data stream to provide Big Data enrichment. Extended SQL syntax allows the definition of sliding windows of time so that SQL statements can look for triggers on events like a breach of a weighted moving average on a real-time data stream.
    Oracle Coherence is a distributed, grid caching solution which is used to provide very low latency access to cached data when the data is too big to fit into a single process, so it is spread around in a grid architecture to provide memory-latency-speed access. It also has some special capabilities to deploy remote behavioral execution for "near data" processing.
    The Oracle NoSQL Database is designed to ingest simple key-value data at a controlled throughput rate while providing data redundancy in a cluster to facilitate highly concurrent, low latency reads. For example, large sensor networks may generate data that needs to be captured while analysts simultaneously extract the data using range-based queries for upstream analytics. Another example might be storing cookies from user web sessions for ultra low latency user profile management, also leveraging that data using holistic MapReduce operations with your Hadoop cluster to do segmented site analysis. Understand how NoSQL plays a critical role in Big Data capture and enrichment while simultaneously providing a low latency and scalable data management infrastructure through clustered, always-on, parallel processing in a shared-nothing architecture. Learn how easily a NoSQL cluster can be deployed to provide essential services in industry-specific Fast Data solutions. See these technologies work together in a demonstration highlighting the salient features of these Fast Data enabling technologies in a location-based personalization service.
    The question then becomes: how do these things work together to deliver an end-to-end Fast Data solution? The answer is that while different applications will exhibit unique requirements that may drive the need for one or the other of these technologies, often when it comes to Big Data you may need to use them together. You may have the need for the memory latencies of the Coherence cache, but just have too much data to cache, so you use a combination of Coherence and Oracle NoSQL to handle extreme speed cache overflow and retrieval. Here is a great reference to how these two technologies are integrated and work together: Coherence & Oracle NoSQL Database. On the stream processing side, it is similar to the Coherence case.
As your sliding windows get larger, holding all the data in the stream can become difficult and out of band data may need to be offloaded into persistent storage.  OEP needs an extreme speed database like Oracle NoSQL Database to help it continue to perform for the real time loop while dealing with persistent spill in the data stream.  Here is a great resource to learn more about how OEP and Oracle NoSQL Database are integrated and work together.  OEP & Oracle NoSQL Database.
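    To make the key-value ingest side concrete, here is a minimal, hypothetical sketch against the Oracle NoSQL Database Java driver. The store name, helper host:port, key path and value are placeholders, and error handling is omitted; it only illustrates the put/get pattern described above.

        import java.util.Arrays;
        import oracle.kv.KVStore;
        import oracle.kv.KVStoreConfig;
        import oracle.kv.KVStoreFactory;
        import oracle.kv.Key;
        import oracle.kv.Value;
        import oracle.kv.ValueVersion;

        public class SensorIngest {
            public static void main(String[] args) {
                // Connect to a running store; the store name and helper host are placeholders.
                KVStore store = KVStoreFactory.getStore(
                        new KVStoreConfig("kvstore", "localhost:5000"));

                // Ingest one key-value pair: major path /sensor/42 -> a reading.
                Key key = Key.createKey(Arrays.asList("sensor", "42"));
                store.put(key, Value.createValue("21.5C".getBytes()));

                // Low-latency read of the same key.
                ValueVersion vv = store.get(key);
                System.out.println(new String(vv.getValue().getValue()));

                store.close();
            }
        }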

    Read the article

  • Moving StarterSTS to the (Azure) Cloud

    - by Your DisplayName here!
    Quite a few people asked me about an Azure version of StarterSTS. While I kinda knew what I had to do to make the move, I couldn't find the time. Until recently. This blog post briefly documents the necessary changes and design decisions for the next version of StarterSTS, which will work both on-premise and on Azure.
    Provider
    Fortunately StarterSTS is already based on the idea of "providers". Authentication, roles and claims generation are based on the standard ASP.NET provider infrastructure. This makes the migration to different data stores less painful. In my case I simply moved the ASP.NET provider database to SQL Azure and still use the standard SQL Server based membership, roles and profile provider. In addition, StarterSTS has its own providers to abstract resource access for certificates, relying party registration, client certificate registration and delegation. So I only had to provide new implementations. Signing and SSL keys now go in the Azure certificate store, and user mappings (client certificates and delegation settings) have been moved to Azure table storage. The one thing I didn't anticipate when I originally wrote StarterSTS was the need to also encapsulate configuration. Currently configuration is "locked" to the standard .NET configuration system. The new version will have a pluggable SettingsProvider with versions for .NET configuration as well as Azure service configuration. If you want to externalize these settings into e.g. a database, it is now just a matter of supplying a corresponding provider. Moving between the on-premise and Azure version will be just a matter of using different providers.
    URL Handling
    Another thing that's substantially different on Azure (and load balanced scenarios in general) is the handling of URLs. In farm scenarios, the standard APIs like ASP.NET's Request.Url return the current (internal) machine name, but you typically need the address of the external facing load balancer. There's a hotfix for WCF 3.5 (included in v4) that fixes this for WCF metadata. This was accomplished by using the HTTP Host header to generate URLs instead of the local machine name. I now use the same approach for generating WS-Federation metadata as well as information card files.
    New Features
    I introduced a cache provider. Since we now have slightly more expensive lookups (e.g. relying party data from table storage), it makes sense to cache certain data in the front end. The default implementation uses the ASP.NET web cache and can be easily extended to use products like memcached or AppFabric Caching. Starting with the relying party provider, I now also provide a read/write interface. This allows building management interfaces on top of this provider. I also include a (very) simple web page that allows working with the relying party provider data. I guess I will use the same approach for other providers in the future as well. I am also doing some work on the tracing and health monitoring area, which is especially important for the Azure version. Stay tuned.

    Read the article

  • What do you need to know to be a world-class master software developer? [closed]

    - by glitch
    I wanted to bring up this question to you folks and see what you think, hopefully advise me on the matter: let's say you had 30 years of learning and practicing software development in front of you, how would you dedicate your time so that you'd get the biggest bang for your buck. What would you both learn and work on to be a world-class software developer that would make a large impact on the industry and leave behind a legacy? I think that most great developers end up being both broad generalists and specialists in one-two areas of interest. I'm thinking Bill Joy, John Carmack, Linus Torvalds, K&R and so on. I'm thinking that perhaps one approach would be to break things down by categories and establish a base minimum of "software development" greatness. I'm thinking: Operating Systems: completely internalize the core concepts of OS, perhaps gain a lot of familiarity with an OSS one such as Linux. Anything from memory management to device drivers has to be complete second nature. Programming Languages: this is one of those topics that imho has to be fully grokked even if it might take many years. I don't think there's quite anything like going through the process of developing your own compiler, understanding language design trade-offs and so on. Programming Language Pragmatics is one of my favorite books actually, I think you want to have that internalized back to back, and that's just the start. You could go significantly deeper, but I think it's time well spent, because it's such a crucial building block. As a subset of that, you want to really understand the different programming paradigms out there. Imperative, declarative, logic, functional and so on. Anything from assembly to LISP should be at the very least comfortable to write in. Contexts: I believe one should have experience working in different contexts to truly be able to appreciate the trade-offs that are being made every day. Embedded, web development, mobile development, UX development, distributed, cloud computing and so on. Hardware: I'm somewhat conflicted about this one. I think you want some understanding of computer architecture at a low level, but I feel like the concepts that will truly matter will be slightly higher level, such as CPU caching / memory hierarchy, ILP, and so on. Networking: we live in a completely network-dependent era. Having a good understanding of the OSI model, knowing how the Web works, how HTTP works and so on is pretty much a pre-requisite these days. Distributed systems: once again, everything's distributed these days, it's getting progressively harder to ignore this reality. Slightly related, perhaps add solid understanding of how browsers work to that, since the world seems to be moving so much to interfacing with everything through a browser. Tools: Have a really broad toolset that you're familiar with, one that continuously expands throughout the years. Communication: I think being a great writer, effective communicator and a phenomenal team player is pretty much a prerequisite for a lot of a software developer's greatness. It can't be overstated. Software engineering: understanding the process of building software, team dynamics, the requirements of the business-side, all the pitfalls. You want to deeply understand where what you're writing fits from the market perspective. The better you understand all of this, the more of your work will actually see the daylight. This is really just a starting list, I'm confident that there's a ton of other material that you need to master. 
As I mentioned, you most likely end up specializing in a bunch of these areas as you go along, but I was trying to come up with a baseline. Any thoughts, suggestions and words of wisdom from the grizzled veterans out there who would like to share their thoughts and experiences with this? I'd really love to know what you think!

    Read the article

  • Linux bcm43224 wifi adapter slows down a couple minutes after boot

    - by Blubber
    I just installed Ubuntu on my mid 2012 MacBook Air. Everything worked out of the box, but the wifi is showing some weird behavior. When I first login it's really fast, loading google.com is near instant, and browsing in general feels at least as smooth as it did on Mac OS. However, after a couple minutes the connection slows down dramatically, sometimes it takes over 5s to load google.com, a simple reboot fixes the problem for another couple minutes. Specs: Wifi: 02:00.0 Network controller: Broadcom Corporation BCM43224 802.11a/b/g/n (rev 01) Driver: open-source brcmsmac driver Kernel: Linux wega 3.8.0-21-generic #32-Ubuntu SMP Tue May 14 22:16:46 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux Distro: Ubuntu 13.04 (uptodate) I tried a number of things, none of which actually helped Use proprietary sta driver from broadcom Installed firmware into /lib/firmware/brcms (which, as far as I can tell from logs, does not get loaded at all) Switch router to only use 2.4 OR 5 GHz Set router to only use a OR g OR n Set router to use AES encryption only Turned off power management on the adapter Set regulatory region to the correct value (NL) on both router and laptop Disable ipv6 Nothing seems to help, the slowdown always occurs. I did notice that the latency (ping google.com) stays roughly the same (around 9ms). Below is some more information that might be of use. $ lspci -nnk | grep -iA2 net 02:00.0 Network controller [0280]: Broadcom Corporation BCM43224 802.11a/b/g/n [14e4:4353] (rev 01) Subsystem: Apple Inc. Device [106b:00e9] Kernel driver in use: bcma-pci-bridge $ rfkill list 0: hci0: Bluetooth Soft blocked: no Hard blocked: no 1: phy0: Wireless LAN Soft blocked: no Hard blocked: no $ lsmod Module Size Used by dm_crypt 22820 1 arc4 12615 2 brcmsmac 550698 0 coretemp 13355 0 kvm_intel 132891 0 parport_pc 28152 0 kvm 443165 1 kvm_intel ppdev 17073 0 cordic 12574 1 brcmsmac brcmutil 14755 1 brcmsmac mac80211 606457 1 brcmsmac cfg80211 510937 2 brcmsmac,mac80211 bnep 18036 2 rfcomm 42641 12 joydev 17377 0 applesmc 19353 0 input_polldev 13896 1 applesmc snd_hda_codec_hdmi 36913 1 microcode 22881 0 snd_hda_codec_cirrus 23829 1 nls_iso8859_1 12713 1 uvcvideo 80847 0 btusb 22474 0 snd_hda_intel 39619 3 videobuf2_vmalloc 13056 1 uvcvideo snd_hda_codec 136453 3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec_cirrus bcm5974 17347 0 bluetooth 228619 22 bnep,btusb,rfcomm snd_hwdep 13602 1 snd_hda_codec lpc_ich 17061 0 videobuf2_memops 13202 1 videobuf2_vmalloc videobuf2_core 40513 1 uvcvideo videodev 129260 2 uvcvideo,videobuf2_core bcma 41051 1 brcmsmac snd_pcm 97451 3 snd_hda_codec_hdmi,snd_hda_codec,snd_hda_intel snd_page_alloc 18710 2 snd_pcm,snd_hda_intel snd_seq_midi 13324 0 snd_seq_midi_event 14899 1 snd_seq_midi snd_rawmidi 30180 1 snd_seq_midi snd_seq 61554 2 snd_seq_midi_event,snd_seq_midi snd_seq_device 14497 3 snd_seq,snd_rawmidi,snd_seq_midi snd_timer 29425 2 snd_pcm,snd_seq snd 68876 16 snd_hwdep,snd_timer,snd_hda_codec_hdmi,snd_pcm,snd_seq,snd_rawmidi,snd_hda_codec,snd_hda_intel,snd_seq_device,snd_hda_codec_cirrus mei 41158 0 soundcore 12680 1 snd apple_bl 13673 0 mac_hid 13205 0 lp 17759 0 parport 46345 3 lp,ppdev,parport_pc usb_storage 57204 0 hid_apple 13237 0 hid_generic 12540 0 ghash_clmulni_intel 13259 0 aesni_intel 55399 399 aes_x86_64 17255 1 aesni_intel xts 12885 1 aesni_intel lrw 13257 1 aesni_intel gf128mul 14951 2 lrw,xts ablk_helper 13597 1 aesni_intel cryptd 20373 4 ghash_clmulni_intel,aesni_intel,ablk_helper i915 600351 3 ahci 25731 3 libahci 31364 1 ahci video 19390 1 i915 
i2c_algo_bit 13413 1 i915 drm_kms_helper 49394 1 i915 usbhid 47074 0 drm 286313 4 i915,drm_kms_helper hid 101002 3 hid_generic,usbhid,hid_apple $ dmesg | egrep 'b43|bcma|brcm|[F]irm' [ 0.055025] [Firmware Bug]: ioapic 2 has no mapping iommu, interrupt remapping will be disabled [ 0.152336] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored [ 2.187681] pci_root PNP0A08:00: [Firmware Info]: MMCONFIG for domain 0000 [bus 00-99] only partially covers this bridge [ 12.553600] bcma-pci-bridge 0000:02:00.0: enabling device (0000 -> 0002) [ 12.553657] bcma: bus0: Found chip with id 0xA8D8, rev 0x01 and package 0x08 [ 12.553688] bcma: bus0: Core 0 found: ChipCommon (manuf 0x4BF, id 0x800, rev 0x22, class 0x0) [ 12.553715] bcma: bus0: Core 1 found: IEEE 802.11 (manuf 0x4BF, id 0x812, rev 0x17, class 0x0) [ 12.553764] bcma: bus0: Core 2 found: PCIe (manuf 0x4BF, id 0x820, rev 0x0F, class 0x0) [ 12.605777] bcma: bus0: Bus registered [ 12.852925] brcmsmac bcma0:0: mfg 4bf core 812 rev 23 class 0 irq 17 [ 13.085176] brcmsmac bcma0:0: brcms_ops_bss_info_changed: qos enabled: false (implement) [ 13.085186] brcmsmac bcma0:0: brcms_ops_config: change power-save mode: false (implement) [ 20.862617] brcmsmac bcma0:0: brcmsmac: brcms_ops_bss_info_changed: associated [ 20.862622] brcmsmac bcma0:0: brcms_ops_bss_info_changed: arp filtering: enabled true, count 0 (implement) [ 20.862625] brcmsmac bcma0:0: brcms_ops_bss_info_changed: qos enabled: true (implement) [ 20.897957] brcmsmac bcma0:0: brcms_ops_bss_info_changed: arp filtering: enabled true, count 1 (implement) $ iwconfig lo no wireless extensions. wlan0 IEEE 802.11abgn ESSID:"wlan" Mode:Managed Frequency:5.22 GHz Access Point: E0:46:9A:4E:63:9A Bit Rate=65 Mb/s Tx-Power=17 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=63/70 Signal level=-47 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:13 Invalid misc:56 Missed beacon:0

    Read the article

  • Optimise Apache for EC2 micro instance

    - by Shiyu Sekam
    I'm running apache2 on an EC2 micro instance with ~600 MB of RAM. The instance was running for almost a year without problems, but in the last few weeks it just keeps crashing because the server reaches MaxClients. The server basically runs a few websites: one WordPress blog (not often used), the company website (most used) and 2 small sites, which are just internal. The database for the blog runs on RDS, so there's no MySQL running on this web server. When I came to the company, the server was already set up and is running apache + mod_php + prefork. We want to migrate that in the future to nginx + php-fpm, but it still needs further testing, so for now I have to stick with the old setup. I also use CloudFlare DDOS protection in front of the server, because it was attacked a couple of times in the last few weeks. My company doesn't want to pay money for a better web server at this point, so I have to stick with the micro instance also. Additionally, the code for the website we run is really bad and slow, and sometimes a single page load can take up to 15 seconds. The whole website is dynamic and written in PHP, so caching isn't really an option here. It's a customized search for users. I've already turned off KeepAlive, which improved the performance a little bit. My prefork config looks like the following:
    StartServers 2
    MinSpareServers 2
    MaxSpareServers 5
    ServerLimit 10
    MaxClients 10
    MaxRequestsPerChild 100
    The server just becomes unresponsive after running for a while, and I've run the following command to see how many connections there are:
    netstat | grep http | wc -l
    75
    Trying to restart apache helps for a short moment, but after a while the apache process(es) become unresponsive again. I have the following modules enabled (output of apache2ctl -M):
    Loaded Modules:
    core_module (static)
    log_config_module (static)
    logio_module (static)
    version_module (static)
    mpm_prefork_module (static)
    http_module (static)
    so_module (static)
    alias_module (shared)
    authz_host_module (shared)
    deflate_module (shared)
    dir_module (shared)
    expires_module (shared)
    mime_module (shared)
    negotiation_module (shared)
    php5_module (shared)
    rewrite_module (shared)
    setenvif_module (shared)
    ssl_module (shared)
    status_module (shared)
    Syntax OK
    apache2.conf:
    # Security
    ServerTokens OS
    ServerSignature On
    TraceEnable On
    ServerName "web.example.com"
    ServerRoot "/etc/apache2"
    PidFile ${APACHE_PID_FILE}
    Timeout 30
    KeepAlive off
    User www-data
    Group www-data
    AccessFileName .htaccess
    <Files ~ "^\.ht">
        Order allow,deny
        Deny from all
        Satisfy all
    </Files>
    <Directory />
        Options FollowSymLinks
        AllowOverride None
    </Directory>
    DefaultType none
    HostnameLookups Off
    ErrorLog /var/log/apache2/error.log
    LogLevel warn
    EnableSendfile On
    #Listen 80
    Include /etc/apache2/mods-enabled/*.load
    Include /etc/apache2/mods-enabled/*.conf
    Include /etc/apache2/ports.conf
    LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
    LogFormat "%h %l %u %t \"%r\" %>s %b" common
    LogFormat "%{Referer}i -> %U" referer
    LogFormat "%{User-agent}i" agent
    Include /etc/apache2/conf.d/*.conf
    Include /etc/apache2/sites-enabled/*.conf
    Vhost of the main site:
    <VirtualHost *:80>
        ServerName www.example.com
        ## Vhost docroot
        DocumentRoot /srv/www/jenkins/Web
        ## Directories, there should at least be a declaration for /srv/www/jenkins/Web
        <Directory /srv/www/jenkins/Web>
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>
        ## Load additional static includes
        ## Logging
        ErrorLog /var/log/apache2/www.example.com.error.log
        LogLevel warn
        ServerSignature Off
        CustomLog /var/log/apache2/www.example.com.access.log combined
        ## Rewrite rules
        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^www.example.com$
        RewriteRule ^.*$ http://www.example.com%{REQUEST_URI} [R=301,L]
        ## Server aliases
        ServerAlias www.example.invalid
        ServerAlias example.com
        ## Custom fragment
        <Location /srv/www/jenkins/Web/library>
            Order Deny,Allow
            Deny from all
        </Location>
        <Files ~ "^\.(.+)">
            Order deny,allow
            deny from all
        </Files>
    </VirtualHost>

    Read the article

  • c# WinForms ReportViewer Performance issue using RefreshReport() and ServerReport.SetParameters()

    - by mdk
    Hi All, Currently I am writing a C# client application that uses the WinForms ReportViewer control to display reports from a remote server. I am having performance troubles with the ReportViewer control, to be specific with the two methods reportViewer.ServerReport.SetParameters() and reportViewer.RefreshReport(). They both take a really long time to complete, and not just on the very first call, but on each subsequent call as well. SetParameters() takes 20 to 40 seconds (they vary greatly in time; some even execute okay fast) and RefreshReport() is a bit faster but still takes ages. I don't think the server is the culprit, as the same report viewed using the browser renders pretty fast, about a second tops. The report in question doesn't matter either. When I break into the process and take a look at the call stack, I see a call to Socket.DoConnect. So I thought that's a good reason to start using Fiddler; I installed it, disabled caching and fired up the app again to see which call takes that long to connect, but the performance issue was gone. By using a proxy I am getting the same performance as the web browser. FYI: I am using NTLM authentication in the following way: reportViewer.ServerReport.ReportServerCredentials.NetworkCredentials = new NetworkCredentials() { Username = ... } I don't have a strong web background, so I guess my question is: what should this tell me / what should I be looking into? (Btw: adding Fiddler to my installation package is not the solution I am looking for :)) I am grateful for any pointers. Take care, -Martin

    Read the article

  • SPException: Catastrophic failure (Exception from HRESULT: 0x8000FFFF (E_UNEXPECTED)) in SharePoint

    - by BeraCim
    I've been trying to programmatically copy a custom content type and its custom columns from one web to another for some time now, and I always get different errors or exceptions every time. After yet more tries, I received another strange and cryptic exception from SharePoint after clicking onto a newly copied custom column in a custom content type. I checked the logs, and this is what I got:
    Unknown SPRequest error occurred. More information: 0x80070002
    Unable to locate the xml-definition for FieldName with FieldId 'guid without braces', exception: Microsoft.SharePoint.SPException: Catastrophic failure (Exception from HRESULT: 0x8000FFFF (E_UNEXPECTED)) ---> System.Runtime.InteropServices.COMException... ... at Microsoft.SharePoint.Library.SPRequestInternalClass.GetGlobalContentTypeXml(String bstrUrl, Int32 type, UInt32 lcid, Object varIdBytes...
    Failed to find the content type schema for ct-1033-0x1000blahblahblahcontenttypeId while caching feature data.
    Unknown SPRequest error occurred. More information: 0x8000ffff
    Unable to locate the xml-definition for CType with SPContentTypeId '0x0100MorecontenttypeId', exception: Microsoft.SharePoint.SPException: Catastrophic failure (Exception from HRESULT: 0x8000FFFF (E_UNEXPECTED)) ---> System.Runtime.InteropServices.COMException (0x8000FFFF): Catastrophic failure... ... at Microsoft.SharePoint.Library.SPRequestInternalClass.GetGlobalContentTypeXml(String bstrUrl, Int32 type, UInt32 lcid, Object varIdBytes...
    It failed to find quite a few content type schemas. I'm confused about what SharePoint is trying to do here, and why a simple process of copying a custom content type from one web to another just wouldn't work, in contrast to the information found on the web, e.g. this. Appreciate any help to get over this problem. Thanks.

    Read the article

< Previous Page | 102 103 104 105 106 107 108 109 110 111 112 113  | Next Page >