Search Results

Search found 3558 results on 143 pages for 'hosted'.


  • Rolling Along: PASS Board Year 2, Q2

    - by Denise McInerney
    Eighteen months into my time as a PASS Director I’m especially proud of what the Virtual Chapters have accomplished and want to share that progress with you. I'm also pleased that the organization has invested more resources to support the VCs. In this quarter I got to attend two conferences and meet more members of the SQL community.

    Virtual Chapters: In the first six months of 2013 VCs have hosted more than 50 webinars, offering free technical education to over 6200 attendees. This is a great benefit to PASS members; thanks to the VC leaders, volunteers and speakers who contribute their time to produce these events. The Performance VC held their “Summer Performance Palooza”, an event featuring eight back-to-back sessions. Links to the session recordings can be found on the VC's web site. The new webinar platform, GoToWebinar, has been rolled out to all the VCs. This is a more stable, scalable platform and represents an important investment in the future of the VCs. A few new VCs are in the planning stages, including one focused on Security and one for Russian speakers. Visit the Virtual Chapter home page to sign up for the chapters that interest you. Each Virtual Chapter is offering a discount code for PASS Summit 2013. Be sure to ask your VC leader for the code to save $200 on Summit registration.

    24 Hours of PASS: The next 24HOP will be on July 31. This Summit Preview edition will feature 24 consecutive webcasts presented by experts who will be speaking at Summit in October. Registration for this free event is open now, and we will be using the GoToWebinar platform for 24HOP as well.

    Business Analytics Conference: April marked the first PASS Business Analytics Conference in Chicago. This introduced PASS to another segment of data professionals: the analysts and data scientists who work with the world’s growing collection of data. Overall the inaugural event was a success and gave us a glimpse into this increasingly important space. After Chicago the Board had several serious discussions about the lessons learned from this event and what we should do next. We agreed to apply those lessons and continue to invest in this event; there will be a PASS Business Analytics Conference in 2014. I’m very pleased the next event will be in San Jose, CA, the heart of Silicon Valley, a place where a great deal of investment and innovation in data analytics is taking place.

    Global SQL Community: Over the last couple of years PASS has been taking steps to become more relevant to SQL communities in different parts of the world. In May I had the opportunity to attend SQL Bits XI in Nottingham, England. It was enlightening to meet and talk with SQL professionals from around the U.K. as well as many other European countries. The many SQL Bits volunteers put on a great event and were gracious hosts.

    Budgets: The Board passed the FY14 budget at the end of June. The budget process can be challenging and requires the Board to make some difficult choices about where to allocate resources. Overall I’m satisfied with the decisions we made and think we are investing in the right activities and programs.

    Next Up: The Board is meeting July 18-19 in Kansas City. We will be holding the Executive Committee election for the Exec Co that will take office in 2014. We will also be discussing plans for the next BA conference as well as the next steps for our Global Growth initiative. Applications for the upcoming Board of Directors election open on July 24.
If you are considering running for the Board you can visit the PASS elections site to learn more about the election process. And I encourage anyone considering running to reach out to current and past Board members to learn about what the role entails. Plans for the next PASS Summit are in full swing. We are working on some fun new ideas to introduce attendees to the many ways to become involved in the SQL community.

    Read the article

  • The Healthy Tension That Mobility Creates

    - by Kathryn Perry
    A guest post by Hernan Capdevila, Vice President, Oracle Fusion Apps

    In my previous post, I talked about the value of the mobile revolution on businesses and workers. Now let me put on a different hat and view the world from the IT department and the IT leader’s viewpoint. The IT leader has different concerns – around privacy, potential liability of information leakage, and intellectual property protection. These concerns and the leader’s goals create a healthy tension with the users. For example, effective device management becomes a must-have for the IT leader, especially if you look at the Android ecosystem as an example. There are benefits to the Android strategy, but there are also drawbacks, such as a lack of uniformity – in device management, in operating systems, and in the application taxonomy and capabilities. If you compare Android to iOS, Apple's operating system, iOS is more unified, more streamlined, and easier to manage. In either case, this is where mobile device management in the cloud makes good sense. I don't think IT departments should be hosting device management and managing that complexity. It should be a cloud service, and I predict it's going to be key for our customers.

    A New Focus for IT Departments
    So where does that leave the IT departments? I think their futures are in governance, which is a more strategic play than a tactical one. Device management is tactical and it's the “now” topic. But the mobile phenomenon, if you will, is going to drive significant change in terms of how IT plans, hosts, and deploys enterprise applications. For example, opening up enterprise applications for mobile users presents some challenges unless you deploy more complicated network topologies, such as virtual private networks and threat protection technology. If you really want employees to be mobile you need to remove those kinds of barriers. But I don’t think IT departments want to wrestle with exposing their private enterprise data centers and being responsible for hosted business applications – applications that, in a sense, they’re making vulnerable to the public world. This opens up a significant need and a significant driver for cloud applications. However, it's not just about taking away the complexity – it's also about taking away the responsibility. Why should every business have to carry the responsibility and figure out all the nuts and bolts of how to protect themselves in this public, mobile world? When you use apps in the cloud, either your vendor or your hosting partner should have figured all that out. They need to assure the business that they are adhering to all sorts of security and compliance regulations so users can be connected and have access to information anywhere, anytime.

    More Ideas and Better Service
    What’s more interesting is the world of possibilities that the connected, cloud-based world enables. I believe that the one-size-fits-all, uber-best-practices, lowest-common-denominator-like capabilities will go away. IT will now be able to solve very specific business challenges for the different corporate functions it serves. In this new world, IT will play a key role in enabling different organizations within a company to be best in class and in delivering greater value to the line of business managers. IT will actually help to differentiate. The net result is a more agile workforce and business because each department is getting work done its own way.

    Read the article

  • Planning in the Cloud - For Real

    - by jmorourke
    One of the hottest topics at Oracle OpenWorld 2012 this week is “the cloud”. Over the past few years, Oracle has made major investments in cloud-based applications, including some acquisitions, and now has over 100 applications available through Oracle Cloud services. At OpenWorld this week, Oracle announced seven new offerings delivered via the Oracle Cloud services platform, one of which is the Oracle Planning and Budgeting Cloud Service. Based on Oracle Hyperion Planning, this service is the first of Oracle’s EPM applications to be offered in the Cloud. This solution is targeted at organizations that are struggling with spreadsheets or legacy planning and budgeting applications, want to deploy a world-class solution for financial planning and budgeting, but are constrained by IT resources and capital budgets. With the Oracle Planning and Budgeting Cloud Service, organizations can fast-track their way to world-class financial planning, budgeting and forecasting – at cloud speed, with no IT infrastructure investments and with minimal IT resources.

    Oracle Hyperion Planning is a market-leading budgeting, planning and forecasting application that is used by over 3,300 organizations worldwide. Prior to this announcement, Oracle Hyperion Planning was only offered on a license and maintenance basis. It could be deployed on-premise, or hosted through Oracle On-Demand or third-party hosting partners. With this announcement, Oracle’s market-leading Hyperion Planning application will be available as a Cloud Service and through subscription-based pricing. This lowers the cost of entry and deployment for new customers and provides a scalable environment to support future growth.

    With this announcement, Oracle is the first major vendor to offer one of its core EPM applications as a cloud-based service. Other major vendors have recently announced cloud-based EPM solutions, but these are only BI dashboards delivered via a cloud platform. With this announcement Oracle is providing market-leading, world-class financial budgeting, planning and forecasting as a cloud service, with the following advantages:

    - Subscription-based pricing
    - Available standalone or as an extension to Oracle Fusion Financials Cloud Service
    - Implementation services available from Oracle and the Oracle Partner Network
    - High scalability and performance
    - Integrated financial reporting and MS Office interface
    - Seamless integration with Oracle and non-Oracle transactional applications
    - Provides customers with more options for their planning and budgeting deployment vs. strictly on-premise or cloud-only solution providers

    The OpenWorld announcement of Oracle Planning and Budgeting Cloud Service is a preview announcement, with controlled availability expected in calendar year 2012. For more information, check out the links below: Press Release | Web site. If you have any questions or need additional information, please feel free to contact me at [email protected].

    Read the article

  • More Stuff less Fluff

    - by brendonpage
    Originally posted on: http://geekswithblogs.net/brendonpage/archive/2013/11/08/more-stuff-less-fluff.aspx

    YAGNI – "You Aren't Going To Need It". This is an acronym commonly used in software development to remind developers to only write what they need. This acronym exists because software developers have gotten into the habit of writing everything they need to solve a problem and then everything they think they're going to possibly need in the future. Since we can't predict the future, this results in a large portion of the code that we write never being used. That extra code causes unnecessary complexity, which makes it harder to understand and harder to modify when we inevitably have to write something that we didn't think of.

    I've known about YAGNI for some time now but I never really got it. The words made sense and the idea was clear but the concept never sank in. I was one of those devs who'd happily write a ton of code in anticipation of future needs. In my mind this was an essential part of writing high quality code. I didn't realise that in doing so I was actually writing low quality code. If you are anything like me you are probably thinking "Lies and propaganda! High quality code needs to be future proof." I agree! But what makes code future proof? If we could see into the future the answer would be simple: code that allows for or meets all future requirements. Since we can't see the future, the best we can do is write code that can easily adapt to future requirements, and this means writing flexible code. Flexible code is:

    - Fast to understand.
    - Fast to add to.
    - Fast to modify.

    To be flexible, code has to be simple; this means only making it as complex as it needs to be to meet those 3 criteria. That is high quality code. YAGNI! The art is in deciding where to place the seams (abstractions) that will give you flexibility without making decisions about future functionality. Robert C. Martin explains it very nicely: he says a good architecture allows you to defer decisions, because if you can defer a decision then you have the flexibility to change it.

    I've recently had a YAGNI experience which brought this all into perspective. I was working on a new project which had multiple clients that connect to a server hosted in the cloud. I was tasked with adding a feature to the desktop client that would allow users to capture items that would then be saved to the cloud. My immediate thought was "Hey, we have multiple clients, so I should build a web service for these items, that way we can access them from other clients", so I went to work and this is what I created. I stood back and gazed upon what I'd created with a warm fuzzy feeling. It was beautiful! Then the time came for the team to use the design I'd created for another feature with a new entity. Let's just say that they didn't get the same warm fuzzy feeling that I did when they looked at the design. After much discussion they eventually got it through to me that I'd bloated the design based on an assumption of future functionality. After much more discussion we cut the design down to the following. This design gives us future flexibility with no extra work; it is as complex as it needs to be. It has been a couple of months since this incident and we still haven't needed to access either of the entities from other clients. Using the simpler design allowed us to do more stuff with less stuff!
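
    The post's original design diagrams are not reproduced here, but the underlying idea, placing a small seam where flexibility is cheap and deferring the "other clients will need this" decision, can be sketched in a few lines. The sketch below is illustrative only (the actual project was a .NET desktop client and web service); the names ItemStore, LocalItemStore and capture_item are hypothetical stand-ins, not the author's code.

        # A minimal, illustrative sketch of "defer the decision": the client feature
        # depends on a small seam (ItemStore) rather than on a speculative web service.
        from dataclasses import dataclass, field
        from typing import Protocol


        @dataclass
        class Item:
            name: str
            notes: str = ""


        class ItemStore(Protocol):
            """The seam: just enough abstraction to swap storage later."""
            def save(self, item: Item) -> None: ...
            def all(self) -> list[Item]: ...


        @dataclass
        class LocalItemStore:
            """Today's need: keep captured items locally (in memory here for brevity)."""
            _items: list[Item] = field(default_factory=list)

            def save(self, item: Item) -> None:
                self._items.append(item)

            def all(self) -> list[Item]:
                return list(self._items)


        def capture_item(store: ItemStore, name: str, notes: str = "") -> Item:
            """The feature talks only to the seam, not to any speculative service layer."""
            item = Item(name, notes)
            store.save(item)
            return item


        if __name__ == "__main__":
            store = LocalItemStore()
            capture_item(store, "Example item")
            print([i.name for i in store.all()])
            # If other clients ever do need the items, a CloudItemStore implementing
            # the same two methods can be introduced then, and not before (YAGNI).

    The seam costs almost nothing now, but it is the place where a cloud-backed implementation could later be slotted in without touching the capture feature.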

    Read the article

  • Head in the Clouds

    - by Tony Davis
    We're just past the second anniversary of the launch of Windows Azure. A couple of years' experience with Azure in the industry has provided some obvious success stories, but has deflated some of the initial marketing hyperbole. As a general principle, Azure seems to work well in providing a Service-Oriented Architecture for services in enterprises that suffer wide fluctuations in demand. Instead of being obliged to provide hardware sufficient for the occasional peaks in demand, one can hire capacity only when it is needed, and the cost of hosting an application is no longer a capital cost. It enables companies to avoid having to scale out hardware for peak periods only to see it underused for the rest of the time. A customer-facing application such as a concert ticketing system, which suffers high demand in short, predictable bursts of activity, is a great example of an application that would work well in Azure.

    However, moving existing applications to Azure isn't something to be done on impulse. Unless your application is .NET-based, and consists of 'stateless' components that communicate via queues, you are probably in for a lot of redevelopment work. It makes most sense for IT departments who are already deep in this .NET mindset, and who also want 'grown-up' methods of staging, testing, and deployment. Azure fits well with this culture and offers, as a bonus, good Visual Studio integration. The most commonly stated barrier to porting these applications to Azure is the problem of reconciling the use of the cloud with legislation for data privacy and security. Putting databases in the cloud is a sticky issue for many and impossible for some, due to compliance and security issues, the need for direct control over data, and so on.

    In the face of feedback from the early adopters of Azure, Microsoft has broadened the architectural choices to cater for a wide range of requirements. Windows Azure now offers a wide range of storage options: as well as SQL Azure Database (SAD) there is Azure storage, the unstructured 'BLOB and Entity-Attribute-Value' NoSQL alternative (which equates more closely with folders and files than a database), and services such as OData; developers who are programming for Windows Azure can simply choose the one most appropriate for their needs. Secondly, and crucially, the Windows Azure architecture allows you the freedom to produce hybrid applications, where only those parts that need cloud-based hosting are deployed to Azure, whereas those parts that must unavoidably be hosted in a corporate datacenter can stay there. By using a hybrid architecture, it will seldom, if ever, be necessary to move an entire application to the cloud, along with personal and financial data. For example, we could port to Azure only those parts of our ticketing application that capture and process ticket orders. Once an order is captured, the financial side can be processed in our own data center.

    In short, Windows Azure seems to be a very effective way of providing services that are subject to wide but predictable fluctuations in demand. Have you come to the same conclusions, or do you think I've got it wrong? If you've had experience with Azure, would you recommend it? It would be great to hear from you. Cheers, Tony.
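
    To make the ticketing example a little more concrete, the hybrid split can be sketched as two small pieces: a cloud-hosted capture step that accepts orders and places them on a queue, and an on-premise worker that drains the queue and settles the financial side against internal systems. This is only a language-neutral sketch under stated assumptions (the editorial describes a .NET/Azure scenario); the in-memory queue and the process_payment function below are placeholders, not a real Azure API.

        # Illustrative hybrid split: capture in the cloud, settle payments on-premise.
        # queue.Queue stands in for a durable cloud queue; process_payment stands in
        # for the on-premise financial system that never leaves the data center.
        import queue
        import uuid

        order_queue: "queue.Queue[dict]" = queue.Queue()


        def capture_order(customer: str, event: str, seats: int) -> str:
            """Cloud-hosted step: validate the order and enqueue it. No card data here."""
            if seats < 1:
                raise ValueError("at least one seat required")
            order = {"id": str(uuid.uuid4()), "customer": customer,
                     "event": event, "seats": seats}
            order_queue.put(order)  # in Azure this would be a durable queue, not in-memory
            return order["id"]


        def process_payment(order: dict) -> None:
            """On-premise stand-in: personal/financial processing stays in-house."""
            print(f"settling order {order['id']} for {order['customer']} ({order['seats']} seats)")


        def on_premise_worker() -> None:
            """On-premise worker: drain the queue and hand each order to internal systems."""
            while not order_queue.empty():
                process_payment(order_queue.get())


        if __name__ == "__main__":
            capture_order("Alice", "Spring Concert", 2)
            capture_order("Bob", "Spring Concert", 4)
            on_premise_worker()

    The point of the split is that only the bursty, customer-facing capture step needs elastic cloud capacity; the sensitive settlement step keeps its existing home.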

    Read the article

  • Get Fanatical About Your Followers

    - by Mike Stiles
    In the fourth of our series of discussions with Aberdeen’s Trip Kucera, we touch on what fans of your brand have come to expect in exchange for their fandom.

    Spotlight: Around the Oracle Social office, we live for football. So when we think of a true “fan” of a brand, something on the level of a football fan is what comes to mind. But are brands trying to invest in fans at that same level?

    Trip: Yeah, if you’re a football fan, this is definitely your time of year. And if you’ve been to any NFL games recently, especially if you hadn’t been for a few years previously, you may have noticed that from the cup holders to in-stadium Wi-Fi, there’s an increasing emphasis being placed on “fan-focused” accommodations. That’s what they’re known as in the stadium business.

    Spotlight: How are brands doing in that fan-focused arena?

    Trip: Remember, “fan” is short for “fanatical.” Brands can definitely learn from the way teams have become fanatical about their fans, or in the social media world, their followers. Many companies consider a segment of their addressable social audience as true fans; I’ve even heard the term “super-fans” used. So just as fans know and can tell you nearly everything about their favorite team, our research shows that there’s a lot of value in getting to know your social audience—your followers—at a deeper level.

    Spotlight: So did your research show there’s a lot to be gained by making fandom a two-way street?

    Trip: Aberdeen’s new social relationship management research suggests that companies should develop capabilities to better analyze their social audience at a more granular level. Countless “ripped from the headlines” examples, from “United Breaks Guitars” to the most recent British Airways social fiasco we talked about a few weeks ago, show how social can magnify the impact of a single customer voice.

    Spotlight: So how do the companies who are executing social most successfully do that?

    Trip: Leaders, which are the top-performing companies in Aberdeen’s study, are showing the value of identifying and categorizing your social audience. You should certainly treat every customer as if they have 10,000 followers, because they just might, but you can also proactively engage with high-value customers and high-value influencers. Getting back to the football analogy, it’s like how teams strive to give every guest a great experience, but they really roll out the red carpet for those season ticket and luxury box holders.

    Spotlight: I’m not allowed in luxury boxes, so you’ll have to tell me what that’s like. But what is the brand equivalent of rolling out the red carpet?

    Trip: Leaders are nearly three times more likely than Followers to have a process in place that identifies key social influencers for engagement, and more than twice as likely to identify customer advocates for social outreach. This is the kind of knowledge that gives companies the ability to better target social messaging and promotions like we talked about in our last discussion, as well as a basis for understanding how to measure the impact of their social media programs. I’ll give you an example. I hosted an event at one of my favorite restaurants recently. I had mentioned them in a Tweet several weeks before the event, and on the day of the event, they Tweeted out that they were looking forward to seeing me that evening for the event. It’s a small thing, but it had a big impact and I’d certainly go back as a result.

    Spotlight: So what specifically can brands use and look at to determine where their potential super-fans are?

    Trip: Social graph analysis, which looks at both the demographic/psychographic trends as well as the behavioral connections, can surface important brand value. Aberdeen’s PR and Brand Management research indicated that top-performing companies are more than three times more likely than Followers to both determine demographic trends through social listening (44% vs. 13%), and to identify meaningful customer segments through social (44% vs. 12%). This kind of brand-level insight can complement and enrich traditional market research. But perhaps even more importantly, it can serve as an early warning system for customer experience failures.

    @mikestiles
    Photo: freedigitalphotos.net

    Read the article

  • Welcome To The Nashorn Blog

    - by jlaskey
    Welcome to all. Time to break the ice and instantiate The Nashorn Blog. I hope to contribute routinely, but we are very busy, at this point, preparing for the next development milestone and, of course, getting ready for open source. So, if there are long gaps between postings, please forgive.

    We're just coming back from JavaOne and are stoked by the positive response to all the Nashorn sessions. It was great for the team to have the front and centre slide from Georges Saab early in the keynote. It seems we have support coming from all directions. Most of the session videos are posted. Check out the links.

    Nashorn: Optimizing JavaScript and Dynamic Language Execution on the JVM. Unfortunately, Marcus – the code generation juggernaut – got saddled with the first session of the first day. Still, he had a decent turnout. The talk focused on issues relating to optimizations we did to get good performance from the JVM. Much yet to be done but looking good.

    Nashorn: JavaScript on the JVM. This was the main talk about Nashorn. I delivered the little-bit-of-this-and-a-little-bit-of-that session with an overview, a follow-up on the open source announcement, a run through a few of the Nashorn features and some demos. The room was SRO, about 250±. High points: Sam Pullara, from Twitter, came forward to describe how painless it was to get Mustache.js up and running (20x over Rhino), and John Ceccarelli, from NetBeans, came forward to describe how Nashorn has become an integral part of NetBeans. A healthy Q & A at the end was very encouraging.

    Meet the Nashorn JavaScript Team. Michel, Attila, Marcus and I hosted a Q & A. There was only a handful of people in the room (we assume it was because of a conflicting session ;-)). Most of the questions centred around Node.jar, which leads me to believe Nashorn + Node.jar is what has the most interest. Akhil, Mr. Node.jar, sitting in the audience, fielded the Node.jar questions.

    Nashorn, Node, and Java Persistence. Doug Clarke, Akhil and I discussed the title topics, followed by a lengthy Q & A (security had to hustle us out). 80 or so in the room. Lots of questions about Node.jar. It was great to see Doug's use of Nashorn + JPA. Nashorn in action, with such elegance and grace.

    Putting the Metaobject Protocol to Work: Nashorn’s Java Bindings. Attila discussed how he applied Dynalink to Nashorn. Good turnout for this session as well. I have a feeling that once people discover and embrace this hidden gem, great things will happen for all languages running on the JVM.

    Finally, there were quite a few JavaOne sessions that focused on non-Java languages and their impact on the JVM. I've always believed that one's tool belt should carry a variety of programming languages, not just for domain/task applicability, but also to enhance your thinking and approaches to problem solving. For the most part, future blog entries will focus on 'how to' in Nashorn, but if you have any suggestions for topics you want discussed, please drop a line. Cheers.

    Read the article

  • broken upgrade from 10.04 to 12.04 on a VPS - recoverable?

    - by HorusKol
    I have a VPS hosted 1500 km away. It originally came with 9.10 - and this morning I decided that I really should get to an LTS release, and figured I'd jump to 12.04. Researching, I discovered that there is no direct path between 9.10 and 12.04, but that I could upgrade via 10.04. After backing up my data, I dove in. The upgrade to 10.04 was successful, and I proceeded to upgrade to 12.04.

    Things started to go wrong. First, I got an error with GLIBC - I retried and got the same error. That's when I stopped the upgrade. I then tried another round of apt-get update && apt-get upgrade and got a list of "unmet dependencies":

        apt: Depends: ubuntu-keyring but it is not going to be installed
             Depends: libc6 (>= 2.15) but 2.11.1-0ubuntu7.11 is to be installed
             Depends: libstdc++6 (>= 4.6) but 4.4.3-4ubuntu5.1 is to be installed
             PreDepends: dpkg (>= 1.15.7.2) but 1.15.5.6ubuntu4.6 is to be installed
        apt-utils: Depends: libapt-pkg-libc6.10-6-4.8
        libapt-inst1.4: Depends: libc6 (>= 2.14) but 2.11.1-0ubuntu7.11 is to be installed
        libapt-pkg4.12: Depends: libc6 (>= 2.15) but 2.11.1-0ubuntu7.11 is to be installed
                        Depends: libstdc++6 (>= 4.6) but 4.4.3-4ubuntu5.1 is to be installed
        libc6: Depends: libc-bin (= 2.11.1-0ubuntu7.11) but 2.15-0ubuntu10.2 is to be installed
        libept0: Depends: libapt-pkg-libc6.10-6-4.8
        libnih-dbus1: Depends: libnih1 (= 1.0.3-4ubuntu9) but 1.0.1-1 is to be installed

    I tried to see if I could do something about these - using apt-get -f install. This told me that I would need to upgrade my kernel. I found instructions on how to do this, but when I ran apt-get to install the new linux headers, I got the same dependency errors. I found another answer here where someone else had had an interruption in their upgrade - and tried the solution that worked for them:

        sudo apt-get -f dist-upgrade

    This resulted in the error:

        E: Could not perform immediate configuration on 'python2.7-minimal'. Please see man 5 apt.conf under APT::Immediate-Configure for details. (2)

    I tried to resolve this by:

        apt-get install -o APT::Immediate-Configure=false -f apt python-minimal

    But this simply ended up with this last list of dependency errors:

        apt: Depends: ubuntu-keyring but it is not going to be installed
             Depends: libc6 (>= 2.15) but 2.11.1-0ubuntu7.11 is to be installed
             Depends: libstdc++6 (>= 4.6) but 4.4.3-4ubuntu5.1 is to be installed
             PreDepends: dpkg (>= 1.15.7.2) but 1.15.5.6ubuntu4.6 is to be installed
        apt-utils: Depends: libapt-pkg-libc6.10-6-4.8
        libapt-inst1.4: Depends: libc6 (>= 2.14) but 2.11.1-0ubuntu7.11 is to be installed
        libapt-pkg4.12: Depends: libc6 (>= 2.15) but 2.11.1-0ubuntu7.11 is to be installed
                        Depends: libstdc++6 (>= 4.6) but 4.4.3-4ubuntu5.1 is to be installed
        libc6: Depends: libc-bin (= 2.11.1-0ubuntu7.11) but 2.15-0ubuntu10.2 is to be installed
        libept0: Depends: libapt-pkg-libc6.10-6-4.8
        libnih-dbus1: Depends: libnih1 (= 1.0.3-4ubuntu9) but 1.0.1-1 is to be installed
        python: Depends: python-minimal (= 2.6.5-0ubuntu1) but 2.7.3-0ubuntu2 is to be installed
        python-apt: Depends: libapt-pkg-libc6.10-6-4.8
        python-minimal: Depends: python2.7-minimal (>= 2.7.3) but it is not going to be installed
                        Breaks: python-support (< 1.0.10ubuntu2) but 1.0.4ubuntu1 is to be installed
        synaptic: Depends: libapt-pkg-libc6.10-6-4.8

    Any ideas on how to dig out of this hole?

    Read the article

  • Olympics data available for all on Windows Azure SQL Database and Power View

    - by jamiet
    Are you looking around for some decent test data for your BI demos? Well, if so, Microsoft have provided some data about all medals won at the Olympic Games (1900 to 2008) at OlympicsData workbook - Excel, SSIS, Azure sample; it provides analysis over athletes, countries, medal type, sport, discipline and various other dimensions. The data has been provided in an Excel workbook along with instructions on how to load the data into a Windows Azure SQL Database using SQL Server Integration Services (SSIS).

    Frankly though, the rigmarole of standing up your own Windows Azure SQL Database (ok, SQL Azure database) is both costly (SQL Azure isn’t free) and time consuming (the provided instructions aren’t exactly an idiot’s guide, and getting SSIS to work properly with Excel isn’t a barrel of laughs either). To ease the pain for all you BI folks out there that simply want to party on the data, I have loaded it all into the SQL Azure database that I use for hosting AdventureWorks on Azure. You can read more about AdventureWorks on Azure below; I’ll summarise here by saying it is a SQL Azure database provided for the use of the SQL Server community and which is supported by voluntary donations. To view the data, the credentials you need are:

        Server    mhknbn2kdz.database.windows.net
        Database  AdventureWorks2012
        User      sqlfamily
        Password  sqlf@m1ly

    Type those into SSMS and away you go; the data is provided in four tables: [olympics].[Sport], [olympics].[Discipline], [olympics].[Event] & [olympics].[Medalist].

    I figured this would be a good candidate for a Power View report so I fired up Excel 2013 and built such a report to slice’n’dice through the data – here are some screenshots that should give you a flavour of what is available:

    - A view of all the available data
    - Where do all the gymnastics medals go?
    - Which countries do the top ten all-time medal winners come from?

    You get the idea. There is masses of information here and if you have Excel 2013 handy, Power View provides a quick and easy way of surfing through it. To save you the bother of setting up the Power View report yourself you can have the one that I took these screenshots from; it is available on my SkyDrive at OlympicsAnalysis.xlsx so just hit the link and download to play to your heart’s content. Party on, people!

    As I said above, the data is hosted on a SQL Azure database that I use for hosting “AdventureWorks on Azure”, which I first announced in March 2013 at AdventureWorks2012 now available for all on SQL Azure. I’ll repeat the pertinent parts of that blog post here: I am pleased to announce that as of today … [AdventureWorks2012] now resides on SQL Azure and is available for anyone, absolutely anyone, to connect to and use for their own means. This database is free for you to use but SQL Azure is of course not free, so before I give you the credentials please lend me your eyes for a short while longer. AdventureWorks on Azure is being provided for the SQL Server community to use and so I am hoping that that same community will rally around to support this effort by making a voluntary donation to support the upkeep which, going on current pricing, is going to be $119.88 per year. If you would like to contribute to keep AdventureWorks on Azure up and running for that full year please donate via PayPal to [email protected]. Any amount, no matter how small, will help. If those 50+ people that retweeted me beforehand all contributed $2 then that would just about be enough to keep this up for a year.
    If the community contributes more than we need then there are a number of additional things that could be done:

    - Host additional databases (Northwind anyone??)
    - Host in more datacentres (this first one is in Western Europe)
    - Make a charitable donation

    That last one, a charitable donation, is something I would really like to do. The SQL community have proved before that they can make a significant contribution to charitable organisations through purchasing the SQL Server MVP Deep Dives book, and I harbour hopes that AdventureWorks on Azure can continue in that vein. So please, if you think AdventureWorks on Azure is something that is worth supporting, please make a contribution. I’d like to emphasize that last point. If my hosting this Olympics data is useful to you please support this initiative by donating. Thanks in advance. @Jamiet
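
    For anyone who prefers to poke at the data programmatically rather than through SSMS or Power View, the published credentials and table names are enough to run a quick query. The sketch below is illustrative only: it assumes pyodbc and a locally installed SQL Server ODBC driver (the driver name is an assumption and may differ on your machine), and it simply samples a few rows from [olympics].[Medalist].

        # Quick look at the Olympics data using the connection details from the post.
        # Assumes: pip install pyodbc, plus an ODBC driver for SQL Server; the driver
        # name below is an assumption - adjust it to whatever is installed locally.
        import pyodbc

        conn_str = (
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=mhknbn2kdz.database.windows.net;"
            "DATABASE=AdventureWorks2012;"
            "UID=sqlfamily;"
            "PWD=sqlf@m1ly;"
            "Encrypt=yes;"
        )

        with pyodbc.connect(conn_str, timeout=10) as conn:
            cursor = conn.cursor()
            # Column names aren't listed in the post, so just sample whatever is there.
            cursor.execute("SELECT TOP (5) * FROM [olympics].[Medalist];")
            print([col[0] for col in cursor.description])
            for row in cursor.fetchall():
                print(row)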

    Read the article

  • Free hosting solution for a very low-traffic website [duplicate]

    - by user966939
    This question already has an answer here: How to find web hosting that meets my requirements? (4 answers)

    I run a very low-traffic website (about 40 users, basically all of which are daily active on the site). I don't see it changing anytime soon either, as there is no way to sign up on the site right now. Until now I have just been using a sub-directory on a friend's host (shared) to host the web site. But in only a few weeks from now, his subscription will end, and he has no plans on renewing it. So of course this means I'll have to move on to something else. But I don't think I'll find someone who'd be willing to share a... shared host with me again. And besides, the software used on that server is ancient (PHP 4.4.9 + MySQL 4.1.22).

    There's one obvious solution that comes to mind, I guess: choose a better host and pay for it myself. The problem here is that I have no real fixed income, as I'm only a student. So even if the pricing is dirt cheap, I just can't be certain I will be able to afford it, every single month, for... at least 2 years maybe?

    So I've looked at free hosting solutions instead. The least requirement I had was that it was completely free of ads. But no matter where I look, I always find something in a corner or two ("what can you expect from a free host?" - yeah I know, but I guess it was worth a shot). For example, on Byethost (one of the free hosts I tried), if you trigger a PHP error while error reporting is set to E_ALL, you will spawn some hidden ad... Besides Byethost, I've tried 000Webhost, x10Hosting, 2Freehosting/1Freehosting, Wink.ws, and they are only worse.

    Okay, I'm running low on ideas. But! What if I just hosted the site myself, on my own computer? That could work. I actually do have my computer on practically 24/7. But not really. Sometimes I need to reboot it, and sometimes we even have power outages. And what if the hardware needs an upgrade? It's not such a big deal for me if the site went down, because I know what's going on; but what about the users? If I do decide to host it myself, is there some way to show users an alternate page instead of them just seeing a generic "server not found" page in the browser when the site is not accessible? Or is there something I have been missing out on? Is there a different kind of "web hosting" solution out there that I haven't heard of?

    Here is what I'm really looking for:

    - Free (as in, no costs)
    - NO ads
    - Bandwidth enough for a low-traffic forum with roughly 40 users
    - (Semi-)Up-to-date PHP and MySQL (at least not older than a year)
    - No standard (non-extension) PHP functions turned off - such as sleep()
    - The mbstring extension is enabled
    - Disk space: at least 5 MB
    - At least one MySQL database

    Some bonus points would be:

    - Max execution time of PHP scripts can be set
    - Remote access to MySQL database

    What would be the best solution for me? Is there one?

    Read the article

  • Do Great Work

    - by user12601034
    Have you ever attended an online conference and actually had a desire to attend all of it?? Yesterday I attended the first day of the Great Work MBA program, sponsored by Box of Crayons and hosted by Michael Bungay Stanier. The topic of the day was “Grounding Yourself,” and the day featured five speakers on five different topics. I have to admit that I started the first session with kind of a “blech” feeling that I didn’t really want to participate, but for some reason I did. So I listened to the first session, and I was hooked. I ended up listening to all of the sessions for the day, and I had some great take-aways from the sessions – my highlights included:

    - The opposite of bravery isn’t fear, it’s settling. In essence, you need to be brave in order to accomplish anything. If you’re settling, you’re not being brave, and your accomplishments will likely be lackluster.
    - Bravery requires confidence and permission. You need to work at being brave by taking small wins, build them up and then take slightly larger risks. Additionally, you need to “claim your own crown.” Nobody in the business world is going to give you permission to be a guru in X – you need to give yourself permission to become a guru in X and then do it.
    - Fall in love with obstacles. Everyone is going to face some form of failure. One way to deal with this is to fall in love with solving the puzzle of obstacles. You don’t have to hit it if you can go around it.
    - Understanding purpose brings out the best in people and the best people. As a leader, drawing in people who are passionate and highly motivated about their work creates velocity for your organization. Being clear about purpose is the first step in doing this.
    - You must own your own story. Everything about you creates a “unique you” that is distinct from everyone else. As you take ownership of this, it becomes part of your strength. It’s not a strength if you’re running away from it.
    - Focus on what’s right. Be aware of your tendency to interpret a situation a certain way and differentiate between helpful and unhelpful interpretations.
    - Three questions for how to think differently: 1) Why? 2) Who says so? 3) What would happen if? These three questions can help you build alternative perspectives and options that can increase resiliency.

    Even though this first day was focused on “Grounding Yourself,” I see plenty of application in the corporate environment for both individuals and leaders of teams. To apply these highlights to my work environment, I would do the following:

    - Understand the purpose – of my company, of my team and of my role on the team. If I know the purpose, I know what I need to bring to the table to make me, my team and my company successful.
    - Declare your goals… your BHAGs (big, hairy, audacious goals). Have the confidence to declare what you and/or your team is going to accomplish. Sure, you might have to re-state those goals down the line, but you can learn from that as well.
    - Get creative about achieving your goals. Break down your obstacles by asking yourself what is going to stop you from achieving your goals and then, for each obstacle, ask those three questions: Why? Who says so? What would happen if?
    - Focus on what’s right. I had a manager who asked us to write status reports every week. “Status” consisted of 1) What did I accomplish; 2) What will I accomplish next week; 3) How can my manager help me. The focus of our status report was always “what’s right” (“what’s wrong” was always a conversation at the point in time it was needed).
I’m normally a skeptic of online webcasts/conferences, and I normally expect to take away maybe one or two ideas. I’m really glad, however, that I took the time to listen to all of the sessions yesterday, and I hope that my take-aways inspire you to think about how you might do great work also. --

    Read the article

  • Security Access Control With Solaris Virtualization

    - by Thierry Manfe-Oracle
    Numerous Solaris customers consolidate multiple applications or servers on a single platform. The resulting configuration consists of many environments hosted on a single infrastructure, and security constraints sometimes exist between these environments. Recently, a customer consolidated many virtual machines belonging to both their Intranet and Extranet on a pair of SPARC Solaris servers interconnected through Infiniband. Virtual machines were mapped to Solaris Zones, and one security constraint was to prevent SSH connections between the Intranet and the Extranet. This case study gives us the opportunity to understand how the Oracle Solaris Network Virtualization Technology —a.k.a. Project Crossbow— can be used to control outbound traffic from Solaris Zones.

    Solaris Zones from both the Intranet and Extranet use an Infiniband network to access a ZFS Storage Appliance that exports NFS shares. Solaris global zones on both SPARC servers mount iSCSI LUs exported by the Storage Appliance. Non-global zones are installed on these iSCSI LUs. With no security hardening, if an Extranet zone gets compromised, the attacker could try to use the Storage Appliance as a gateway to the Intranet zones, or even worse, to the global zones, as all the zones are reachable from this node.

    One solution consists in using Solaris Network Virtualization Technology to stop outbound SSH traffic from the Solaris Zones. The virtualized network stack provides per-network-link flows. A flow classifies network traffic on a specific link. As an example, on the network link used by a Solaris Zone to connect to the Infiniband, a flow can be created for TCP traffic on port 22, thereby a flow for the ssh traffic. A bandwidth can be specified for that flow and, if set to zero, the traffic is blocked. Last but not least, flows are created from the global zone, which means that even with root privileges in a Solaris zone an attacker cannot disable or delete a flow. With the flow approach, the outbound traffic of a Solaris zone is controlled from outside the zone. Schema 1 describes the new network setting once the security has been put in place.

    Here are the instructions to create a Crossbow flow as used in Schema 1:

        (GZ)# zoneadm -z zonename halt

    ...halts the Solaris Zone.

        (GZ)# flowadm add-flow -l iblink -a transport=TCP,remote_port=22 -p maxbw=0 sshFilter

    ...creates a flow on the IB partition "iblink" used by the zone to connect to the Infiniband. This IB partition can be identified by intersecting the output of the commands 'zonecfg -z zonename info net' and 'dladm show-part'. The flow is created on port 22, for the TCP traffic, with a zero maximum bandwidth. The name given to the flow is "sshFilter".

        (GZ)# zoneadm -z zonename boot

    ...restarts the Solaris zone now that the flow is in place.

    Solaris Zones and Solaris Network Virtualization enable SSH access control on Infiniband (and on Ethernet) without the extra cost of a firewall. With this approach, no change is required on the Infiniband switch. All the security enforcements are put in place at the Solaris level, minimizing the impact on the overall infrastructure. The Crossbow flows come in addition to many other security controls available with Oracle Solaris, such as IPFilter and Role Based Access Control, that can be used to tackle security challenges.

    Read the article

  • Monitoring Windows Azure Service Bus Endpoint with BizTalk 360?

    - by Michael Stephenson
    I'm currently working with a customer who is undergoing an initiative to expose some of their line of business applications to external partners and SAAS applications and as part of this we have been looking at using the Windows Azure Service Bus. For the first part of the project we were focused on some synchronous request response scenarios where an external application would use the Service Bus relay functionality to get data from some internal applications. When we were looking at the operational monitoring side of the solution it was obvious that although most of the normal server monitoring capabilities would be required for the on premise components we would have to look at new approaches to validate that the operation of the service from outside of the organization was working as expected. A number of months ago one of my colleagues Elton Stoneman wrote about an approach I have introduced with a number of clients in the past where we implement a diagnostics service in each service component we build. This service would allow us to make a call which would flex some of the working parts of the system to prove it was working within any SLA. This approach is discussed on the following article: http://geekswithblogs.net/EltonStoneman/archive/2011/12/12/the-value-of-a-diagnostics-service.aspx In our solution we wanted to take the same approach but we had to consider that the service clients were external to the service. We also had to consider that by going through Windows Azure Service Bus it's not that easy to make most of your standard monitoring solutions just give you an easy way to do this. In a previous article I have described how you can use BizTalk 360 to monitor things using a custom extension to the Web Endpoint Manager and I felt that we could use this approach to provide an excellent way to monitor our service bus endpoint. The previous article is available on the following link: http://geekswithblogs.net/michaelstephenson/archive/2012/09/12/150696.aspx   The Monitoring Solution BizTalk 360 currently has an easy way to hook up the endpoint manager to a url which it will then call and if a successful response is returned it then considers the endpoint to be in a healthy state. We would take advantage of this by creating an ASP.net web page which would be called by BizTalk 360 and behind this page we would implement the functionality to call the diagnostics service on our Service Bus endpoint. The ASP.net page could include logic to work out how to handle the response from the diagnostics service. For example if the overall result of the diagnostics service was successful but the call to the diagnostics service was longer than a certain amount of time then we could return an error and indicate the service is taking too long. The following diagram illustrates the monitoring pattern.   The diagnostics service which is hosted in the line of business application allows us to ping a simple message through the Azure Service Bus relay to the WCF services in the LOB application and we they get a response back indicating that the service is working fine. To implement this I used the exact same approach I described in my previous post to create a custom web page which calls the diagnostics service and then it would return an HTTP response code which would depend on the error condition returned or a 200 if it was successful. 
One of the limitations of this approach is that the competing consumer pattern for listening to messages from service bus means that you cannot guarantee which server would process your diagnostics check message but with BizTalk 360 you could simply add multiple endpoint checks so that it could access the individual on-premise web servers directly to ensure that each server is working fine and then check that messages can also be processed through the cloud. Conclusion It took me about 15 minutes to get a proof of concept of this up and running which was able to monitor our web services which had been exposed via Windows Azure Service Bus. I was then able to inherit all of the monitoring benefits of BizTalk 360 to provide an enterprise class monitoring solution for our cloud enabled API.
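
    To make the shape of the custom check page a little more concrete, here is a rough, language-neutral sketch of the same idea (the article's actual implementation is an ASP.NET page wired into BizTalk 360's Web Endpoint Manager). The diagnostics URL, the latency threshold and the tiny HTTP listener below are illustrative assumptions, not the article's code or a BizTalk 360 API.

        # Sketch of a monitoring endpoint: BizTalk 360 (or any URL monitor) polls this
        # page; it calls the diagnostics service behind the Service Bus relay and maps
        # the outcome to an HTTP status code. All names and URLs here are placeholders.
        import time
        import urllib.error
        import urllib.request
        from http.server import BaseHTTPRequestHandler, HTTPServer

        DIAGNOSTICS_URL = "https://example.servicebus.windows.net/lob/diagnostics/ping"  # placeholder
        MAX_SECONDS = 5.0  # SLA threshold: a healthy but slow response still counts as a failure


        class HealthCheckHandler(BaseHTTPRequestHandler):
            def do_GET(self) -> None:
                started = time.monotonic()
                try:
                    with urllib.request.urlopen(DIAGNOSTICS_URL, timeout=30) as resp:
                        ok = resp.status == 200
                except (urllib.error.URLError, OSError):
                    ok = False
                elapsed = time.monotonic() - started

                if ok and elapsed <= MAX_SECONDS:
                    self.send_response(200)   # the monitor treats this as healthy
                    body = b"OK"
                elif ok:
                    self.send_response(504)   # responded, but outside the SLA window
                    body = b"Diagnostics call too slow"
                else:
                    self.send_response(502)   # diagnostics call failed outright
                    body = b"Diagnostics call failed"
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(body)


        if __name__ == "__main__":
            HTTPServer(("0.0.0.0", 8080), HealthCheckHandler).serve_forever()

    The endpoint manager is then simply pointed at this page's URL: a 200 marks the Service Bus endpoint healthy, and anything else raises an alert, which is exactly the contract the article relies on.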

    Read the article

  • Extreme Makeover, Phone Edition: Comcast's xfinity

    Mobile Makeover
    For many companies the first foray into Windows Phone 7 (WP7) may be in porting their existing mobile apps. It is tempting to simply transfer existing functionality, avoiding the additional design costs. Readdressing business needs and taking advantage of the WP7 platform can reduce cost and is essential to a successful re-launch. To better understand the advantage of new development, let's examine a conceptual upgrade of Comcast's existing mobile app.

    Before
    Comcast has a great mobile app that provides several key features: the ability to browse the lineup using a guide, a client for Comcast email accounts, an On Demand gallery, and much more. We will leverage these and build on them using some of the incredible WP7 features.

    After
    With the proliferation of DVRs (Digital Video Recorders) and a variety of media devices (TV, PC, Mobile), content providers are challenged to find creative ways to build their brands. Every client touch point must provide both value-added services as well as opportunities for marketing and up-sale; WP7 makes it easy to focus on those opportunities. The new app is an excellent vehicle for presenting Comcast's newly rebranded TV, Voice, and Internet services. These services now fly under the banner of xfinity and have been expanded to provide the best experience for Comcast customers. The Windows Phone 7 app will increase the surface area of this service revolution.

    The home menu is simplified and highlights Comcast's Triple Play: Voice, TV, and Internet. The inbox has been replaced with a messages view, and message management is handled by a WP7 hub. The hub presents emails, tweets, and IMs from Comcast and other viewers the user follows on Twitter. The popular view orders shows based on the user's viewing history and current cable package. The first show, Glee, is both popular and participating in a conceptual co-marketing effort, so it receives prime positioning. The second spot goes to a hit show on a premium channel, in this example HBO's The Pacific, encouraging viewers to upgrade for this premium content. The remaining spots are ordered based on viewing history and popularity. Tapping the play button moves the user to the theatre where they can watch previews or full episodes streaming from Fancast. Tapping an extra presents the user with show details as well as interactive content that may be included as part of co-marketing efforts.

    Co-Marketing with Dynamic Content
    The success of Comcast's services is tied to the success of the networks and shows it purveys, making co-marketing efforts essential. In this concept, FOX is co-marketing its popular show Glee. A customized panorama is updated with the latest gleeks' tweets, streaming HD episodes, and extras featuring photos and video of the cast. If WP7 apps can be dynamically extended with web-hosted .xap files, including sandboxed partner experiences would enable interactive features such as the Gleek Peek, in which a viewer can select a character from a panorama to view the actor's profile. This dynamic inline experience has a tailored appeal to aspiring creatives and is technically possible with Windows Phone 7.

    Summary
    The conceptual Comcast mobile app for Windows Phone 7 highlights just a few of the incredible experiences and business opportunities that can be unlocked with this latest mobile solution. It is critical that organizations recognize and take full advantage of these new capabilities.
    Simply porting existing mobile applications does not leverage these powerful tools; re-examining existing applications and upgrading them to Windows Phone 7 will prove essential to the continued growth and success of your brand.

    Read the article

  • The Growing Importance of Network Virtualization

    - by user12608550
    The Growing Importance of Network Virtualization We often focus on server virtualization when we discuss cloud computing, but just as often we neglect to consider some of the critical implications of that technology. The ability to create virtual environments (or VEs [1]) means that we can create, destroy, activate and deactivate, and more importantly, MOVE them around within the cloud infrastructure. This elasticity and mobility has profound implications for how network services are defined, managed, and used to provide cloud services. It's not just servers that benefit from virtualization, it's the network as well. Network virtualization is becoming a hot topic, and not just for discussion but for companies like Oracle and others who have recently acquired net virtualization companies [2,3]. But even before this topic became so prominent, Solaris engineers were working on technologies in Solaris 11 to virtualize network services, known as Project Crossbow [4]. And why is network virtualization so important? Because old assumptions about network devices, topology, and management must be re-examined in light of the self-service, elasticity, and resource sharing requirements of cloud computing infrastructures. Static, hierarchical network designs, and inter-system traffic flows, need to be reconsidered and quite likely re-architected to take advantage of new features like virtual NICs and switches, bandwidth control, load balancing, and traffic isolation. For example, traditional multi-tier Web services (Web server, App server, DB server) that share net traffic over Ethernet wires can now be virtualized and hosted on shared-resource systems that communicate within a larger server at system bus speeds, increasing performance and reducing wired network traffic. And virtualized traffic flows can be monitored and adjusted as needed to optimize network performance for dynamically changing cloud workloads. Additionally, as VEs come and go and move around in the cloud, static network configuration methods cannot easily accommodate the routing and addressing flexibility that VE mobility implies; virtualizing the network itself is a requirement. Oracle Solaris 11 [5] includes key network virtualization technologies needed to implement cloud computing infrastructures. It includes features for the creation and management of virtual NICs and switches, and for the allocation and control of the traffic flows among VEs [6]. Additionally it allows for both sharing and dedication of hardware components to network tasks, such as allocating specific CPUs and vNICs to VEs, and even protocol-specific management of traffic. So, have a look at your current network topology and management practices in view of evolving cloud computing technologies. And don't simply duplicate the physical architecture of servers and connections in a virtualized environment…rethink the traffic flows among VEs and how they can be optimized using Oracle Solaris 11 and other Oracle products and services. [1] I use the term "virtual environment" or VE here instead of the more commonly used "virtual machine" or VM, because not all virtualized operating system environments are full OS kernels under the control of a hypervisor…in other words, not all VEs are VMs. In particular, VEs include Oracle Solaris zones, as well as SPARC VMs (previously called LDoms), and x86-based Solaris and Linux VMs running under hypervisors such as OEL, Xen, KVM, or VMware. 
    [2] Oracle follows VMware into network virtualization space with Xsigo purchase; http://www.mercurynews.com/business/ci_21191001/oracle-follows-vmware-into-network-virtualization-space-xsigo
    [3] Oracle Buys Xsigo; http://www.oracle.com/us/corporate/press/1721421
    [4] Oracle Solaris 11 Networking Virtualization Technology; http://www.oracle.com/technetwork/server-storage/solaris11/technologies/networkvirtualization-312278.html
    [5] Oracle Solaris 11; http://www.oracle.com/us/products/servers-storage/solaris/solaris11/overview/index.html
    [6] For example, the Solaris 11 'dladm' command can be used to limit the bandwidth of a virtual NIC, as follows:

        dladm create-vnic -l net0 -p maxbw=100M vnic0

    Read the article

  • Welcome to BlogEngine.NET 2.9 using Microsoft SQL Server

    If you see this post it means that BlogEngine.NET 2.9 is running and the hard part of creating your own blog is done. There are only a few things left to do.

    Write Permissions
    To be able to log in to the blog and write posts, you need to enable write permissions on the App_Data folder. If your blog is hosted at a hosting provider, you can either log into your account's admin page or call the support. You need write permissions on the App_Data folder because all posts, comments, and blog attachments are saved as XML files and placed in the App_Data folder. If you wish to use a database to store your blog data, we still encourage you to enable this write access for any images you may wish to store for your blog posts. If you are interested in using Microsoft SQL Server, MySQL, SQL CE, or other databases, please see the BlogEngine wiki to get started.

    Security
    When you've got write permissions on the App_Data folder, you need to change the username and password. Find the sign-in link located either at the bottom or top of the page depending on your current theme and click it. Now enter "admin" in both the username and password fields and click the button. You will now see an admin menu appear. It has a link to the "Users" admin page. From there you can change the username and password. Passwords are hashed by default, so if you lose your password, please see the BlogEngine wiki for information on recovery.

    Configuration and Profile
    Now that you have your blog secured, take a look through the settings and give your new blog a title. BlogEngine.NET 2.9 is set up to take full advantage of many semantic formats and technologies such as FOAF, SIOC and APML. It means that the content stored in your BlogEngine.NET installation will be fully portable and auto-discoverable. Be sure to fill in your author profile to take better advantage of this.

    Themes, Widgets & Extensions
    One last thing to consider is customizing the look of your blog. We have a few themes available right out of the box, including two fully set up to use our new widget framework. The widget framework allows drag-and-drop placement on your sidebar as well as editing and configuration right in the widget while you are logged in. Extensions allow you to extend and customize the behavior of your blog. Be sure to check the BlogEngine.NET Gallery at dnbegallery.org as the go-to location for downloading widgets, themes and extensions.

    On the web
    You can find BlogEngine.NET on the official website. Here you'll find tutorials, documentation, tips and tricks and much more. The ongoing development of BlogEngine.NET can be followed at CodePlex, where the daily builds will be published for anyone to download. Again, new themes, widgets and extensions can be downloaded at the BlogEngine.NET gallery.

    Good luck and happy writing.
    The BlogEngine.NET team

    Read the article

  • Cloud – the forecast is improving

    - by Rob Farley
    There is a lot of discussion about “the cloud”, and how that affects people’s data stories. Today the discussion enters the realm of T-SQL Tuesday, hosted this month by Jorge Segarra.

    Over the years, companies have invested a lot in making sure that their data is good, and I mean every aspect of it – the quality of it, the security of it, the performance of it, and more. Experts such as those of us at LobsterPot Solutions have helped these companies with this, and continue to work with clients to make sure that data is a strong part of their business, not an oversight. Whether business intelligence systems are being utilised or not, every business needs to be able to rely on its data, and have confidence in it. Data should be a foundation upon which a business is built.

    In the past, data had been stored in paper-based systems. Filing cabinets stored vital information. Today, people have server rooms with storage of various kinds, recognising that filing cabinets don’t necessarily scale particularly well. It’s easy to ‘lose’ data in a filing cabinet, when you have people who need to make sure that the sheets of paper are in the right spot, and that you know how things are stored. Databases help solve that problem, but the idea of a large filing cabinet continues; it just doesn’t involve paper. If something happens to the physical ‘filing cabinet’, then the problems are larger still, and the data itself is under threat. Many clients have generators in case the power goes out, redundant cables in case the connectivity dies, and spare servers in other buildings just in case they’re required. But still they’re maintaining filing cabinets.

    You see, people like filing cabinets. There’s something to be said for having your data ‘close’. Even if the data is not in readable form, living as bits on a disk somewhere, the idea that its home is ‘in the building’ is comforting to many people. They simply don’t want to move their data anywhere else. The cloud offers an alternative to this, and the human element is an obstacle.

    By leveraging the cloud, companies can have someone else look after their filing cabinet. A lot of people really don’t like the idea of this, partly because the administrators of the data, those people who could potentially log in with escalated rights and see more than they should be allowed to, who need to be trusted to respond if there’s a problem, are now a faceless entity in the cloud. But this doesn’t mean that the cloud is bad; it is simply a concern that some people may have.

    In new functionality that’s on its way, we see other hybrid mechanisms that mean that people can leverage parts of the cloud with less fear. Companies can use cloud storage to hold their backup data, for example, backups that have been encrypted and are therefore not able to be read by anyone (including administrators) who doesn’t have the right password. Companies can have a database instance that runs locally, but which has its data files in the cloud, complete with Transparent Data Encryption if needed. There can be a higher level of control, making the change easier to accept. Hybrid options allow people who have had fears (potentially very justifiable) to take a new look at the cloud, and to start embracing some of the benefits of the cloud (such as letting someone else take care of storage, high availability, and more) without losing the feeling of the data being close. @rob_farley

    Read the article

  • EMEA Analytics & Data Integration Oracle Partner Forum

    - by milomir.vojvodic
    MONDAY 12TH NOVEMBER, 2012 IN LONDON (UK). For Oracle Partners across Europe, Middle East and Africa: come and hear the latest news from Oracle OpenWorld about Oracle BI & Data Integration, and propel your business growth as an Oracle partner. This event should appeal to BI or Data Integration specialized partners, Executives, Sales, Pre-sales and Solution architects, with a choice of participation in the plenary day and then a set of special interest (technical) sessions. The follow-on breakout sessions from 13th November provide deeper dives and technical training for those of you who wish to stay for more detailed and hands-on workshops.

    Keynote: Andrew Sutherland, SVP Oracle Technology. Hot agenda items will include:
    - The Fusion Middleware Stack: Engineered to work together
    - A complete Analytics and Data Integration Solution Architecture: Big Data and Little Data combined
    - In-Memory Analytics for Extreme Insight
    - Latest Product Development Roadmap for Data Integration and Analytics

    Venue: Oracle's London City Moorgate Offices. Places are limited; Register from this Link. Note: Registration for the conference and the deeper dives and technical training is free of charge to OPN member Partners, but you will be responsible for your own travel and hotel expenses.

    Event Schedule: During this event you can learn about partner success stories, participate in an array of break-out sessions, exchange information with other partners and enjoy a vibrant panel discussion.
    - Nov. 12th (Day 1): Main Plenary Session, full day, starting 10.30 am; Oracle-hosted dinner in the evening.
    - Nov. 13th onwards: Architecture Masterclass: IM Reference Architecture – Big Data and Little Data combined (1 day); BI-Apps Bootcamp (4 days); Oracle GoldenGate workshop (1 day); Oracle Data Integrator and Oracle Enterprise Data Quality workshop (1 day).

    For further information and detail, download the Agenda (pdf) or contact Michael Hallett at [email protected] and Milomir Vojvodic at [email protected]

    Read the article

  • Where to place web.xml outside WAR file for secure redirect?

    - by Silverhalide
    I am running Tomcat 7 and am deploying a bunch of applications delivered to me by a third party as WAR files. I'd like to force some of those apps to always use SSL. (All the "SSL" apps are in one service; other apps outside this discussion are in another service.) I've figured out how to use conf\web.xml to redirect apps from HTTP to HTTPS, but that applies to all applications hosted by Tomcat. I've also figured out how to put web.xml in an unpacked app's web-inf directory; that does the trick for that specific app, but runs the risk of being overwritten if our vendor gives us a new war file to deploy. I've also tried placing the web.xml file in various places under conf\service\host, or under appbase, but none seem to work. Is it possible to redirect some apps to SSL without forcing all apps to redirect, or to put the web.xml file inside the extracted WAR file?

    Here's my server.xml:

        <Service name="secure">
          <Connector port="80"
                     connectionTimeout="20000"
                     redirectPort="443"
                     URIEncoding="UTF-8"
                     enableLookups="false"
                     compression="on"
                     protocol="org.apache.coyote.http11.Http11Protocol"
                     compressableMimeType="text/html,text/xml,text/plain,text/javascript,application/json,text/css"/>
          <Connector port="443"
                     URIEncoding="UTF-8"
                     enableLookups="false"
                     compression="on"
                     protocol="org.apache.coyote.http11.Http11Protocol"
                     compressableMimeType="text/html,text/xml,text/plain,text/javascript,application/json,text/css"
                     scheme="https"
                     secure="true"
                     SSLEnabled="true"
                     sslProtocol="TLS"
                     keystoreFile="..." keystorePass="..." keystoreType="PKCS12"
                     truststoreFile="..." truststorePass="..." truststoreType="JKS"
                     clientAuth="false"
                     ciphers="SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_DSS_WITH_AES_128_CBC_SHA,SSL_RSA_WITH_AES_128_CBC_SHA"/>
          <Engine name="secure" defaultHost="localhost">
            <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>
            <Host name="localhost" appBase="webapps" unpackWARs="false" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false">
            </Host>
          </Engine>
        </Service>
        <Service name="mutual-secure">
          ...
        </Service>

    The content of the web.xml files I'm playing with is:

        <web-app xmlns="http://java.sun.com/xml/ns/javaee"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
                 version="3.0"
                 metadata-complete="true">
          <security-constraint>
            <web-resource-collection>
              <web-resource-name>All applications</web-resource-name>
              <url-pattern>/*</url-pattern>
            </web-resource-collection>
            <user-data-constraint>
              <description>Redirect all requests to HTTPS</description>
              <transport-guarantee>CONFIDENTIAL</transport-guarantee>
            </user-data-constraint>
          </security-constraint>
        </web-app>

    (For conf\web.xml the security-constraint is added just before the end of the existing file, rather than creating a new file.) My webapps directory (currently) contains only the WAR files.

    Read the article

  • Lots of first chance Microsoft.CSharp.RuntimeBinderExceptions thrown when dealing with dynamics

    - by Orion Edwards
    I've got a standard 'dynamic dictionary' type class in C#:

        class Bucket : DynamicObject
        {
            readonly Dictionary<string, object> m_dict = new Dictionary<string, object>();

            public override bool TrySetMember(SetMemberBinder binder, object value)
            {
                m_dict[binder.Name] = value;
                return true;
            }

            public override bool TryGetMember(GetMemberBinder binder, out object result)
            {
                return m_dict.TryGetValue(binder.Name, out result);
            }
        }

    Now I call it, as follows:

        static void Main(string[] args)
        {
            dynamic d = new Bucket();
            d.Name = "Orion";           // 2 RuntimeBinderExceptions
            Console.WriteLine(d.Name);  // 2 RuntimeBinderExceptions
        }

    The app does what you'd expect it to, but the debug output looks like this:

        A first chance exception of type 'Microsoft.CSharp.RuntimeBinder.RuntimeBinderException' occurred in Microsoft.CSharp.dll
        A first chance exception of type 'Microsoft.CSharp.RuntimeBinder.RuntimeBinderException' occurred in Microsoft.CSharp.dll
        'ScratchConsoleApplication.vshost.exe' (Managed (v4.0.30319)): Loaded 'Anonymously Hosted DynamicMethods Assembly'
        A first chance exception of type 'Microsoft.CSharp.RuntimeBinder.RuntimeBinderException' occurred in Microsoft.CSharp.dll
        A first chance exception of type 'Microsoft.CSharp.RuntimeBinder.RuntimeBinderException' occurred in Microsoft.CSharp.dll

    Any attempt to access a dynamic member seems to output a RuntimeBinderException to the debug logs. While I'm aware that first-chance exceptions are not a problem in and of themselves, this does cause some problems for me:
    - I often have the debugger set to "break on exceptions", as I'm writing WPF apps, and otherwise all exceptions end up getting converted to a DispatcherUnhandledException, and all the actual information you want is lost. WPF sucks like that.
    - As soon as I hit any code that's using dynamic, the debug output log becomes fairly useless. All the useful trace lines that I care about get hidden amongst all the useless RuntimeBinderExceptions.

    Is there any way I can turn this off, or is the RuntimeBinder unfortunately just built like that? Thanks, Orion
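
    For comparison, a non-dynamic variant of the same bucket keeps the C# runtime binder out of the picture entirely, at the cost of the d.Name syntax. This is only a sketch of the trade-off, not a fix for the binder itself; the class and usage names are made up for illustration:

        // Sketch only: a plain indexer over the dictionary avoids 'dynamic', so the
        // runtime binder (and its first-chance exceptions) never gets involved.
        using System.Collections.Generic;

        class IndexedBucket
        {
            readonly Dictionary<string, object> m_dict = new Dictionary<string, object>();

            public object this[string name]
            {
                get
                {
                    object value;
                    return m_dict.TryGetValue(name, out value) ? value : null;
                }
                set { m_dict[name] = value; }
            }
        }

        // Usage:
        //   var d = new IndexedBucket();
        //   d["Name"] = "Orion";
        //   Console.WriteLine(d["Name"]);   // no binder involved, so no RuntimeBinderExceptions in the debug output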

    Read the article

  • RIA Services EntitySet does not support 'Edit' operation

    - by Savvas Sopiadis
    Hello everybody! Making my first steps in RIA Services (VS2010 Beta 2) I encountered this problem: I created an EF model (no POCOs), a generic repository on top of it and a RIA service (hosted in an ASP.NET MVC application), and tried to get data from within the ASP.NET MVC application: it worked well. Next step: the Silverlight client. I got a reference to the RIA service (through its context), queried for all the records of the repository and got them into the SL application as well, using this code sample:

        private ObservableCollection<Culture> _cultures = new ObservableCollection<Culture>();

        public ObservableCollection<Culture> cultures
        {
            get { return _cultures; }
            set
            {
                _cultures = value;
                RaisePropertyChanged("cultures");
            }
        }

        ....

        //Get cultures
        EntityQuery<Culture> queryCultures = from cu in dsCtxt.GetAllCulturesQuery()
                                             select cu;
        loCultures = dsCtxt.Load(queryCultures);
        loCultures.Completed += new EventHandler(lo_Completed);

        ....

        void loAnyCulture_Completed(object sender, EventArgs e)
        {
            ObservableCollection<Culture> temp = new ObservableCollection<Culture>(loAnyCulture.Entities);
            AnyCulture = temp[0];
        }

    The problem is this: whenever I try to edit some data of a record (in this example the first record) I get this error: This EntitySet of type 'Culture' does not support the 'Edit' operation. I thought that I did something weird and tried to create an object of type Culture and assign a value to it: it worked well! What am I missing? Do I have to declare an EntitySet? Do I have to mark it? Do I have to... what? Thanks in advance
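
    One detail worth knowing here: a WCF RIA Services EntitySet only supports the operations for which the domain service exposes matching methods, so an entity served by a query method alone comes across to the Silverlight client as read-only. The sketch below shows the shape of an update-capable domain service; the class, method and entity definitions are assumptions made for illustration (they stand in for the EF entity and generic repository described in the question), and the namespaces are the ones used by the released WCF RIA Services bits, which differed in the VS2010 beta.

        // Sketch only: what makes the client-side EntitySet<Culture> report support
        // for 'Edit' is the presence of an update method on the domain service
        // (the "Update" name prefix is the convention; an explicit [Update]
        // attribute can be used instead).
        using System.Collections.Generic;
        using System.ComponentModel.DataAnnotations;
        using System.Linq;
        using System.ServiceModel.DomainServices.Hosting;
        using System.ServiceModel.DomainServices.Server;

        // Stand-in for the EF-generated entity from the question.
        public class Culture
        {
            [Key]
            public int CultureId { get; set; }
            public string Name { get; set; }
        }

        [EnableClientAccess]
        public class CultureDomainService : DomainService
        {
            // Query method: on its own, this produces a read-only EntitySet on the client.
            public IQueryable<Culture> GetAllCultures()
            {
                // ... return cultures from the repository / ObjectContext ...
                return new List<Culture>().AsQueryable();   // placeholder so the sketch compiles
            }

            // Adding this update method is what enables the 'Edit' operation client-side.
            public void UpdateCulture(Culture culture)
            {
                // ... persist the changed entity via the repository ...
            }
        }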

    Read the article

  • Unable to determine the URL to the Xap file

    - by Matthew Glace
    I started developing a Silverlight (SL) 4 application hosted in an ASP.NET site using Visual Studio 2010. Now I want to change it so that the SL application runs out of browser, as if I had not chosen to host it in a web site when I created the SL project.

    First, I went to the properties of the web application and removed the SL project from the “Silverlight Applications” tab. Then, I went to the properties of the SL project and made sure the “Out-of-browser Application” option was chosen on the “Debug” tab with the SL project selected in the drop list. Finally, I made sure that the SL project was set as the “StartUp” project for the solution. The solution builds successfully; however, when I try to run it I get the following message in an error dialog: Unable to determine the URL to the Xap file from web [web project name].

    I'm assuming this is happening because the SL project is trying to copy the Xap file to the ClientBin folder of the web site. Obviously, the SL and web projects are still linked in some way despite my attempt to unlink the two. I have examined each project file in Notepad to find any reference between them, with no luck. What’s more frustrating is that I can achieve my desired result by creating a new SL application and un-checking the “Host the Silverlight application in a new or existing Web site in the solution” option. I know I could start my project over from scratch to solve this problem; however, I’m really curious as to what is causing this error. A Google search yielded no results.

    Read the article

  • How to enable gzip HTTP compression on Windows Azure dynamic content

    - by Steven
    Hi all, I've been trying unsuccessfully to enable gzip HTTP compression on my Windows Azure-hosted WCF RESTful service, which returns JSON only from GET and POST requests. I have tried so many things that I would have a hard time listing all of them, and I now realise I have been working with conflicting information (regarding old versions of Azure, etc.), so I think it best to start with a clean slate.

    I am working with Visual Studio 2008, using the February 2010 tools for Visual Studio. According to the following link, HTTP compression has now been enabled: http://msdn.microsoft.com/en-us/library/ff436045.aspx. I've used the advice at the following page (the URL compression advice only), but I get no compression: http://blog.smarx.com/posts/iis-compression-in-windows-azure

        <urlCompression doStaticCompression="true" doDynamicCompression="false" dynamicCompressionBeforeCache="true" />

    It doesn't help that I don't know what the difference is between urlCompression and httpCompression. I've tried to find out, but to no avail. Could the fact that the tools for Visual Studio were released before the version of Azure which supports compression be a problem? I read somewhere that with the latest tools you can choose which version of the Azure OS you want to use when you publish, but I don't know if that's true, and if it is, I can't find where to choose. Could I be running on an Azure OS version from before HTTP compression was enabled? I've also tried the Blowery HTTP compression module, but no results.

    Does anyone have any up-to-date advice on how to achieve this, i.e. advice that relates to the current version of the Azure OS? Cheers! Steven

    Read the article

  • The HTTP request was forbidden with client authentication scheme 'Anonymous'

    - by dudia
    I am trying to configure a WCF server\client pair to work with SSL, and I get the following exception: The HTTP request was forbidden with client authentication scheme 'Anonymous'. I have a self-hosted WCF server and have run httpcfg; both my client and server certificates are stored under Personal and Trusted People on the Local Machine.

    Here is the server code:

        binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Certificate;
        binding.Security.Mode = WebHttpSecurityMode.Transport;

        _host.Credentials.ClientCertificate.Authentication.CertificateValidationMode =
            System.ServiceModel.Security.X509CertificateValidationMode.PeerOrChainTrust;
        _host.Credentials.ClientCertificate.Authentication.RevocationMode = X509RevocationMode.NoCheck;
        _host.Credentials.ClientCertificate.Authentication.TrustedStoreLocation = StoreLocation.LocalMachine;
        _host.Credentials.ServiceCertificate.SetCertificate("cn=ServerSide", StoreLocation.LocalMachine, StoreName.My);

    Client code:

        binding.Security.Mode = WebHttpSecurityMode.Transport;
        binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Certificate;

        WebChannelFactory<ITestClientForServer> cf = new WebChannelFactory<ITestClientForServer>(binding, url2Bind);
        cf.Credentials.ClientCertificate.SetCertificate("cn=ClientSide", StoreLocation.LocalMachine, StoreName.My);

        ServicePointManager.ServerCertificateValidationCallback += RemoteCertificateValidate;

    Looking at web_tracelog.svclog and trace.log reveals that the server cannot authenticate the client certificate. My certificates are not signed by an authorized CA, but that is why I added them to Trusted People. What am I missing?
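
    One possible cause of this particular error is the client certificate never being presented at all (for example, because the process cannot read its private key), in which case the request falls back to the anonymous scheme and the server forbids it. A quick diagnostic sketch, assuming the "cn=ClientSide" subject from the code above, is to confirm the certificate can be found in the Local Machine store and has an accessible private key:

        // Diagnostic sketch only: checks the Local Machine "Personal" store for the
        // client certificate used above and reports whether a private key is
        // available to this process. The subject name is assumed from the question.
        using System;
        using System.Security.Cryptography.X509Certificates;

        static class ClientCertCheck
        {
            static void Main()
            {
                var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
                store.Open(OpenFlags.ReadOnly);
                try
                {
                    X509Certificate2Collection matches = store.Certificates.Find(
                        X509FindType.FindBySubjectName, "ClientSide", false);

                    if (matches.Count == 0)
                    {
                        Console.WriteLine("No certificate with subject name 'ClientSide' found.");
                        return;
                    }

                    foreach (X509Certificate2 cert in matches)
                    {
                        Console.WriteLine("Subject:       " + cert.Subject);
                        Console.WriteLine("HasPrivateKey: " + cert.HasPrivateKey);
                        Console.WriteLine("NotAfter:      " + cert.NotAfter);
                    }
                }
                finally
                {
                    store.Close();
                }
            }
        }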

    Read the article

  • ASP.NET websites under IIS 7.5 (Windows 7) running extremely slow

    - by emzero
    I've just installed Windows 7 x64 Ultimate on my desktop PC. I installed IIS, Visual Studio 2008, registered ASP.NET, etc. I have an ASP.NET 3.5 website I'm working on that runs extremely slowly on this new IIS. On the STA and PROD servers (Windows 2003 Server) and on my old XP/IIS 5.1 everything runs smoothly. A page which usually takes 1-2 seconds to load is taking 8 seconds!

    I saw this post on the IIS forum. It says something about Vista/7 not pooling connections (just to let you know, the website is running locally but it's connecting to a SQL Server 2005 hosted on a remote server). It seems that it takes a while to "start loading" the page... I mean, I click refresh and it stays for several seconds on "Waiting for localhost"... Then, when it gets a response, it loads the whole page normally. I don't have a clue how to force Win7/IIS 7.5 to pool database connections.

    EDIT: I've created a new, empty ASP.NET web application to see if the problem happens there too. The answer is no; it responds as fast as it should with an empty default page. Maybe it's something related to the DB connection. I will do further testing. There should be a way to fix it...

    EDIT 2: Debugging the app, I noticed that the delay occurs AFTER the execution of the .NET code (Page_Load, etc.), so the delay seems to happen when IIS serves the page to the browser.
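
    A rough way to rule the database connection in or out is to time the connection open separately from the rest of the page code, for example with a Stopwatch dropped into Page_Load. This is only a measurement sketch; the page class and the "Main" connection string name are assumptions, not taken from the site above:

        // Sketch only: separates "time spent opening the SQL connection" from
        // "time spent in Page_Load" so the slow part can be identified.
        using System;
        using System.Configuration;
        using System.Data.SqlClient;
        using System.Diagnostics;

        public partial class TimingProbe : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                var total = Stopwatch.StartNew();

                var connect = Stopwatch.StartNew();
                using (var conn = new SqlConnection(
                    ConfigurationManager.ConnectionStrings["Main"].ConnectionString))
                {
                    conn.Open();   // subsequent opens should be served from the connection pool
                }
                connect.Stop();

                total.Stop();
                Debug.WriteLine(string.Format("Connection open: {0} ms, Page_Load total: {1} ms",
                    connect.ElapsedMilliseconds, total.ElapsedMilliseconds));
            }
        }

    If both numbers stay small while the page still takes eight seconds to appear, that points at the time spent serving the response rather than at the database, which is consistent with the observation in EDIT 2.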

    Read the article

< Previous Page | 112 113 114 115 116 117 118 119 120 121 122 123  | Next Page >