Search Results

Search found 18728 results on 750 pages for 'setup deployment'.


  • "RFC 2833 RTP Event" Consecutive Events and the E "End" Bit

    - by brian_d
    Hello, I can send out an RFC 2833 DTMF event as outlined at http://www.ietf.org/rfc/rfc2833.txt When I include the E "End" bit but leave it set to 0, I get the following behaviour: if, for example, the keys 7874556332111111145855885#3 were pressed, then ALL events would be sent and show up in a program like Wireshark, but only 87456321458585#3 would sound. So the first key (which I figure could be a separate issue) and any repeats of an event (i.e. 11111) fail to sound. In section 3.9, figure 2 of the document linked above, they give a 911 example. There, all but the last event have the E bit set. When I set the bit for all numbers, I never get an event to sound.

    I have thought of a couple of possible causes but do not know if either is the reason:

    1) Figure 2 shows payload types of 96 and 97 being sent. I have not sent these, nor do I know exactly how to. In section 3.8, codes 96 and 97 are described as "the dynamic payload types 96 and 97 have been assigned for the redundancy mechanism and the telephone event payload respectively".

    2) In section 3.5, under "E:": "A sender MAY delay setting the end bit until retransmitting the last packet for a tone, rather than on its first transmission".

    Does anyone have an idea of how to actually do this? I have also fiddled around with timestamp intervals and the RTP marker. Any help is greatly appreciated. Here is a sample Wireshark capture of the relevant areas:

        6590 31.159045000 xx.x.x.xxx --.--.---.-- RTP EVENT Payload type=RTP Event, DTMF Pound # (end)
        Real-Time Transport Protocol
            Stream setup by SDP (frame 6225)
                Setup frame: 6225
                Setup Method: SDP
            10.. .... = Version: RFC 1889 Version (2)
            ..0. .... = Padding: False
            ...0 .... = Extension: False
            .... 0000 = Contributing source identifiers count: 0
            0... .... = Marker: False
            Payload type: telephone-event (101)
            Sequence number: 0
            Extended sequence number: 65536
            Timestamp: 0
            Synchronization Source identifier: 0x15f27104 (368210180)
            RFC 2833 RTP Event
                Event ID: DTMF Pound # (11)
                1... .... = End of Event: True
                .0.. .... = Reserved: False
                ..00 0000 = Volume: 0
                Event Duration: 2048
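    For what it's worth, the packet sequence the RFC describes can be sketched in a few lines. The Python below is illustrative only (it is not tied to the asker's stack, and all names are invented here): it packs the 4-byte telephone-event payload and shows the shape of one digit's packet train per sections 3.5-3.6 - interim packets with E=0 and a growing duration, a final E=1 packet retransmitted a couple of times, and a fresh RTP timestamp plus marker bit for each new digit. Reusing the same timestamp across events is the usual reason repeats like 11111 collapse into a single long tone at the receiver.

        import struct

        def telephone_event(event_id, end, volume, duration):
            # Pack the 4-byte RFC 2833 payload: event(8) | E,R,volume(8) | duration(16)
            byte2 = (0x80 if end else 0x00) | (volume & 0x3F)  # E bit, R=0, 6-bit volume
            return struct.pack("!BBH", event_id, byte2, duration)

        def packets_for_digit(event_id, samples_per_packet=160, num_packets=5, end_retransmits=2):
            # Yield (marker, payload) pairs for ONE digit. The next digit must use a
            # new RTP timestamp, with marker=True on its first packet.
            duration = 0
            for i in range(num_packets):
                duration += samples_per_packet
                is_last = (i == num_packets - 1)
                yield (i == 0, telephone_event(event_id, is_last, 10, duration))
            for _ in range(end_retransmits):
                # RFC 2833 section 3.5: the final (E=1) packet is retransmitted for robustness
                yield (False, telephone_event(event_id, True, 10, duration))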


  • Why does "enable-shared failed" happen on libjpeg build for OS X?

    - by BryanWheelock
    I'm trying to install libjpeg on OS X to fix a problem with the Python Imaging Library JPEG setup. I downloaded libjpeg from http://www.ijg.org/files/jpegsrc.v7.tar.gz and then began to set up the config files:

        cp /usr/share/libtool/config.sub .
        cp /usr/share/libtool/config.guess .
        ./configure –enable-shared

    However, the enable-shared flag didn't seem to work:

        $ ./configure –-enable-shared
        configure: WARNING: you should use --build, --host, --target
        configure: WARNING: invalid host type: –-enable-shared
        checking build system type... Invalid configuration `–-enable-shared': machine `–-enable' not recognized
        configure: error: /bin/sh ./config.sub –-enable-shared failed

    I've done lots of Google searches and I can't figure out where the error is or how to work around it.
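    One thing worth checking, going purely by the error text above: in `–-enable-shared` the leading character is an en-dash (–), not an ASCII hyphen, which is exactly what configure is complaining about ("machine `–-enable' not recognized" - it parsed the garbled flag as a host triplet). For comparison, a plain-ASCII invocation looks like this:

        # both dashes must be ASCII hyphens (--), not an en-dash pasted from a web page
        ./configure --enable-shared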


  • Next Quarterly Customer Update Webcast is July 24th (July 25th in Asia Pacific)

    - by R.Hunter
    Join Team Informatics, Kyle Hatlestad from the WebCenter Content “A-Team” and Oracle WebCenter Product Management for the next Oracle WebCenter Quarterly Customer Update Webcast scheduled for July 24th (July 25th in Asia Pacific). Get the latest product management updates and learn more about WebCenter Content and WebCenter Sites. Team Informatics will give an overview of the WebCenter Sites 11g Connector to WebCenter Content and Kyle Hatlestad will discuss best practices for WCC deployment and configuration. You can follow Kyle’s blog at: http://blogs.oracle.com/kyle/ Don't miss out, there will be two live sessions with Q&A. Further details and the registration links for the webcast can be found on our Oracle Technology Network Page.


  • How can I set up a fault-tolerant web-service built with Erlang/OTP?

    - by Jonas
    I would like to set up a fault-tolerant web service that I will build with Erlang/OTP. In the beginning the web service will be hosted on a few VPSes. Each VPS has its own IP address, and I can use more IPs if I need them. I would like the domain name to point to a single IP address. How can I set up my Erlang/OTP application to be fault-tolerant behind a single IP address? Do I need to use a VLAN? Is there a way my Erlang/OTP application can use heartbeats and handle virtual IP addresses to route the traffic? Or how else should I solve this problem?
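    For the virtual-IP half of the question, one common approach (independent of Erlang/OTP, and only workable if the VPS provider allows floating addresses) is VRRP via keepalived: the nodes share one public IP that fails over to a standby when the master stops answering heartbeats. A minimal sketch, with placeholder interface and address values:

        vrrp_instance VI_1 {
            state MASTER            # BACKUP on the standby node
            interface eth0
            virtual_router_id 51
            priority 100            # lower priority on the standby node
            advert_int 1            # heartbeat interval in seconds
            virtual_ipaddress {
                203.0.113.10        # the single IP the domain name points at
            }
        }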


  • How to configure Multi-tenant plugin as single-tenant with Spring security plugin as resolver?

    - by Fabien Barbier
    I can create a secure, multi-tenant web app with Grails by:

    1. setting up the Spring Security plugin
    2. setting up the Multi-Tenant plugin (via multi-tenant install and multi-tenant-spring-security)
    3. updating Config.groovy:

        tenant {
            mode = "multiTenant"
            resolver.type = "springSecurity"
        }

    4. adding Integer userTenntId to the User domain class
    5. adding a domain class for the tenant, Organization
    6. associating the tenants with Organization
    7. editing BootStrap.groovy

    Everything works fine in multi-tenant mode, but how do I use mode = "singleTenant"? This configuration does not seem to work:

        tenant {
            mode = "singleTenant"
            resolver.type = "springSecurity"
        }

    Edit: I tried this config:

        tenant {
            mode = "singleTenant"
            resolver.type = "springSecurity"
            datasourceResolver.type = "config"
            dataSourceTenantMap {
                t1 = "jdbc:hsqldb:file:custFoo"
                t2 = "jdbc:hsqldb:file:custBar"
            }
        }

    But I get:

        ERROR errors.GrailsExceptionResolver - Executing action [list] of controller [org.example.TicketController] caused exception: java.lang.StackOverflowError

    and:

        Caused by: java.lang.StackOverflowError
            at org.grails.multitenant.springsecurity.SpringSecurityCurrentTenant.getTenantIdFromSpringSecurity(SpringSecurityCurrentTenant.groovy:50)
            at org.grails.multitenant.springsecurity.SpringSecurityCurrentTenant.this$2$getTenantIdFromSpringSecurity(SpringSecurityCurrentTenant.groovy)
            at org.grails.multitenant.springsecurity.SpringSecurityCurrentTenant$this$2$getTenantIdFromSpringSecurity.callCurrent(Unknown Source)
            at org.grails.multitenant.springsecurity.SpringSecurityCurrentTenant.get(SpringSecurityCurrentTenant.groovy:41)
            at com.infusion.tenant.spring.TenantBeanContainer.getBean(TenantBeanContainer.java:53)
            at com.infusion.tenant.spring.TenantMethodInterceptor.invoke(TenantMethodInterceptor.java:32)
            at $Proxy14.getConnection(Unknown Source)


  • Ruby on Rails loop causes server freeze

    - by Darkerstar
    Hi all: I am working on a Ruby on Rails project on Windows. I have Ruby 1.8.6 and Rails 2.3.5 installed. Everything was fine until I tried to implement a comet process. I have the following code, written to respond to a long-poll JavaScript request, but every time this function is called it hangs the whole Rails server: no second request can get in until the timeout. (I know there is Juggernaut, but I would like to implement one myself first.) Is this due to my server setup? The project will be deployed on a Linux server with an Nginx and Passenger setup - will it suffer the same problem?

        def comet_hook
          timeout(5) do
            while true do
              key = 'station_' + station_id.to_s + '_message_lastwrite'
              if Rails.cache.exist?(key)
                @cache_time = DateTime.parse(Rails.cache.read(key))
                if @cache_time > hook_start
                  @messages = @station.messages_posted_after(hook_start)
                  hook_start = @cache_time
                  break
                end
              end
            end
            ...
          end
        end

    Also, with Rails' memory-store cache I keep getting a "cannot modify frozen object" error, so the above script only worked for me when I switched to the file cache. :(
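    Independent of the Rails specifics, note that the loop above spins flat-out between cache checks and never yields, so in a single-threaded development server (WEBrick/Mongrel) it starves every other request until the timeout fires; even with a sleep, each open poll still occupies one worker, which is why multi-process (Passenger) or evented servers are used for comet. A minimal, language-neutral sketch of the usual long-poll pattern, in Python with invented names:

        import time

        def comet_hook(cache, station_id, last_seen, timeout_s=5.0, poll_s=0.25):
            # Illustrative long-poll: check a cache key, then yield the CPU between polls.
            key = "station_%s_message_lastwrite" % station_id
            deadline = time.monotonic() + timeout_s
            while time.monotonic() < deadline:
                stamp = cache.get(key)
                if stamp is not None and stamp != last_seen:
                    return stamp        # something new: caller loads the messages
                time.sleep(poll_s)      # without this, the loop pins the only worker
            return None                 # timed out: the client simply re-polls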


  • How to use MSBuild to create ClickOnce files that match those created by Visual Studio

    - by EasyTimer
    I am trying to use MSBuild in a TFS build file to create ClickOnce files for my application. According to MSDN (http://msdn.microsoft.com/en-us/library/ms165431.aspx), the project file's ClickOnce settings should be used unless you override them with property arguments on the command line. However, even though I have specified (in VS) that some prerequisites should be bootstrapped in with the setup, they don't seem to get installed if I use the MSBuild command line to create the package, even though they do get installed if I use Visual Studio to create the ClickOnce package. Please could someone advise me how to get my prerequisites installed via ClickOnce when the package is built using the MSBuild command line? For information: in Visual Studio, under project properties, "Publish" tab, "Prerequisites" button, I have ticked "Create setup program to install prerequisite components", ticked my prerequisites, and specified "Download prerequisites from the component vendor's web site".
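    One known difference between the two paths: Visual Studio's Publish command runs the bootstrapper (setup.exe) generation step itself, whereas a bare msbuild /target:publish may only produce the ClickOnce manifests. If that is what is happening here, the bootstrapper can be produced explicitly with MSBuild's GenerateBootstrapper task. The sketch below is illustrative, not drop-in: the application names, output path, and SDK bootstrapper location are assumptions to adapt to the project.

        <!-- illustrative AfterPublish hook; adjust names, packages, and the SDK path -->
        <Target Name="AfterPublish">
          <GenerateBootstrapper
              ApplicationFile="MyApp.application"
              ApplicationName="My Application"
              BootstrapperItems="@(BootstrapperPackage)"
              ComponentsLocation="HomeSite"
              OutputPath="$(PublishDir)"
              Path="C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bootstrapper\" />
        </Target>

    ComponentsLocation="HomeSite" corresponds to the "Download prerequisites from the component vendor's web site" option mentioned above.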


  • Microsoft Introduces WebMatrix

    - by Rick Strahl
    originally published in CoDe Magazine Editorial

    Microsoft recently released the first CTP of a new development environment called WebMatrix, which along with some of its supporting technologies is squarely aimed at making the Microsoft Web Platform more approachable for first-time developers and hobbyists. But in the process, it also provides some updated technologies that can make life easier for existing .NET developers.

    Let’s face it: ASP.NET development isn’t exactly trivial unless you already have a fair bit of familiarity with sophisticated development practices. Stick a non-developer in front of Visual Studio .NET or even the Visual Web Developer Express edition and it’s not likely that the person in front of the screen will be very productive or feel inspired. Yet other technologies like PHP and even classic ASP did provide the ability for non-developers and hobbyists to become reasonably proficient in creating basic web content quickly and efficiently. WebMatrix appears to be Microsoft’s attempt to bring back some of that simplicity with a number of technologies and tools. The key is to provide a friendly and fully self-contained development environment that provides all the tools needed to build an application in one place, as well as tools that allow publishing of content and databases easily to the web server.

    WebMatrix is made up of several components and technologies:

    IIS Developer Express

    IIS Developer Express is a new, self-contained development web server that is fully compatible with IIS 7.5 and based on the same codebase that IIS 7.5 uses. This new development server replaces the much less compatible Cassini web server that’s been used in Visual Studio and the Express editions. IIS Express addresses a few shortcomings of the Cassini server, such as the inability to serve custom ISAPI extensions (i.e., things like PHP or classic ASP), as well as not supporting advanced authentication. IIS Developer Express provides most of the IIS 7.5 feature set, providing much better compatibility between development and live deployment scenarios.

    SQL Server Compact 4.0

    Database access is a key component for most web-driven applications, but on the Microsoft stack this has mostly meant you have to use SQL Server or SQL Server Express. SQL Server Compact is not new; it’s been around for a few years, but it’s been severely hobbled in the past by terrible tool support and the inability to support more than a single connection, in Microsoft’s attempt to avoid losing SQL Server licensing. The new release of SQL Server Compact 4.0 supports multiple connections, and you can run it in ASP.NET web applications simply by installing an assembly into the bin folder of the web application. In effect, you don’t have to install a special system configuration to run SQL Compact, as it is a drop-in database engine: copy the small assembly into your bin folder (or load it from the GAC if installed fully), create a connection string against a local file-based database file, and then start firing SQL requests. Additionally, WebMatrix includes nice tools to edit the database tables and files, along with tools to easily upsize (and hopefully downsize in the future) to full SQL Server. This is a big win, pending compatibility and performance limits. In my simple testing the data engine performed well enough for small data sets. This is not only useful for web applications, but also for desktop applications for which a fully installed SQL engine like SQL Server would be overkill.
    Having a local data store in those applications that can potentially be accessed by multiple users is a welcome feature.

    ASP.NET Razor View Engine

    What? Yet another native ASP.NET view engine? We already have Web Forms and various different flavors of using that view engine with Web Forms and MVC. Do we really need another? Microsoft thinks so, and Razor is an implementation of a lightweight, script-only view engine. Unlike the Web Forms view engine, Razor works only with inline code, snippets, and markup; therefore, it is more in line with current thinking of what a view engine should represent. There’s no support for a “page model” or any of the other Web Forms features of the full-page framework - just a lightweight scripting engine that works with plain markup plus embedded expressions and code.

    The markup syntax for Razor is geared for minimal typing, plus some progressive detection of where a script block/expression starts and ends. This results in a much leaner syntax than the typical ASP.NET Web Forms alligator (<% %>) tags. Razor uses the @ sign plus standard C# (or Visual Basic) block syntax to delineate code snippets and expressions. Here’s a very simple example of what Razor markup looks like, along with some comment annotations:

        <!DOCTYPE html>
        <html>
            <head>
                <title></title>
            </head>
            <body>
                <h1>Razor Test</h1>

                <!-- simple expressions -->
                @DateTime.Now
                <hr />

                <!-- method expressions -->
                @DateTime.Now.ToString("T")

                <!-- code blocks -->
                @{
                    List<string> names = new List<string>();
                    names.Add("Rick");
                    names.Add("Markus");
                    names.Add("Claudio");
                    names.Add("Kevin");
                }

                <!-- structured block statements -->
                <ul>
                @foreach(string name in names){
                    <li>@name</li>
                }
                </ul>

                <!-- Conditional code -->
                @if(true) {
                    <!-- Literal Text embedding in code -->
                    <text>
                    true
                    </text>;
                }
                else
                {
                    <!-- Literal Text embedding in code -->
                    <text>
                    false
                    </text>;
                }
            </body>
        </html>

    Like the Web Forms view engine, Razor parses pages into code, and then executes that run-time compiled code. Effectively a “page” becomes a code file, with markup becoming literal text written into the Response stream, code snippets becoming raw code, and expressions being written out with Response.Write(). The code generated from Razor doesn’t look much different from similar Web Forms code that only uses script tags; so although the syntax may look different, the operational model is fairly similar to the Web Forms engine, minus the overhead of the large Page object model.

    However, there are differences: Razor pages are based on a new base class, Microsoft.WebPages.WebPage, which is hosted in the Microsoft.WebPages assembly that houses all the Razor engine parsing and processing logic. Browsing through the assembly (in the generated ASP.NET Temporary Files folder or GAC) will give you a good idea of the functionality that Razor provides. If you look closely, a lot of the feature set matches ASP.NET MVC’s view implementation as well as many of the helper classes found in MVC. It’s not hard to guess the motivation for this sort of view engine: for beginning developers the simple markup syntax is easier to work with, although you obviously still need to have some understanding of the .NET Framework in order to create dynamic content.
    The syntax is easier to read and grok and much shorter to type than ASP.NET alligator tags (<% %>), and it is also easier to understand aesthetically what’s happening in the markup code. Razor is also a better fit for Microsoft’s vision of ASP.NET MVC: it’s a new view engine without the baggage of Web Forms attached to it. The engine is more lightweight since it doesn’t carry all the features and object model of Web Forms with it, and it can be instantiated directly outside of the HTTP environment, which has been rather tricky to do for the Web Forms view engine. Having a standalone script parser is a huge win for other applications as well – it makes it much easier to create script- or metadata-driven output generators for many types of applications, from code/screen generators, to simple form letters, to data-merging applications with user customizability. For me personally this is a very useful side effect, and who knows, maybe Microsoft will actually standardize their scripting engines (die T4 die!) on this engine.

    Razor also better fits the “view-based” approach, where the view is supposed to be mostly a visual representation that doesn’t hold much, if any, code. While you can still use code, the code you do write has to be self-contained. Overall I wouldn’t be surprised if Razor becomes the new standard view engine for MVC in the future - and in fact there have been announcements recently that Razor will become the default script engine in ASP.NET MVC 3.0.

    Razor can also be used in existing Web Forms and MVC applications, although that’s not working currently unless you manually configure the script mappings and add the appropriate assemblies. It’s possible to do it, but it’s probably better to wait until Microsoft releases official support for Razor scripts in Visual Studio. Once that happens, you can simply drop .cshtml and .vbhtml pages into an existing ASP.NET project and they will work side by side with classic ASP.NET pages.

    WebMatrix Development Environment

    To tie all three of these technologies together, Microsoft is shipping WebMatrix with an integrated development environment. An integrated gallery manager makes it easy to download and load existing projects, and then extend them with custom functionality. It seems to be a prominent goal to provide community-oriented content that can act as a starting point, be it via custom templates or a complete standard application.

    The IDE includes a project manager that works with a single project and provides an integrated IDE/editor for editing the .cshtml and .vbhtml pages. A run button allows you to quickly run pages in the project manager in a variety of browsers. There’s no debugging support for code at this time. Note that Razor pages don’t require explicit compilation, so making a change, saving, and then refreshing your page in the browser is all that’s needed to see changes while testing an application locally. It’s essentially using the auto-compiling Web Project that was introduced with .NET 2.0. All code is compiled at run time into dynamically created assemblies in the ASP.NET temp folder.

    WebMatrix also has PHP editing support with syntax highlighting. You can load various PHP-based applications from the WebMatrix Web Gallery directly into the IDE. Most of the Web Gallery applications are ready to install and run without further configuration, with wizards taking you through installation of tools, dependencies, and configuration of the database as needed.
    WebMatrix leverages the Web Platform Installer to pull the pieces down from websites in a tight integration of tools that worked nicely for the four or five applications I tried this out on. Click a couple of check boxes, fill in a few simple configuration options, and you end up with a running application that’s ready to be customized. Nice! You can easily deploy completed applications via WebDeploy (to an IIS server) or FTP directly from within the development environment. The deploy tool can also handle automatically uploading and installing the database and all related assemblies required, making deployment a simple one-click install step.

    Simplified Database Access

    The IDE contains a database editor that can edit SQL Compact and SQL Server databases. There is also a Database helper class that facilitates database access by providing easy-to-use, high-level query execution and iteration methods:

        @{
            var db = Database.OpenFile("FirstApp.sdf");
            string sql = "select * from customers where Id > @0";
        }
        <ul>
        @foreach(var row in db.Query(sql,1)){
            <li>@row.FirstName @row.LastName</li>
        }
        </ul>

    The Query function takes a SQL statement plus any number of positional (@0, @1, etc.) SQL parameters as simple values. The result is returned as a collection of rows, which in turn are row objects with dynamic properties for each of the columns, giving easy (though untyped) access to each of the fields. Likewise, Execute and ExecuteNonQuery allow execution of more complex queries using similar parameter-passing schemes. Note these queries use string-based queries rather than LINQ or Entity Framework’s strongly typed LINQ queries. While this may seem like a step back, it’s also in line with the expectations of non-.NET script developers, who are quite used to writing and using SQL strings in code rather than using O/RM frameworks. The only question is why something like this was not included in .NET from the beginning, leaving developers to build custom implementations of these basic building blocks. The implementation looks a lot like a DataTable-style data access mechanism but, to be fair, this is a common approach in scripting languages. This type of syntax, using simple, static data-object methods to perform simple data tasks with one line of code, is common in scripting languages and is a good match for folks working in PHP/Python, etc. It seems Microsoft has taken great advantage of .NET 4.0’s dynamic typing to provide this sort of interface for row iteration, where each row has properties for each field.

    FWIW, all the examples demonstrate using local SQL Compact files - I was unable to get a SQL Server connection string to work with the Database class (the connection string wasn’t accepted). However, since the code in the page is still plain old .NET, you can easily use standard ADO.NET code, or even LINQ or Entity Framework models that are created outside of WebMatrix in separate assemblies, as required.

    The good, the bad, the obnoxious - it’s still .NET

    The beauty (or curse, depending on how you look at it :)) of Razor and its compilation model is that, behind it all, it’s still .NET. Although the syntax may look foreign, it’s all .NET behind the scenes. You can easily access existing tools, helpers, and utilities simply by adding them to the project as references or to the bin folder. Razor automatically recognizes any assembly reference from assemblies in the bin folder.
    In the default configuration, Microsoft provides a host of helper functions in a Microsoft.WebPages assembly (check it out in the ASP.NET temp folder for your application), which includes a host of HTML helpers. If you’ve used ASP.NET MVC before, a lot of the helpers should look familiar. Documentation at the moment is sketchy; there’s a very rough API reference you can check out here: http://www.asp.net/webmatrix/tutorials/asp-net-web-pages-api-reference

    Who needs WebMatrix? Uhm… good question

    Clearly Microsoft is trying hard to create an environment with WebMatrix that is easy to use for newbie developers. The goal seems to be simplicity: a minimal development environment and an easy-to-use script engine/language that makes it easy to get started. There’s also some focus on community features that can be used as starting points, such as Web Gallery applications and templates. The community features in particular are very nice and something that would be nice to eventually see in Visual Studio as well.

    The question is whether this is too little too late. Developers who have been clamoring for a simpler development environment on the .NET stack have mostly left for other, simpler platforms like PHP or Python, which cater to the down-and-dirty developer. Microsoft will be hard pressed to win those folks - and other hardcore PHP developers - back. Regardless of how much you dress up a script engine fronted by the .NET Framework, it’s still the .NET Framework and all the complexity that drives it. While .NET is a fine solution in its breadth and features once you get a basic handle on the core features, the bar of entry to being productive with the .NET Framework is still pretty high. The MVC-style helpers Microsoft provides are a good step in the right direction, but I suspect they’re not enough to shield new developers from having to delve much deeper into the Framework to get even basic applications built. Razor and its helpers try to make .NET more accessible, but the reality is that in order to do useful stuff that goes beyond the handful of simple helpers you are still going to have to write some C# or VB or other .NET code. If the target is a hobby/amateur/non-programmer, the learning curve isn’t made any easier by WebMatrix; it’s just been shifted a tad further along in your development endeavor, to the point when you run out of canned components supplied either by Microsoft or the community.

    The database helpers are interesting, and actually I’ve heard a lot of discussion from various developers - ones who’ve been resisting .NET for a really long time - perking up at the prospect of easier data access in .NET than the ridiculous amount of code it takes to do even simple data access with raw ADO.NET. It seems sad that such a simple concept and implementation should trigger this sort of response (especially since it’s practically trivial to create helpers like these or pick them up from countless libraries available), but there it is. It also shows that there are plenty of developers out there who are more interested in getting stuff done easily than in necessarily following the latest and greatest practices, which are overkill for many development scenarios. Sometimes it seems that all of .NET is focused on the big, life-changing issues of development rather than the bread-and-butter scenarios that many developers care about to get their work accomplished.
    And that, in the end, may be WebMatrix’s main raison d'être: to bring some focus back at Microsoft on the fact that simpler, higher-level solutions are actually needed to appeal to non-high-end developers, alongside the tools for the high-end developers who want to follow the latest and greatest trends. The current version of WebMatrix hits many sweet spots, but it also feels like it has a long way to go before it can really be a tool that a beginning developer or an accomplished developer can feel comfortable with. Although there are some really good ideas in the environment (like the gallery for downloading apps and components), which would be a great addition for Visual Studio as well, the rest of the development environment just feels like crippleware, with required functionality missing: especially debugging and IntelliSense, but also general editor support. It’s not clear whether this is because the product is still in an early alpha release or whether it’s simply designed to be a really limited development environment. While simple can be good, nobody wants to feel left out when it comes to necessary tool support, and WebMatrix just has that left-out feeling to it.

    If anything, WebMatrix’s technology pieces (which are really independent of the WebMatrix product) are what is interesting to developers in general. The compact IIS implementation is a nice improvement for development scenarios, and SQL Compact 4.0 seems to address a lot of concerns that people have had, and have complained about, for some time with previous SQL Compact implementations. By far the most interesting and useful technology, though, seems to be the Razor view engine, for its lightweight implementation and its decoupling from the ASP.NET/HTTP pipeline to provide a standalone, pluggable scripting/view engine. The first winner of this is going to be ASP.NET MVC, which can now have a cleaner view model that isn’t inconsistent due to the baggage of non-implemented Web Forms features that don’t work in MVC. But I expect that Razor will eventually end up in many other applications as a scripting and code generation engine. Visual Studio integration for Razor is currently missing but is promised for a later release. The ASP.NET MVC team has already mentioned that Razor will eventually become the default MVC view engine, which will guarantee continued growth and development of this tool along those lines. And the Razor engine and support tools actually inherit many of the features that MVC pioneered, so there’s some synergy flowing both ways between Razor and MVC.

    For an existing ASP.NET developer who’s already familiar with Visual Studio and ASP.NET development, the WebMatrix IDE doesn’t offer anything you need. The tools provided are minimal and provide nothing that you can’t get in Visual Studio today, except the minimal Razor syntax highlighting, so there’s little need to take a step back. With Visual Studio integration coming later, there’s little reason to look at WebMatrix for tooling.

    It’s good to see that Microsoft is giving some thought to the ease of use of .NET as a platform. For so many years, we’ve been piling on more and more new features without trying to take a step back and see how complicated the development/configuration/deployment process has become. Sometimes it’s good to take a step - or several steps - back, take another look, and realize just how far we’ve come.
    WebMatrix is one of those reminders, and one that will likely result in some positive changes on the platform as a whole.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, IIS7


  • Continuous integration with .NET and SVN

    - by stiank81
    We're currently not applying the automated building and testing of continuous integration in our project. We haven't bothered so far, as we're only 2 developers working on it, but even with a team of 2 I still think it would be valuable to use continuous integration and get confirmation that our builds don't break or that tests haven't started failing. We're using .NET with C# and WPF. We have created Python scripts for building the application - using MSBuild - and for running all tests. Our source is in SVN. What would be the best approach to applying continuous integration with this setup? What tool should we get? It should be one that doesn't require a lot of setup: simple procedures to get started and little maintenance are a must.
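    Since the stack here is MSBuild + SVN, CruiseControl.NET was a common low-setup answer at the time (TeamCity's free tier being the other frequent suggestion). Below is a minimal ccnet.config sketch: the element names are from memory and the URLs, paths, and intervals are placeholders, so treat it as a starting point to verify against the CC.NET documentation rather than a working configuration.

        <cruisecontrol>
          <project name="MyProduct">
            <triggers>
              <intervalTrigger seconds="120" />   <!-- poll SVN every two minutes -->
            </triggers>
            <sourcecontrol type="svn">
              <trunkUrl>http://svn.example.com/myproduct/trunk</trunkUrl>
              <workingDirectory>C:\ci\myproduct</workingDirectory>
            </sourcecontrol>
            <tasks>
              <msbuild>
                <executable>C:\Windows\Microsoft.NET\Framework\v3.5\MSBuild.exe</executable>
                <projectFile>MyProduct.sln</projectFile>
                <targets>Build</targets>
              </msbuild>
            </tasks>
          </project>
        </cruisecontrol>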


  • Keep it Professional – Multiple Environments

    - by AjarnMark
    I have certainly been reading blogs a whole lot more than writing them the last several weeks, and it’s about time I got back to writing.  I have been collecting several topics and references for blog posts… some of which will probably just never get written as the timeliness of the topics fades over time.  Nonetheless, I’m back, and I think it is time to revive my Doing Business Right series, this time coming from the slant of managing a development team rather than the previous angle of being self-employed.  First up: separating Dev, Test, and Prod.

    A few months ago, Colin Stasiuk (@BenchmarkIT) wrote a great post about separating your Dev, Test/UAT, and Prod environments.  This post covers all the important points, such as removing developer access from both PROD and UAT, and the importance of proper deployment (a.k.a. promotion) procedures.  I won’t repeat it all here - go read the original!  But what I do want to address is what I believe to be the #1 excuse people use for not having separate environments: money.  I discussed this briefly in my comment on Colin’s post at the time, but let me repeat it here and expand on it a bit.

    Don’t let the size of your company or the size of its budget dictate whether you do things professionally or not.  I am convinced that most developers and development teams would agree that it is a best practice to have separate environments for development, testing, and production (a.k.a. Live).  So why don’t they?  Because they think that it means separate servers, which means more money.  While having separate physical servers for the different environments would be ideal, it is not an absolute requirement in order to make this work.  Here are a few ideas:

    Use multiple instances of SQL Server and multiple Web Sites with Host Headers or Ports.  For no additional fees* you can install multiple instances of SQL Server on the same machine.  This gives you a nice separation, allowing you to even use the same database names as will appear in PROD, yet isolating the data and security access.  And in IIS, you can create multiple Web Sites on the same server just by using host headers or different port numbers to separate them (a command-line sketch follows at the end of this post).  This approach does still pose the risk of non-Prod environments impacting performance on Prod, but when your application is busy enough for that to be a concern, you can probably afford one of the other options.

    Use desktop PCs instead of servers.  Instead of investing in full server-grade hardware, you can mimic the separate environments on old desktop PCs and at least get functional equivalency, if not performance matching.  The last I checked, Microsoft did not require separate licensing for SQL Server if that installation was used exclusively for dev or test purposes*.  There may be some version or performance differences between this approach and what you have in Prod, but you have isolated test from impacting Prod resources this way.

    Virtualization.  This is of course one of the hot topics of the day, and I would be remiss if I did not suggest it.  It is quite easy these days to set up virtual machines so that, again, your environments are fairly isolated from one another, and you retain all the security and procedural benefits of having separate environments.

    So the point is, keep your high professional standards intact.  You don’t need to compromise on using proper procedure just because you work in a small company with a small budget.  Keep doing things the right way!

    By the way, where I work, our DEV environment is not on a server.
    All development is done on the developer’s individual workstation, where it can be isolated from other developers’ work for the duration of writing the code, but also where the developers have to reconcile (merge) differences in code under concurrent development.  This usually means that each change is executed multiple times (once per developer, to update their environment with the latest changes from others), giving us an extra, informal test deployment before even going to the Test/UAT server.  It also means that if the network goes down, the developers can continue to hum along because they are not dependent on networked resources.  In fact, they will likely be even more productive because they aren’t being interrupted by email… but that’s another post I need to write.

    * I am not a lawyer, nor a licensing specialist, but it appeared to be so the last time I checked.  When in doubt, consult an expert on the topic.
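    As promised above, here is what the "multiple Web Sites with Host Headers" idea looks like in practice on IIS 7 or later; the site names, bindings, and paths are placeholders to adapt:

        REM illustrative: one site per environment on the same box, separated by host header
        %windir%\system32\inetsrv\appcmd add site /name:"MyApp-Test" ^
            /bindings:"http/*:80:test.myapp.local" /physicalPath:"C:\sites\myapp-test"
        %windir%\system32\inetsrv\appcmd add site /name:"MyApp-Dev" ^
            /bindings:"http/*:80:dev.myapp.local" /physicalPath:"C:\sites\myapp-dev"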


  • IBM Keynote: (hardware,software)–>{IBM.java.patterns}

    - by Janice J. Heiss
    On Sunday evening, September 30, 2012, Jason McGee, IBM Distinguished Engineer and Chief Architect, Cloud Computing, along with John Duimovich, IBM Distinguished Engineer and Java CTO, gave an information- and idea-rich keynote that left Java developers with much to ponder. Their focus was on the challenges of making Java more efficient and productive given the hardware and software environments of 2012.

    “One idea that is very interesting is the idea of multi-tenancy,” said McGee, “and how we can move up the spectrum. In traditional systems, we ran applications on dedicated middleware, operating systems and hardware. A lot of customers still run that way. Now people introduce hardware virtualization and share the hardware. That is good but there is a lot more we can do. We can share middleware and the application itself.” McGee challenged developers to better enable the Java language to function in these higher-density models. He spoke about the need to describe patterns that help us grasp the full environment that an application needs, whether it’s a web or full enterprise application. Developers need to understand the resources that an application interacts with in a way that is simple and straightforward. The task is then to automate that deployment so that the complexity of infrastructure can be bypassed and developers can live in a simpler world where the cloud can automatically configure the needed environment.

    McGee argued that the key, something IBM has been working on, is to use a simpler pattern that allows a cloud-based architecture to embrace the entire infrastructure required for an application and make it highly available, scalable, and able to recover from failure. The cloud-based architecture would automate the complexity of setting up and managing the infrastructure. IBM has been trying to realize this vision for customers so they can describe their Java application environment simply and allow the cloud to automate the deployment and management of applications. “The point,” explained McGee, “is to package the executable used to describe applications, to drop it into a shared system and let that system provide some intelligence about how to deploy and manage those applications.”

    John Duimovich on Improvements in Java

    McGee then brought onstage IBM’s Distinguished Engineer and CTO for Java, John Duimovich, who showed the audience ways to deploy Java applications more efficiently. Duimovich explained that, “When you run lots of copies of Java in the cloud or any hypervisor-virtualized system, there are a lot of duplications of code and jar files. IBM has a facility called ‘shared classes’ where we put shared code, read-only artefacts, in a cache that is sharable across hypervisors.” By putting JIT code in ahead of time, he explained, the application server will use 20% less memory and operate 30% faster. He described another example of how the JVM allows for the maximum amount of sharing, managing the tenants, file sockets, and memory use through throttling and control. Duimovich touched on the “thin is in” model and IBM’s Liberty Profile, a lightweight runtime for the cloud which allows for greater efficiency in interacting with the cloud. Duimovich discussed the confusion Java developers experience when, for example, the hypervisor tells them that they have 8 and then 4 and then 16 cores. “Because hypervisors are virtualized, they can change based on resource needs across the hypervisor layer.
    You may have 10 instances of an operating system and you may need to reallocate memory,” explained Duimovich. He showed how to resize LPARs, reallocate CPUs, and migrate applications as needed. He explained how application servers can resize thread pools and better use resources based on information from the hypervisors.

    Java Challenges in Hardware and Software

    McGee ended the keynote with a summary of upcoming hardware and software challenges for the Java platform. He noted that one reason developers love Java is that it allows them to ignore differences in hardware. He stated that the most important things happening in hardware were in network and storage - developments such as the speed of SSDs, the exploitation of high-speed, low-latency networking, and recent advances such as storage-class memory and non-volatile main memory. “So we are challenged to maintain the benefits of Java and the abstraction it provides from hardware while still exploiting the new innovations in hardware,” said McGee.

    McGee discussed transactional messaging applications, where developers sending messages must transactionally persist a message to storage - something traditionally done by backing messages onto spinning disks, an approach that is now mostly outdated. “Now,” he pointed out, “we would use SSD and store it in Flash and get 70,000 messages a second. If we stored it using a PCI Express-based flash memory device, it is still Flash but put on a PCI Express bus on a card closer to the CPU. This way I get 300,000 messages a second and a 25% improvement in latency.” McGee’s central point was that hardware has a huge impact on the performance and scalability of applications. New technologies are enabling developers to build classes of Java applications previously unheard of. “We need to be able to balance these things in Java - we need to maintain the abstraction but also be able to exploit the evolution of hardware technology,” said McGee. According to McGee, IBM's current focus is on systems wherein hardware and software are shipped together in what are called Expert Integrated Systems - systems that are pre-optimized and pre-integrated.

    McGee closed IBM’s engaging and thought-provoking keynote by pointing out that the use of Java in complex applications is increasingly being augmented by a host of other languages with strong communities around them - JavaScript, JRuby, Scala, Python and so forth. Java developers now must understand the strengths and weaknesses of such newcomers, as applications increasingly involve a complex interconnection of languages.


  • Visual Studio Load Testing using Windows Azure

    - by Tarun Arora
    In my opinion the biggest adoption barrier in performance testing on smaller projects is not the tooling but the high infrastructure and administration cost that comes with this phase of testing. If a reusable solution were possible and infrastructure management weren't as expensive, adoption would certainly spike. It certainly is possible if you bring Visual Studio and Windows Azure into the equation. It is possible to run your test rig in the cloud without getting tangled in SCVMM or Lab Management. All you need is an active Azure subscription, a Windows Azure endpoint-enabled developer workstation running Visual Studio Ultimate on premise, and Windows Azure endpoint-enabled worker roles on Azure compute instances set up to run as test controllers and test agents. My test rig is running SQL Server 2012 and Visual Studio 2012 RC agents. The beauty is that the solution is reusable: you can open the Azure project, change the subscription and certificate, click publish, and *BOOM*, in less than 15 minutes you could have your own test rig running in the cloud. In this blog post I intend to show you how you can use the power of Windows Azure to effectively abstract away the administration cost of infrastructure management and lower the total cost of load and performance testing. As a bonus, I will share a reusable solution that you can use to automate test rig creation for both VS 2010 agents as well as VS 2012 agents.

    Introduction

    The slide show below should help you understand the high-level details of what we are trying to achieve: Leveraging Azure for Performance Testing (slides from Avanade).

    Scenario 1 – Running a Test Rig in Windows Azure

    To start off with the basics, in the first scenario I plan to discuss how to:
    - Automate deployment & configuration of Windows Azure worker roles for the test controller and test agents
    - Automate deployment & configuration of the SQL database on the test controller worker role
    - Scale test agents on demand
    - Create a web performance test and a simple load test
    - Manage test controllers right from Visual Studio on the on-premise developer workstation
    - View the results of the load test
    - Clean up
    - Have the above work in the shape of a reusable solution for both a VS2010 and a VS2012 test rig

    Scenario 2 – The Scaled-Out Test Rig and Sharing Data Using SQL Azure

    A scaled-out version of this implementation would involve running multiple test rigs in the cloud. In this scenario I will show you how to sync the load-test database from these distributed test rigs into one SQL Azure database using Azure Sync. The selling point for this scenario is being able to collate the load-test efforts from across the organization into one data store.
    - Deploy multiple test rigs using the reusable solution from scenario 1
    - Set up and configure Windows Azure Sync
    - Test the SQL Azure load-test result database created as a result of Windows Azure Sync
    - Clean up
    - Have the above work in the shape of a reusable solution for both a VS2010 and a VS2012 test rig

    The Ingredients

    Though with an active MSDN Ultimate subscription you would already have access to everything and more, you will essentially need the below to try out the scenarios:
    1. Windows Azure subscription
    2. Windows Azure Storage – blob storage
    3. Windows Azure Compute – worker role
    4. SQL Azure database
    5. SQL Data Sync
    6. Windows Azure Connect – endpoints
    7. SQL 2012 Express or SQL 2008 R2 Express
    8. Visual Studio All Agents 2012 or Visual Studio All Agents 2010
    9. A developer workstation set up with Visual Studio 2012 Ultimate or Visual Studio 2010 Ultimate
    10. Visual Studio Load Test Unlimited Virtual User Pack

    Walkthrough

    To set up the test rig in the cloud, the test controller, test agent, and SQL Express installers need to be available when the worker role setup starts; the easiest and most efficient way is to pre-upload the required software into Windows Azure blob storage. SQL Express, the test controller, and the test agent expose various switches we can take advantage of, including the quiet-install switch. Once all three have been installed, the test controller needs to be registered with the test agents, and the SQL database needs to be associated with the test controller. By enabling Windows Azure Connect on the machines in the cloud and the developer workstation on premise, we create a virtual network among the machines, enabling two-way communication. All of the above can be done programmatically; let's see step by step how…

    Scenario 1: Video walkthrough - Leveraging Windows Azure for Performance Testing

    Scenario 2: Work in progress - watch this space for more…

    Solution

    If you are still reading and are interested in the solution, drop me an email with your Windows Live ID. I'll add you to my TFS Preview project, which has a reusable solution for both VS 2010 and VS 2012 test rigs, as well as guidance and demo performance tests.

    Conclusion

    Other posts and resources are available here. Possibilities…. Endless!


  • Announcing the New Windows Azure Web Sites Shared Scaling Tier

    - by Clint Edmonson
    Windows Azure Web Sites has added a new pricing tier that solves the #1 blocker for the web development community: the shared tier now supports custom domain names mapped to shared-instance web sites. This post will outline the plan changes and elaborate on how the new pricing model makes Windows Azure Web Sites an even richer option for web development shops of all sizes.

                     Free                Shared                            Reserved
        # of Sites   10                  100                               100
        Egress       165MB/Day           5GB/Month Included                5GB/Month Included
        Storage      1GB                 1GB                               10GB
        Throttling   CPU/Memory/Egress   CPU/Memory                        Unlimited
        Price        Free                $.02/hr per site, per instance    $.08/hr per core

    Setting the Stage

    In June, we released the first public preview of Windows Azure Web Sites, which gave web developers a great platform on which to get web sites running using their web development framework of choice. PHP, Node.js, classic ASP, and ASP.NET developers can all utilize the Windows Azure platform to create and launch their web sites. Likewise, these developers have a series of data storage options using Windows Azure SQL Databases, MySQL, or Windows Azure Storage. The Windows Azure Web Sites free offer enabled startups to get their site up and running on Windows Azure with a minimal investment, and with multiple deployment and continuous integration features such as Git, Team Foundation Services, FTP, and Web Deploy. The response to the Windows Azure Web Sites offer has been overwhelmingly positive. Since the addition of the service on June 12th, tens of thousands of web sites have been deployed to Windows Azure, and the volume of adoption is increasing every week.

    Preview Feedback

    In spite of the growth and success of the product, the community has had questions about features lacking in the free preview offer. The main question web developers asked regarding Windows Azure Web Sites related to the free offer's lack of support for domain name mapping. During the preview launch period, customer feedback made it obvious that the lack of domain name mapping support was an area of concern. We're happy to announce that this #1 request has been delivered as a feature of the new shared plan.

    New Shared Tier Portal Features

    In the screen shot below, the "Scale" tab in the portal shows the new tiers – Free, Shared, and Reserved – and gives the user the ability to quickly move any of their free web sites into the shared tier. With a single mouse-click, the user can move their site into the shared tier. Once a site has been moved into the shared tier, a new Manage Domains button appears in the bottom action bar of the Windows Azure Portal, giving site owners the ability to manage their domain names for a shared site. This button brings up the domain-management dialog, which can be used to enter a specific domain name that will be mapped to the Windows Azure Web Site.

    Shared Tier Benefits

    Startups and large web agencies will both benefit from this plan change. Here are a few examples of scenarios which fit the new pricing model:
    - Startups no longer have to select the reserved plan to map domain names to their sites. Instead, they can use the free option to develop their sites and choose on a site-by-site basis which sites they elect to move into the shared plan, paying only for the sites that are finished and ready to be domain-mapped.
    - Agencies who manage dozens of sites will realize a lower cost of ownership over the long term by moving their sites into reserved mode.
    Once multi-site companies reach a certain price point in the shared tier, it is much more cost-effective to move sites to a reserved tier. Long-term, it's easy to see how the new shared pricing tier makes Windows Azure Web Sites a great choice for both startups and agency customers, as it enables rapid growth and upgrades while keeping cost to a minimum. Large agencies will be able to have all of their sites in their own instances, and startups will have the capability to scale up to multiple shared instances for minimal cost and eventually move to reserved instances without worrying about continually incurring additional costs. Customers can feel confident they have the power of the Microsoft Windows Azure brand and our world-class support, at prices competitive in the market. Plus, in addition to realizing the cost savings, they'll have the whole family of Windows Azure features available.

    Continuous Deployment from GitHub and CodePlex

    Along with this announcement come two other exciting new features. I'm proud to announce that web developers can now publish their web sites directly from CodePlex or GitHub.com repositories. Once connections are established between these services and your web sites, Windows Azure will automatically be notified every time a check-in occurs. This will then trigger Windows Azure to pull the source and compile/deploy the new version of your app to your web site automatically. Walk-through videos on how to perform these functions are below:
    - Publishing to an Azure Web Site from CodePlex
    - Publishing to an Azure Web Site from GitHub.com

    These changes, as well as the enhancements to the reserved plan model, make Windows Azure Web Sites a truly competitive hosting option. It's never been easier or cheaper for a web developer to get up and running. Check out the free Windows Azure Web Sites offering and see for yourself. Stay tuned to my Twitter feed for Windows Azure announcements, updates, and links: @clinted


  • Recursive mocking with Rhino-Mocks

    - by jaspernygaard
    Hi, I'm trying to unit test several MVP implementations and can't quite figure out the best way to mock the view. I'll try to boil it down. The view IView consists, e.g., of a property of type IControl:

        interface IView {
            IControl Control1 { get; }
            IControl Control2 { get; }
        }

        interface IControl {
            bool Enabled { get; set; }
            object Value { get; set; }
        }

    My question is whether there's a simple way to set up the property behavior for Enabled and Value on the IControl members of the IView interface - like recursive mocking, I guess. I would rather not set up expectations for all my properties on the view (there are quite a few on each view). Thanks in advance.
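    For what it's worth, a sketch of one way this is often handled with Rhino Mocks 3.5's AAA syntax: stubs carry automatic property behavior, so only the view's control-typed getters need explicit stubbing. The interface names come from the question; everything else here is illustrative.

        // using Rhino.Mocks;
        var control1 = MockRepository.GenerateStub<IControl>();
        var control2 = MockRepository.GenerateStub<IControl>();
        var view = MockRepository.GenerateStub<IView>();
        view.Stub(v => v.Control1).Return(control1);
        view.Stub(v => v.Control2).Return(control2);

        // stubs have automatic property behavior, so no per-property expectations:
        view.Control1.Enabled = true;
        view.Control1.Value = 42;
        // view.Control1.Enabled now reads back as true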


  • Installing WordPress - constant PHP/MySQL extension appears missing

    - by Driss Zouak
    I've got Win2003 w/IIS6, PHP 5, and MySQL installed. I can confirm PHP is installed correctly because I have a testMe.php that runs properly. When I run the WordPress setup, I get informed that "Your PHP installation appears to be missing the MySQL extension which is required by WordPress." But in my php.ini, in the DYNAMIC EXTENSIONS section, I have:

        extension=php_mysql.dll
        extension=php_mysqli.dll

    I verified that mysql.dll and libmysql.dll are both in my PHP directory, and I copied my libmysql.dll to the C:\Windows\System32 directory. When I try to run the initial setup for WordPress, I still get the same answer. I've Googled setting this up, and everything comes down to the above. I'm missing something, but none of the instructions I've found online seem to cover whatever that is.
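    Before digging further, it may help to confirm what PHP itself thinks is loaded when running under IIS - on Windows a frequent culprit is php.ini's extension_dir not pointing at the folder that holds php_mysql.dll, or IIS reading a different php.ini than expected. An illustrative check script:

        <?php
        // check.php - drop in the web root and request it through IIS
        var_dump(extension_loaded('mysql'));   // what WordPress effectively tests for
        echo ini_get('extension_dir');         // must be the directory containing php_mysql.dll
        phpinfo();                             // "Loaded Configuration File" shows which php.ini IIS reads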


  • Grails Unit Tests: Why does this statement fail?

    - by leeand00
    I've developed in Java in the past, and now I'm trying to learn Grails/Groovy using this slightly dated tutorial.

        import grails.test.*

        class DateTagLibTests extends TagLibUnitTestCase {
            def dateTagLib

            protected void setUp() {
                super.setUp()
                dateTagLib = new DateTagLib()
            }

            protected void tearDown() {
                super.tearDown()
            }

            void testThisYear() {
                String expected = Calendar.getInstance().get(Calendar.YEAR)

                // NOTE: This statement fails
                assertEquals("the years dont match and I dont know why.", expected, dateTagLib.thisYear())
            }
        }

    DateTagLibTests.groovy (note: this TagLibUnitTestCase is for Grails 1.2.1 and not the version used in the tutorial). For some reason the above test fails with:

        expected:<2010> but was:<2010>

    I've tried replacing the test above with the following alternate version, and that test passes just fine:

        void testThisYear() {
            String expected = Calendar.getInstance().get(Calendar.YEAR)
            String actual = dateTagLib.thisYear()

            // NOTE: The following two assertions work:
            assertEquals("the years don\'t match", expected, actual)
            assertTrue("the years don\'t match", expected.equals(actual))
        }

    These two versions of the test are basically the same thing, right? Unless there's something new in Grails 1.2.1 or Groovy that I'm not understanding. They should be of the same type, because the values both come from Calendar.getInstance().get(Calendar.YEAR).
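    A plausible explanation for "expected:<2010> but was:<2010>" is a type mismatch rather than a value mismatch: if thisYear() returns a GString (e.g. it is built with "${...}" interpolation), JUnit's assertEquals uses String.equals, which returns false for a non-String argument, while assigning the result to a String-typed variable - as the passing version does - coerces it to a String first. A standalone Groovy sketch of that behavior (values invented for illustration):

        String expected = "2010"
        def actual = "${2010}"            // a groovy.lang.GString, not a java.lang.String
        assert !expected.equals(actual)   // what JUnit's assertEquals sees -> failure
        String coerced = actual           // assignment to a String variable coerces it
        assert expected.equals(coerced)   // now the comparison passes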


  • Presentations on OVCA & OVN

    - by uwes
    The following three presentations regarding the Oracle Virtual Compute Appliance and Oracle SDN, from Oracle OpenWorld sessions, are now available for download from the eSTEP portal.

    Oracle Virtual Compute Appliance: From Power On to Production in About an Hour - Charlie Boyle and Premal Savla give an overview of the Oracle Virtual Compute Appliance. This presentation is a mix of business and technical slides.

    Rapid Application Deployment with Oracle Virtual Compute Appliance - Kurt Hackel and Saar Maoz, both in Product Development, explain how to use Oracle VM templates to deploy applications faster and walk through a demo with Oracle VM templates for Oracle Database.

    Oracle SDN: Software-Defined Networking in a Hybrid, Open Data Center - Krishna Srinivasan and Ronen Kofman explain Oracle SDN and provide use cases for multi-tenant private cloud, IaaS, serving Tier 1 applications, and virtual network services.

    The presentations can be downloaded from the eSTEP portal. URL: http://launch.oracle.com/ PIN: eSTEP_2011. The material can be found under the tab eSTEP Download, located under Recent Updates and Engineered Systems/Optimized Solutions.


  • Creating a Successful Cloud Roadmap

    - by stephen.g.bennett
    No matter what type of cloud services or deployment models you are considering as part of your overall IT strategy, you must have a cloud services adoption roadmap to guide your journey. A cloud services adoption roadmap provides guidance that enables multiple projects to progress in parallel yet remain coordinated and ultimately result in a common end goal. The cloud services adoption roadmap consists of program-level efforts and a portfolio of cloud services. The program-level effort creates strategic assets such as the cloud architecture, cloud infrastructure, cloud governance, risk, and compliance (GRC) processes, and security policies that are leveraged across all the individual projects. A feature article on this topic can be found in the latest SOA and Cloud Magazine.

    Read the article

  • Exposing business logic as WCF service

    - by Oren Schwartz
    I'm working on a middle-tier project that encapsulates the business logic (it uses a DAL layer and serves an ASP.NET web application server) of a product deployed in a LAN. The BL is exposed as a set of services and data objects that are invoked upon user action. At present, the DAL acts as a separate application, whereas the BL uses it but is consumed by the web application as a DLL. The DAL and the web application are deployed on different servers inside the organization, and since the BL DLL is consumed by the web application, it resides on the same server. The worst thing about exposing the BL as a DLL is that we lose track of what we expose. Deployment is not such a big issue, since product versions are mostly deployed together. Would you recommend migrating from a DLL to a WCF service? If so, why? Do you know anyone who has had a similar experience?

    Read the article

  • Release Notes for 9/6/2012

    Below are the release notes from today's deployment:

    - More performance improvements around inline diffs – this time in the pull request view
    - Fixed an issue where sending pull requests was not generating notification emails
    - Fixed some cosmetic issues around viewing file diffs in commit changesets and pull requests

    Have ideas on how to improve CodePlex? Please visit our suggestions page! Vote for existing ideas or submit a new one. As always, you can reach out to the CodePlex team on Twitter @codeplex or reach me directly @mgroves84.

    Read the article

  • SVN directories not showing up in localhost when using WAMP

    - by JsusSalv
    Hi: I recently installed WAMP for actual local use. I've worked on live development servers, but now I'm working on localhost. I've managed to get multiple virtual hosts set up on my WAMP/Vista 64-bit box, but I'm having difficulty with directories pulled from SVN. I have four vhosts set up. Two work well, and they are not tied to any SVN just yet. I'm also using TortoiseSVN, in case it makes any difference. The other projects, however, come from SVN repositories. When I view those two projects I get the following error:

        Internal Server Error
        The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, admin@localhost and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log.

    The way I set up the vhosts is as follows, in httpd.conf:

        # Multiple Virtual Hosts
        <VirtualHost 127.0.0.1>
            ServerName localhost
            DocumentRoot "C:/wamp/www/"
        </VirtualHost>

        <VirtualHost 127.0.1.0>
            ServerName testone.local
            DocumentRoot "C:/wamp/www/root/projectone/"
        </VirtualHost>

        <VirtualHost 127.0.2.0>
            ServerName testtwo.local
            DocumentRoot "C:/wamp/www/root/projecttwo/"
        </VirtualHost>

        <VirtualHost 127.0.3.0>
            ServerName testthree.local
            DocumentRoot "C:/wamp/www/root/projectthree/"
        </VirtualHost>

        <VirtualHost 127.0.3.1>
            ServerName testfour.local
            DocumentRoot "C:/wamp/www/root/projectfour/"
        </VirtualHost>

    And here's the 'hosts' file:

        # Localhost
        127.0.0.1    localhost
        ::1          localhost

        # Project One
        127.0.1.0    testone.local

        # Project Two
        127.0.2.0    testtwo.local

        # Project Three
        127.0.3.0    testthree.local

        # Project Four
        127.0.3.1    testfour.local

    Everything works just fine. So if you want to tell me I'm doing something wrong, then by all means point out a few things. But as it stands, it works, and I'm content using different IPs and/or name-based vhosts. The problem comes in not being able to see the directories and files in the projects that are tied to an SVN. Whenever I visit http://testxxxx.local I get the error message at the top of this post. Please provide some suggestions. Thank you!
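    Editor's note (a hedged guess, not part of the original question): since all four DocumentRoots live under C:/wamp/www/, a 500 error that hits only the SVN checkouts often points at something inside those working copies that Apache can't process – most commonly a .htaccess file committed to the repository that uses a module (mod_rewrite, for example) that the local WAMP stack hasn't enabled. Apache's error.log will name the offending directive. If a committed .htaccess turns out to be the culprit, something along these lines in httpd.conf (module path and directory are assumptions about this setup) may help:

        # Assumed fix: enable mod_rewrite if the checked-out .htaccess needs it
        LoadModule rewrite_module modules/mod_rewrite.so

        # Let .htaccess overrides take effect under the project root (Apache 2.2 syntax)
        <Directory "C:/wamp/www/root/">
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>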

    Read the article

  • Warning: E-Business Suite Issues with Sun JRE 1.6.0_20

    - by Steven Chan
    My colleagues in the Java division have just released Java Runtime Environment (JRE) 1.6.0_20 today. See the 1.6.0_20 Update Release Notes for details about what has changed in this release. The issues reported in the following articles still apply to JRE 1.6.0_20:

    - Warning: E-Business Suite Issues with Sun JRE 1.6.0_19
    - Warning: E-Business Suite Issues with Sun JRE 1.6.0_18

    Depending upon your security and Java deployment policies for your end-user desktops, you may need to update your users to this JRE release. Unfortunately, you will have to balance your need for the fixes in JRE 1.6.0_20 against the impact of the open EBS compatibility issues reported with 6u18, 6u19, and 6u20. We're working closely with the Sun JRE team to get the open EBS compatibility issues resolved as quickly as possible; this is being worked on at top priority. Please monitor this blog for updates.

    Read the article

  • Pylons 1.0 AttributeError: 'module' object has no attribute 'metadata'

    - by shiki
    Python noob trying to learn Pylons. I'm using the QuickWiki tutorial (http://pylonshq.com/docs/en/1.0/tutorials/quickwiki_tutorial/) from the 1.0 documentation, but this alleged "1.0" doc seems to just be "0.9.7"; I suspect that this has something to do with the error I'm getting. When I execute "paster setup-app development.ini", I get this:

        (mydevenv)lucid@lucid-laptop:~/QuickWiki$ paster setup-app development.ini
        Traceback (most recent call last):
          ... edited for brevity...
          File "/home/lucid/mydevenv/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg/pkg_resources.py", line 1954, in load
          File "/home/lucid/QuickWiki/quickwiki/config/middleware.py", line 11, in <module>
            from quickwiki.config.environment import load_environment
          File "/home/lucid/QuickWiki/quickwiki/config/environment.py", line 12, in <module>
            from quickwiki.model import init_model
          File "/home/lucid/QuickWiki/quickwiki/model/__init__.py", line 27, in <module>
            pages_table = sa.Table('pages', meta.metadata,
        AttributeError: 'module' object has no attribute 'metadata'
        (mydevenv)lucid@lucid-laptop:~/QuickWiki$
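    Editor's note (a hedged reading of the error, not from the original thread): the tutorial targets the 0.9.7 project template, in which quickwiki/model/meta.py defined a module-level metadata object; the Pylons 1.0 template appears to define only Session and a declarative Base there, so meta.metadata no longer exists. Assuming that layout, one fix is to expose the missing attribute in quickwiki/model/meta.py:

        # quickwiki/model/meta.py -- sketch assuming the Pylons 1.0 template layout
        from sqlalchemy.ext.declarative import declarative_base
        from sqlalchemy.orm import scoped_session, sessionmaker

        # SQLAlchemy session manager, configured by init_model() in model/__init__.py
        Session = scoped_session(sessionmaker())

        # Declarative base class for ORM models
        Base = declarative_base()

        # The 0.9.7-era tutorial code expects meta.metadata; reusing the Base's
        # MetaData keeps sa.Table('pages', meta.metadata, ...) working.
        metadata = Base.metadata

    Equivalently, the tutorial's references to meta.metadata could be changed to meta.Base.metadata; either way, the table definitions bind to the one shared MetaData instance.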

    Read the article

  • Installing a new SQL Server instance fails

    - by Rubio
    In my setup I've previously installed SQL Server Express 2005. Now I've switched to SQL Server Express 2008 and updated the command-line parameters to those documented for the latter. If the computer already has SQL Server Express 2008 installed, my installer should create a new instance. The command-line parameters are as follows:

        /ACTION=Install /FEATURES=SQLEngine /QS /INSTANCENAME=ABCD /SECURITYMODE=SQL /SAPWD=CunningPassword

    The requested instance name does not exist on the target machine, yet the installation ends in error -2068643838. The logs show the following error: "No features were installed during the setup execution. The requested features may already be installed." If I remove the /QS parameter and try to install interactively, I get as far as the Feature Selection page. The UI shows three options: Instance Features, Shared Features, and Redistributable Features. Whatever I select, clicking Next results in the same error ("There are validation errors on this page"). Any ideas, anyone? Thanks, -- Rubio
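    Editor's note (hedged): the SQL Server 2008 setup writes detailed logs that usually name the exact validation failure, which would narrow this down considerably. The path below is the default 2008 log location (an assumption about this machine's layout):

        :: Default SQL Server 2008 setup log location (assumed standard install path)
        type "%ProgramFiles%\Microsoft SQL Server\100\Setup Bootstrap\Log\Summary.txt"

    The time-stamped subdirectory alongside Summary.txt contains Detail.txt, which records the per-feature evaluation and should show why setup decided that "the requested features may already be installed."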

    Read the article

  • Join Our Call: Sun Storage 2500-M2 Announcement

    - by user797911
    Oracle's Sun Storage 2500-M2 array brings together the latest Fibre Channel (FC) and SAS2 technologies with Oracle's Sun Storage Common Array software to create a robust solution that's equally adept as an entry-level storage area network (SAN) for the mid-size business and as an addition to an existing storage network within the enterprise. The Sun Storage 2500-M2 replaces Sun's Storage 2500 array product line and is designed for quick qualification, so customers can deploy it fast and easily in traditional 2500 environments. Jun Jang, Oracle Principal Product Manager, will be hosting this one-hour live call (a recording will be available); please join us to find out more.

    Event Date: 24-JUN-11
    Event Time: 08:00 am PST/PDT / 4:00 pm UK time
    Web Registration and Access: http://oukc.oracle.com/static09/opn/login/?t=livewebcast|c=1031672594
    Access for Mobile Devices: http://my.oracle.com/content/web/cnt636926
    Call Provider: Intercall
    International Participant Dial-In Number: 706-634-8508
    Additional International Dial-In Numbers Link: http://www.intercall.com/national/oracleuniversity/gdnam.html
    Dial-In Passcode: 96395

    Read the article

< Previous Page | 205 206 207 208 209 210 211 212 213 214 215 216  | Next Page >