Search Results

Search found 22829 results on 914 pages for 'nautilus script'.

  • Installing Canon LBP6000 in Ubuntu 12.04

    - by MMA
    This is really frustrating. I am trying to install an LBP6000 in Ubuntu 12.04 without any success. (Well, I had success about a week back when I first bought the printer and finally printed pages after a struggle of several hours. Then today it suddenly stopped working, so I uninstalled everything and started from scratch. Now I seem to have lost the way.)

    My steps:

      1. Downloaded the latest Canon driver from the Canon site: Linux_CAPT_PrinterDriver_V240_uk_EN.tar.gz
      2. Got the radu script. (I am allowed only two hyperlinks, so I cannot put the link here; you can Google "radu Canon".)
      3. Changed the /etc/modprobe.d/blacklist-cups-usblp.conf file as instructed in the official documentation (see the section "Ubuntu 12.04 Install"). The file now looks like:

             # cups talks to the raw USB devices, so we need to blacklist usblp to avoid
             # grabbing them
             # blacklist usblp

      4. Rebooted my machine.
      5. Changed the port in the radu script to 59787 as instructed in the link at step 3. (Again see the section "Ubuntu 12.04 Install", or the comment at "How to Install Canon LBP Printers in Ubuntu".) Also put the latest deb files from step 1 in the appropriate directory of this script.
      6. Ran the radu script. A single printer, LBP6000, got added, not the two printers (one to be disabled) that the message on the terminal said to expect after running the script. sudo /etc/init.d/ccpd status shows:

             Canon Printer Daemon for CUPS: ccpd: 3142 3139

    Results: the printer does not print. The printer state (from System Settings > Printing, or at the CUPS HTTP interface, localhost:631/printers/LBP6000) goes from Idle to Processing, a job appears in the print queue, then the job disappears and the printer state goes back to Idle. The actual printer does not even blink.

    Diagnostics (with help from the Troubleshooting section of the link in step 3):

      - captstatusui -P LBP6000 shows "communication error".
      - lsmod | grep usblp did not show anything. After running sudo modprobe usblp, it shows: usblp 17885 0
      - However, ls -l /dev/usb/lp0 gives: ls: cannot access /dev/usb/lp0: No such file or directory
      - /var/ccpd did not exist, so I created it:

            sudo mkdir /var/ccpd
            sudo mkfifo /var/ccpd/fifo0
            sudo chown -R lp:lp /var/ccpd

    Any suggestion will be appreciated. I do not know what else to do.
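    In case it helps anyone retracing this, the diagnostics above condense into a short sequence (everything here is taken from the steps I ran, except the ccpd restart at the end, which is my assumption about how to make the daemon pick up the recreated /var/ccpd pipe):

        # reload the USB printer class driver and check for the device node
        sudo modprobe usblp
        ls -l /dev/usb/lp0

        # recreate the pipe that ccpd expects, then restart the daemon
        sudo mkdir -p /var/ccpd
        sudo mkfifo /var/ccpd/fifo0
        sudo chown -R lp:lp /var/ccpd
        sudo /etc/init.d/ccpd restart

        # re-check the printer status
        captstatusui -P LBP6000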

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices
    STAGE 4: AUTOMATED DEPLOYMENT

    If you've been fortunate enough to get to the stage where you've implemented some sort of continuous integration process for your database updates, then hopefully you're seeing the benefits of that investment – constant feedback on changes your devs are making, advance warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic so you know it's going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear.

    Our Database Delivery Learning Program consists of four stages – really three: source controlling a database, running continuous integration processes, then setting up automated deployment (the middle stage is split in two, basic and advanced continuous integration, making four stages in total). If you've managed to work through the first three of these stages – source control, then basic and advanced CI – you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), that change gets fully tested automatically by your CI server. But this is only part of the story. Great – we know that our updates work, that the upgrade process works, that the upgrade isn't going to wipe our 4 TB of production data with a single DROP TABLE. But how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There's a significant gap between your latest version being tested and it being easily releasable.

    Just a quick note on terminology – there's a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. That piece also gives a nice description of the benefits of continuous delivery, which have been summed up by Jez Humble at ThoughtWorks as: "Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users." There's another really useful piece on Simple-Talk, written by Phil Factor, about the need for continuous delivery and how it applies to the database – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to implementing CD for, say, a web app).

    So, hopefully you're convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or "release management") process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can't I just install one of the many release management tools available and hey presto, I'm ready? If only it were that simple. Below I list some of the areas where it's worth spending a little time, where a little planning and prep could go a long way.
    It's also worth pointing out that this should really be an evolving process. Depending on your starting point, of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you've got a CI mechanism in place, you're certainly a long way down that path. Nevertheless, we'd recommend evolving your process incrementally. Pages 157 and 129-141 of the book Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912

    For now, in this post, we'll look at the following areas for your checklist:

      - You and Your Team
      - Environments
      - The Deployment Process
      - Rollback and Recovery
      - Development Practices

    You and Your Team

    It's a cliché in the DevOps community that "it's not all about processes and tools, really it's all about culture". As stated in this DevOps report from Puppet Labs: "DevOps processes and tooling contribute to high performance, but these practices alone aren't enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn't understood outside of a specific group." Like most clichés, there's truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it's an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:

      - 2008 to present: overall development costs reduced by 40%
      - Number of programs under development increased by 140%
      - Development costs per program down 78%
      - Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)

    But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing – that they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you're ever struggling to convince someone of the value, I'd strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org.

    I've spoken to many customers who have implemented database CI who describe their deployment process as "the point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that's finished we revert to manual." This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA "We're changing everything you do and your toolset next week, to automate most of your role – that's okay, isn't it?" isn't likely to go down well.
    There's some work here to bring him/her onside – to explain what you're doing, why there will still be control of the deployment process, and so on. Or of course, if you're the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you'd like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out.

    As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager's manager too. As mentioned, unless there's buy-in "from the top", you're going to hit problems when the implementation starts to get rocky (and what tool/process implementations don't get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress.

    Actions:

      - Get your DBA involved (or whoever looks after live deployments) and discuss what you're planning to do – or, if you're the DBA yourself, get the dev team up to speed with your plans.
      - Get your boss involved too and make sure he/she is bought in to the investment.

    Environments

    Where are you going to deploy to? Really this question is: what environments do you want set up for your deployment pipeline? Assume everyone has "Production", but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I've seen every setup under the sun, and there is often a big difference between "what we want, to do continuous delivery properly" and "what we're currently stuck with". Some of these differences are:

      What we want: Each developer with their own dedicated database environment.
      What we've got: A single shared "development" environment, used by everyone at once.

      What we want: An integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit tests running on that machine.
      What we've got: In fact, if you have a CI process running, you're likely to have some sort of integration server running (even if you don't call it that!). Whether you have a full suite of unit tests running is a different question…

      What we want: A separate QA environment used explicitly for manual testing prior to release.
      What we've got: "We just test on the dev environments, or maybe pre-production."

      What we want: A proper pre-production (or "staging") box that matches production as closely as possible.
      What we've got: Hopefully a pre-production box of some sort. But does it match production closely!?

      What we want: A production environment reproducible from source control.
      What we've got: A production box which has drifted significantly from anything in source control.

    The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you're going to create and where they'll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you're working on a new, greenfield project, or trying to update an existing, brownfield application.
    There's a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control, of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:

      - Dedicated development databases.
      - An integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action.]
      - QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing.
      - Pre-production – the environment you use to test the production release process.
      - Production.

    * A note on the use of the word "automatic": when carrying out automated deployments, this does not mean that the deployment happens without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic, in that it's not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.

    Actions:

      - Get your environments set up and ready.
      - Set access permissions appropriately.
      - Make sure everyone understands what the environments will be used for (it's not a "free-for-all", with all environments to be accessed, played with and changed by development).

    The Deployment Process

    As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers "How do your database changes get live? How does your manual process work?":

      1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring it into pre-prod.
      2. Again using a schema compare tool, find the differences between pre-production and the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script.
      3. A user (generally the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar.
      4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped).
      5. If all is working, run the script on production.*

    * This assumes there's no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something into the production box without going through the proper change management process). Such a difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up at www.sqllighthouse.com if you're interested in testing early versions.

    There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can't automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process kicks in and automatically deploys that change to the live box. Not for the faint-hearted – and really not something we recommend.
    At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don't match), followed by a manual intervention, allowing for script approval by the DBA. Once he/she clicks "Okay, I'm happy for that to go live", the latter stages automatically take the script through to live. And anything in between, of course – and other variations. But we'd strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out "What do we do now?", "What do we actually want?", "What will satisfy our needs for continuous delivery, while still maintaining some sort of continuous control over the process?"

    NB: Most of what we're discussing here is about production deployments. It's important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.

    Actions:

      - Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – "What do we do now?", "What do we actually want?", "What will satisfy our needs for continuous delivery, while still maintaining some sort of continuous control over the process?"
      - Repeat for earlier environments (QA and so on).

    Rollback and Recovery

    If only every deployment went according to plan! Unfortunately they don't – and when things go wrong, you need a rollback or recovery plan for what you're going to do in that situation. Once you move to a more automated database deployment process, you're far more likely to be deploying more frequently than before – no longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we'll explore in subsequent articles, such as:

      - Immediately restore from backup.
      - Have a pre-tested rollback script (remembering that really this is a "roll-forward" script – there's not really such a thing as a rollback script for a database!).
      - Have fallback environments – for example, using a blue-green deployment pattern.

    Different options have pros and cons – some are easier to set up, some require more investment in infrastructure, and of course some work better than others (the key issue with using backups is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.

    Actions:

      - Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment, and your requirements for a completely failsafe process.

    Development Practices

    This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and its linked application.
    So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern "branch by abstraction". Explained nicely here by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slide deck, from Niek Bartholomeus, explain the process (a sketch of this incremental pattern also appears at the end of this article): https://speakerdeck.com/niekbartho/orchestration-in-meatspace

    As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – you afford the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515

    But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there's a difference here between migrating old projects and starting afresh – with the latter it's much easier to instigate best practice from the start.

    Actions:

      - For your business, work out how far down the path you want to go, amending your database development patterns towards "best practice". It's a trade-off between implementing quality processes and the necessity to do so (depending on how often you make complex changes).
      - Socialise these changes with your development group. No-one likes having "best practice" changes imposed on them, so it's good to introduce these ideas and the rationale behind them early.

    Summary

    The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work and for the implementation to go smoothly. We've covered some of the checklist of areas to consider – mainly "getting the team ready for the changes that are coming" and "planning out your pipeline, environments, patterns and practices for development" – though there will be more detail, depending on where you're coming from and where you want to get to.

    This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.
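    As a rough illustration of the incremental, backward-compatible pattern described under Development Practices above – sketched here as moving a column rather than splitting a table, with entirely hypothetical table and column names – each numbered step can ship as its own deployment and be rolled back without losing new data:

        -- Step 1: expand – add the new structure alongside the old (backward compatible)
        ALTER TABLE Customer ADD Email NVARCHAR(256) NULL;

        -- Step 2: backfill – copy existing data across (batch this on large tables)
        UPDATE Customer SET Email = LegacyContactInfo WHERE Email IS NULL;

        -- Step 3: dual-write – deploy an application change that writes both columns;
        -- no schema change is needed for this step

        -- Step 4: contract – once all readers use the new column, drop the old one
        ALTER TABLE Customer DROP COLUMN LegacyContactInfo;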

    Read the article

  • Microsoft Introduces WebMatrix

    - by Rick Strahl
    originally published in CoDe Magazine Editorial

    Microsoft recently released the first CTP of a new development environment called WebMatrix, which, along with some of its supporting technologies, is squarely aimed at making the Microsoft Web Platform more approachable for first-time developers and hobbyists. But in the process, it also provides some updated technologies that can make life easier for existing .NET developers. Let's face it: ASP.NET development isn't exactly trivial unless you already have a fair bit of familiarity with sophisticated development practices. Stick a non-developer in front of Visual Studio .NET or even the Visual Web Developer Express edition and it's not likely that the person in front of the screen will be very productive or feel inspired. Yet other technologies like PHP and even classic ASP did provide the ability for non-developers and hobbyists to become reasonably proficient in creating basic web content quickly and efficiently. WebMatrix appears to be Microsoft's attempt to bring back some of that simplicity with a number of technologies and tools. The key is to provide a friendly and fully self-contained development environment that provides all the tools needed to build an application in one place, as well as tools that allow publishing of content and databases easily to the web server. WebMatrix is made up of several components and technologies:

    IIS Developer Express

    IIS Developer Express is a new, self-contained development web server that is fully compatible with IIS 7.5 and based on the same codebase that IIS 7.5 uses. This new development server replaces the much less compatible Cassini web server that's been used in Visual Studio and the Express editions. IIS Express addresses a few shortcomings of the Cassini server, such as the inability to serve custom ISAPI extensions (i.e., things like PHP or classic ASP), as well as not supporting advanced authentication. IIS Developer Express provides most of the IIS 7.5 feature set, providing much better compatibility between development and live deployment scenarios.

    SQL Server Compact 4.0

    Database access is a key component for most web-driven applications, but on the Microsoft stack this has mostly meant you have to use SQL Server or SQL Server Express. SQL Server Compact is not new – it's been around for a few years, but it's been severely hobbled in the past by terrible tool support and the inability to support more than a single connection, in Microsoft's attempt to avoid losing SQL Server licensing. The new release of SQL Server Compact 4.0 supports multiple connections, and you can run it in ASP.NET web applications simply by installing an assembly into the bin folder of the web application. In effect, you don't have to install a special system configuration to run SQL Compact, as it is a drop-in database engine: copy the small assembly into your bin folder (or reference it from the GAC if installed fully), create a connection string against a local file-based database file, and then start firing SQL requests. Additionally, WebMatrix includes nice tools to edit the database tables and files, along with tools to easily upsize (and hopefully downsize in the future) to full SQL Server. This is a big win, pending compatibility and performance limits. In my simple testing the data engine performed well enough for small data sets. This is not only useful for web applications, but also for desktop applications for which a fully installed SQL engine like SQL Server would be overkill.
    Having a local data store in those applications that can potentially be accessed by multiple users is a welcome feature.

    ASP.NET Razor View Engine

    What? Yet another native ASP.NET view engine? We already have Web Forms and various different flavors of using that view engine with Web Forms and MVC. Do we really need another? Microsoft thinks so, and Razor is an implementation of a lightweight, script-only view engine. Unlike the Web Forms view engine, Razor works only with inline code, snippets, and markup; therefore, it is more in line with current thinking of what a view engine should represent. There's no support for a "page model" or any of the other Web Forms features of the full-page framework – just a lightweight scripting engine that works with plain markup plus embedded expressions and code. The markup syntax for Razor is geared for minimal typing, plus some progressive detection of where a script block/expression starts and ends. This results in a much leaner syntax than the typical ASP.NET Web Forms alligator (<% %>) tags. Razor uses the @ sign plus standard C# (or Visual Basic) block syntax to delineate code snippets and expressions. Here's a very simple example of what Razor markup looks like, along with some comment annotations:

        <!DOCTYPE html>
        <html>
            <head>
                <title></title>
            </head>
            <body>
            <h1>Razor Test</h1>

            <!-- simple expressions -->
            @DateTime.Now
            <hr />

            <!-- method expressions -->
            @DateTime.Now.ToString("T")

            <!-- code blocks -->
            @{
                List<string> names = new List<string>();
                names.Add("Rick");
                names.Add("Markus");
                names.Add("Claudio");
                names.Add("Kevin");
            }

            <!-- structured block statements -->
            <ul>
            @foreach(string name in names){
                <li>@name</li>
            }
            </ul>

            <!-- Conditional code -->
            @if(true) {
                <!-- Literal Text embedding in code -->
                <text>
                true
                </text>
            }
            else
            {
                <!-- Literal Text embedding in code -->
                <text>
                false
                </text>
            }
            </body>
        </html>

    Like the Web Forms view engine, Razor parses pages into code, and then executes that run-time compiled code. Effectively a "page" becomes a code file, with markup becoming literal text written into the Response stream, code snippets becoming raw code, and expressions being written out with Response.Write(). The code generated from Razor doesn't look much different from similar Web Forms code that only uses script tags; so although the syntax may look different, the operational model is fairly similar to the Web Forms engine, minus the overhead of the large Page object model. However, there are differences: Razor pages are based on a new base class, Microsoft.WebPages.WebPage, which is hosted in the Microsoft.WebPages assembly that houses all the Razor engine parsing and processing logic. Browsing through the assembly (in the generated ASP.NET Temporary Files folder or GAC) will give you a good idea of the functionality that Razor provides. If you look closely, a lot of the feature set matches ASP.NET MVC's view implementation as well as many of the helper classes found in MVC. It's not hard to guess the motivation for this sort of view engine: for beginning developers the simple markup syntax is easier to work with, although you obviously still need some understanding of the .NET Framework in order to create dynamic content.
    The syntax is easier to read and grok, and much shorter to type than ASP.NET alligator tags (<% %>), and it's easier to understand aesthetically what's happening in the markup code. Razor is also a better fit for Microsoft's vision of ASP.NET MVC: it's a new view engine without the baggage of Web Forms attached to it. The engine is more lightweight, since it doesn't carry all the features and object model of Web Forms with it, and it can be instantiated directly outside of the HTTP environment, which has been rather tricky to do for the Web Forms view engine. Having a standalone script parser is a huge win for other applications as well – it makes it much easier to create script- or metadata-driven output generators for many types of applications, from code/screen generators, to simple form letters, to data merging applications with user customizability. For me personally this is a very useful side effect, and who knows, maybe Microsoft will actually standardize their scripting engines (die T4, die!) on this engine. Razor also better fits the "view-based" approach, where the view is supposed to be mostly a visual representation that doesn't hold much, if any, code. While you can still use code, the code you do write has to be self-contained. Overall I wouldn't be surprised if Razor becomes the new standard view engine for MVC in the future – and in fact there have been announcements recently that Razor will become the default script engine in ASP.NET MVC 3.0. Razor can also be used in existing Web Forms and MVC applications, although that's not working currently unless you manually configure the script mappings and add the appropriate assemblies. It's possible to do it, but it's probably better to wait until Microsoft releases official support for Razor scripts in Visual Studio. Once that happens, you can simply drop .cshtml and .vbhtml pages into an existing ASP.NET project and they will work side by side with classic ASP.NET pages.

    WebMatrix Development Environment

    To tie all three of these technologies together, Microsoft is shipping WebMatrix with an integrated development environment. An integrated gallery manager makes it easy to download and load existing projects, and then extend them with custom functionality. It seems to be a prominent goal to provide community-oriented content that can act as a starting point, be it via custom templates or a complete standard application. The IDE includes a project manager that works with a single project and provides an integrated IDE/editor for editing the .cshtml and .vbhtml pages. A Run button allows you to quickly run pages in the project manager in a variety of browsers. There's no debugging support for code at this time. Note that Razor pages don't require explicit compilation, so making a change, saving, and then refreshing your page in the browser is all that's needed to see changes while testing an application locally. It's essentially using the auto-compiling Web Project model that was introduced with .NET 2.0; all code is compiled at run time into dynamically created assemblies in the ASP.NET temp folder. WebMatrix also has PHP editing support with syntax highlighting. You can load various PHP-based applications from the WebMatrix Web Gallery directly into the IDE. Most of the Web Gallery applications are ready to install and run without further configuration, with wizards taking you through installation of tools, dependencies, and configuration of the database as needed.
    WebMatrix leverages the Web Platform Installer to pull the pieces down from websites, in a tight integration of tools that worked nicely for the four or five applications I tried this out on. Click a couple of check boxes, fill in a few simple configuration options, and you end up with a running application that's ready to be customized. Nice! You can easily deploy completed applications via WebDeploy (to an IIS server) or FTP directly from within the development environment. The deploy tool can also handle automatically uploading and installing the database and all related assemblies required, making deployment a simple one-click install step.

    Simplified Database Access

    The IDE contains a database editor that can edit SQL Compact and SQL Server databases. There is also a Database helper class that facilitates database access by providing easy-to-use, high-level query execution and iteration methods:

        @{
            var db = Database.OpenFile("FirstApp.sdf");
            string sql = "select * from customers where Id > @0";
        }
        <ul>
        @foreach(var row in db.Query(sql, 1)){
            <li>@row.FirstName @row.LastName</li>
        }
        </ul>

    The Query function takes a SQL statement plus any number of positional (@0, @1, etc.) SQL parameters as simple values. The result is returned as a collection of rows, which in turn are row objects with dynamic properties for each of the columns, giving easy (though untyped) access to each of the fields. Likewise, Execute and ExecuteNonQuery allow execution of more complex queries using similar parameter-passing schemes. Note that these queries use string-based SQL rather than LINQ or Entity Framework's strongly typed LINQ queries. While this may seem like a step back, it's also in line with the expectations of non-.NET script developers, who are quite used to writing and using SQL strings in code rather than OR/M frameworks. The only question is why something like this was not included in .NET from the beginning, leaving developers to build custom implementations of these basic building blocks. The implementation looks a lot like a DataTable-style data access mechanism, but to be fair, this is a common approach in scripting languages. This type of syntax, using simple, static data-object methods to perform simple data tasks with one line of code, is common in scripting languages and a good match for folks working in PHP/Python, etc. It seems Microsoft has taken great advantage of .NET 4.0's dynamic typing to provide this sort of interface for row iteration, where each row has properties for each field. FWIW, all the examples demonstrate using local SQL Compact files – I was unable to get a SQL Server connection string to work with the Database class (the connection string wasn't accepted). However, since the code in the page is still plain old .NET, you can easily use standard ADO.NET code, or even LINQ or Entity Framework models created outside of WebMatrix in separate assemblies, as required.

    The Good, the Bad, the Obnoxious – It's Still .NET

    The beauty (or curse, depending on how you look at it :)) of Razor and the compilation model is that, behind it all, it's still .NET. Although the syntax may look foreign, it's all .NET behind the scenes. You can easily access existing tools, helpers, and utilities simply by adding them to the project as references or to the bin folder. Razor automatically recognizes any assembly reference from assemblies in the bin folder.
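    To illustrate, here's a minimal sketch (the class, file and method names are made up for this example – they're not part of WebMatrix): an ordinary .NET class dropped into the project's App_Code folder, called directly from Razor markup.

        // App_Code/TextHelpers.cs – a plain .NET class (hypothetical example)
        public static class TextHelpers
        {
            // Truncate a string for display, appending an ellipsis
            public static string Truncate(string value, int length)
            {
                if (string.IsNullOrEmpty(value) || value.Length <= length)
                    return value;
                return value.Substring(0, length) + "...";
            }
        }

    And in any .cshtml page in the same project:

        <p>@TextHelpers.Truncate("WebMatrix is still .NET underneath it all", 18)</p>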
    In the default configuration, Microsoft provides a host of helper functions in a Microsoft.WebPages assembly (check it out in the ASP.NET temp folder for your application), which includes a host of HTML helpers. If you've used ASP.NET MVC before, a lot of the helpers should look familiar. Documentation at the moment is sketchy – there's a very rough API reference you can check out here: http://www.asp.net/webmatrix/tutorials/asp-net-web-pages-api-reference

    Who Needs WebMatrix? Uhm… Good Question

    Clearly Microsoft is trying hard to create an environment with WebMatrix that is easy to use for newbie developers. The goal seems to be simplicity: a minimal development environment and an easy-to-use script engine/language that makes it easy to get started. There's also some focus on community features that can be used as starting points, such as Web Gallery applications and templates. The community features in particular are very nice, and something it would be nice to eventually see in Visual Studio as well. The question is whether this is too little, too late. Developers who have been clamoring for a simpler development environment on the .NET stack have mostly left for simpler platforms like PHP or Python, which cater to the down-and-dirty developer. Microsoft will be hard pressed to win those folks – and other hardcore PHP developers – back. Regardless of how much you dress up a script engine fronted by the .NET Framework, it's still the .NET Framework and all the complexity that drives it. While .NET is a fine solution in its breadth and features once you get a basic handle on the core features, the bar of entry to being productive with the .NET Framework is still pretty high. The MVC-style helpers Microsoft provides are a good step in the right direction, but I suspect they're not enough to shield new developers from having to delve much deeper into the Framework to get even basic applications built. Razor and its helpers try to make .NET more accessible, but the reality is that in order to do useful stuff that goes beyond the handful of simple helpers, you are still going to have to write some C# or VB or other .NET code. If the target is a hobbyist/amateur/non-programmer, the learning curve isn't made any easier by WebMatrix; it's just been shifted a tad further along in your development endeavor, to the point when you run out of canned components supplied by Microsoft or the community.

    The database helpers are interesting, and I've actually heard a lot of discussion from various developers who've been resisting .NET for a really long time perking up at the prospect of easier data access in .NET than the ridiculous amount of code it takes to do even simple data access with raw ADO.NET. It seems sad that such a simple concept and implementation should trigger this sort of response (especially since it's practically trivial to create helpers like these or pick them up from countless available libraries), but there it is. It also shows that there are plenty of developers out there who are more interested in "getting stuff done" easily than necessarily following the latest and greatest practices, which are overkill for many development scenarios. Sometimes it seems that all of .NET is focused on the big, life-changing issues of development, rather than the bread-and-butter scenarios that many developers care about to get their work accomplished.
    And that in the end may be WebMatrix's main raison d'être: to bring some focus back at Microsoft on the fact that simpler, more high-level solutions are actually needed to appeal to non-high-end developers, as well as providing the necessary tools for the high-end developers who want to follow the latest and greatest trends. The current version of WebMatrix hits many sweet spots, but it also feels like it has a long way to go before it can really be a tool that either a beginning developer or an accomplished developer feels comfortable with. Although there are some really good ideas in the environment (like the gallery for downloading apps and components, which would be a great addition for Visual Studio as well), the rest of the development environment just feels like crippleware, with required functionality missing – especially debugging and IntelliSense, but also general editor support. It's not clear whether this is because the product is still in an early alpha release or whether it's simply designed to be a really limited development environment. While simple can be good, nobody wants to feel left out when it comes to necessary tool support, and WebMatrix just has that left-out feeling to it. If anything, WebMatrix's technology pieces (which are really independent of the WebMatrix product) are what is interesting to developers in general. The compact IIS implementation is a nice improvement for development scenarios, and SQL Compact 4.0 seems to address a lot of the concerns that people have had, and have complained about for some time, with previous SQL Compact implementations. By far the most interesting and useful technology, though, seems to be the Razor view engine, for its lightweight implementation and its decoupling from the ASP.NET/HTTP pipeline to provide a standalone, pluggable scripting/view engine. The first winner of this is going to be ASP.NET MVC, which can now have a cleaner view model that isn't inconsistent due to the baggage of non-implemented Web Forms features that don't work in MVC. But I expect that Razor will end up in many other applications as a scripting and code generation engine eventually. Visual Studio integration for Razor is currently missing, but is promised for a later release. The ASP.NET MVC team has already mentioned that Razor will eventually become the default MVC view engine, which will guarantee continued growth and development of this tool along those lines. And the Razor engine and support tools actually inherit many of the features that MVC pioneered, so there's some synergy flowing both ways between Razor and MVC. As an existing ASP.NET developer who's already familiar with Visual Studio and ASP.NET development, the WebMatrix IDE doesn't give you anything that you want: the tools provided are minimal and offer nothing that you can't get in Visual Studio today, except the minimal Razor syntax highlighting, so there's little need to take a step back. With Visual Studio integration coming later, there's little reason to look at WebMatrix for tooling. It's good to see that Microsoft is giving some thought to the ease of use of .NET as a platform. For so many years, we've been piling on more and more new features without taking a step back to see how complicated the development/configuration/deployment process has become. Sometimes it's good to take a step – or several steps – back, take another look, and realize just how far we've come.
    WebMatrix is one of those reminders, and one that will likely result in some positive changes on the platform as a whole.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, IIS7.

    Read the article

  • My Package Version Number Appears Greater Yet apt-get Doesn't Select It

    - by nutznboltz
    Backstory: It was determined that, when using lxc container VMs, the Nagios nrpe shutdown script, when run on the host of the containers, would kill the nrpe processes inside the containers. This was remediated by changing the script to use pidfiles instead of searching the process table for the nrpe process. Regrettably, start-stop-daemon is a C program that resulted from translating a Perl script, and it shows. There are far too many global variables in start-stop-daemon.c, and although there are some nice blocks of comments, there are far too few comments that explain the intent behind variable names such as "schedule" (the string "schedule" appears in many contexts). The manual page for start-stop-daemon strongly suggests that unless you use the "--retry" option, the start-stop-daemon program may return before the process it sent a signal to actually calls exit() and terminates; however, it doesn't actually state this in plain English. The obtuseness of start-stop-daemon is most likely the reason the "fixed" version of the script includes a dubious comment indicating that sometimes the pid file has not been removed, and I can easily see how someone could fail to notice that the --retry option was missing. This bug also causes failures when the script is given the "restart" option; the nrpe daemon will shut down but not start up again. Did I mention that since applying the update our nrpe servers started crashing over and over? Repairing this is why I am doing this work. I have been working on remediating the fix; you can see my current work in this PPA.

    Actual question: The upstream version number of nagios-nrpe-server in lucid-updates is

        2.12-4ubuntu1.10.04.1

    My PPA uses this version number:

        2.12-4ubuntu1.10.04.1.1~ppa1~lucid1

    I checked the rules here and used this test program, and I am led to believe that the version number I use in my PPA is greater than the one in lucid-updates, yet when I ran

        sudo add-apt-repository ppa:nutznboltz/nrpe-unbreak-lp-600941
        sudo apt-get update
        sudo aptitude dist-upgrade

    the replacement package was not installed. I was able to install it using

        sudo aptitude install nagios-nrpe-server=2.12-4ubuntu1.10.04.1.1~ppa1~lucid1

    Can anyone explain this behavior? Why didn't my version number appear greater to "aptitude dist-upgrade"? Thanks.

        $ cat /etc/apt/preferences
        Package: *
        Pin: release a=lucid-backports
        Pin-Priority: 400

        Package: *
        Pin: release a=lucid-security
        Pin-Priority: 990

        Package: *
        Pin: release a=lucid-updates
        Pin-Priority: 900

        Package: *
        Pin: release a=lucid-proposed
        Pin-Priority: 400

        $ ls /etc/apt/preferences.d/
        $

    This should not make any difference, as a PPA cannot be in any of those pockets. I went ahead and bumped the version number in the PPA to 2.12-4ubuntu1.10.04.2~ppa1~lucid1; I'll see if that makes a difference. I do notice that lintian complains:

        W: nagios-nrpe-server: debian-revision-not-well-formed 2.12-4ubuntu1.10.04.2~ppa1~lucid1
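    For reference, you can ask dpkg directly how it orders the two version strings (this only tests version comparison; I suspect the pinning above may matter more, since a PPA defaults to priority 500, below the 900 pinned for lucid-updates, and apt picks its candidate from the highest-priority source first):

        # exits 0 if the first version sorts strictly greater under Debian rules
        dpkg --compare-versions \
            "2.12-4ubuntu1.10.04.1.1~ppa1~lucid1" gt "2.12-4ubuntu1.10.04.1" \
            && echo "PPA version sorts higher" \
            || echo "PPA version does not sort higher"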

    Read the article

  • Running Multiple Queries in Oracle SQL Developer

    - by thatjeffsmith
    There are two methods for running queries in SQL Developer:

      - Run Statement: Shift+Enter, F9, or the Run Statement button.
      - Run Script: no grids, just script (SQL*Plus-like) output is fine, thank you very much!

    What's the Difference?

    There are some obvious differences between the two features, the most obvious being the format of the output delivered. But there are some other, more subtle differences here, primarily around fetching.

    What is Fetch?

    After you send your query to Oracle, it has to do three things: parse, execute, fetch. Technically it has to do at least two things, and sometimes only one. But to get the data back to the user, the fetch must occur. Whether you have a 10-row query or a 1,000,000-row query, this can mean one or many fetches in groups of records. OK, before I went on the fetch tangent, I said there were two ways to run statements in SQL Developer:

    Run Statement

    Run Statement brings your query results to a grid with a single fetch. The user sees 50, 100, 500, etc. rows come back, but SQL Developer and the database know that there are more rows waiting to be retrieved. The process on the server that was used to execute the query is still hanging around too. To alleviate this, increase your fetch size to 500. Every query run will come back with the first 500 rows, and rows will continue to be fetched in 500-row increments. You'll then see most of your ad hoc queries complete with a single fetch. Scroll down, or hit Ctrl+End, to force a full fetch and get all your rows back.

    Run Script

    Run Script runs the contents of the worksheet (or whatever is highlighted) as a 'script'. What does that mean exactly? Think of this as being equivalent to running this in SQL*Plus:

        @my_script.sql;

    Each statement is executed, and ALL rows are fetched. So once it's finished executing, there are no open cursors left around. The more obvious difference here is that the output comes back formatted as plain old text. You can run one or more commands, plus SQL*Plus commands like SET and SPOOL.

    The Trick: Run Statement Works With Multiple Statements!

    It says 'run statement', but if you select more than one statement with your mouse and hit the button, it will run each one and throw the results to one grid per statement. If you hover your mouse over the Query Result panel tab, SQL Developer will tell you the query used to populate that grid. This works regardless of what you have this preference set to: DATABASE – WORKSHEET – SHOW QUERY RESULTS IN NEW TABS. Mind the fetch though! Close those cursors by bringing back all the records or closing the grids when you're done with them.
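    Here's a tiny worksheet sketch of the Run Script side (the table is the standard HR demo schema's EMPLOYEES, used purely as an example; SET and SPOOL behave as they would in SQL*Plus):

        -- run with Run Script: plain-text output, all rows fetched
        SET FEEDBACK ON
        SPOOL results.txt

        SELECT COUNT(*) FROM employees;
        SELECT * FROM employees WHERE ROWNUM <= 10;

        SPOOL OFF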

    Read the article

  • Export all SSIS packages from msdb using Powershell

    - by jamiet
    Have you ever wanted to dump all the SSIS packages stored in msdb out to files? Of course you have, who wouldn't? Right? Well, at least one person does, because this was the subject of a thread (save all ssis packages to file) on the SSIS forum earlier today. Some of you may have already figured out a way of doing this, but for those that haven't, here is a nifty little script that will do it for you, and it uses our favourite jack-of-all-tools … Powershell!!

    Imagine I have the following package folder structure on my Integration Services server (i.e. in [msdb]): there are two packages in there called "20110111 Chaining Expression components" & "Package", and I want to export those two packages into a folder structure that mirrors the one in [msdb]. Here is the Powershell script that will do that:

        Param($SQLInstance = "localhost")

        #####Add all the SQL goodies (including Invoke-Sqlcmd)#####
        add-pssnapin sqlserverprovidersnapin100 -ErrorAction SilentlyContinue
        add-pssnapin sqlservercmdletsnapin100 -ErrorAction SilentlyContinue

        cls

        $Packages = Invoke-Sqlcmd -MaxCharLength 10000000 -ServerInstance $SQLInstance -Query "WITH cte AS (
            SELECT cast(foldername as varchar(max)) as folderpath, folderid
            FROM msdb..sysssispackagefolders
            WHERE parentfolderid = '00000000-0000-0000-0000-000000000000'
            UNION ALL
            SELECT cast(c.folderpath + '\' + f.foldername as varchar(max)), f.folderid
            FROM msdb..sysssispackagefolders f
            INNER JOIN cte c ON c.folderid = f.parentfolderid
        )
        SELECT c.folderpath, p.name, CAST(CAST(packagedata AS VARBINARY(MAX)) AS VARCHAR(MAX)) as pkg
        FROM cte c
        INNER JOIN msdb..sysssispackages p ON c.folderid = p.folderid
        WHERE c.folderpath NOT LIKE 'Data Collector%'"

        Foreach ($pkg in $Packages)
        {
            $pkgName = $Pkg.name
            $folderPath = $Pkg.folderpath
            $fullfolderPath = "c:\temp\$folderPath\"
            if(!(test-path -path $fullfolderPath))
            {
                mkdir $fullfolderPath | Out-Null
            }
            $pkg.pkg | Out-File -Force -encoding ascii -FilePath "$fullfolderPath\$pkgName.dtsx"
        }

    To run it, simply change the "localhost" parameter to the server you want to connect to, either by editing the script or by passing it in when the script is executed (see the example call at the end of this post). It will create the folder structure in C:\Temp (which you can also easily change if you so wish – just edit the script accordingly). Here's the folder structure that it created for me: notice how it is a mirror of the folder structure in [msdb]. Hope this is useful!

    @Jamiet

    UPDATE: This post prompted Chad Miller to write a post describing his Powershell add-in that utilises an SSIS API to do exporting of packages. Go take a read here: http://sev17.com/2011/02/importing-and-exporting-ssis-packages-using-powershell/
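    For example, assuming you saved the script as ExportSsisPackages.ps1 (the file name is entirely up to you), running it against a named instance looks like this:

        .\ExportSsisPackages.ps1 -SQLInstance "MYSERVER\PROD"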

    Read the article

  • What is new in Oracle SOA Suite 11g R1 PS6? by Shanny Anoep

    - by JuergenKress
    Oracle has released a new version, 11.1.1.7.0, of their Oracle Fusion Middleware product line. This version includes Patch Set #6 (PS6) for Oracle SOA Suite 11g R1, with a big list of improvements and fixes for each component in that suite. In this post we will highlight some of the interesting updates with regard to troubleshooting, performance, reliability and scalability.

    Infrastructure/Purging scripts

    Database growth is a common problem for large-scale Oracle SOA Suite deployments. Oracle already provides multiple purging strategies for the SOA Suite runtime database. This patch set includes two new scripts for purging most of the runtime data:

      - Table Recreation Script (TRS): This script can be used to reclaim as much database space as possible, while still retaining the open instances. It can be used as a corrective action for databases that grew excessively, for example when purging was not performed at all. This should be used as a single corrective action only; the script does not replace the normal purging scripts.
      - Truncate script: Removes all records from the SOA Suite runtime tables without dropping the tables. This script can be used for cloning SOA Suite environments without copying the instance data, or for recreating test scenarios by cleaning all the runtime data.

    The Oracle SOA Suite Administrator's Guide contains a table with the available purging strategies.

    Diagnostic dumps

    Using WLST you could already dump diagnostic information about various components of the SOA Suite. This version adds support for retrieving more information on BPEL and adapters from the command line.

    Diagnostic dumps for BPEL

    New diagnostic dumps are available for BPEL to get information on thread pools, average processing time for BPEL components, and average waiting times for asynchronous instances. This information can be very useful for performance analysis or troubleshooting. With WLST this information can be retrieved from the command line and included for monitoring or reporting. Read the full article here.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • How to Enable JavaScript file API in IE8 [closed]

    - by saeed
    I have developed a web application in ASP.NET. On one of its pages the user chooses a file in a picture format (jpeg, jpg, bmp, ...) and I want to preview the image on the page without posting the file to the server; I handle it on the client with JavaScript functions via the File API. This works in IE9, but most of my customers use IE8, and the reason it fails is that IE8 does not support the File API. Is there any way to make IE8 upgrade itself, or some patch I can trigger from code-behind? I mean: check whether the browser is IE without File API support, and call a function that upgrades IE8 to IE9 automatically. I don't want to ask the user to do it via a message; I want to do it programmatically! Even installing a special patch required for the File API would do, because customers think this is a bug in my application and their computer knowledge is low. What am I supposed to do? I also tried the AsyncFileUpload AJAX control, but that still posts the file to the server (via AJAX and an HTTP handler), while these scripts do everything in the client browser!!!

    The following script checks whether the browser supports the API:

    <script>
    if (window.File && window.FileReader && window.FileList && window.Blob)
        document.write("<b>File API supported.</b>");
    else
        document.write('<i>File API not supported by this browser.</i>');
    </script>

    The following scripts do the read and load the image:

    function readfile(e1) {
        var filename = e1.target.files[0];
        var fr = new FileReader();
        fr.onload = readerHandler;   // readerHandler: text handler defined elsewhere in my page
        fr.readAsText(filename);
    }

    HTML code:

    <input type="file" id="filebrowsed">   <!-- referenced by the onload handler below -->
    <input type="file" id="getimage">
    <fieldset><legend>Your image here</legend>
        <div id="imgstore"></div>
    </fieldset>

    JavaScript code:

    <script>
    function imageHandler(e2) {
        var store = document.getElementById('imgstore');
        store.innerHTML = '<img src="' + e2.target.result + '">';
    }
    function loadimage(e1) {
        var filename = e1.target.files[0];
        var fr = new FileReader();
        fr.onload = imageHandler;
        fr.readAsDataURL(filename);
    }
    window.onload = function() {
        var x = document.getElementById("filebrowsed");
        x.addEventListener('change', readfile, false);
        var y = document.getElementById("getimage");
        y.addEventListener('change', loadimage, false);
    }
    </script>

    Read the article

  • Using jQuery to Make a Field Read Only in SharePoint

    - by Mark Rackley
    Okay… this will be my shortest blog post EVER. Very little rambling, I promise, and I’m sure this has been blogged more than once, so I apologize for adding to the noise, but like I always say, I blog for myself so I have a global bookmark. So, let’s say you have a field on a SharePoint form and you want to make it read-only. You COULD just open it up in SPD and easily make it read-only, but some people are purists and don’t like to use SPD or modify the default new/edit/disp forms. Put me in the latter camp: I try to avoid modifying these forms, and this seemed like such a simple task that I didn’t want to create a new un-ghosted form. So.. how do you do it? It’s only one line of jQuery. All you need to do is find the id of your input field and capture the keypress on it so that it cannot be modified (you should probably capture clicks for dropdowns/checkboxes/etc., but I didn’t need to). Anyway, here’s the entire script:

    <script src="jquery.min.js" type="text/javascript"></script>
    <script type="text/javascript">
    jQuery(document).ready(function($){
        //capture keypress on our read only field and return false
        $('#idOfInputField').keypress(function() {
            return false;
        });
    })
    </script>

    You can find the ID of your input field by viewing the source; this ID stays consistent as long as you don’t muck with the list or form in the wrong way. Please note, you CANNOT disable the input field as an alternative to capturing the keypress. If you do this and save the form, any data in the disabled fields will be wiped out. There are probably a dozen ways to make a field read-only, and if all you are using jQuery for is to make a field read-only, then you might want to question whether it is worth adding the overhead (although it’s really not that much). Hey.. it’s another tool for your tool belt.
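    For dropdowns and checkboxes, the same idea works if you block the interaction events instead of keypresses. A quick sketch (the IDs are placeholders, and this is a sketch rather than something tested against every SharePoint field type):

    <script type="text/javascript">
    jQuery(document).ready(function($){
        // block mouse and keyboard interaction on choice fields
        $('#idOfDropdown, #idOfCheckbox').bind('mousedown keydown', function() {
            return false;
        });
    })
    </script>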

    Read the article

  • Sharing on Github

    - by Alan
    Over the past couple of weeks I have gotten a lot of help from StackOverflow users on a project, and rather than keep the finished product to myself I want to share it unencumbered by licenses, but I don't want there to be so much legwork during installation that users shy away from trying it. I am about to post it to Github under public-domain licensing. I would like it to be super simple for users to make use of: just FTP it up and go. That being said, do I need to make sure I remove things like the jQuery file and other GPL / MIT licensed dependencies that I didn't write but that my code depends on? I haven't removed any copyright notices from the other code and all of it is open source; it would just be nice if users could download everything at once, while of course not trying to represent that I am the license holder of the dependencies. Inside my files are also some snippets; do those have to be externalized with installation instructions, or can they be posted as is? Here is an example: my nav.php file is 115 lines long and I have these at the top:

    <script type="text/javascript" src="./js/ddaccordion.js">
    /***********************************************
    * Accordion Content script- (c) Dynamic Drive DHTML code library (www.dynamicdrive.com)
    * Visit http://www.dynamicDrive.com for hundreds of DHTML scripts
    * This notice must stay intact for legal use
    ***********************************************/
    </script>
    <link href="css/admin.css" rel="stylesheet">
    <script type="text/javascript">
    ddaccordion.init({
        headerclass: "submenuheader", //Shared CSS class name of headers group
        contentclass: "submenu", //Shared CSS class name of contents group
        revealtype: "click", //Reveal content when user clicks or onmouseover the header? Valid value: "click", "clickgo", or "mouseover"
        mouseoverdelay: 200, //if revealtype="mouseover", set delay in milliseconds before header expands onMouseover
        collapseprev: false, //Collapse previous content (so only one open at any time)? true/false
        defaultexpanded: [], //index of content(s) open by default [index1, index2, etc] [] denotes no content
        onemustopen: false, //Specify whether at least one header should be open always (so never all headers closed)
        animatedefault: false, //Should contents open by default be animated into view?
        persiststate: true, //persist state of opened contents within browser session?
        toggleclass: ["", ""], //Two CSS classes to be applied to the header when it's collapsed and expanded, respectively ["class1", "class2"]
        togglehtml: ["suffix", "<img src='./images/plus.gif' class='statusicon' />", "<img src='./images/minus.gif' class='statusicon' />"], //Additional HTML added to the header when it's collapsed and expanded, respectively ["position", "html1", "html2"] (see docs)
        animatespeed: "fast", //speed of animation: integer in milliseconds (ie: 200), or keywords "fast", "normal", or "slow"
        oninit:function(headers, expandedindices){ //custom code to run when headers have initalized
            //do nothing
        },
        onopenclose:function(header, index, state, isuseractivated){ //custom code to run whenever a header is opened or closed
            //do nothing
        }
    })
    </script>

    Read the article

  • Download files from a SharePoint site using the RSSBus SSIS Components

    - by dataintegration
    In this article we will show how to use a stored procedure included in the RSSBus SSIS Components for SharePoint to download files from SharePoint. While the article uses the RSSBus SSIS Components for SharePoint, the same process will work for any of our SSIS Components.

    Step 1: Open Visual Studio and create a new Integration Services project.
    Step 2: Add a new Data Flow Task to the Control Flow screen and open the Data Flow Task.
    Step 3: Add an RSSBus SharePoint Source to the Data Flow Task.
    Step 4: In the RSSBus SharePoint Source, add a new Connection Manager, and add your credentials for the SharePoint site.
    Step 5: From the Table or View dropdown, choose the name of the Document Library that you are going to back up and close the wizard.
    Step 6: Add a Script Component to the Data Flow Task and drag an output arrow from the 'RSSBus SharePoint Source' to it.
    Step 7: Open the Script Component, go to edit the Input Columns, and choose all the columns.
    Step 8: This will open a new Visual Studio instance with a project in it. In this project add a reference to the RSSBus.SSIS2008.SharePoint assembly, available in the RSSBus SSIS Components for SharePoint installation directory.
    Step 9: In the 'ScriptMain' class, add the System.Data.RSSBus.SharePoint namespace and go to the 'Input0_ProcessInputRow' method (this method's name may vary depending on the input name in the Script Component).
    Step 10: In the 'Input0_ProcessInputRow' method, you can add code to use the DownloadDocument stored procedure. Below we show the sample code:

    // Connection string and download folder (replace the placeholders with your details)
    String connString = "Offline=False;Password=PASSWORD;User=USER;URL=SHAREPOINT-SITE";
    String downloadDir = "C:\\Documents\\";

    SharePointConnection conn = new SharePointConnection(connString);
    SharePointCommand comm = new SharePointCommand("DownloadDocument", conn);
    comm.CommandType = CommandType.StoredProcedure;
    comm.Parameters.Clear();

    // Local path the document will be written to
    String file = downloadDir + Row.LinkFilenameNoMenu.ToString();
    comm.Parameters.Add(new SharePointParameter("@File", file));

    // The library name is taken from the document's server-relative URL
    String list = Row.ServerUrl.ToString().Split('/')[1].ToString();
    comm.Parameters.Add(new SharePointParameter("@Library", list));

    // Name of the file inside the library
    String remoteFile = Row.LinkFilenameNoMenu.ToString();
    comm.Parameters.Add(new SharePointParameter("@RemoteFile", remoteFile));

    comm.ExecuteNonQuery();

    After saving your changes to the Script Component, you can execute the project and find the downloaded files in the download directory.

    SSIS Sample Project: To help you get started using the SharePoint Data Provider within SQL Server SSIS, download the fully functional sample package. You will also need the SharePoint SSIS Connector to make the connection. You can download a free trial here. Note: before running the demo, you will need to change your connection details in both the 'Script Component' code and the 'Connection Manager'.

    Read the article

  • Adding JavaScript to your code dependent upon conditions

    - by DavidMadden
    You might be in an environment where your code is source-controlled and where you have build options for different environments. I recently encountered this where the same code, built with different configurations, would host the website at a different URL. If you are working with ASP.NET, as I am, you will have to do something a bit crazy but worthwhile. If someone has a more efficient solution, please share.

    Here is what I came up with to make sure the client-side script is placed into the HEAD element for the Google Analytics script. GA wants to be last in the HEAD element, so if you are adding others in Page_Load then you should add theirs last. The settings object below is an instance of a class that holds information I collect. You could read from different sources depending on where you store your unique Google Analytics ID.

    *** This has been formatted to fit this screen. ***

    if (!IsPostBack)
    {
        // Only emit the snippet when we actually have an ID (the original listing
        // used ||, which is always true; && is what was intended)
        if (settings.GoogleAnalyticsID != null && settings.GoogleAnalyticsID != string.Empty)
        {
            string str = @"//<![CDATA[
    var _gaq = _gaq || [];
    _gaq.push(['_setAccount', '" + settings.GoogleAnalyticsID + @"']);
    _gaq.push(['_trackPageview']);
    (function () {
        var ga = document.createElement('script');
        ga.type = 'text/javascript';
        ga.async = true;
        ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www')
                 + '.google-analytics.com/ga.js';
        var s = document.getElementsByTagName('script')[0];
        s.parentNode.insertBefore(ga, s);
    })();
    //]]>";

            System.Web.UI.HtmlControls.HtmlGenericControl si =
                new System.Web.UI.HtmlControls.HtmlGenericControl();
            si.TagName = "script";
            si.Attributes.Add("type", @"text/javascript");
            si.InnerHtml = str;   // the original listing had sb.ToString(), a leftover
                                  // from a StringBuilder version; str is what is built above
            this.Page.Header.Controls.Add(si);
        }
    }

    The code above prevents the snippet from being emitted on a PostBack, or when the ID could not be read or the settings were lost by accident. If you have a larger function to declare, you can use a StringBuilder to separate the lines. This is the most compact I wished to go while keeping it readable.

    Read the article

  • Do I need afxres.h, if I am not using MFC? How do I remove it from the .RC script?

    - by Cheeso
    I don't know RC scripts. I want to include product version, file version, and other metadata in a DLL I'm building, and I'm using an .rc file to do that. The build is makefile-driven. I'm following along with an example .rc script I found. The template .rc file includes afxres.h, but I don't think I need that; if I just remove it I get a bunch of compile errors. What does a basic, non-MFC RC script look like? Can I remove all the stuff like this:

    /////////////////////////////////////////////////////////////////////////////
    // English (U.S.) resources
    #if !defined(AFX_RESOURCE_DLL) || defined(AFX_TARG_ENU)
    #ifdef _WIN32
    LANGUAGE LANG_ENGLISH, SUBLANG_ENGLISH_US
    #pragma code_page(1252)
    #endif //_WIN32
    ....
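    For reference, a minimal non-MFC version resource might look like the sketch below: replace afxres.h with the SDK's winres.h (windows.h or verrsrc.h also supply the needed macros) and keep just the VERSIONINFO block. All names and version numbers here are placeholders.

    #include <winres.h>            // SDK replacement for MFC's afxres.h

    VS_VERSION_INFO VERSIONINFO
     FILEVERSION    1,0,0,0
     PRODUCTVERSION 1,0,0,0
     FILEOS         VOS__WINDOWS32
     FILETYPE       VFT_DLL
    BEGIN
        BLOCK "StringFileInfo"
        BEGIN
            BLOCK "040904b0"       // US English, Unicode codepage
            BEGIN
                VALUE "CompanyName",     "Example Co"
                VALUE "FileDescription", "Example DLL"
                VALUE "FileVersion",     "1.0.0.0"
                VALUE "ProductName",     "Example Product"
                VALUE "ProductVersion",  "1.0.0.0"
            END
        END
        BLOCK "VarFileInfo"
        BEGIN
            VALUE "Translation", 0x409, 1200
        END
    END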

    Read the article

  • Write a GreaseMonkey script that reacts to domain strings (for I18N, e.g. cn,en,fr,etc.)

    - by Shizhidi
    Hello. Suppose there is a website that supports multiple languages: cn.mydomain.com or mydomain.com/cn or mydomain.cn; en.mydomain.com or mydomain.com/en or mydomain.com; fr.mydomain.com or mydomain.com/fr or mydomain.fr. I want to write a GreaseMonkey script whose variables are assigned different strings/values according to the address the user is loading the page from. How do you do that? Thanks. EDIT: I realize I can just use JavaScript to get the address. Does GreaseMonkey itself support this kind of function?
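    For what it's worth, userscripts run in the page's context, so plain window.location works; below is a sketch of picking strings from the three URL shapes in the question (the string tables and variable names are illustrative):

    // The ==UserScript== header's @include lines would list all the domain variants.
    var lang = 'en';                              // default: plain mydomain.com
    var host = window.location.hostname;          // e.g. "cn.mydomain.com" or "mydomain.cn"
    var path = window.location.pathname;          // e.g. "/cn/page"
    var m = host.match(/^(cn|en|fr)\./) ||        // subdomain form
            path.match(/^\/(cn|en|fr)(\/|$)/) ||  // path form
            host.match(/\.(cn|fr)$/);             // ccTLD form
    if (m) lang = m[1];
    var strings = {
        en: { greeting: 'Hello' },
        cn: { greeting: '你好' },
        fr: { greeting: 'Bonjour' }
    }[lang];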

    Read the article

  • How to "select file" with Python script? . Google App Engine . Python .

    - by draconisthe0ry
    I'm trying to create an online application for a Python function I have created. In my script I pass in the path of a file on my computer (input_path = '/users/user/desktop/input.txt'), but I'm not sure how to go about this using Google App Engine. I have the choice between three templates: Flask, Django, and Bottle. I really do believe this question is relevant for people transitioning from scripts to web-based applications. Do I need to incorporate GUI stuff from Tkinter or something? There has to be a way to simply select a file to use as the input path, interactively, from a Python script.
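    A sketch of the usual web answer, assuming the Flask template mentioned in the question: no Tkinter is needed, because the browser's file picker plus an upload handler replaces the hard-coded path. The processing line is a stand-in for the real function:

    from flask import Flask, request

    app = Flask(__name__)

    FORM = ('<form method="post" enctype="multipart/form-data">'
            '<input type="file" name="datafile"> <input type="submit" value="Run"></form>')

    @app.route('/', methods=['GET', 'POST'])
    def run():
        if request.method == 'POST':
            # The uploaded bytes replace open(input_path).read()
            text = request.files['datafile'].read().decode('utf-8')
            result = text.upper()   # stand-in for the existing function
            return '<pre>%s</pre>' % result
        return FORM

    if __name__ == '__main__':
        app.run()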

    Read the article

  • Asynchronous JavaScript rendering widgets

    - by Joe J
    Hey all, I'm creating a JavaScript widget so third parties (web designers) can post a link on their website and it will render the widget on their site. Currently, I'm doing this with just a script link tag:

    <div class="some_random_div_in_html_body">
        <script type='text/javascript' src='http://remotehost.com/link/to/widget.js'></script>
    </div>

    However, this has the side effect of slowing down a third party's page render times if my site is under load. Therefore, I'd like the third-party website to request the widget link from my site asynchronously and then render it on their site when the widget link finishes loading. The Google Analytics JavaScript snippet has a nice bit of asynchronous request code to model this on, but I'm wondering if I can modify it so that it will render content on the third party's site. Using the example below, I want the content of http://mysite.com/link/to/widget.js to render something in the "message" id field.

    <HTML>
    <HEAD><TITLE>Third Party Site</TITLE><STYLE>#message { background-color: #eee; } </STYLE></HEAD>
    <BODY>
    <div id="message">asdf</div>
    <script type="text/javascript">
    (function() {
        var ga = document.createElement('script');
        ga.type = 'text/javascript';
        ga.async = true;
        ga.src = 'http://mysite.com/link/to/widget.js';
        var s = document.getElementsByTagName('script')[0];
        s.parentNode.insertBefore(ga, s);
    })();
    </script>
    </BODY>
    </HTML>

    I don't know if what I'm trying to do constitutes cross-site scripting (still a bit vague on that concept), but I am wondering if it is possible. If anyone has any other approaches to creating JavaScript widgets effectively, I'd appreciate any advice. Thanks for reading this. Joe
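    For what it's worth, a sketch of what widget.js itself could contain under this approach (the #message id comes from the question; the markup it writes is illustrative). A script injected this way executes in the host page's DOM, so it may write into any element both sides agree on; that is ordinary third-party embedding rather than cross-site scripting in the attack sense:

    // Contents of http://mysite.com/link/to/widget.js (sketch)
    (function () {
        var target = document.getElementById('message');   // id agreed with the embedding page
        if (!target) return;                               // host page did not provide a slot
        target.innerHTML = '<div class="my-widget">widget content rendered here</div>';
    })();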

    Read the article

  • Static Javascript files not loaded in Express app

    - by Dave Long
    I have an Express app with a bunch of static JavaScript files that aren't being loaded, even though they are registered in my app.js file. Even public scripts (like jQuery: http://ajax.googleapis.com/ajax/libs/jquery/1.6.3/jquery.min.js) aren't running. I can see the script tags in the generated HTML, but none of the functionality runs, and I can't see the files loading in the web inspector. Here is the code that I have:

    app.js

    var express = require('express')
    var app = module.exports = express.createServer();

    // Configuration
    var port = process.env.PORT || 3000;
    app.configure(function(){
        app.set('views', __dirname + '/views');
        app.set('view engine', 'jade');
        app.use(express.bodyParser());
        app.use(express.methodOverride());
        app.use(app.router);
        app.use(express.static(__dirname + '/public'));
    });
    app.configure('development', function(){
        app.use(express.errorHandler({ dumpExceptions: true, showStack: true }));
    });
    app.configure('production', function(){
        app.use(express.errorHandler());
    });

    // Routes
    app.get('/manage/new', function(req, res){
        res.render('manage/new', { title: 'Create a new widget' });
    })

    app.listen(port);
    console.log("Express server listening on port %d in %s mode", app.address().port, app.settings.env);

    /views/manage/layout.jade

    !!! 5
    html(lang="en")
      head
        title= title
        link(rel="stylesheet", href="/stylesheets/demo.css")
        link(rel="stylesheet", href="/stylesheets/jquery.qtip.css")
        script(type="text/javascript", href="http://ajax.googleapis.com/ajax/libs/jquery/1.6.3/jquery.min.js")
      body!= body
      script(type="text/javascript", href="/javascripts/jquery.formalize.js")
      script(type="text/javascript", href="/javascripts/jquery.form.js")
      script(type="text/javascript", href="/javascripts/jquery.qtip.js")
      script(type="text/javascript", href="/javascripts/formToWizard.js")
      script(type="text/javascript", href="/javascripts/widget.js")

    /views/manage/new.jade

    h1= title
    div(style="float:left;")
      form(action="/manage/generate", method="post", enctype="multipart/form-data", name="create-widget")
        .errors
        fieldset
          legend Band / Album Information
        fieldset
          legend Social Networks
        fieldset
          legend Download

    All of my JavaScript files are stored in /public/javascripts and all of my static CSS files are being served up just fine. I'm not sure what I've done wrong.
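    Two details in that listing are worth checking (hedged guesses from reading, not from running the app): script tags take src, not href (only link tags take href), so the browser never requests those files; and it is conventional to register the static middleware before app.router. For comparison:

    // layout.jade: external scripts need src=, e.g.
    //   script(type="text/javascript", src="/javascripts/jquery.form.js")

    // app.js: serve /public before the router gets a chance to answer
    app.use(express.static(__dirname + '/public'));
    app.use(app.router);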

    Read the article

  • jQuery TypeError: example("input#autocomplete").autocomplete is not a function

    - by Ankush Kalia
    I have tried a lot to remove this error but could not get it to work. When I run this script on localhost it works fine, but it does not work within the Joomla framework. The code is below:

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
    <?php $viewFields = array('c++', 'java', 'php', 'coldfusion', 'javascript', 'asp', 'ruby'); ?>
    <link href="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8/themes/base/jquery-ui.css" rel="stylesheet" type="text/css"/>
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.5/jquery.min.js"></script>
    <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8/jquery-ui.min.js"></script>
    <script>
    var example = jQuery.noConflict();
    var arrayFromPHP = <?php echo json_encode($viewFields) ?>;
    example(document).ready(function() {
        example("input#autocomplete").autocomplete({ source: arrayFromPHP });
    });
    </script>
    </head>
    <body>
    <center>
    <p><img src="<?php echo JURI::base(); ?>images/search_1.png" border="0" alt="" />
       <img src="<?php echo JURI::base(); ?>images/business_2.png" border="0" alt="" />
       <img src="<?php echo JURI::base(); ?>images/review_3.png" border="0" alt="" /></p>
    </center>
    <input id="autocomplete" />
    </body>
    </html>

    It gives me this error:

    [08:30:24.870] Use of getAttributeNode() is deprecated. Use getAttribute() instead. @ http://50.116.97.120/~amarhost/storage/media/system/js/mootools-core.js:343
    [08:30:27.853] TypeError: example("input#autocomplete").autocomplete is not a function @ http://50.116.97.120/~amarhost/storage/index.php/component/storage/?action=war&Itemid=105:210
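    A quick diagnostic that often narrows this down (paste it into the browser console on the failing page): Joomla templates and plugins frequently load a second copy of jQuery after jQuery UI, which leaves the UI plugins attached to the first copy and missing from the instance your code sees.

    console.log(jQuery.fn.jquery);               // version of the jQuery instance your code sees
    console.log(typeof jQuery.fn.autocomplete);  // "function" only if jQuery UI attached to this instance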

    Read the article

  • Drawing text to <canvas> with @font-face does not work the first time

    - by lemonedo
    Hi all, first try the test case please: http://lemon-factory.net/test/font-face-and-canvas.html I'm not good at English, so I made the test case self-explanatory. On the first click of the DRAW button it will not draw the text, or will draw it with an incorrect typeface instead of the specified "PressStart", depending on your browser. After that it works as expected. The first time, the text does not appear correctly in any of the browsers I've tested (Firefox, Google Chrome, Safari, Opera). Is this standard behavior or something? Thank you. PS: following is the code of the test case

    <!DOCTYPE html>
    <html>
    <head>
    <meta http-equiv=Content-Type content="text/html;charset=utf-8">
    <title>TIME TEST</title>
    <style>
    @font-face { font-family: 'PressStart'; src: url('http://lemon-factory.net/css/fonts/prstart.ttf'); }
    canvas, pre { border: 1px solid #666; }
    pre { float: left; margin: .5em; padding: .5em; }
    </style>
    </head>
    <body>
    <div>
      <canvas id=canvas width=250 height=250>
        Your browser does not support the CANVAS element. Try the latest Firefox, Google Chrome, Safari or Opera.
      </canvas>
      <button>DRAW</button>
    </div>
    <pre id=style></pre>
    <pre id=script></pre>
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
    <script>
    var canvas = document.getElementById('canvas')
    var ctx = canvas.getContext('2d')
    var x = 30
    var y = 10
    function draw() {
      ctx.font = '12px PressStart'
      ctx.fillStyle = '#000'
      ctx.fillText('Hello, world!', x, y += 20)
      ctx.fillRect(x - 20, y - 10, 10, 10)
    }
    $('button').click(draw)
    $('pre#style').text($('style').text())
    $('pre#script').text($('script').text())
    </script>
    </body>
    </html>
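    The usual explanation is that the font file is only fetched the first time something needs it, so the first fillText races the download. A classic workaround sketch (the probe element and the timing are illustrative): force the download with a hidden element styled with the face, then draw.

    // Force the @font-face download before drawing on the canvas.
    var probe = document.createElement('span');
    probe.style.cssText = 'font-family:PressStart;position:absolute;visibility:hidden';
    probe.appendChild(document.createTextNode('x'));
    document.body.appendChild(probe);
    // Crude but broadly compatible: give the font a moment to arrive, then draw.
    setTimeout(draw, 500);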

    Read the article

  • Django - I get a TemplateSyntaxError when I try to open the admin page (http://DOMAIN_NAME/admin)

    - by user140827
    I use the Grappelli plugin. When I try to open the admin page (/admin) I get a TemplateSyntaxError: it says 'get_generic_relation_list' is an invalid block tag.

    TemplateSyntaxError at /admin/
    Invalid block tag: 'get_generic_relation_list', expected 'endblock'
    Request Method: GET
    Request URL: http://DOMAIN_NAME/admin/
    Django Version: 1.4
    Exception Type: TemplateSyntaxError
    Exception Value: Invalid block tag: 'get_generic_relation_list', expected 'endblock'
    Exception Location: /opt/python27/django/1.4/lib/python2.7/site-packages/django/template/base.py in invalid_block_tag, line 320
    Python Executable: /opt/python27/django/1.4/bin/python
    Python Version: 2.7.0
    Python Path: ['/home/vhosts/DOMAIN_NAME/httpdocs/media', '/home/vhosts/DOMAIN_NAME/private/new_malinnikov/lib', '/home/vhosts/DOMAIN_NAME/httpdocs/', '/home/vhosts/DOMAIN_NAME/private/new_malinnikov', '/home/vhosts/DOMAIN_NAME/private/new_malinnikov', '/home/vhosts/DOMAIN_NAME/private', '/opt/python27/django/1.4', '/home/vhosts/DOMAIN_NAME/httpdocs', '/opt/python27/django/1.4/lib/python2.7/site-packages/setuptools-0.6c12dev_r88846-py2.7.egg', '/opt/python27/django/1.4/lib/python2.7/site-packages/pip-0.8.1-py2.7.egg', '/opt/python27/django/1.4/lib/python27.zip', '/opt/python27/django/1.4/lib/python2.7', '/opt/python27/django/1.4/lib/python2.7/plat-linux2', '/opt/python27/django/1.4/lib/python2.7/lib-tk', '/opt/python27/django/1.4/lib/python2.7/lib-old', '/opt/python27/django/1.4/lib/python2.7/lib-dynload', '/opt/python27/lib/python2.7', '/opt/python27/lib/python2.7/plat-linux2', '/opt/python27/lib/python2.7/lib-tk', '/opt/python27/django/1.4/lib/python2.7/site-packages', '/opt/python27/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg', '/opt/python27/lib/python2.7/site-packages/flup-1.0.3.dev_20100525-py2.7.egg', '/opt/python27/lib/python2.7/site-packages/virtualenv-1.5.1-py2.7.egg', '/opt/python27/lib/python2.7/site-packages/SQLAlchemy-0.6.4-py2.7.egg', '/opt/python27/lib/python2.7/site-packages/SQLObject-0.14.1-py2.7.egg', '/opt/python27/lib/python2.7/site-packages/FormEncode-1.2.3dev-py2.7.egg', '/opt/python27/lib/python2.7/site-packages/MySQL_python-1.2.3-py2.7-linux-x86_64.egg', '/opt/python27/lib/python2.7/site-packages/psycopg2-2.2.2-py2.7-linux-x86_64.egg', '/opt/python27/lib/python2.7/site-packages/pysqlite-2.6.0-py2.7-linux-x86_64.egg', '/opt/python27/lib/python2.7/site-packages', '/opt/python27/lib/python2.7/site-packages/PIL']
    Server time: ???, 7 ??? 2012 04:19:42 +0700

    Error during template rendering: in template /home/vhosts/DOMAIN_NAME/httpdocs/templates/grappelli/admin/base.html, error at line 28: Invalid block tag: 'get_generic_relation_list', expected 'endblock'

    18 <!--[if lt IE 8]>
    19 <script src="http://ie7-js.googlecode.com/svn/version/2.0(beta3)/IE8.js" type="text/javascript"></script>
    20 <![endif]-->
    21 {% block javascripts %}
    22 <script type="text/javascript" src="{% admin_media_prefix %}jquery/jquery-1.3.2.min.js"></script>
    23 <script type="text/javascript" src="{% admin_media_prefix %}js/admin/Bookmarks.js"></script>
    24 <script type="text/javascript">
    25 // Admin URL
    26 var ADMIN_URL = "{% get_admin_url %}";
    27 // Generic Relations
    28 {% get_generic_relation_list %}
    29 // Get Bookmarks
    30 $(document).ready(function(){
    31     $.ajax({
    32         type: "GET",
    33         url: '{% url grp_bookmark_get %}',
    34         data: "path=" + escape(window.location.pathname + window.location.search),
    35         dataType: "html",
    36         success: function(data){
    37             $('ul#bookmarks').replaceWith(data);
    38         }
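    One thing to check (a guess from the path in the traceback, since the failing template is an overridden copy at templates/grappelli/admin/base.html): custom tags such as get_generic_relation_list only become available after a {% load %} of the library that defines them, and that line is easy to lose when copying a template from a different grappelli version. Something along these lines near the top of the template, with the library name taken from the templatetags directory of the grappelli release actually installed:

    {% load grp_tags %}  {# library name varies by grappelli version; also confirm your grappelli release supports Django 1.4 #}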

    Read the article

  • Why does this JSON.parse code not work?

    - by SuZi
    I am trying to pass JSON-encoded values from a PHP script to a JavaScript file, GnuBookTest.js, which initializes a BookReader object and uses the values I pass in via the variable I named "result". The PHP script sends the values like this:

    <div id="bookreader">
      <div id="BookReader" style="left:10px; right:10px; top:30px; bottom:30px;">x</div>
      <script type="text/javascript">var result = {"istack":"zi94sm65\/BUCY\/BUCY200707170530PM","leafCount":"14","wArr":"[893,893,893,893,893,893,893,893,893,893,893,893,893,893]","hArr":"[1155,1155,1155,1155,1155,1155,1155,1155,1155,1155,1155,1155,1155,1155]","leafArr":"[0,1,2,3,4,5,6,7,8,9,10,11,12,13]","sd":"[\"RIGHT\",\"LEFT\",\"RIGHT\",\"LEFT\",\"RIGHT\",\"LEFT\",\"RIGHT\",\"LEFT\",\"RIGHT\",\"LEFT\",\"RIGHT\",\"LEFT\",\"RIGHT\",\"LEFT\"]"}</script>
      <script type="text/javascript" src="http://localhost:8080/application/js/GnuBookTest.js"></script>
    </div>
    </div>

    and in the GnuBookTest.js file I am trying to use the values like this:

    br = new BookReader();

    // Return the width of a given page.
    br.getPageWidth = function(index) {
        return this.pageW[index];
    }
    // Return the height of a given page.
    br.getPageHeight = function(index) {
        return this.pageH[index];
    }

    br.pageW = JSON.parse(result.wArr);
    br.pageH = JSON.parse(result.hArr);
    br.leafMap = JSON.parse(result.leafArr);
    // istack is a URL fragment for the location of the image files
    var istack = result.istack;
    .
    .
    .

    Using JSON.parse as written above loads the BookReader and uses my values correctly in a few web browsers: Firefox, IE8, and desktop Safari. But it does not work at all in Chrome on the Mac, mobile Safari, or older versions of IE. Mobile Safari keeps giving me a reference error: can't find variable: JSON. The other browsers just do not load the BookReader and show the "x" instead, as if they did not get the values from the PHP script. Where is the problem?
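    The mobile Safari message is the telling one: "can't find variable: JSON" means that engine has no native JSON object at all. The standard shim from that era is Crockford's json2.js, which defines JSON.parse/JSON.stringify only when they are missing; including it before GnuBookTest.js would look like this (the path is illustrative):

    <script type="text/javascript" src="http://localhost:8080/application/js/json2.js"></script>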

    Read the article

  • content show problem

    - by nonab
    I'm still fighting with some jQuery scripts :) Jens Fahnenbruck helped me with my first problem here: http://stackoverflow.com/questions/3021476/problem-with-hide-show-in-jquery thanks :) Now I have added another fancy thing: jQuery tabs. I made a few modifications and it works like this: when you click a tab, it loads a different main image for every tab. The problem is that I used $(document).ready(function() to handle those image changes. When I click any of the 2x2 box images (on any tab) it permanently changes the image on the right, and when I click the tabs it no longer works like it did at the beginning. Online example: http://rarelips.ayz.pl/testy/2/ Code:

    <style type="text/css">
    body { font: Arial, Helvetica, sans-serif normal 10px; margin: 0; padding: 0; }
    * { margin: 0; padding: 0; }
    img { border: none; }
    .container { height: 500px; width: 1000px; margin: -180px 0 0 -450px; top: 50%; left: 50%; position: absolute; }
    ul.thumb { float: left; list-style: none; margin: 0; padding: 10px; width: 360px; }
    ul.thumb li { margin: 0; padding: 5px; float: left; position: relative; width: 165px; height: 165px; }
    ul.thumb li img { width: 150px; height: 150px; border: 1px solid #ddd; padding: 10px; background: #f0f0f0; position: absolute; left: 0; top: 0; -ms-interpolation-mode: bicubic; }
    ul.thumb li img.hover { background: url(thumb_bg.png) no-repeat center center; border: none; }
    #main_view { float: left; padding: 9px 0; margin-left: -10px; }
    #main_view2 { float: left; padding: 9px 0; margin-left: -10px; }
    #main_view3 { float: left; padding: 9px 0; margin-left: -10px; }
    #main_view4 { float: left; padding: 9px 0; margin-left: -10px; }
    #wiecej { float: right; padding: 9px 0; margin-right: 20px; }
    .demo-show { width: 350px; margin: 1em .5em; }
    .demo-show h3 { margin: 0; padding: .25em; background: #bfcd93; border-top: 1px solid #386785; border-bottom: 1px solid #386785; }
    .demo-show div { padding: .5em .25em; }
    /* styl do tabek */
    ul.tabs { margin: 0; padding: 0; float: left; list-style: none; height: 32px; /*--Set height of tabs--*/ border-bottom: 1px solid #999; border-left: 1px solid #999; width: 100%; }
    ul.tabs li { float: left; margin: 0; padding: 0; height: 31px; /*--Subtract 1px from the height of the unordered list--*/ line-height: 31px; /*--Vertically aligns the text within the tab--*/ border: 1px solid #999; border-left: none; margin-bottom: -1px; /*--Pull the list item down 1px--*/ overflow: hidden; position: relative; background: #e0e0e0; }
    ul.tabs li a { text-decoration: none; color: #000; display: block; font-size: 1.2em; padding: 0 20px; border: 1px solid #fff; /*--Gives the bevel look with a 1px white border inside the list item--*/ outline: none; }
    ul.tabs li a:hover { background: #ccc; }
    html ul.tabs li.active, html ul.tabs li.active a:hover { /*--Makes sure that the active tab does not listen to the hover properties--*/ background: #fff; border-bottom: 1px solid #fff; /*--Makes the active tab look like it's connected with its content--*/ }
    .tab_container { border: 1px solid #999; border-top: none; overflow: hidden; clear: both; float: left; width: 100%; background: #fff; }
    .tab_content { padding: 20px; font-size: 1.2em; }
    </style>

    <script type="text/javascript" src="index_pliki/jquery-latest.js"></script>
    <script type="text/javascript">
    $(document).ready(function(){
        //Larger thumbnail preview
        $("ul.thumb li").hover(function() {
            $(this).css({'z-index' : '10'});
            $(this).find('img').addClass("hover").stop()
                .animate({ marginTop: '-110px', marginLeft: '-110px', top: '50%', left: '50%', width: '200px', height: '200px', padding: '5px' }, 200);
        }, function() {
            $(this).css({'z-index' : '0'});
            $(this).find('img').removeClass("hover").stop()
                .animate({ marginTop: '0', marginLeft: '0', top: '0', left: '0', width: '150px', height: '150px', padding: '10px' }, 400);
        });
        //Swap Image on Click
        $("ul.thumb li a").click(function() {
            var mainImage = $(this).attr("href"); //Find Image Name
            $("#main_view img").attr({ src: mainImage });
            $("#main_view2 img").attr({ src: mainImage });
            $("#main_view3 img").attr({ src: mainImage });
            $("#main_view4 img").attr({ src: mainImage });
            return false;
        });
    });
    </script>

    <script type="text/javascript">
    $(document).ready(function() {
        $("#main_view img").attr({ src: './index_pliki/max1.jpg' });
        $("#slickbox div[data-id=" + '01' + "].slickbox").show('slow');
        $('a.slick-toggle').click(function() {
            var dataID = $(this).attr("data-id");
            $('#slickbox div.slickbox').hide();
            $("#slickbox div[data-id=" + dataID + "].slickbox").show('slow');
            return false;
        });
    });
    </script>

    <script type="text/javascript">
    $(document).ready(function() {
        $("#main_view2 img").attr({ src: './index_pliki/max2.jpg' });
        $("#slickbox2 div[data-id=" + '11' + "].slickbox2").show('slow');
        $('a.slick-toggle').click(function() {
            var dataID = $(this).attr("data-id");
            $('#slickbox2 div.slickbox2').hide();
            $("#slickbox2 div[data-id=" + dataID + "].slickbox2").show('slow');
            return false;
        });
    });
    </script>

    <script type="text/javascript">
    $(document).ready(function() {
        $("#main_view3 img").attr({ src: './index_pliki/max3.jpg' });
        $("#slickbox3 div[data-id=" + '21' + "].slickbox3").show('slow');
        $('a.slick-toggle').click(function() {
            var dataID = $(this).attr("data-id");
            $('#slickbox3 div.slickbox3').hide();
            $("#slickbox3 div[data-id=" + dataID + "].slickbox3").show('slow');
            return false;
        });
    });
    </script>

    <script type="text/javascript">
    $(document).ready(function() {
        $("#main_view4 img").attr({ src: './index_pliki/max4.jpg' });
        $("#slickbox4 div[data-id=" + '31' + "].slickbox4").show('slow');
        $('a.slick-toggle').click(function() {
            var dataID = $(this).attr("data-id");
            $('#slickbox4 div.slickbox4').hide();
            $("#slickbox4 div[data-id=" + dataID + "].slickbox4").show('slow');
            return false;
        });
    });
    </script>

    <script type="text/javascript">
    $(document).ready(function() {
        //When page loads...
        $(".tab_content").hide(); //Hide all content
        $("ul.tabs li:first").addClass("active").show(); //Activate first tab
        $(".tab_content:first").show(); //Show first tab content
        //On Click Event
        $("ul.tabs li").click(function() {
            $("ul.tabs li").removeClass("active"); //Remove any "active" class
            $(this).addClass("active"); //Add "active" class to selected tab
            $(".tab_content").hide(); //Hide all tab content
            var activeTab = $(this).find("a").attr("href"); //Find the href attribute value to identify the active tab + content
            $(activeTab).fadeIn(); //Fade in the active ID content
            return false;
        });
    });
    </script>
    </head>
    <body>
    <div class="container">
    <ul class="tabs">
        <li><a href="#tab1">1</a></li>
        <li><a href="#tab2">2</a></li>
        <li><a href="#tab3">3</a></li>
        <li><a href="#tab4">4</a></li>
    </ul>
    <div class="tab_container">
    <div id="tab1" class="tab_content">
        <!--Content-->
        <ul class="thumb">
            <li><a class="slick-toggle" href="./index_pliki/max1.jpg" data-id="01"><img src="./index_pliki/min1.jpg" alt="" /></a></li>
            <li><a class="slick-toggle" href="./index_pliki/max2.jpg" data-id="02"><img src="./index_pliki/min2.jpg" alt="" /></a></li>
            <li><a class="slick-toggle" href="./index_pliki/max3.jpg" data-id="03"><img src="./index_pliki/min3.jpg" alt="" /></a></li>
            <li><a class="slick-toggle" href="./index_pliki/max4.jpg" data-id="04"><img src="./index_pliki/min4.jpg" alt="" /></a></li>
        </ul>
        <div id="main_view">
            <a href="index.htm"><img src="index_pliki/max1.jpg" alt=""/></a>
            <small style="float: right; color: rgb(153, 153, 153);"> </small>
        </div>
        <div id="wiecej">
            <div id="slickbox">
                <div id="someOtherID" class="slickbox" data-id="01" style="display: none;"> 1.1 </div>
                <div id="someOtherID" class="slickbox" data-id="02" style="display: none;"> 1.2 </div>
                <div id="someOtherID" class="slickbox" data-id="03" style="display: none;"> 1.3 </div>
                <div id="someOtherID" class="slickbox" data-id="04" style="display: none;"> 1.4 </div>
                <!-- <a href="#" id="slick-show"><img src="http://www.amptech.pl/images/more.jpg" alt="Zobacz wiecej" /></a> <a href="#" id="slick-hide"><img src="http://www.amptech.pl/images/online.jpg" alt="Zobacz wiecej" /></a>&nbsp;&nbsp; -->
            </div>
        </div>
    </div>
    <!-- tutaj wklejalem reszte -->
    <div id="tab2" class="tab_content">
        <!--Content-->
        <ul class="thumb">
            <li><a class="slick-toggle" href="./index_pliki/max4.jpg" data-id="11"><img src="./index_pliki/min4.jpg" alt="" /></a></li>
            <li><a class="slick-toggle" href="./index_pliki/max3.jpg" data-id="12"><img src="./index_pliki/min3.jpg" alt="" /></a></li>
            <li><a class="slick-toggle" href="./index_pliki/max2.jpg" data-id="13"><img src="./index_pliki/min2.jpg" alt="" /></a></li>
            <li><a class="slick-toggle" href="./index_pliki/max1.jpg" data-id="14"><img src="./index_pliki/min1.jpg" alt="" /></a></li>
        </ul>
        <div id="main_view2">
            <a href="index.htm"><img src="index_pliki/max1.jpg" alt=""/></a>
            <small style="float: right; color: rgb(153, 153, 153);"> </small>
        </div>
        <div id="wiecej">
            <div id="slickbox2">
                <div id="someOtherID" class="slickbox2" data-id="11" style="display: none;"> 2.1 </div>
                <div id="someOtherID" class="slickbox2" data-id="12" style="display: none;"> 2.2 </div>
                <div id="someOtherID" class="slickbox2" data-id="13" style="display: none;"> 2.3 </div>
                <div id="someOtherID" class="slickbox2" data-id="14" style="display: none;"> 2.4 </div>
            </div>
        </div>
    </div>
    <div id="tab3" class="tab_content">
        <ul class="thumb">
            <li><a class="slick-toggle" href="./index_pliki/max4.jpg" data-id="21"><img src="./index_pliki/min4.jpg" alt="" /></a></li>
            <li><a class="slick-toggle" href="./index_pliki/max3.jpg" data-id="22"><img src="./index_pliki/min3.jpg" alt="" /></a></li>
            <li><a class="slick-toggle" href="./index_pliki/max2.jpg" data-id="23"><img src="./index_pliki/min2.jpg" alt="" /></a></li>
            <li><a class="slick-toggle" href="./index_pliki/max1.jpg" data-id="24"><img src="./index_pliki/min1.jpg" alt="" /></a></li>
        </ul>
        <div id="main_view3">
            <a href="index.htm"><img src="index_pliki/max1.jpg" alt=""/></a>
            <small style="float: right; color: rgb(153, 153, 153);"> </small>
        </div>
        <div id="wiecej">
            <div id="slickbox3">
                <div id="someOtherID" class="slickbox3" data-id="21" style="display: none;"> 3.1 </div>
                <div id="someOtherID" class="slickbox3" data-id="22" style="display: none;"> 3.2 </div>
                <div id="someOtherID" class="slickbox3" data-id="23" style="display: none;"> 3.3 </div>
                <div id="someOtherID" class="slickbox3" data-id="24" style="display: none;"> 3.4 </div>
            </div>
        </div>
    </div>
    <div id="tab4" class="tab_content">
        <ul class="thumb">
            <li><a class="slick-toggle" href="./index_pliki/max4.jpg" data-id="31"><img src="./index_pliki/min4.jpg" alt="" /></a></li>
            <li><a class="slick-toggle" href="./index_pliki/max3.jpg" data-id="32"><img src="./index_pliki/min3.jpg" alt="" /></a></li>
            <li><a class="slick-toggle" href="./index_pliki/max2.jpg" data-id="33"><img src="./index_pliki/min2.jpg" alt="" /></a></li>
            <li><a class="slick-toggle" href="./index_pliki/max1.jpg" data-id="34"><img src="./index_pliki/min1.jpg" alt="" /></a></li>
        </ul>
        <div id="main_view4">
            <a href="index.htm"><img src="index_pliki/max1.jpg" alt=""/></a>
            <small style="float: right; color: rgb(153, 153, 153);"> </small>
        </div>
        <div id="wiecej">
            <div id="slickbox4">
                <div id="someOtherID" class="slickbox4" data-id="31" style="display: none;"> 4.1 </div>
                <div id="someOtherID" class="slickbox4" data-id="32" style="display: none;"> 4.2 </div>
                <div id="someOtherID" class="slickbox4" data-id="33" style="display: none;"> 4.3 </div>
                <div id="someOtherID" class="slickbox4" data-id="34" style="display: none;"> 4.4 </div>
            </div>
        </div>
    </div>
    </div>
    </div>

    Read the article

  • JQuery Menu plugins under ASP.NET MVC seem to only work in Chrome, but not in IE & FireFox

    - by Antony
    Recently I was trying to prototype some jQuery-based menus in ASP.NET MVC. Just to name two examples here: plugins.jquery.com/project/columnview and www.filamentgroup.com/lab/jquery_ipod_style_and_flyout_menus/. Their demo pages look great, but when I integrate their sample code into MVC, the script no longer works in IE and Firefox; it seems to work just fine in Google Chrome. Can someone kindly point out what I missed? I will be honest here: I am still new to JavaScript, so this is still a learning phase for me, and any help is highly appreciated. I have placed a copy of my VS2010 solution zip file at http://db.tt/0UNDkN. Here is what I did. In the Site.Master, I have something like:

    <body>
        <div class="page">{truncated...}</div>
        <script src="http://code.jquery.com/jquery-1.4.2.min.js" type="text/javascript" charset="utf-8"></script>
        <asp:ContentPlaceHolder ID="ScriptContent" runat="server" />
    </body>

    And inside the View file, I have the following:

    <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
        <div id="original"> {some demo block, copied from javascript demo} </div>
    </asp:Content>
    <asp:Content ID="Content3" ContentPlaceHolderID="ScriptContent" runat="server">
        <script type="text/javascript" src="<%= Url.Content("~/Scripts/jquery.columnview.js") %>" />
        <script type="text/javascript">
            $(document).ready(function () { $('#original').columnview(); });
        </script>
    </asp:Content>

    I compiled the code and ran it under IE. Ideally it should work like the demo at www.christianyates.com/blog/jquery/finder-column-view-hierarchical-lists-jquery, but in reality it only displays an unordered list in plain view. (If you download the solution file and run it, you should be able to repro this as well.) Next I tried Firefox: not working either, same result as IE. Finally, when I try it under Google Chrome 4.1 (latest version), the script displays just fine. Really puzzling here :-/ Thank you for reading :D
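    One detail in that View is worth a second look (a hedged guess from reading, not from running the solution): script elements cannot be self-closed; parsers generally treat <script ... /> as an unclosed open tag and swallow the markup that follows, and browsers differ in how they recover. The include line written out in full:

    <script type="text/javascript" src="<%= Url.Content("~/Scripts/jquery.columnview.js") %>"></script>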

    Read the article

  • How to "enable" HTML5 elements in IE that were inserted by AJAX call?

    - by Gidon
    IE does not work well with unknown elements (i.e. HTML5 elements): one cannot style them or access most of their properties. There are numerous workarounds for this, for example: http://remysharp.com/2009/01/07/html5-enabling-script/. The problem is that this works great for static HTML that was available on page load, but when one creates HTML5 elements afterward (for example via an AJAX call containing them, or simply by creating them with JS), IE marks these newly added elements as HTMLUnknownElement, as opposed to HTMLGenericElement (in the IE debugger). Does anybody know a workaround for this, so that newly added elements are recognized/enabled by IE? Here is a test page:

    <html><head><title>TIME TEST</title>
    <!--[if IE]>
    <script src="http://html5shiv.googlecode.com/svn/trunk/html5.js"></script>
    <![endif]-->
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js" type="text/javascript"></script>
    </head>
    <body>
    <time>some time</time>
    <hr>
    <script type="text/javascript">
    $("time").text("WORKS GREAT");
    $("body").append("<time>NEW ELEMENT</time>"); //simulates AJAX callback insertion
    $("time").text("UPDATE");
    </script>
    </body>
    </html>

    In IE you will see: UPDATE and NEW ELEMENT. In any other modern browser you will see: UPDATE and UPDATE.

    Solution: Using the answer provided, I came up with the following piece of JavaScript to HTML5-enable a whole bunch of elements returned by my AJAX call:

    (function ($) {
        jQuery.fn.html5Enable = function () {
            if ($.browser.msie) {
                $("abbr, article, aside, audio, canvas, details, figcaption, figure, footer, header, hgroup, mark, menu, meter, nav, output, progress, section, summary, time, video", this).replaceWith(function () {
                    if (this.tagName == undefined) return "";
                    var el = $(document.createElement(this.tagName));
                    for (var i = 0; i < this.attributes.length; i++)
                        el.attr(this.attributes[i].nodeName, this.attributes[i].nodeValue);
                    el.html(this.innerHTML); // the original listing had this.innerHtml (wrong case), which is undefined
                    return el;
                });
            }
            return this;
        };
    })(jQuery);

    Now this can be called whenever you want to append something:

    var el = $(AJAX_RESULT_OR_HTML_STRING);
    el.html5Enable();
    $("SOMECONTAINER").append(el);

    See http://code.google.com/p/html5shiv/issues/detail?id=4 for an explanation of what this plugin doesn't do.

    Read the article
