Search Results

Search found 15021 results on 601 pages for 'location aware'.


  • MEB: Taking Incremental Backup using last successful backup

    - by Sagar Jauhari
    Introduction In MySQL Enterprise Backup v3.7.0 (MEB 3.7.0) a new option '–incremental-base' was introduced. Using this option a user can take in incremental backup without specifying the '–start-lsn' option. Description of this option can be found here. Instead of '–start-lsn' the user can provide the location of the last full backup or incremental backup using the 'dir:' prefix. MEB would extract the end LSN of this backup from the mysql.backup_history table as well as the backup_variables.txt file (for verification) to use it as the start LSN of the incremental backup. Because of popular demand, in MEB 3.7.1 the option '-incremental-base' has been extended further. The idea is to allow the user to take an incremental backup as easily as possible using the '–incremental-base' option. With the new option MEB queries the backup_history table for the last successful backup and uses its end LSN as the start LSN for the new incremental backup. It should be noted that the last successful backup is used irrespective of the location of the backup. Details A new prefix 'history:' has been introduced for the –incremental-base option and currently the only permissible value is the string "last_backup". So using the new option an incremental backup can be taken with the following command: $ mysqlbackup --incremental --incremental-backup-dir=/media/mysqlbackup-repo/ --incremental-base=history:last_backup backup When MEB attempts to extract the end LSN of the last successful backup from the mysql.backup_history table, it also scans the corresponding backup destination for the old backup and tries to read the meta files at this backup destination. If a valid backup still exists at the backup destination and the meta files can be read, MEB compares the end LSN found in the mysql.backup_history table with the end LSN found in the backup meta files of the old backup. Assuming that the host MySQL server is alive and mysql.backup_history can be accessed by MEB, the behaviour of MEB with respect to verification of the old end LSN can be summarized as follows: If 'BD' is the backup destination of the last successful backup in mysql.backup_history table and 'BHT' is the mysql.backup_history table if can_read_files_at_BD:     if end_lsn_found_at_BD == end_lsn_of_last_backup_in_BHT:         continue_with_backup()     else         return_with_error() else     continue_with_backup() Advantages Apart from ease of usability an important advantage of this option is that the user can do repeated incremental backups without changing the command line. This is possible using the '–with-timestamp' option along with this new option. For example, the following command $ mysqlbackup --with-timestamp --incremental --incremental-backup-dir=/media/mysqlbackup-repo/ --incremental-base=history:last_backup backup  can be used to perform successive incremental backups in the directory /media/mysqlbackup-repo . Limitations The option '--incremental-base=history:last_backup' should not be used when the user takes different kinds of concurrent backups on the same MySQL server (say different partial backups at multiple locations). should not be used after any temporary or experimental backups performed on the server (which where successful!). needs to be used with precaution since any intermediate successful backup without the –no-connection will be used as the base backup for the next incremental backup.  
will give an error in case a valid backup exists at the location of the last successful backup and whose end LSN is different from that of the last successful backup found in the backup_history table. Date: 2012-06-19 HTML generated by org-mode 6.33x in emacs 23
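    To make the end-LSN verification described in the Details section above a little easier to follow, here is a minimal C# sketch of the same decision; it is only an illustration of the logic MEB applies, not MEB's actual code, and the method and parameter names are invented for the example:

        // Sketch of the end-LSN verification described above.
        // canReadFilesAtDestination: whether the meta files at the old backup destination (BD) are readable.
        // endLsnAtDestination: end LSN read from the old backup's meta files (null if unreadable).
        // endLsnInBackupHistory: end LSN of the last successful backup in mysql.backup_history (BHT).
        public static class LsnVerificationSketch
        {
            public static bool ShouldContinueWithBackup(bool canReadFilesAtDestination,
                                                        long? endLsnAtDestination,
                                                        long endLsnInBackupHistory)
            {
                // If the old backup cannot be read, MEB proceeds using the history table's end LSN.
                if (!canReadFilesAtDestination || endLsnAtDestination == null)
                    return true;

                // If both sources are readable, they must agree; otherwise MEB returns an error.
                return endLsnAtDestination.Value == endLsnInBackupHistory;
            }
        }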

    Read the article

  • Big Data – Operational Databases Supporting Big Data – Columnar, Graph and Spatial Database – Day 14 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of the Key-Value Pair Databases and Document Databases in the Big Data Story. In this article we will understand the role of Columnar, Graph and Spatial Databases in supporting the Big Data Story. Now we will see a few examples of the operational databases. Relational Databases (The day before yesterday’s post) NoSQL Databases (The day before yesterday’s post) Key-Value Pair Databases (Yesterday’s post) Document Databases (Yesterday’s post) Columnar Databases (Today’s post) Graph Databases (Today’s post) Spatial Databases (Today’s post) Columnar Databases  A Relational Database is a row store database, or a row oriented database. Columnar databases are column oriented, or column store, databases. As we discussed earlier, in Big Data we have different kinds of data and we need to store different kinds of data in the database. With a columnar database it is very easy to do so, as we can just add a new column to the columnar database. HBase is one of the most popular columnar databases. It uses the Hadoop file system and MapReduce for its core data storage. However, remember this is not a good solution for every application. It is particularly good for databases where high-volume incremental data is gathered and processed. Graph Databases For highly interconnected data it is suitable to use a Graph Database. This database has a node-relationship structure. Nodes and relationships contain a Key-Value Pair where data is stored. The major advantage of this database is that it supports faster navigation among various relationships. For example, Facebook uses a graph database to list and demonstrate various relationships between users. Neo4J is one of the most popular open source graph databases. One of the major disadvantages of the Graph Database is that it is not possible to self-reference (self joins in RDBMS terms), and there might be real world scenarios where this is required and the graph database does not support it. Spatial Databases  We all use Foursquare, Google+ as well as Facebook Check-ins for location aware check-ins. All the location aware applications figure out the position of the phone with the help of the Global Positioning System (GPS). Think about it: so many different users at different locations in the world, all checking in together. Additionally, the applications are now feature rich and users are demanding more and more information from them, for example movies, coffee shops or places to see. They are all running with the help of Spatial Databases. Spatial data is standardized by the Open Geospatial Consortium, known as OGC. Spatial data helps answer many interesting questions like “distance between two locations, area of interesting places, etc.” When we think of it, it is very clear that handling spatial data and returning meaningful results is one big task when there are millions of users moving dynamically from one place to another place and requesting various spatial information. The PostGIS/OpenGIS suite is a very popular spatial database. It runs as a layer implementation on the RDBMS PostgreSQL. This makes it totally unique as it offers the best of both worlds. Tomorrow In tomorrow’s blog post we will discuss a very important component of the Big Data Ecosystem – Hive. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Create a Remote Git Repository from an Existing XCode Repository

    - by codeWithoutFear
    Introduction Distributed version control systems (VCS’s), like Git, provide a rich set of features for managing source code.  Many development tools, including XCode, provide built-in support for various VCS’s.  These tools provide simple configuration with limited customization to get you up and running quickly while still providing the safety net of basic version control. I hate losing (and re-doing) work.  I have OCD when it comes to saving and versioning source code.  Save early, save often, and commit to the VCS often.  I also hate merging code.  Smaller and more frequent commits enable me to minimize merge time and effort as well. The work flow I prefer even for personal exploratory projects is: Make small local changes to the codebase to create an incrementally improved (and working) system. Commit these changes to the local repository.  Local repositories are quick to access, function even while offline, and provides the confidence to continue making bold changes to the system.  After all, I can easily recover to a recent working state. Repeat 1 & 2 until the codebase contains “significant” functionality and I have connectivity to the remote repository. Push the accumulated changes to the remote repository.  The smaller the change set, the less likely extensive merging will be required.  Smaller is better, IMHO. The remote repository typically has a greater degree of fault tolerance and active management dedicated to it.  This can be as simple as a network share that is backed up nightly or as complex as dedicated hardware with specialized server-side processing and significant administrative monitoring. XCode’s out-of-the-box Git integration enables steps 1 and 2 above.  Time Machine backups of the local repository add an additional degree of fault tolerance, but do not support collaboration or take advantage of managed infrastructure such as on-premises or cloud-based storage. Creating a Remote Repository These are the steps I use to enable the full workflow identified above.  For simplicity the “remote” repository is created on the local file system.  This location could easily be on a mounted network volume. Create a Test Project My project is called HelloGit and is located at /Users/Don/Dev/HelloGit.  Be sure to commit all outstanding changes.  XCode always leaves a single changed file for me after the project is created and the initial commit is submitted. Clone the Local Repository We want to clone the XCode-created Git repository to the location where the remote repository will reside.  In this case it will be /Users/Don/Dev/RemoteHelloGit. Open the Terminal application. Clone the local repository to the remote repository location: git clone /Users/Don/Dev/HelloGit /Users/Don/Dev/RemoteHelloGit Convert the Remote Repository to a Bare Repository The remote repository only needs to contain the Git database.  It does not need a checked out branch or local files. Go to the remote repository folder: cd /Users/Don/Dev/RemoteHelloGit Indicate the repository is “bare”: git config --bool core.bare true Remove files, leaving the .git folder: rm -R * Remove the “origin” remote: git remote rm origin Configure the Local Repository The local repository should reference the remote repository.  The remote name “origin” is used by convention to indicate the originating repository.  This is set automatically when a repository is cloned.  We will use the “origin” name here to reflect that relationship. 
Go to the local repository folder: cd /Users/Don/Dev/HelloGit Add the remote: git remote add origin /Users/Don/Dev/RemoteHelloGit Test Connectivity Any changes made to the local Git repository can be pushed to the remote repository subject to the merging rules Git enforces. Create a new local file: date > date.txt Add the new file to the local index: git add date.txt Commit the change to the local repository: git commit -m "New file: date.txt" Push the change to the remote repository: git push origin master Now you can save, commit, and push/pull to your OCD heart’s content! Code without fear! --Don

    Read the article

  • Programmatically disclosing a node in af:tree and af:treeTable

    - by Frank Nimphius
    A common developer requirement when working with af:tree or af:treeTable components is to programmatically disclose (expand) a specific node in the tree. If the node to disclose is not a top level node, like a location in a LocationsView -> DepartmentsView -> EmployeesView hierarchy, you need to also disclose the node's parent node hierarchy for application users to see the fully expanded tree node structure. Working on ADF Code Corner sample #101, I wrote the following code lines that show a generic option for disclosing a tree node starting from a handle to the node to disclose. The use case in ADF Coder Corner sample #101 is a drag and drop operation from a table component to a tree to relocate employees to a new department. The tree node that receives the drop is a department node contained in a location. In theory the location could be part of a country and so on to indicate the depth the tree may have. Based on this structure, the code below provides a generic solution to parse the current node parent nodes and its child nodes. The drop event provided a rowKey for the tree node that received the drop. Like in af:table, the tree row key is not of type oracle.jbo.domain.Key but an implementation of java.util.List that contains the row keys. The JUCtrlHierBinding class in the ADF Binding layer that represents the ADF tree binding at runtime provides a method named findNodeByKeyPath that allows you to get a handle to the JUCtrlHierNodeBinding instance that represents a tree node in the binding layer. CollectionModel model = (CollectionModel) your_af_tree_reference.getValue(); JUCtrlHierBinding treeBinding = (JUCtrlHierBinding ) model.getWrappedData(); JUCtrlHierNodeBinding treeDropNode = treeBinding.findNodeByKeyPath(dropRowKey); To disclose the tree node, you need to create a RowKeySet, which you do using the RowKeySetImpl class. Because the RowKeySet replaces any existing row key set in the tree, all other nodes are automatically closed. RowKeySetImpl rksImpl = new RowKeySetImpl(); //the first key to add is the node that received the drop //operation (departments).            rksImpl.add(dropRowKey);    Similar, from the tree binding, the root node can be obtained. The root node is the end of all parent node iteration and therefore important. JUCtrlHierNodeBinding rootNode = treeBinding.getRootNodeBinding(); The following code obtains a reference to the hierarchy of parent nodes until the root node is found. JUCtrlHierNodeBinding dropNodeParent = treeDropNode.getParent(); //walk up the tree to expand all parent nodes while(dropNodeParent != null && dropNodeParent != rootNode){    //add the node's keyPath (remember its a List) to the row key set    rksImpl.add(dropNodeParent.getKeyPath());      dropNodeParent = dropNodeParent.getParent(); } Next, you disclose the drop node immediate child nodes as otherwise all you see is the department node. 
It’s not quite exactly "dinner for one", but the procedure is very similar to the one handling the parent node keys: ArrayList<JUCtrlHierNodeBinding> childList = (ArrayList<JUCtrlHierNodeBinding>) treeDropNode.getChildren(); for(JUCtrlHierNodeBinding nb : childList){   rksImpl.add(nb.getKeyPath()); } Next, the row key set is defined as the disclosed row keys on the tree, so that when you refresh (PPR) the tree, the new disclosed state shows: tree.setDisclosedRowKeys(rksImpl); AdfFacesContext.getCurrentInstance().addPartialTarget(tree.getParent()); The refresh in my use case is on the tree parent component (a layout container), which usually shows the best effect for refreshing the tree component. 

    Read the article

  • Creating a Corporate Data Hub

    - by BuckWoody
    The Windows Azure Marketplace has a rich assortment of data and software offerings for you to use – a type of Software as a Service (SaaS) for IT workers, not necessarily for end-users. Among those offerings is the “Data Hub” – a codename for a project that ironically actually does what the codename says. In many of our organizations, we have multiple data quality issues. Finding data is one problem, but finding it just once is often a bigger problem. Lots of departments and even individuals have stored the same data more than once, and in some cases, made changes to one of the copies. It’s difficult to know which location or version of the data is authoritative. Then there’s the problem of accessing the data. It’s fairly straightforward to publish a database, share or other location internally to store the data. But then you have to figure out who owns it, how it is controlled, and pass out the various connection strings to those who want to use it. And then you need to figure out how to let folks access the internal data externally – bringing up all kinds of security issues. Finally, in many cases our user community wants us to combine data from the internal sources with external data, bringing up the security, strings, and exploration features all over again. Enter the Data Hub. This is an online offering, where you assign an administrator and data stewards. You import the data into the service, and it’s available to you - and only you and your organization if you wish. The basic steps for this service are to set up the portal for your company, assign administrators and permissions, and then you assign data areas and import data into them. From there you make them discoverable, and then you have multiple options for how you or your users can access that data. You’re then able, if you wish, to combine that data with other data in one location. So how does all that work? What about security? Is it really that easy? And can you really move the data definition off to the Subject Matter Experts (SMEs) that know the particular data stack better than the IT team does? Well, nothing good is easy – but using the Data Hub is actually pretty simple. I’ll give you a link in a moment where you can sign up and try this yourself. Once you sign up, you assign an administrator. From there you’ll create data areas, and then use a simple interface to bring the data in. All of this is done in a portal interface – nothing to install, configure, update or manage. After the data is entered in, and you’ve assigned meta-data to describe it, your users have multiple options to access it. They can simply use the portal – which actually has powerful visualizations you can use on any platform, even mobile phones or tablets. Your users can also hit the data with Excel – which gives them ultimate flexibility for display, all while using an authoritative, single reference for the data. Since the service is online, they can do this wherever they are – given the proper authentication and permissions. You can also hit the service with simple API calls, like this one from C#: http://msdn.microsoft.com/en-us/library/hh921924 You can make HTTP calls instead of code, and the data can even be exposed as an OData Feed. As you can see, there are a lot of options. You can check out the offering here: http://www.microsoft.com/en-us/sqlazurelabs/labs/data-hub.aspx and you can read the documentation here: http://msdn.microsoft.com/en-us/library/hh921938
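    As a rough illustration of the API option mentioned above, the following C# sketch shows what a plain HTTP request against an OData feed can look like; the feed URL and credentials here are placeholders only, not actual Data Hub endpoints (see the MSDN link above for the real API):

        using System;
        using System.Net;

        class DataHubFeedSketch
        {
            static void Main()
            {
                // Placeholder feed URL and credentials - substitute the values for your own Data Hub portal.
                var feedUrl = "https://example-datahub.cloudapp.net/odata/SalesData";

                using (var client = new WebClient())
                {
                    client.Credentials = new NetworkCredential("user", "password");
                    client.Headers[HttpRequestHeader.Accept] = "application/atom+xml"; // OData feeds commonly return Atom XML

                    string payload = client.DownloadString(feedUrl);
                    Console.WriteLine(payload);
                }
            }
        }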

    Read the article

  • Squibbly: LibreOffice Integration Framework for the Java Desktop

    - by Geertjan
    Squibbly is a new framework for Java desktop applications that need to integrate with LibreOffice, or more generally, need office features as part of a Java desktop solution that could include, for example, JavaFX components. Right now it runs on Ubuntu 13.04. Why is the framework called Squibbly? Because I needed a unique-ish name, because "squibble" sounds a bit like "scribble" (which is what one does with text documents, etc), and because of the many absurd definitions in the Urban Dictionary for the apparently real word "squibble", e.g., "A name for someone who is squibblish in nature." And, another e.g., "A squibble is a small squabble. A squabble is a little skirmish." But the real reason is the first definition (and definitely not the fourth definition): "Taking a small portion of another persons something, such as a small hit off of a pipe, a bite of food, a sip of a drink, or drag of a cigarette." In other words, I took (or "squibbled") a small portion of LibreOffice, i.e., OfficeBean, and integrated it into a NetBeans Platform application. Now anyone can add new features to it, to do anything they need, such as creating a legislative software system as Propylon has done with their own solution on the NetBeans Platform. For me, the starting point was Chuk Munn Lee's similar solution from some years ago. However, he uses reflection a lot in that solution, because he didn't want to bundle the related JARs with the application. I understand that benefit but I find it even more beneficial to not require the user to specify the location of the LibreOffice installation, since all the necessary JARs and native libraries (currently 32-bit Linux only, by the way) are bundled with the application. Plus, hundreds of lines of reflection code, as in Chuk's solution, are not fun to work with at all. Switching between applications is supported as well. It's a work in progress, a proof of concept only. Just the result of a few hours of work to get the basic integration to work. Several problems remain, some of them potentially unsolvable, starting with these, but others will be added here as I identify them: Window management problems. I'd like to let the user have multiple LibreOffice applications and documents open at the same time, each in a new TopComponent. However, I haven't figured out how to do that. Right now, each application is opened into the same TopComponent, replacing the currently open application. I don't know the OfficeBean API well enough, e.g., should a single OfficeBean be shared among multiple TopComponents or should each of them have their own instance of it? Focus problems. When putting the application behind other applications and then switching back to the application, typing text becomes impossible. When closing a TopComponent and reopening it, the content is lost completely. Somehow the loss of focus, and then the return of focus, disables something. No idea how to fix that. The project is checked into this location, which isn't public yet, so you can't access it yet. Once it's publicly available, it would be great to get some code contributions and tweaks, etc. https://java.net/projects/squibbly The source structure shows how the OfficeBean JARs and native libraries (currently for Linux 32-bit only) fit in. Ultimately, it would be cool to integrate or share code with http://joeffice.com!

    Read the article

  • SSMS Tools Pack 2.7 is released. New website, improved licensing and features.

    - by Mladen Prajdic
    New website Nice, isn't it? Cleaner, simpler, better looking and more modern. If you have any suggestions for further improvements I'd be glad to hear them. Simpler licensing With SSMS Tools Pack 2.7 the licensing is finally where it should be. It is now based on the activate/deactivate model. This way you can move a license from machine to machine with simple deactivation on one and reactivation on another machine. Much better, no? Because of very good feedback I have added an option for 6 machines and lowered the 4 machines option to 3 machines. This should make it much simpler for you to choose the right option for yourself. Improved features Version 2.5.3 was already extremely stable and 2.7 continues with that tradition. Because of that I could fully focus on features, and that's why 3.0 will rock even more than 2.7! ;) In version 2.7 I have addressed quite a few improvements you have been requesting for a while now. SQL History This is probably the biggest time saver out there, therefore it's only fair it gets a few important updates. If you have an existing .sql file opened, the Window Content History now saves your code to that existing file and also makes a backup in the SQL History log default location. Search is still done through the SQL History log but the Tab Sessions Restore opens your existing .sql file. This way you don't have to remember to save your existing files by yourself anymore. A bug where you couldn't search properly if you copied the log files to a new location was fixed. Unfortunately this removed the option to filter a search with the time component. The smallest search interval is now one day. The SSMS Tools Pack now remembers the visibility of the Current Window History window when you exit SSMS. SQL Snippets You can now set the position of the cursor in your snippets by placing {C} somewhere in your snippet. It's a small improvement but can be a huge time saver since you don't have to move through the snippet to the desired location anymore. Run script on multiple databases Database choices can now be saved with a name and then loaded again next time. You can also choose to run the script in a new window for each chosen database. Search through grid results You can now go to the previous/next search result with the Prev/Next control inside the search window. This is extremely useful if you have a large resultset. It saves you the scrolling. CRUD generator Four new variables have been added: |CurrentDate| writes current date in format yyyy-MM-dd to your script |CurrentTime| writes current time in 24h format HH:mm:ss to your script |CurrentWinUser| writes current Windows logged on user to your script |CurrentSqlUser| writes current SQL logged on login to your script This was actually quite a requested feature so if you have any other ideas for extra variables, do let me know. That's about it. I hope you're going to enjoy this version as much as the previous ones. Have fun!

    Read the article

  • Rendering ASP.NET Script References into the Html Header

    - by Rick Strahl
    One thing that I’ve come to appreciate in developing ASP.NET controls that use JavaScript is the ability to have more control over script and script include placement than ASP.NET provides natively. Specifically in ASP.NET you can use either the ClientScriptManager or ScriptManager to embed scripts and script references into pages via code. This works reasonably well, but the script references that get generated are rendered into the HTML body and there’s very little operational control for placement of scripts. If you have multiple controls or several of the same control that need to place the same scripts onto the page it’s not difficult to end up with scripts that render in the wrong order and stop working correctly. This is especially critical if you load script libraries with dependencies either via resources or even if you are rendering references to CDN resources. Natively ASP.NET provides a host of methods that help with embedding scripts into the page via either Page.ClientScript or the ASP.NET ScriptManager control (both with slightly different syntax): RegisterClientScriptBlock Renders a script block at the top of the HTML body and should be used for embedding callable functions/classes. RegisterStartupScript Renders a script block just prior to the </form> tag and should be used for embedding code that should execute when the page is first loaded. Not recommended – use jQuery.ready() or equivalent load time routines. RegisterClientScriptInclude Embeds a reference to a script from a url into the page. RegisterClientScriptResource Embeds a reference to a script from a resource file, generating a long resource file string. All 4 of these methods render their <script> tags into the HTML body. The script blocks give you a little bit of control by having a ‘top’ and ‘bottom’ of the document location which gives you some flexibility over script placement and precedence. Script includes and resource urls unfortunately do not even get that much control – references are simply rendered into the page in the order of declaration. The ASP.NET ScriptManager control facilitates this task a little bit with the ability to specify scripts in code and the ability to programmatically check what scripts have already been registered, but it doesn’t provide any more control over the script rendering process itself. Further, the ScriptManager is a bear to deal with generically because generic code always has to check and see if it is actually present. Some time ago I posted a ClientScriptProxy class that helps with managing the latter process of sending script references either to ClientScript or ScriptManager if it’s available. Since I last posted about this there have been a number of improvements in this API, one of which is the ability to control placement of scripts and script includes in the page, which I think is rather important and a missing feature in the ASP.NET native functionality. 
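    For reference, here is a minimal sketch of the stock calls described above (the page class name MyPage is hypothetical); both register through Page.ClientScript and so render their <script> tags into the HTML body, which is exactly the placement limitation discussed next:

        // Stock ASP.NET script registration from a page code-behind - no way to target the <head> element.
        protected void Page_Load(object sender, EventArgs e)
        {
            // Script block rendered near the top of the HTML body.
            Page.ClientScript.RegisterClientScriptBlock(
                typeof(MyPage), "helloScript",
                "function hello() { alert('hello'); }", true);

            // Script include rendered into the body in declaration order.
            Page.ClientScript.RegisterClientScriptInclude(
                typeof(MyPage), "jqueryInclude", ResolveUrl("~/scripts/jquery.min.js"));
        }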
Handling ScriptRenderModes One of the big enhancements that I’ve come to rely on is the ability of the various script rendering functions described above to support rendering in multiple locations: /// <summary> /// Determines how scripts are included into the page /// </summary> public enum ScriptRenderModes { /// <summary> /// Inherits the setting from the control or from the ClientScript.DefaultScriptRenderMode /// </summary> Inherit, /// Renders the script include at the location of the control /// </summary> Inline, /// <summary> /// Renders the script include into the bottom of the header of the page /// </summary> Header, /// <summary> /// Renders the script include into the top of the header of the page /// </summary> HeaderTop, /// <summary> /// Uses ClientScript or ScriptManager to embed the script include to /// provide standard ASP.NET style rendering in the HTML body. /// </summary> Script, /// <summary> /// Renders script at the bottom of the page before the last Page.Controls /// literal control. Note this may result in unexpected behavior /// if /body and /html are not the last thing in the markup page. /// </summary> BottomOfPage } This enum is then applied to the various Register functions to allow more control over where scripts actually show up. Why is this useful? For me I often render scripts out of control resources and these scripts often include things like a JavaScript Library (jquery) and a few plug-ins. The order in which these can be loaded is critical so that jQuery.js always loads before any plug-in for example. Typically I end up with a general script layout like this: Core Libraries- HeaderTop Plug-ins: Header ScriptBlocks: Header or Script depending on other dependencies There’s also an option to render scripts and CSS at the very bottom of the page before the last Page control on the page which can be useful for speeding up page load when lots of scripts are loaded. The API syntax of the ClientScriptProxy methods is closely compatible with ScriptManager’s using static methods and control references to gain access to the page and embedding scripts. 
For example, to render some script into the current page in the header: // Create script block in header ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function", "function helloWorld() { alert('hello'); }", true, ScriptRenderModes.Header); // Same again - shouldn't be rendered because it's the same id ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function", "function helloWorld() { alert('hello'); }", true, ScriptRenderModes.Header); // Create a second script block in header ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function2", "function helloWorld2() { alert('hello2'); }", true, ScriptRenderModes.Header); // This just calls ClientScript and renders into bottom of document ClientScriptProxy.Current.RegisterStartupScript(this,typeof(ControlResources), "call_hello", "helloWorld();helloWorld2();", true); which generates: <html xmlns="http://www.w3.org/1999/xhtml" > <head><title> </title> <script type="text/javascript"> function helloWorld() { alert('hello'); } </script> <script type="text/javascript"> function helloWorld2() { alert('hello2'); } </script> </head> <body> … <script type="text/javascript"> //<![CDATA[ helloWorld();helloWorld2();//]]> </script> </form> </body> </html> Note that the scripts are generated into the header rather than the body except for the last script block which is the call to RegisterStartupScript. In general I wouldn’t recommend using RegisterStartupScript – ever. It’s a much better practice to use a script base load event to handle ‘startup’ code that should fire when the page first loads. So instead of the code above I’d actually recommend doing: ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "call_hello", "$().ready( function() { alert('hello2'); });", true, ScriptRenderModes.Header); assuming you’re using jQuery on the page. For script includes from a Url the following demonstrates how to embed scripts into the header. 
This example injects a jQuery and jQuery.UI script reference from the Google CDN then checks each with a script block to ensure that it has loaded and if not loads it from a server local location: // load jquery from CDN ClientScriptProxy.Current.RegisterClientScriptInclude(this, typeof(ControlResources), "http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js", ScriptRenderModes.HeaderTop); // check if jquery loaded - if it didn't we're not online string scriptCheck = @"if (typeof jQuery != 'object') document.write(unescape(""%3Cscript src='{0}' type='text/javascript'%3E%3C/script%3E""));"; string jQueryUrl = ClientScriptProxy.Current.GetWebResourceUrl(this, typeof(ControlResources), ControlResources.JQUERY_SCRIPT_RESOURCE); ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "jquery_register", string.Format(scriptCheck,jQueryUrl),true, ScriptRenderModes.HeaderTop); // Load jquery-ui from cdn ClientScriptProxy.Current.RegisterClientScriptInclude(this, typeof(ControlResources), "http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/jquery-ui.min.js", ScriptRenderModes.Header); // check if we need to load from local string jQueryUiUrl = ResolveUrl("~/scripts/jquery-ui-custom.min.js"); ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "jqueryui_register", string.Format(scriptCheck, jQueryUiUrl), true, ScriptRenderModes.Header); // Create script block in header ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function", "$().ready( function() { alert('hello'); });", true, ScriptRenderModes.Header); which in turn generates this HTML: <html xmlns="http://www.w3.org/1999/xhtml" > <head> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js" type="text/javascript"></script> <script type="text/javascript"> if (typeof jQuery != 'object') document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/WebResource.axd?d=DIykvYhJ_oXCr-TA_dr35i4AayJoV1mgnQAQGPaZsoPM2LCdvoD3cIsRRitHKlKJfV5K_jQvylK7tsqO3lQIFw2&t=633979863959332352' type='text/javascript'%3E%3C/script%3E")); </script> <title> </title> <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/jquery-ui.min.js" type="text/javascript"></script> <script type="text/javascript"> if (typeof jQuery != 'object') document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/scripts/jquery-ui-custom.min.js' type='text/javascript'%3E%3C/script%3E")); </script> <script type="text/javascript"> $().ready(function() { alert('hello'); }); </script> </head> <body> …</body> </html> As you can see there’s a bit more control in this process as you can inject both script includes and script blocks into the document at the top or bottom of the header, plus if necessary at the usual body locations. This is quite useful especially if you create custom server controls that interoperate with script and have certain dependencies. The above is a good example of a useful switchable routine where you can switch where scripts load from by default – the above pulls from Google CDN but a configuration switch may automatically switch to pull from the local development copies if your doing development for example. How does it work? As mentioned the ClientScriptProxy object mimicks many of the ScriptManager script related methods and so provides close API compatibility with it although it contains many additional overloads that enhance functionality. 
It does however work against ScriptManager if it’s available on the page, or Page.ClientScript if it’s not so it provides a single unified frontend to script access. There are however many overloads of the original SM methods like the above to provide additional functionality. The implementation of script header rendering is pretty straight forward – as long as a server header (ie. it has to have runat=”server” set) is available. Otherwise these routines fall back to using the default document level insertions of ScriptManager/ClientScript. Given that there is a server header it’s relatively easy to generate the script tags and code and append them to the header either at the top or bottom. I suspect Microsoft didn’t provide header rendering functionality precisely because a runat=”server” header is not required by ASP.NET so behavior would be slightly unpredictable. That’s not really a problem for a custom implementation however. Here’s the RegisterClientScriptBlock implementation that takes a ScriptRenderModes parameter to allow header rendering: /// <summary> /// Renders client script block with the option of rendering the script block in /// the Html header /// /// For this to work Header must be defined as runat="server" /// </summary> /// <param name="control">any control that instance typically page</param> /// <param name="type">Type that identifies this rendering</param> /// <param name="key">unique script block id</param> /// <param name="script">The script code to render</param> /// <param name="addScriptTags">Ignored for header rendering used for all other insertions</param> /// <param name="renderMode">Where the block is rendered</param> public void RegisterClientScriptBlock(Control control, Type type, string key, string script, bool addScriptTags, ScriptRenderModes renderMode) { if (renderMode == ScriptRenderModes.Inherit) renderMode = DefaultScriptRenderMode; if (control.Page.Header == null || renderMode != ScriptRenderModes.HeaderTop && renderMode != ScriptRenderModes.Header && renderMode != ScriptRenderModes.BottomOfPage) { RegisterClientScriptBlock(control, type, key, script, addScriptTags); return; } // No dupes - ref script include only once const string identifier = "scriptblock_"; if (HttpContext.Current.Items.Contains(identifier + key)) return; HttpContext.Current.Items.Add(identifier + key, string.Empty); StringBuilder sb = new StringBuilder(); // Embed in header sb.AppendLine("\r\n<script type=\"text/javascript\">"); sb.AppendLine(script); sb.AppendLine("</script>"); int? index = HttpContext.Current.Items["__ScriptResourceIndex"] as int?; if (index == null) index = 0; if (renderMode == ScriptRenderModes.HeaderTop) { control.Page.Header.Controls.AddAt(index.Value, new LiteralControl(sb.ToString())); index++; } else if(renderMode == ScriptRenderModes.Header) control.Page.Header.Controls.Add(new LiteralControl(sb.ToString())); else if (renderMode == ScriptRenderModes.BottomOfPage) control.Page.Controls.AddAt(control.Page.Controls.Count-1,new LiteralControl(sb.ToString())); HttpContext.Current.Items["__ScriptResourceIndex"] = index; } Note that the routine has to keep track of items inserted by id so that if the same item is added again with the same key it won’t generate two script entries. Additionally the code has to keep track of how many insertions have been made at the top of the document so that entries are added in the proper order. 
The RegisterScriptInclude method is similar but there’s some additional logic in here to deal with script file references and ClientScriptProxy’s (optional) custom resource handler that provides script compression /// <summary> /// Registers a client script reference into the page with the option to specify /// the script location in the page /// </summary> /// <param name="control">Any control instance - typically page</param> /// <param name="type">Type that acts as qualifier (uniqueness)</param> /// <param name="url">the Url to the script resource</param> /// <param name="ScriptRenderModes">Determines where the script is rendered</param> public void RegisterClientScriptInclude(Control control, Type type, string url, ScriptRenderModes renderMode) { const string STR_ScriptResourceIndex = "__ScriptResourceIndex"; if (string.IsNullOrEmpty(url)) return; if (renderMode == ScriptRenderModes.Inherit) renderMode = DefaultScriptRenderMode; // Extract just the script filename string fileId = null; // Check resource IDs and try to match to mapped file resources // Used to allow scripts not to be loaded more than once whether // embedded manually (script tag) or via resources with ClientScriptProxy if (url.Contains(".axd?r=")) { string res = HttpUtility.UrlDecode( StringUtils.ExtractString(url, "?r=", "&", false, true) ); foreach (ScriptResourceAlias item in ScriptResourceAliases) { if (item.Resource == res) { fileId = item.Alias + ".js"; break; } } if (fileId == null) fileId = url.ToLower(); } else fileId = Path.GetFileName(url).ToLower(); // No dupes - ref script include only once const string identifier = "script_"; if (HttpContext.Current.Items.Contains( identifier + fileId ) ) return; HttpContext.Current.Items.Add(identifier + fileId, string.Empty); // just use script manager or ClientScriptManager if (control.Page.Header == null || renderMode == ScriptRenderModes.Script || renderMode == ScriptRenderModes.Inline) { RegisterClientScriptInclude(control, type,url, url); return; } // Retrieve script index in header int? index = HttpContext.Current.Items[STR_ScriptResourceIndex] as int?; if (index == null) index = 0; StringBuilder sb = new StringBuilder(256); url = WebUtils.ResolveUrl(url); // Embed in header sb.AppendLine("\r\n<script src=\"" + url + "\" type=\"text/javascript\"></script>"); if (renderMode == ScriptRenderModes.HeaderTop) { control.Page.Header.Controls.AddAt(index.Value, new LiteralControl(sb.ToString())); index++; } else if (renderMode == ScriptRenderModes.Header) control.Page.Header.Controls.Add(new LiteralControl(sb.ToString())); else if (renderMode == ScriptRenderModes.BottomOfPage) control.Page.Controls.AddAt(control.Page.Controls.Count-1, new LiteralControl(sb.ToString())); HttpContext.Current.Items[STR_ScriptResourceIndex] = index; } There’s a little more code here that deals with cleaning up the passed in Url and also some custom handling of script resources that run through the ScriptCompressionModule – any script resources loaded in this fashion are automatically cached based on the resource id. Raw urls extract just the filename from the URL and cache based on that. All of this to avoid doubling up of scripts if called multiple times by multiple instances of the same control for example or several controls that all load the same resources/includes. 
Finally RegisterClientScriptResource utilizes the previous method to wrap the WebResourceUrl as well as some custom functionality for the resource compression module: /// <summary> /// Returns a WebResource or ScriptResource URL for script resources that are to be /// embedded as script includes. /// </summary> /// <param name="control">Any control</param> /// <param name="type">A type in assembly where resources are located</param> /// <param name="resourceName">Name of the resource to load</param> /// <param name="renderMode">Determines where in the document the link is rendered</param> public void RegisterClientScriptResource(Control control, Type type, string resourceName, ScriptRenderModes renderMode) { string resourceUrl = GetClientScriptResourceUrl(control, type, resourceName); RegisterClientScriptInclude(control, type, resourceUrl, renderMode); } /// <summary> /// Works like GetWebResourceUrl but can be used with javascript resources /// to allow using of resource compression (if the module is loaded). /// </summary> /// <param name="control"></param> /// <param name="type"></param> /// <param name="resourceName"></param> /// <returns></returns> public string GetClientScriptResourceUrl(Control control, Type type, string resourceName) { #if IncludeScriptCompressionModuleSupport // If wwScriptCompression Module through Web.config is loaded use it to compress // script resources by using wcSC.axd Url the module intercepts if (ScriptCompressionModule.ScriptCompressionModuleActive) { string url = "~/wwSC.axd?r=" + HttpUtility.UrlEncode(resourceName); if (type.Assembly != GetType().Assembly) url += "&t=" + HttpUtility.UrlEncode(type.FullName); return WebUtils.ResolveUrl(url); } #endif return control.Page.ClientScript.GetWebResourceUrl(type, resourceName); } This code merely retrieves the resource URL and then simply calls back to RegisterClientScriptInclude with the URL to be embedded which means there’s nothing specific to deal with other than the custom compression module logic which is nice and easy. What else is there in ClientScriptProxy? ClientscriptProxy also provides a few other useful services beyond what I’ve already covered here: Transparent ScriptManager and ClientScript calls ClientScriptProxy includes a host of routines that help figure out whether a script manager is available or not and all functions in this class call the appropriate object – ScriptManager or ClientScript – that is available in the current page to ensure that scripts get embedded into pages properly. This is especially useful for control development where controls have no control over the scripting environment in place on the page. RegisterCssLink and RegisterCssResource Much like the script embedding functions these two methods allow embedding of CSS links. CSS links are appended to the header or to a form declared with runat=”server”. LoadControlScript Is a high level resource loading routine that can be used to easily switch between different script linking modes. It supports loading from a WebResource, a url or not loading anything at all. This is very useful if you build controls that deal with specification of resource urls/ids in a standard way. Check out the full Code You can check out the full code to the ClientScriptProxyClass here: ClientScriptProxy.cs ClientScriptProxy Documentation (class reference) Note that the ClientScriptProxy has a few dependencies in the West Wind Web Toolkit of which it is part of. 
ControlResources holds a few standard constants and script resource links and the ScriptCompressionModule which is referenced in a few of the script inclusion methods. There’s also another useful ScriptContainer companion control  to the ClientScriptProxy that allows scripts to be placed onto the page’s markup including the ability to specify the script location and script minification options. You can find all the dependencies in the West Wind Web Toolkit repository: West Wind Web Toolkit Repository West Wind Web Toolkit Home Page© Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET  JavaScript  

    Read the article

  • Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET

    - by user647124
    This year I embarked on a journey to migrate a group of ASP.NET web applications developed to integrate with WebLogic Portal 9.2 via the AquaLogic® Interaction .NET Application Accelerator 1.0 to instead use the Oracle WebCenter WSRP Producer for .NET, integrated with WebLogic Portal 10.3.4. It has been a very winding path and this blog entry is intended to share both the lessons learned and the relevant approaches that led to those learnings. Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there. For the Curious From the perspective of necessity, this section would be better at the end. If it were there, though, it would probably be read by far fewer people, including those that are actually interested in these types of sections. Those in a hurry may skip past and be none the worse for it in dealing with the hands-on bits of performing a migration from .NET Accelerator to WSRP Producer. For others who want to talk about why they did what they did after they did it, or just want to know for themselves, enjoy. A Brief (and edited) History of the WSRP for .NET Technologies (as Relevant to this Post) Note: This section is for those who are curious about why the migration path is not as simple as many other Oracle technologies. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it. The currently deployed architecture that was to be migrated and upgraded achieved initial integration between .NET and J2EE over the WSRP protocol through the use of the AquaLogic Interaction .NET Application Accelerator. The .NET Accelerator allowed the applications that were written in ASP.NET and deployed on a Microsoft Internet Information Server (IIS) to interact with a WebLogic Portal application deployed on a WebLogic (J2EE application) Server (both version 9.2, the state of the art at the time of its creation). At the time this architectural decision for the application was made, both the AquaLogic and WebLogic brands were owned by BEA Systems. The AquaLogic brand included products acquired by BEA through the acquisition of Plumtree, whose flagship product was a portal platform available in both J2EE and .NET versions. As part of this dual technology support an adapter was created to facilitate the use of WSRP as a communication protocol where customers wished to integrate components from both versions of the Plumtree portal. The adapter evolved over several product generations to include a broad array of both standard and proprietary WSRP integration capabilities. Later, BEA Systems was acquired by Oracle. Over the course of several years Oracle has acquired a large number of portal applications and has taken the strategic direction to migrate users of these myriad (and formerly competitive) products to the Oracle WebCenter technology stack. As part of Oracle’s strategic technology roadmap, older portal products are being scheduled for end of life, including the portal products that were part of the BEA acquisition. The .NET Accelerator has been modified over a very long period of time with features driven by users of that product and developed under three different vendors (each a direct competitor in the same solution space prior to merger). 
    The Oracle WebCenter WSRP Producer for .NET was introduced much more recently with the key objective to specifically address the needs of the WebCenter customers developing solutions accessible through both J2EE and .NET platforms utilizing the WSRP specifications. The Oracle Product Development Team also provides these insights on the drivers for developing the WSRP Producer: ***************************************** Support for ASP.NET AJAX. Controls using the ASP.NET AJAX script manager do not function properly in the Application Accelerator for .NET. Support 2 way SSL in WLP. This was not possible with the proxy/bridge set up in the existing Application Accelerator for .NET. Allow developers to code portlets (Web Parts) using the .NET framework rather than a proprietary framework. Developers had to use the Application Accelerator for .NET plug-ins to Visual Studio to manage preferences and profile data. This is now replaced with the .NET Framework Personalization (for preferences) and Profile providers. The WSRP Producer for .NET was created as a new way of developing .NET portlets. It was never designed to be an upgrade path for the Application Accelerator for .NET. .NET developers would create new .NET portlets with the WSRP Producer for .NET and leave any existing .NET portlets running in the Application Accelerator for .NET. ***************************************** The advantage of creating a new solution for WSRP is a product that is far easier for Oracle to maintain and support, which in turn improves quality, reliability and maintainability for their customers. No changes to J2EE applications consuming the WSRP portlets previously rendered by the .NET Accelerator are required to migrate from the AquaLogic WSRP solution. For some customers using the .NET Accelerator the challenge is adapting their current .NET applications to work with the WSRP Producer (or any other WSRP adapter, as they are proprietary by nature). Part of this adaptation is the need to deploy the .NET applications as children of the WSRP Producer web application, which is deployed as the root. Differences between .NET Accelerator and WSRP Producer Note: This section is for those who are curious about why the migration is not as pluggable as something such as changing security providers in WebLogic Server. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it. The basic terminology used to describe the participating applications in a WSRP environment is the same when applied to either the .NET Accelerator or the WSRP Producer: Producer and Consumer. In both cases the .NET application serves as what is referred to in a WSRP environment as the Producer. The difference lies in how the two adapters create the WSRP translation of the .NET application. The .NET Accelerator, as the name implies, is meant to serve as a quick way of adding WSRP capability to a .NET application. As such, at a high level, the .NET Accelerator behaves as a proxy for requests between the .NET application and the WSRP Consumer. A WSRP request is sent from the consumer to the .NET Accelerator, the .NET Accelerator transforms this request into an ASP.NET request, receives the response, then transforms the response into a WSRP response. The .NET Accelerator is deployed as a stand-alone application on IIS. The WSRP Producer is deployed as a parent application on IIS and all ASP.NET modules that will be made available over WSRP are deployed as children of the WSRP Producer application. 
In this manner, the WSRP Producer acts more as a request filter than a proxy in the WSRP transactions between Producer and Consumer.

Highly Recommended: Enabling Logging

Note: You can skip this section now, but you will most likely want to come back to it later, so why not just read it now?

Logging is very helpful in tracking down the causes of any anomalies during testing of migrated portlets. To enable WSRP Producer logging, update the Application_Start method in the Global.asax.cs of your .NET application by adding:

log4net.Config.XmlConfigurator.Configure();

IIS logs will usually (in a standard configuration) be in a sub folder under C:\WINDOWS\system32\LogFiles\W3SVC. WSRP Producer logs will be found at C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\Logs\WSRPProducer.log. InputTrace.webinfo and OutputTrace.webinfo are located under C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault and can be useful in debugging issues related to markup transformations.

Things You Must Do

Merge Web.Config

Note: If you have been skipping all the sections that you can, now is the time to stop and pay attention. Because the existing .NET application will become a sub-application to the WSRP Producer, you will want to merge the required settings from the existing Web.Config into the one in the WSRP Producer.

Use the WSRP Producer Master Page

The Master Page installed for the WSRP Producer provides common hidden form fields and JavaScript to facilitate portlet instance management and display configuration when the child page is being rendered over WSRP. You add the Master Page by including it in the <%@ Page declaration with MasterPageFile="~/portlets/Resources/MasterPages/WSRP.Master".

You then replace:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" >
<HTML>
<HEAD>

with:

<asp:Content ID="ContentHead1" ContentPlaceHolderID="wsrphead" Runat="Server">

and:

</HEAD>
<body>
<form id="theForm" method="post" runat="server">

with:

</asp:Content>
<asp:Content ID="ContentBody1" ContentPlaceHolderID="Main" Runat="Server">

and finally:

</form>
</body>
</HTML>

with:

</asp:Content>

In the event you already use Master Pages, adapt your existing Master Pages to be sub-masters. See Nested ASP.NET Master Pages for a detailed reference of how to do this.

It Happened to Me, It Might Happen to You…Or Not

Watch for Use of Session or Request in OnInit

In the event the .NET application being modified has pages developed to assume the user has been authenticated in an earlier page request, there may be direct or indirect references in the OnInit method to Request or Session objects that may not have been created yet. This will vary from application to application, so the recommended approach is to test first. If there is an issue with a page running as a WSRP portlet, check for potential references in the OnInit method (including references by methods called within OnInit) to Session or Request objects. If there are, the simplest solution is to create a new method and call that method once the necessary object(s) are fully available. I find doing this at the start of the Page_Load method to be the easiest approach.
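As a rough illustration of that pattern (the method name and session key here are hypothetical, not taken from the migrated applications), the session-dependent logic simply moves out of OnInit and runs once the request context exists:

protected override void OnInit(EventArgs e)
{
    base.OnInit(e);
    // Avoid touching Session or Request here; when the page is rendered
    // over WSRP they may not have been created yet.
}

protected void Page_Load(object sender, EventArgs e)
{
    // By the time Page_Load runs, Session and Request are available.
    InitializeUserContext();
}

// Hypothetical method holding the logic that used to live in OnInit.
private void InitializeUserContext()
{
    string userName = (string)Session["UserName"]; // assumed session key
    // ... continue page setup that depends on the authenticated user ...
}
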
Case Sensitivity

.NET languages are not case sensitive, but Java is. This means it is possible to have many variations of SRC= and src= or .JPG and .jpg. The preferred solution is to make these markup instances all lower case in your .NET application. This will allow the default Rewriter rules in wsrp-producer.xml to work as is. If this is not practical, then make duplicates of any rules where an issue is occurring due to upper or mixed case usage in the .NET application markup, and match the case in use in the duplicate rule. For example:

<RewriterRule>
  <LookFor>(href=\"([^\"]+)</LookFor>
  <ChangeToAbsolute>true</ChangeToAbsolute>
  <ApplyTo>.axd,.css</ApplyTo>
  <MakeResource>true</MakeResource>
</RewriterRule>

may need to be duplicated as:

<RewriterRule>
  <LookFor>(HREF=\"([^\"]+)</LookFor>
  <ChangeToAbsolute>true</ChangeToAbsolute>
  <ApplyTo>.axd,.css</ApplyTo>
  <MakeResource>true</MakeResource>
</RewriterRule>

While it is possible to write a regular expression that will handle mixed case usage, it would be long and strenuous to test and maintain, so the recommendation is to use duplicate rules.

Is it Still Relative?

Some .NET applications base relative paths on a fixed root location. With the introduction of the WSRP Producer, the root has moved up one level. References to ~/ will need to be updated to ~/portlets, and many ../ paths will need another ../ in front.

I Can See You But I Can't Find You

This issue was first discovered while debugging modules with code that referenced the form on a page from the code-behind by name and/or id. The initial error presented itself as a run-time error that was difficult to interpret over WSRP but seemed clear when run as straight ASP.NET, as it indicated that the object with the form name did not exist. Since the form name was no longer valid after implementing the WSRP Master Page, the likely fix seemed to be simply updating the references in the code. However, as the WSRP Master Page is external to the code, a compile time error resulted:

Error 155  The name 'form1' does not exist in the current context  C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\portlets\legacywebsite\module\Screens\Reporting.aspx.cs  51  52  legacywebsite.module

Much hair-pulling research later, it was discovered that the use of the FindControl method was causing the issue. FindControl does not work quite as expected once a Master Page has been introduced, as the controls become nested inside other controls, requiring a recursion to find them that is not part of the FindControl method. In code where the page form is referenced by name, there are two steps to the solution. First, the form needs to be referenced in code generically with Page.Form. For example, this:

ToggleControl ctrl = new ToggleControl(frmManualEntry, FunctionLibrary.ParseArrayLst(userObj.Roles));

becomes this:

ToggleControl ctrl = new ToggleControl(Page.Form, FunctionLibrary.ParseArrayLst(userObj.Roles));

Generally the form id is referenced in most ASP.NET applications as a path to a control on the form. Reaching the control once a Master Page has been added requires an additional method to recurse through the controls collection within the form and find the control ID. The following method (found at Rick Strahl's Web Log) corrects this very nicely:

public static Control FindControlRecursive(Control Root, string Id)
{
    if (Root.ID == Id)
        return Root;
    foreach (Control Ctl in Root.Controls)
    {
        Control FoundCtl = FindControlRecursive(Ctl, Id);
        if (FoundCtl != null)
            return FoundCtl;
    }
    return null;
}

Where the form name is not referenced, simply using the FindControlRecursive method in place of FindControl will be all that is necessary.
Following the second part of the example referenced earlier, the method called with Page.Form changes its value extraction code block from this:

Label lblErrMsg = (Label)frmRef.FindControl("lblBRMsg");

to this:

Label lblErrMsg = (Label)FunctionLibrary.FindControlRecursive(frmRef, "lblBRMsg");

The Master That Won't Step Aside

In most migrations it is preferable to make as few changes as possible. In one case I ran across an existing Master Page that would not function as a sub-Master Page. While it would probably have been educational to trace down why, the expedient process of updating it to take the place of the WSRP Master Page is the route I took. The changes are highlighted below:

…
<asp:ContentPlaceHolder ID="wsrphead" runat="server"></asp:ContentPlaceHolder>
</head>
<body leftMargin="0" topMargin="0">
<form id="TheForm" runat="server">
<input type="hidden" name="key" id="key" value="" />
<input type="hidden" name="formactionurl" id="formactionurl" value="" />
<input type="hidden" name="handle" id="handle" value="" />
<asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true" >
</asp:ScriptManager>

This approach did not work for all existing Master Pages, but fortunately all of the other existing Master Pages I have run across worked fine as sub-Masters to the WSRP Master Page.

Moving On

In Enterprise Portals, even after you get everything working, the work is not finished. Next you need to get it to where everyone will work with it.

Migration Planning

Provided that the server where IIS is running is adequately sized, it is possible to run both the .NET Accelerator and the WSRP Producer on the same server during the upgrade process. The upgrade can be performed incrementally, i.e., one portlet at a time, if server administration processes support it. Those processes would include the ability to manage a second producer in the consuming portal and to change over individual portlet instances from one provider to the other. If processes or requirements demand that all portlets be cut over at the same time, it needs to be determined whether this cutover should include a new producer, updating all of the portlets in the consumer, or whether the WSRP Producer portlet configuration must maintain the naming conventions used by the .NET Accelerator and simply change the WSRP end point configured in the consumer. In some enterprises it may even be necessary to maintain the same WSDL end point, at which point the IIS configuration will be where the updates occur. The downside to such a requirement is that it makes rolling back very difficult, should the need arise.

Location, Location, Location

Not everyone wants the web application to have the descriptively obvious wsrpdefault location, or needs to create a second WSRP site on the same server. The instructions below are from the product team and, while targeted towards making a second site, will also work for creating a site with a different name and then removing the old site. You can also change just the name in IIS.

Manually Creating a WSRP Producer Site

Instructions (NOTE: all executables used are the same ones used by the installer, and "wsrpdev" will be the name of the new instance):

1. Copy C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault to C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev.
2. Bring up a command window as an administrator.
3. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\IISAppAccelSiteCreator.exe install WSRPProducers wsrpdev "C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev" 8678 2.0.50727
4. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev "NETWORK SERVICE" 3 1
5. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev EVERYONE 1 1
6. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\1.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev.
7. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\2.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev.

Tests:

1. Bring up a browser on the host itself and go to http://localhost:8678/wsrpdev/wsdl/1.0/WSRPService.wsdl and make sure that the URLs in the XML returned include the wsrpdev changes you made in step 6.
2. Bring up a browser on the host itself and see if the default sample comes up: http://localhost:8678/wsrpdev/portlets/ASPNET_AJAX_sample/default.aspx
3. Register the producer in WLP and test the portlet.

Changing the Port Used by the WSRP Producer

The pre-configured port for the WSRP Producer is 8678. You can change this port by updating both the IIS configuration and C:\Oracle\Middleware\WSRPProducerForDotNet\[WSRP_APP_NAME]\wsdl\1.0\WSRPService.wsdl.

Do You Need to Migrate?

Oracle Premier Support ended in November of 2010 for AquaLogic Interaction .NET Application Accelerator 1.x, and Extended Support ends in November 2012 (see http://www.oracle.com/us/support/lifetime-support/lifetime-support-software-342730.html for other related dates). This means that integration with products released after November of 2010 is not supported. If having such support is the policy within your enterprise, you do indeed need to migrate. If changes in your enterprise cause your current solution with the .NET Accelerator to no longer function properly, you may need to migrate. Migration is a choice, and if the goals of your enterprise are to take full advantage of newer technologies, then migration is certainly one activity you should be planning for.

    Read the article

  • Manually setting breakpoints in WinDBG

    - by chris
    I am trying to examine the assembly for an executable using WinDBG, but I am having a hard time getting to it. I want to set a breakpoint at the first instruction in my program, but when I try to do that manually (using the address of the module), WinDBG tells me that it is "unable to insert breakpoint" at that location due to an "Invalid access to memory location." I notice that when I create a breakpoint through the source code GUI, the address is not the same as the first part of my module (In my example: "Win32FileOpen", a simple program I wrote.) Is there a header of some sort that requires adding an offset to the address of my module? In another question, I saw the suggestion: "I would attempt to calculate the breakpoint address as: Module start + code start + code offset" but was unsure where to obtain those values. Can somebody please elaborate on this? The reason I don't just use the source GUI is that I want to be able to do this with a program that I may not have the source/symbols for. If there is an easier way to immediately start working with the executable I open, please let me know. (e.g. Opening an .exe Olly immediately shows me the assembly for that .exe, searching for referenced strings gives me results from that module, etc. WinDBG seems to start me off in ntdll.dll, which is not usually useful for me.) 0:000> lm start end module name 00000000`00130000 00000000`0014b000 Win32FileOpen C (private pdb symbols) C:\cfinley\code\Win32FileOpen\Debug\Win32FileOpen.pdb 00000000`73bd0000 00000000`73c2c000 wow64win (deferred) 00000000`73c30000 00000000`73c6f000 wow64 (deferred) 00000000`74fe0000 00000000`74fe8000 wow64cpu (deferred) 00000000`77750000 00000000`778f9000 ntdll (pdb symbols) c:\symbols\mssymbols\ntdll.pdb\15EB43E23B12409C84E3CC7635BAF5A32\ntdll.pdb 00000000`77930000 00000000`77ab0000 ntdll32 (deferred) 0:000> bu 00000000`00130000 0:000> bl 0 e x86 00000000`001413a0 0001 (0001) 0:**** Win32FileOpen!main <-- One that is generated via GUI 1 e x86 00000000`00130000 0001 (0001) 0:**** Win32FileOpen!__ImageBase <-- One I tried to set manually 0:000> g Unable to insert breakpoint 1 at 00000000`00130000, Win32 error 0n998 "Invalid access to memory location." bp1 at 00000000`00130000 failed WaitForEvent failed ntdll!LdrpDoDebuggerBreak+0x31: 00000000`777fcb61 eb00 jmp ntdll!LdrpDoDebuggerBreak+0x33 (00000000`777fcb63)
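
    One approach worth trying (a sketch only; the base address below is taken from the question's listing and the exact output is not verified) is to let the debugger resolve the executable's entry point instead of breaking on the module base, which points at the PE header rather than code:

        0:000> !dh -f 00130000
        0:000> bp $exentry
        0:000> g

    The !dh output includes the "address of entry point" RVA, which can be added to the module base for a manual bp; the $exentry pseudo-register is documented as the entry point of the process's first executable image, which avoids the arithmetic. The listing above shows a WOW64 (32-bit under 64-bit) session, where symbol and context handling can differ, so treat this as a starting point rather than a guaranteed recipe.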

    Read the article

  • shouldStartLoadWithRequest is not called when using AJAX/XMLHttpRequest

    - by el_migu_el
    Hi, I am trying to send method invocations from JavaScript to Objective-C and vice versa. Everything works fine for window.location triggered urls, which are catched by shouldStartLoadWithRequest. Now if I try to use an AJAX call instead, shouldStartLoadWithRequest is not called. Is there a way to do this? Mainly I do not want to be restricted to the max URL size on data that can be passed from JavaScript to Objective-C. My UIWebViewDelegate implements: - (BOOL)webView:(UIWebView *)webView shouldStartLoadWithRequest:(NSURLRequest *)request navigationType:(UIWebViewNavigationType)navigationType { NSString *url = [[request URL] absoluteString]; NSRange urlrange = [url rangeOfString:@"myScheme://"]; if(urlrange.length > 0){ NSLog(@"this is an objective-c call, do not load link: %@", [url substringWithRange:NSMakeRange(urlrange.location, [url length])] ); return NO; } else { NSLog(@"not an objective-c call, load link: ", url ); return YES; } } My JavaScript calls: // works window.location.href = "myScheme://readyHref"; // fails var xmlHttpReq = false; if (window.XMLHttpRequest) { xmlHttpReq = new XMLHttpRequest(); } else if (window.ActiveXObject) { xmlHttpReq = new ActiveXObject("Microsoft.XMLHTTP"); } xmlHttpReq.open('GET', "myScheme://readyAJAX", false); xmlHttpReq.send();
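
    A workaround often used for this (a sketch only; the payload variable name is an assumption) is to keep the large data in a JavaScript variable, signal the native side with a throw-away iframe navigation - which does pass through shouldStartLoadWithRequest - and then let Objective-C pull the payload back, so nothing has to fit in the URL:

        // JavaScript: stash the data, then trigger the delegate via an iframe.
        window.payloadForNative = JSON.stringify(someLargeObject); // assumed payload
        var frame = document.createElement("iframe");
        frame.style.display = "none";
        frame.src = "myScheme://readyAJAX"; // goes through shouldStartLoadWithRequest
        document.body.appendChild(frame);
        setTimeout(function () { document.body.removeChild(frame); }, 0);

    On the Objective-C side, inside the myScheme:// branch, something like NSString *payload = [webView stringByEvaluatingJavaScriptFromString:@"window.payloadForNative"]; retrieves the data.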

    Read the article

  • VISUAL STUDIO 2008 SETUP PROJECT MSI BUILD with Bootstrapping for quiet installation

    - by rajadiga
    I built a Visual Studio 2008 setup project that produces an MSI; it depends on .NET 3.5. I added prerequisites such as .NET 3.5, Microsoft Office interoperability, VS Tools for Office System 3.0 Runtime, etc. After that I selected "Download prerequisites from the same location as my application" under the install location for prerequisites, and built the setup. I can find mysetup.msi in the Release directory. On a new machine I started a fresh installation of my application by clicking mySetup.msi. A dialog shows: "This Setup Requires .NET framework 3.5, Please install .NET setup then run this setup, .NET Framework can be obtained from web Do you want to do that now?" It only offers Yes/No - if I press Yes it goes to the Microsoft website. How can I avoid that? I want the setup to install the .NET Framework from the same location where I put all the setup files, including mysetup.msi. In the case of quiet installation, cmd /c "msiexec /package mysetup.msi /quiet /log install.log", the log shows it gets only halfway through the installation and then errors: Property(S): HideFatalErrorForm = TRUE MSI (s) (D0:24) [00:07:08:015]: Product: my product -- Installation failed. === Logging stopped: 3/23/2010 0:07:08 === So how can I complete the installation without user intervention and without error using a VS2008 setup project? Thanks in advance for any input.
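
    For what it is worth, the .msi on its own never installs prerequisites - that is the job of the generated bootstrapper (setup.exe) - so an unattended install usually chains the packages explicitly. A sketch of that chain (the prerequisite folder and file names are assumptions; they depend on how the packages were published next to the MSI, so check what the build actually produced):

        rem install the .NET 3.5 prerequisite silently, then the MSI
        DotNetFX35\dotNetFx35setup.exe /q /norestart
        msiexec /i mysetup.msi /qn /log install.log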

    Read the article

  • Standard Apache (not OHS) with mod_osso for Single Signon

    - by Markos Fragkakis
    The mod_osso.so (the Apache plugin for Single Signon, provided by Oracle) is distributed with the Oracle HTTP Server (OHS), which is essentially a modified Apache. I am trying to use it on the standard Apache HTTP Server, and have not managed to get it to work. Configuration: Apache 2.2.15 OHS from the Oracle Web Tier Tools 11.1.1.2.0 Red Hat Linux 64 bit I have: Included the module in the modules directory (copied from corresponding modules dir in OHS) Included the libraries libiau.so and libclutsh.so.11.1 from Oracle Home. The absence of these libraries produced an error on starting Apache. Produced a osso.conf using the ssoreg.sh tool provided with OID (the LDAP implementation of Oracle) Created the required mod_osso.conf file, which I included in httpd.conf. The error I get when starting Apache is this: # /opt/apache_sso/bin/apachectl -k start httpd: Syntax error on line 1075 of /opt/apache_sso/conf/httpd.conf: Syntax error on line 1 of /opt/apache_sso/conf/mod_osso.conf: Cannot load /opt/apache_sso/modules/mod_osso.so into server: /opt/apache_sso/modules/mod_osso.so: undefined symbol: _audit_authentication_request My mod_osso.conf: # cat /opt/apache_sso/conf/mod_osso.conf LoadModule osso_module modules/mod_osso.so <IfModule mod_osso.c> OssoIdleTimeout off OssoIpCheck on OssoConfigFile conf/osso.conf #Location is the URI you want to protect <Location /myapp> require valid-user #OHS 11g AuthType Osso #OHS 10g AuthType Basic AuthType Osso </Location> </IfModule> Has anyone made mod_osso work on standard Apache HTTP server?

    Read the article

  • Javascript/Greasemonkey: search for something then set result as a value

    - by thewinchester
    Ok, I'm a bit of a n00b when it comes to JS (I'm not the greatest programmer) so please be gentle - specially if my questions been asked already somewhere and I'm too stupid to find the right answer. Self deprecation out of the way, let's get to the question. Problem There is a site me and a large group of friends frequently use which doesn't display all the information we may like to know - in this case an airline bookings site and the class of travel. While the information is buried in the code of the page, it isn't displayed anywhere to the user. Using a Greasemonkey script, I'd like to liberate this piece of information and display it in a suitable format. Here's the psuedocode of what I'm looking to do. Search dom for specified element define variables Find a string of text If found Set result to a variable Write contents to page at a specific location (before a specified div) If not found Do nothing I think I've achieved most of it so far, except for the key bits of: Searching for the string: The page needs to search for the following piece of text in the page HEAD: mileageRequest += "&CLASSES=S,S-S,S-S"; The Content I need to extract and store is between the second equals (=) sign and the last comma ("). The contents of this area can be any letter between A-Z. I'm not fussed about splitting it up into an array so I could use the elements individually at this stage. Writing the result to a location: Taking that found piece of text and writing it to another location. Code so far This is what I've come up so far, with bits missing highlighted. buttons = document.getElementById('buttons'); ''Search goes here var flightClasses = document.createElement("div"); flightClasses.innerHTML = '<div id="flightClasses"> ' + '<h2>Travel classes</h2>' + 'For the above segments, your flight classes are as follows:' + 'write result here' + '</div>'; main.parentNode.insertBefore(flightClasses, buttons); If anyone could help me, or point me in the right direction to finish this off I'd appreciate it.
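
    A rough sketch of the missing pieces (the regular expression and the use of 'buttons' as the insertion point are assumptions based on the description above):

        // Search the page source for the mileageRequest line and capture the class codes.
        var match = document.documentElement.innerHTML.match(/mileageRequest\s*\+=\s*"&CLASSES=([^"]+)"/);
        if (match) {
            var classes = match[1]; // e.g. "S,S-S,S-S"
            var buttons = document.getElementById('buttons');
            var flightClasses = document.createElement("div");
            flightClasses.innerHTML = '<div id="flightClasses">' +
                '<h2>Travel classes</h2>' +
                'For the above segments, your flight classes are as follows: ' +
                classes +
                '</div>';
            buttons.parentNode.insertBefore(flightClasses, buttons);
        }
        // If not found, nothing is written - matching the pseudocode's "Do nothing" branch.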

    Read the article

  • Task failed because "sgen.exe" was not found

    - by Kinze
    I am getting the following error when attempting to build my project in Visual Studio 2008 Professional Edition: *Task failed because “sgen.exe” was not found, or the correct Microsoft Windows SDK is not installed. The task is looking for “sgen.exe” in the “bin” subdirectory beneath the location specified in the InstallationFolder value of the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SDKs\Windows\v6.0A. You may be able to solve the problem by doing one of the following: 1) Install the Microsoft Windows SDK for Windows Server 2008 and .NET Framework 3.5. 2) Install Visual Studio 2008. 3) Manually set the above registry key to the correct location. 4) Pass the correct location into the “ToolPath” parameter of the task.* I tried downloading Microsoft Windows SDK for Windows Server 2008 and .NET Framework 3.5, and still get the error. I also tried downloading the Windows 7 SDK and .NET Framework 3.5 and still same result. I also tried to manually editing the registry to change the InstallationFolder and I tried repairing the Visual Studio install. The project was originally created on WinXP and I am trying to compile on a reformatted machine running Windows 7 Enterprise.
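
    Two project-level workarounds are commonly suggested for this error when reinstalling the SDK does not take; both go in the .csproj (the SDK path shown is an assumption - point it at whichever bin folder actually contains sgen.exe on the build machine):

        <PropertyGroup Condition=" '$(Configuration)' == 'Release' ">
          <!-- Option 1: skip serialization assembly generation entirely -->
          <GenerateSerializationAssemblies>Off</GenerateSerializationAssemblies>
          <!-- Option 2: tell the SGen task where sgen.exe lives -->
          <SGenToolPath>C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin</SGenToolPath>
        </PropertyGroup>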

    Read the article

  • How to catch an expected (and intended) 302 response code with generic XmlHttpRequest?

    - by Anthony
    So, if you look back at my previous question about Exchange Autodiscover, you'll see that the easiet way to get the autodiscover URL is to send a non-secure, non-authenticated GET request to the server, ala: http://autodiscover.exchangeserver.org/autodiscover/autodiscover.xml The server will respond with a 302 redirect with the correct url in the Location header. I'm trying out something really simple at first with a Chrome extension, where I have: if (req.readyState==4 && req.status==302) { return req.getResponseHeader("Location"); } With another ajax call set up with the full XML Post and the user credentials, But instead Chrome hangs at this point, and a look at the developer panel shows that it is not returning back the response but instead is acting like no response was given, meanwhile showing a Uncaught Error: NETWORK_ERR: XMLHttpRequest Exception 101 in the error log. The way I see it, refering to the exact response status is about the same as "catching" it, but I'm not sure if the problem is with Chrome/WebKit or if this is how XHR requests always handle redirects. I'm not sure how to catch this so that I can get still get the headers from the response. Or would it be possible to set up a secondary XHR such that when it gets the 302, it sends a totally different request? Quick Update I just changed it so that it doesn't check the response code: if (req.readyState==4) { return req.getResponseHeader("Location"); } and instead when I alert out the value it's null. and there is still the same error and no response in the dev console. SO it seems like it either doesn't track 302 responses as responses, or something happens after that wipes that response out?

    Read the article

  • Debugging Cactus Tests in Eclipse

    - by Th3sandm4n
    Side note: This is inherited code, I didn't do any of the setup and am new to the project. I'm trying to set up remote debugging in Eclipse for these unit tests that use Cactus. I've read around a bit (but I can't seem to find any REAL information how to set this up). Closest I've found is here (http://www.eclipse.org/webtools/community/tutorials/CactusInWTP/CactusInWTP.html), but it just says to Debug - Debug on Server, but nowhere does it say where the debug port is set or anything, and I can't find anything on how to enable this, set it. Just asking to see if anyone has set this up before, it would really help stepping through the code rather than just logging. The plugin (http://jakarta.apache.org/cactus/integration/eclipse/runner_plugin.html) Looks promising, but I also don't even know where to download it, it doesn't link to a location -.- The project uses ant, cactus, and I'm using Eclipse. Thanks EDIT Here is the target I'm using <junit fork="no" forkmode="perTest" printsummary="yes" haltonfailure="no" haltonerror="no" failureproperty="tests.failed"> <jvmarg value="-Xdebug" /> <jvmarg value="-Xrunjdwp:transport=dt_socket,address=localhost:8005,server=y,suspend=y" /> <formatter type="xml" usefile="true" /> <formatter type="plain" usefile="false" /> <classpath> <pathelement location="${clover.jar}"/> <path refid="cactus.classpath.id" /> <pathelement location="../ejb/src" /> </classpath> <sysproperty key="cactus.contextURL" value="${cactus.contextURL}"/> <test name="com.test.AllTests" outfile="TESTS" /> </junit>
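
    One thing to check in the target above: the Ant junit task ignores <jvmarg> elements unless fork="yes", so with fork="no" the -Xrunjdwp options never reach any JVM. Also, with Cactus the server-side half of each test runs inside the servlet container, so the container JVM is usually the one to start with JPDA options and attach to from Eclipse (Debug Configurations > Remote Java Application). A sketch of the client-side change only:

        <junit fork="yes" forkmode="perTest" printsummary="yes" haltonfailure="no" haltonerror="no" failureproperty="tests.failed">
            <!-- honoured only because fork="yes" -->
            <jvmarg value="-Xdebug" />
            <jvmarg value="-Xrunjdwp:transport=dt_socket,address=8005,server=y,suspend=y" />
            ...
        </junit>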

    Read the article

  • For Nvarchar(Max) I am only getting 4000 characters in TSQL?

    - by Malcolm
    Hi, This is for SS 2005. Why I am i only getting 4000 characters and not 8000? It truncates the string @SQL1 at 4000. ALTER PROCEDURE sp_AlloctionReport( @where NVARCHAR(1000), @alldate NVARCHAR(200), @alldateprevweek NVARCHAR(200)) AS DECLARE @SQL1 NVARCHAR(Max) SET @SQL1 = 'SELECT DISTINCT VenueInfo.VenueID, VenueInfo.VenueName, VenuePanels.PanelID, VenueInfo.CompanyName, VenuePanels.ProductCode, VenuePanels.MF, VenueInfo.Address1, VenueInfo.Address2, '' As AllocationDate, '' As AbbreviationCode, VenueInfo.Suburb, VenueInfo.Route, VenueInfo.ContactFirstName, VenueInfo.ContactLastName, VenueInfo.SuitableTime, VenueInfo.OldVenueName, VenueCategories.Category, VenueInfo.Phone, VenuePanels.Location, VenuePanels.Comment, [VenueCategories].[Category] + '' Allocations'' AS ReportHeader, ljs.AbbreviationCode AS PrevWeekCampaign FROM (((VenueInfo INNER JOIN VenuePanels ON VenueInfo.VenueID = VenuePanels.VenueID) INNER JOIN VenueCategories ON VenueInfo.CategoryID = VenueCategories.CategoryID) LEFT JOIN (SELECT CampaignProductions.AbbreviationCode, VenuePanels.PanelID, CampaignAllocations.AllocationDate FROM (((VenueInfo INNER JOIN VenuePanels ON VenueInfo.VenueID=VenuePanels.VenueID) INNER JOIN CampaignAllocations ON VenuePanels.PanelID=CampaignAllocations.PanelID) INNER JOIN CampaignProductions ON CampaignAllocations.CampaignID=CampaignProductions.CampaignID) INNER JOIN VenueCategories ON VenueInfo.CategoryID=VenueCategories.CategoryID WHERE ' + @alldateprevweek + ') ljs ON VenuePanels.PanelID = ljs.PanelID) INNER JOIN (SELECT VenueInfo.VenueID, VenuePanels.PanelID, VenueInfo.VenueName, VenueInfo.CompanyName, VenuePanels.ProductCode, VenuePanels.MF, VenueInfo.Address1, VenueInfo.Address2, CampaignAllocations.AllocationDate, CampaignProductions.AbbreviationCode, VenueInfo.Suburb, VenueInfo.Route, VenueInfo.ContactFirstName, VenueInfo.ContactLastName, VenueInfo.SuitableTime, VenueInfo.OldVenueName, VenueCategories.Category, VenueInfo.Phone, VenuePanels.Location, VenuePanels.Comment, [Category] + '' Allocations'' AS ReportHeader, ljs2.AbbreviationCode AS PrevWeekCampaign FROM ((((VenueInfo INNER JOIN VenuePanels ON VenueInfo.VenueID = VenuePanels.VenueID) INNER JOIN CampaignAllocations ON VenuePanels.PanelID = CampaignAllocations.PanelID) INNER JOIN CampaignProductions ON CampaignAllocations.CampaignID = CampaignProductions.CampaignID) INNER JOIN VenueCategories ON VenueInfo.CategoryID = VenueCategories.CategoryID) LEFT JOIN (SELECT CampaignProductions.AbbreviationCode, VenuePanels.PanelID, CampaignAllocations.AllocationDate FROM (((VenueInfo INNER JOIN VenuePanels ON VenueInfo.VenueID=VenuePanels.VenueID) INNER JOIN CampaignAllocations ON VenuePanels.PanelID=CampaignAllocations.PanelID) INNER JOIN CampaignProductions ON CampaignAllocations.CampaignID=CampaignProductions.CampaignID) INNER JOIN VenueCategories ON VenueInfo.CategoryID=VenueCategories.CategoryID WHERE ' + @alldateprevweek + ') ljs2 ON VenuePanels.PanelID = ljs2.PanelID WHERE ' + @alldate + ' AND ' + @where + ') ljs3 ON VenueInfo.VenueID = ljs3.VenueID WHERE (((VenuePanels.PanelID)<>ljs3.[PanelID] And (VenuePanels.PanelID) Not In (SELECT PanelID FROM CampaignAllocations WHERE ' + @alldateprevweek + ')) AND ' + @where + ') UNION ALL SELECT VenueInfo.VenueID, VenueInfo.VenueName, VenuePanels.PanelID, VenueInfo.CompanyName, VenuePanels.ProductCode, VenuePanels.MF, VenueInfo.Address1, VenueInfo.Address2, CampaignAllocations.AllocationDate, CampaignProductions.AbbreviationCode, VenueInfo.Suburb, VenueInfo.Route, 
VenueInfo.ContactFirstName, VenueInfo.ContactLastName, VenueInfo.SuitableTime, VenueInfo.OldVenueName, VenueCategories.Category, VenueInfo.Phone, VenuePanels.Location, VenuePanels.Comment, [Category] + '' Allocations'' AS ReportHeader, ljs.AbbreviationCode AS PrevWeekCampaign FROM ((((VenueInfo INNER JOIN VenuePanels ON VenueInfo.VenueID = VenuePanels.VenueID) INNER JOIN CampaignAllocations ON VenuePanels.PanelID = CampaignAllocations.PanelID) INNER JOIN CampaignProductions ON CampaignAllocations.CampaignID = CampaignProductions.CampaignID) INNER JOIN VenueCategories ON VenueInfo.CategoryID = VenueCategories.CategoryID) LEFT JOIN (SELECT CampaignProductions.AbbreviationCode, VenuePanels.PanelID, CampaignAllocations.AllocationDate FROM (((VenueInfo INNER JOIN VenuePanels ON VenueInfo.VenueID=VenuePanels.VenueID) INNER JOIN CampaignAllocations ON VenuePanels.PanelID=CampaignAllocations.PanelID) INNER JOIN CampaignProductions ON CampaignAllocations.CampaignID=CampaignProductions.CampaignID) INNER JOIN VenueCategories ON VenueInfo.CategoryID=VenueCategories.CategoryID WHERE ' + @alldateprevweek + ') ljs ON VenuePanels.PanelID = ljs.PanelID WHERE ' + @alldate + ' AND ' + @where Select @SQL1
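
    The usual explanation for this behaviour is that the concatenation on the right-hand side is evaluated as nvarchar(4000) - each literal and nvarchar(n) variable is below the MAX threshold - before it is assigned to the nvarchar(max) variable. The commonly suggested fix is to force the first operand to nvarchar(max) so the whole expression is computed at that type. A sketch, with the long literal abbreviated:

        SET @SQL1 = CAST('' AS NVARCHAR(MAX))
                  + N'SELECT DISTINCT VenueInfo.VenueID, VenueInfo.VenueName, ... WHERE '
                  + @alldateprevweek
                  + N') ljs ON VenuePanels.PanelID = ljs.PanelID ... ';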

    Read the article

  • Objective C / iPhone comparing 2 CLLocations /GPS coordinates

    - by user289503
    I have an app that finds your GPS location successfully, but I need to be able to compare that GPS location with a list of GPS locations; if both are the same, you get a bonus. I thought I had it working, but it seems not. I have 'newLocation' as the location where you are. I think the problem is that I need to be able to separate the longitude and latitude data of newLocation. So far I've tried this: Code: NSString *latitudeVar = [[NSString alloc] initWithFormat:@"%g°", newLocation.coordinate.latitude]; NSString *longitudeVar = [[NSString alloc] initWithFormat:@"%g°", newLocation.coordinate.longitude]; An example of the list of GPS locations: Code: location:(CLLocation*)newLocation; CLLocationCoordinate2D bonusOne; bonusOne.latitude = 37.331689; bonusOne.longitude = -122.030731; and then Code: if (latitudeVar == bonusOne.latitude && longitudeVar == bonusOne.longitude) { UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"infinite loop firday" message:@"infloop" delegate:nil cancelButtonTitle:@"Stinky" otherButtonTitles:nil ]; [alert show]; [alert release]; This comes up with the error 'invalid operands to binary == (have struct NSString and CLLocationDegrees)'. Any thoughts?
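
    The == in the snippet compares an NSString pointer with a CLLocationDegrees (a double), which is why the compiler objects; comparing the raw doubles, ideally with a tolerance rather than exact equality, is one way through. A sketch (the tolerance value is an arbitrary assumption - pick whatever "same place" should mean for the app):

        CLLocationDegrees lat = newLocation.coordinate.latitude;
        CLLocationDegrees lon = newLocation.coordinate.longitude;
        double tolerance = 0.0005; // roughly a city block; adjust to taste
        if (fabs(lat - bonusOne.latitude) < tolerance &&
            fabs(lon - bonusOne.longitude) < tolerance) {
            // same location for the purposes of the bonus
        }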

    Read the article

  • Why ComboBox hides cursor when DroppedDown is set?

    - by Ivan Danilov
    Let's create WinForms Application (I have Visual Studio 2008 running on Windows Vista, but it seems that described situation takes place almost everywhere from Win98 to Vista, on native or managed code). Write such code: using System; using System.Drawing; using System.Windows.Forms; namespace WindowsFormsApplication1 { public class Form1 : Form { private readonly Button button1 = new Button(); private readonly ComboBox comboBox1 = new ComboBox(); private readonly TextBox textBox1 = new TextBox(); public Form1() { SuspendLayout(); textBox1.Location = new Point(21, 51); button1.Location = new Point(146, 49); button1.Text = "button1"; button1.Click += button1_Click; comboBox1.Items.AddRange(new[] {"1", "2", "3", "4", "5", "6"}); comboBox1.Location = new Point(21, 93); AcceptButton = button1; Controls.AddRange(new Control[] {textBox1, comboBox1, button1}); Text = "Form1"; ResumeLayout(false); PerformLayout(); } private void button1_Click(object sender, EventArgs e) { comboBox1.DroppedDown = true; } } } Then, run app. Place mouse cursor on the form and don't touch mouse anymore. Start to type something in TextBox - cursor will hide because of it. When you press Enter key - event throws and ComboBox will be dropped down. But now cursor won't appear even if you move it! And appears only when you click somewhere. There I've found discussion of this problem. But there's no good solution... Any thoughts? :)
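
    A workaround that is often reported for exactly this symptom (not an official fix, so treat it as a sketch to verify on your target Windows versions) is to reset the cursor right after opening the drop-down:

        private void button1_Click(object sender, EventArgs e)
        {
            comboBox1.DroppedDown = true;
            Cursor.Current = Cursors.Default; // re-show the pointer the drop-down just hid
        }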

    Read the article

  • mod_rewrite to nginx rewrite rules

    - by Andrew Bestic
    I have converted most of my Apache HTTPd mod_rewrite rules over to nginx's HttpRewrite module (which calls PHP-FPM via FastCGI on every dynamic request). Simple rules which are defined by hard locations work fine: location = /favicon.ico { rewrite ^(.*)$ /_core/frontend.php?type=ico&file=include__favicon last; } I am still having trouble with regular expressions, which are parsed in mod_rewrite like this (note that I am accepting trailing slashes within the rules, as well as appending the query string to every request): mod_rewrite # File handler RewriteRule ^([a-z0-9-_,+=]+)\.([a-z]+)$ _core/frontend.php?type=$2&file=$1 [QSA,L] # Page handler RewriteRule ^([a-z0-9-_,+=]+)$ _core/frontend.php?route=$1 [QSA,L] RewriteRule ^([a-z0-9-_,+=]+)\/$ _core/frontend.php?route=$1 [QSA,L] RewriteRule ^([a-z0-9-_,+=]+)\/([a-z0-9-_,+=]+)$ _core/frontend.php?route=$1/$2 [QSA,L] RewriteRule ^([a-z0-9-_,+=]+)\/([a-z0-9-_,+=]+)\/$ _core/frontend.php?route=$1/$2 [QSA,L] RewriteRule ^([a-z0-9-_,+=]+)\/([a-z0-9-_,+=]+)\/([a-z0-9-_,+=]+)$ _core/frontend.php?route=$1/$2/$3 [QSA,L] RewriteRule ^([a-z0-9-_,+=]+)\/([a-z0-9-_,+=]+)\/([a-z0-9-_,+=]+)\/$ _core/frontend.php?route=$1/$2/$3 [QSA,L] I have come up with the following server configuration for the site, but I am met with unmatched rules after parsing a request (eg; GET /user/auth): attempted nginx rewrite location / { # File handler rewrite ^([a-z0-9-_,+=]+)\.([a-z]+)?(.*)$ /_core/frontend.php?type=$2&file=$1&$3 break; # Page handler rewrite ^([a-z0-9-_,+=]+)(\/*)?(.*)$ /_core/frontend.php?route=$1&$2 break; rewrite ^([a-z0-9-_,+=]+)\/([a-z0-9-_,+=]+)(\/*)?(.*)$ /_core/frontend.php?route=$1/$2&$3 break; rewrite ^([a-z0-9-_,+=]+)\/([a-z0-9-_,+=]+)\/([a-z0-9-_,+=]+)(\/*)?(.*)$ /_core/frontend.php?route=$1/$2/$3&$4 break; } What would you suggest for dealing with my File Handler (which is just filename.ext), and my Page Handler (which is a unique route request with up to 3 properties defined by a forward slash)? As I haven't gotten a response from this yet, I am also unsure if this will override my PHP parser which is defined with location ~ \.php {}, which is included before these rewrite rules. Bonus points if I can solve the parsing issues without the need to use a new rule for each number of route properties.
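
    One way to collapse the page-handler rules (a sketch, not tested against this site) is a single regex that allows one to three segments and an optional trailing slash. Note that nginx appends the original query string to the rewritten URI automatically as long as the replacement does not end in '?', which covers the QSA behaviour, and that a rewrite regex containing braces has to be quoted:

        location / {
            # File handler: name.ext
            rewrite "^/([a-z0-9_,+=-]+)\.([a-z]+)$" /_core/frontend.php?type=$2&file=$1 last;
            # Page handler: 1-3 segments, optional trailing slash
            rewrite "^/([a-z0-9_,+=-]+(?:/[a-z0-9_,+=-]+){0,2})/?$" /_core/frontend.php?route=$1 last;
        }

    With last, the rewritten URI is matched against the locations again, so an existing location ~ \.php block should still hand it to PHP-FPM.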

    Read the article

  • ASPX FormsAuthentication.RedirectFromLoginPage function is not working anymore

    - by Mike Webb
    Here is my issue. I have an ASPX web site and I have code in there to redirect from the login page with the call to "FormsAuthentication.RedirectFromLoginPage(username, false);" This sends the user from the root website folder to 'website/Admin/'. I have a 'default.aspx' page in 'website/Admin/' and the call to redirect works on a previous version of the website we have running currently, but the one that I am updating on a separate test server is not working. It gives me the error "Directory Listing Denied. This Virtual Directory does not allow contents to be listed." I have this in the config file: <authorization> <allow users="*" /> </authorization> under the "authentication" option and... <location path="Admin"> <system.web> <authorization> <deny users="?" /> </authorization> </system.web> </location> for the location of Admin. Also, there is no difference in the code between the web.config, Login.aspx, or the default.aspx files on the current server and the one on the test server, so I am confused as to why the redirect will not work on both. It even works in the Visual Studio server environment, for which the code is also identical. Any suggestions and help is appreciated.
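
    "Directory Listing Denied" normally means the redirect landed on the folder URL (/Admin/) and IIS on that box has no default document configured for it, which would also explain why identical code behaves differently on the two servers. Two hedged options: add default.aspx to the site's default documents in IIS, or skip RedirectFromLoginPage and send the user to the page explicitly (a sketch, assuming the target page really is Admin/default.aspx):

        // Issue the forms-auth ticket without the automatic redirect...
        FormsAuthentication.SetAuthCookie(username, false);
        // ...then land on the page itself rather than the directory.
        Response.Redirect("~/Admin/default.aspx");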

    Read the article

  • Objective-C NSMutableDictionary Disappearing

    - by blackmage
    I am having this problem with the NSMutableDictionary where the values are not coming up. Snippets from my code look like this: //Data into the Hash and then into an array yellowPages = [[NSMutableArray alloc] init]; NSMutableDictionary *address1=[[NSMutableDictionary alloc] init]; [address1 setObject:@"213 Pheasant CT" forKey: @"Street"]; [address1 setObject:@"NC" forKey: @"State"]; [address1 setObject:@"Wilmington" forKey: @"City"]; [address1 setObject:@"28403" forKey: @"Zip"]; [address1 setObject:@"Residential" forKey: @"Type"]; [yellowPages addObject:address1]; NSMutableDictionary *address2=[[NSMutableDictionary alloc] init]; [address1 setObject:@"812 Pheasant CT" forKey: @"Street"]; [address1 setObject:@"NC" forKey: @"State"]; [address1 setObject:@"Wilmington" forKey: @"City"]; [address1 setObject:@"28403" forKey: @"Zip"]; [address1 setObject:@"Residential" forKey: @"Type"]; [yellowPages addObject:address2]; //Iterate through array pulling the hash and insert into Location Object for(int i=0; i<locationCount; i++){ NSMutableDictionary *anAddress=[theAddresses getYellowPageAddressByIndex:i]; //Set Data Members Location *addressLocation=[[Location alloc] init]; addressLocation.Street=[anAddress objectForKey:@"Street"]; locations[i]=addressLocation; NSLog(addressLocation.Street); } So the problem is only the second address is printed, the 813 and I can't figure out why. Can anyone offer any help?
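
    Looking at the snippet itself, the second block keeps calling setObject:forKey: on address1, so the 213 Pheasant CT values are overwritten with 812 and address2 goes into yellowPages empty - which matches seeing only the 812 street printed and nothing for the other entry. A corrected sketch of that block:

        NSMutableDictionary *address2 = [[NSMutableDictionary alloc] init];
        [address2 setObject:@"812 Pheasant CT" forKey:@"Street"];
        [address2 setObject:@"NC" forKey:@"State"];
        [address2 setObject:@"Wilmington" forKey:@"City"];
        [address2 setObject:@"28403" forKey:@"Zip"];
        [address2 setObject:@"Residential" forKey:@"Type"];
        [yellowPages addObject:address2];

    Unrelated but worth noting: NSLog(addressLocation.Street) passes the value as the format string; NSLog(@"%@", addressLocation.Street) is the safer form.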

    Read the article

  • RFC regarding WAM

    - by Noctis Skytower
    Request For Comment regarding Whitespace's Assembly Mnemonics What follows in a first generation attempt at creating mnemonics for a whitespace assembly language. STACK ===== push number copy copy number swap away away number MATH ==== add sub mul div mod HEAP ==== set get FLOW ==== part label call label goto label zero label less label back exit I/O === ochr oint ichr iint In the interest of making improvements to this small and simple instruction set, this is a second attempt. hold N Push the number onto the stack copy Duplicate the top item on the stack copy N Copy the nth item on the stack (given by the argument) onto the top of the stack swap Swap the top two items on the stack drop Discard the top item on the stack drop N Slide n items off the stack, keeping the top item add Addition sub Subtraction mul Multiplication div Integer Division mod Modulo save Store load Retrieve L: Mark a location in the program call L Call a subroutine goto L Jump unconditionally to a label if=0 L Jump to a label if the top of the stack is zero if<0 L Jump to a label if the top of the stack is negative return End a subroutine and transfer control back to the caller exit End the program print chr Output the character at the top of the stack print int Output the number at the top of the stack input chr Read a character and place it in the location given by the top of the stack input int Read a number and place it in the location given by the top of the stack What do you think of the following revised list for Whitespace's assembly instructions? I'm still thinking outside of the box somewhat and trying to come up with a better mnemonic set than last time. When the previous interpreter was written, it was completed over two contiguous, rushed evenings. This rewrite deserves significantly more time now that it is the summer. Of course, the next version of Whitespace (0.4) may have its instructions revised even more, but this is just a redesign of what originally was done in a very short amount of time. Hopefully, the instructions make more sense once someone new to programmings thinks about them.
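
    As a quick readability check on the revised mnemonics, here is a tiny sketch using only the semantics defined above (hold pushes a literal, print chr pops and prints a character, exit ends the program); it would print "Hi":

        hold 72
        print chr
        hold 105
        print chr
        exit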

    Read the article

< Previous Page | 101 102 103 104 105 106 107 108 109 110 111 112  | Next Page >