Search Results

Search found 293 results on 12 pages for 'omit'.


  • Sanitizing DB inputs with XSLT

    - by azathoth
    Hello I've been looking for a method to strip my XML content of apostrophes (') like: <name> Jim O'Connor</name> since my DBMS is complaining of receiving those. By looking at the example described here, that is supposed to replace ' with '', I constructed the following script: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output omit-xml-declaration="yes" indent="yes" /> <xsl:template match="node()|@*"> <xsl:copy> <xsl:apply-templates select="node()|@*" /> </xsl:copy> </xsl:template> <xsl:template name="sqlApostrophe"> <xsl:param name="string" /> <xsl:variable name="apostrophe">'</xsl:variable> <xsl:choose> <xsl:when test="contains($string,$apostrophe)"> <xsl:value-of select="concat(substring-before($string,$apostrophe), $apostrophe,$apostrophe)" disable-output-escaping="yes" /> <xsl:call-template name="sqlApostrophe"> <xsl:with-param name="string" select="substring-after($string,$apostrophe)" /> </xsl:call-template> </xsl:when> <xsl:otherwise> <xsl:value-of select="$string" disable-output-escaping="yes" /> </xsl:otherwise> </xsl:choose> </xsl:template> <xsl:template match="node()|@*"> <xsl:apply-templates name="sqlApostrophe"/> </xsl:template> </xsl:stylesheet> However, the processor isn't accepting it. What am I missing here? Is there a better way to get rid of the apostrophes? Perhaps another approach for sanitizing DB inputs by using XSLT? Thanks for your help
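    A likely reason the processor rejects the stylesheet: it declares two templates matching node()|@* (an ambiguous rule), and xsl:apply-templates has no name attribute - that belongs to xsl:call-template. A minimal sketch of one way to wire the recursive template in, keeping the identity template and the sqlApostrophe template as posted (hedged, not tested against the poster's full input):

        <!-- Replace the second node()|@* template with a text() template
             that feeds each text node through sqlApostrophe. -->
        <xsl:template match="text()">
          <xsl:call-template name="sqlApostrophe">
            <xsl:with-param name="string" select="."/>
          </xsl:call-template>
        </xsl:template>

    That said, doubling apostrophes in XSLT is a fragile defence; parameterized statements on the database side are the usual way to sanitize inputs.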

    Read the article

  • (mySQL) Unable to query 2 tables properly for data

    - by Devner
    I have 2 tables. One is 'page_links' and the other is 'rpp'. Table page_links is the superset of table rpp. The following is the schema of my tables: -- Table structure for table `page_links` -- CREATE TABLE IF NOT EXISTS `page_links` ( `page` varchar(255) NOT NULL, `page_link` varchar(100) NOT NULL, `heading_id` tinyint(3) unsigned NOT NULL, PRIMARY KEY (`page`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; -- -- Dumping data for table `page_links` -- INSERT INTO `page_links` (`page`, `page_link`, `heading_id`) VALUES ('a1.php', 'A1', 8), ('b1.php', 'B1', 8), ('c1.php', 'C1', 5), ('d1.php', 'D1', 5), ('e1.php', 'E1', 8), ('f1.php', 'F1', 8), ('g1.php', 'G1', 8), ('h1.php', 'H1', 1), ('i1.php', 'I1', 1), ('j1.php', 'J1', 8), ('k1.php', 'K1', 8), ('l1.php', 'L1', 8), ('m1.php', 'M1', 8), ('n1.php', 'N1', 8), ('o1.php', 'O1', 8), ('p1.php', 'P1', 4), ('q1.php', 'Q1', 5), ('r1.php', 'R1', 4); -- Table structure for table `rpp` -- CREATE TABLE IF NOT EXISTS `rpp` ( `role_id` tinyint(3) unsigned NOT NULL, `page` varchar(255) NOT NULL, `is_allowed` tinyint(1) NOT NULL, PRIMARY KEY (`role_id`,`page`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; -- -- Dumping data for table `rpp` -- INSERT INTO `rpp` (`role_id`, `page`, `is_allowed`) VALUES (3, 'a1.php', 1), (3, 'b1.php', 1), (3, 'c1.php', 1), (3, 'd1.php', 1), (3, 'e1.php', 1), (3, 'f1.php', 1), (3, 'h1.php', 1), (3, 'i1.php', 1), (3, 'l1.php', 1), (3, 'm1.php', 1), (3, 'n1.php', 1), (4, 'a1.php', 1), (4, 'b1.php', 1), (4, 'q1.php', 1), (5, 'r1.php', 1); WHAT I AM TRYING TO DO: I am trying to query both the above tables (in a single query) in such a way that all the pages from page_links are displayed along with the is_allowed value from rpp for a particular role. For example, I want to get the is_allowed value of all the pages from rpp for role_id = 3 and at the same time, list all the available pages from page_links. A clear example of my expected result would be: page is_allowed role_id ---------------------------------------- a1.php 1 3 b1.php 1 3 c1.php 1 3 d1.php 1 3 e1.php 1 3 f1.php 1 3 g1.php NULL NULL h1.php 1 3 i1.php 1 3 j1.php NULL NULL k1.php NULL NULL l1.php 1 3 m1.php 1 3 n1.php 1 3 o1.php NULL NULL p1.php NULL NULL q1.php NULL NULL r1.php NULL NULL One more example of my desired result could be achieved by doing a LEFT JOIN rpp ON page_links.page = rpp.page but we need to omit using role_id = 3 (or any value) to be able to get that. But I do want to specify the role_id as well and get the results. I need the query to be able to get this result. I would appreciate any replies that could help me with this. If you can suggest me any changes as well to the table(s) design to be able to achieve the desired result, that's good as well. Thanks in advance.
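    One common way to get exactly the listed result is to keep the LEFT JOIN but move the role filter into the join condition instead of the WHERE clause, so pages without a match for that role still come back with NULLs. A hedged sketch against the schema above:

        SELECT pl.page, r.is_allowed, r.role_id
        FROM page_links AS pl
        LEFT JOIN rpp AS r
               ON r.page = pl.page
              AND r.role_id = 3
        ORDER BY pl.page;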

    Read the article

  • stxxl Assertion `it != root_node_.end()' failed

    - by Fabrizio Silvestri
    I am receiving this assertion failed error when trying to insert an element in a stxxl map. The entire assertion error is the following: resCache: /usr/include/stxxl/bits/containers/btree/btree.h:470: std::pair , bool stxxl::btree::btree::insert(const value_type&) [with KeyType = e_my_key, DataType = unsigned int, CompareType = comp_type, unsigned int RawNodeSize = 16384u, unsigned int RawLeafSize = 131072u, PDAllocStrategy = stxxl::SR, stxxl::btree::btree::value_type = std::pair]: Assertion `it != root_node_.end()' failed. Aborted Any idea? Edit: Here's the code fragment void request_handler::handle_request(my_key& query, reply& rep) { c_++; strip(query.content); std::cout << "Received query " << query.content << " by thread " << boost::this_thread::get_id() << ". It is number " << c_ << "\n"; strcpy(element.first.content, query.content); element.second = c_; testcache_.insert(element); STXXL_MSG("Records in map: " << testcache_.size()); } Edit2 here's more details (I omit constants, e.g. MAX_QUERY_LEN) struct comp_type : std::binary_function<my_key, my_key, bool> { bool operator () (const my_key & a, const my_key & b) const { return strncmp(a.content, b.content, MAX_QUERY_LEN) < 0; } static my_key max_value() { return max_key; } static my_key min_value() { return min_key; } }; typedef stxxl::map<my_key, my_data, comp_type> cacheType; cacheType testcache_; request_handler::request_handler() :testcache_(NODE_CACHE_SIZE, LEAF_CACHE_SIZE) { c_ = 0; memset(max_key.content, (std::numeric_limits<unsigned char>::max)(), MAX_QUERY_LEN); memset(min_key.content, (std::numeric_limits<unsigned char>::min)(), MAX_QUERY_LEN); testcache_.enable_prefetching(); STXXL_MSG("Records in map: " << testcache_.size()); }
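    Hard to say definitively without seeing my_key and the sentinel keys, but this particular assertion in btree::insert() tends to mean the inserted key does not sit strictly between min_value() and max_value() under comp_type - for example because the key's content buffer is not initialized or padded consistently before comparison. A hedged debugging sketch that checks those invariants right before the insert (all names are the poster's own):

        // Hedged sketch: verify the ordering invariants stxxl's btree relies on.
        // (needs #include <cassert> at the top of the file)
        comp_type cmp;
        assert(cmp(comp_type::min_value(), element.first));  // min_value() < key
        assert(cmp(element.first, comp_type::max_value()));  // key < max_value()
        testcache_.insert(element);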

    Read the article

  • How to write my own global lock / unlock functions for PostgreSQL

    - by rafalmag
    I have postgresql (in perlu) function getTravelTime(integer, timestamp), which tries to select data for specified ID and timestamp. If there are no data or if data is old, it downloads them from external server (downloading time ~300ms). Multiple process use this database and this function. There is an error when two process do not find data and download them and try to do an insert to travel_time table (id and timestamp pair have to be unique). I thought about locks. Locking whole table would block all processes and allow only one to proceed. I need to lock only on id and timestamp. pg_advisory_lock seems to lock only in "current session". But my processes uses their own sessions. I tried to write my own lock/unlock functions. Am I doing it right? I use active waiting, how can I omit this? Maybe there is a way to use pg_advisory_lock() as global lock? My code: CREATE TABLE travel_time_locks ( id_key integer NOT NULL, time_key timestamp without time zone NOT NULL, UNIQUE (id_key, time_key) ); ------------ -- Function: mylock(integer, timestamp) DROP FUNCTION IF EXISTS mylock(integer, timestamp) CASCADE; -- Usage: SELECT mylock(1, '2010-03-28T19:45'); -- function tries to do a global lock similar to pg_advisory_lock(key, key) CREATE OR REPLACE FUNCTION mylock(id_input integer, time_input timestamp) RETURNS void AS $BODY$ DECLARE rows int; BEGIN LOOP BEGIN -- active waiting here !!!! :( INSERT INTO travel_time_locks (id_key, time_key) VALUES (id_input, time_input); EXCEPTION WHEN unique_violation THEN CONTINUE; END; EXIT; END LOOP; END; $BODY$ LANGUAGE 'plpgsql' VOLATILE COST 1; ------------ -- Function: myunlock(integer, timestamp) DROP FUNCTION IF EXISTS myunlock(integer, timestamp) CASCADE; -- Usage: SELECT myunlock(1, '2010-03-28T19:45'); -- function tries to do a global unlock similar to pg_advisory_unlock(key, key) CREATE OR REPLACE FUNCTION myunlock(id_input integer, time_input timestamp) RETURNS integer AS $BODY$ DECLARE BEGIN DELETE FROM ONLY travel_time_locks WHERE id_key=id_input AND time_key=time_input; RETURN 1; END; $BODY$ LANGUAGE 'plpgsql' VOLATILE COST 1;
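    Worth noting: pg_advisory_lock() is not confined to the calling session in the way the question suggests - the lock is owned by one session but blocks every other session/process that requests the same key pair, which is exactly the cross-process exclusion wanted here, and it avoids the active-wait loop entirely. A hedged sketch (deriving a second int4 key from the timestamp is my assumption, and inside PL/pgSQL these would be PERFORM calls):

        -- take a global advisory lock for this (id, timestamp) pair
        SELECT pg_advisory_lock(id_input, extract(epoch FROM time_input)::int4);

        -- ... re-check the travel_time row, download and INSERT it if still missing ...

        SELECT pg_advisory_unlock(id_input, extract(epoch FROM time_input)::int4);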

    Read the article

  • IBM Server Config questions

    - by Joel Coel
    I have a few questions on a potential server setup. First, the situation: Last year we bought an IBM x3500 server with 2 Xeon E5410's, 9GB RAM, 6 HDDs. The original intent for this server was to replace the old exchange e-mail server. It was brought in, set up, and then shortly after we switched to gmail. Shortly after that my predecessor left for greener pastures, and finally I was hired. So this nice server is now sitting (mostly) idle. This year I have budget again for one server, and of course I want to put this other server to work. I'm thinking about the best use for the two server, and I think I finally have a plan for what I want to do with them. The idea is to use the two newer servers as a pair of VM hosts. I will set up each server with the same 8 VMs, but divide up the load so that only 4 are active per physical host. That means I've normally got 2GB RAM + 2 cores per host. I've done some load testing to pick out what servers to convert to virtual, and chose them so that each host will be capable of handling the entire set of 8 by itself in a pinch with 1 core and 1GB RAM, but would be very taxed to do so. This should take our data center from 13 total servers down to 7. The "servers" I'm replacing are mostly re-purposed desktops, so I'm more than happy to be able to do this. Now it's time to go shopping for the new server. I'd like my two hosts to match as closely as possible, and so I'm looking at IBM again. It also helps that we have some educational matching grant money from IBM that I need to use to help pay for this system (we're a small private college). So finally, (if you're not bored already), we come to my questions: Am I missing anything big or obvious in this plan? I'm a little worried about network performance since the VM hosts will only have 4 nics total where 8 used to be, but I don't think it will be a problem. Is there anything else like this I might be overlooking? Am I making it even too complicated? IBM no longer has a good analog to last year's server. If I want to match the performance (8 cores, 9GB RAM, 1333mhz front side bus, 6 spindles), I have to spend quite a bit more than we paid last year: $2K+, or nearly a 33% cost increase. This only brings a marginal increase in performance. The alternative to stay in budget is to take a hit on the fsb down to 800mhz or cut the number of cores in half, neither of which is attractive. The main cost culprit is the processor. IBM no longer offers the E5410. It's listed as a part, but not available in any of the server configs I've looked at. I'm considering getting the cheapest 800mhz fsb dual core xeon I can configure and then buying the E5410's separately. That's still an extra $350 I wasn't counting on, but that's better than $2K. I want to know what others think of this - will it work or will I end up with the wrong motherboard or some other issue? Am I missing a simple way to configure the server I really want? I don't really intend to do this, but one option to save some money back is to omit the redundant power supply. Since my redundancy plan for these system is to switch over to a completely different host, the extra power isn't fully necessary. That said, it's still very helpful to avoid even short downtimes while I switch over VMs. Has anyone done this?

    Read the article

  • SQL Server 2008 R2 Reporting Services - The Word is But a Stage (T-SQL Tuesday #006)

    - by smisner
    Host Michael Coles (blog|twitter) has selected LOB data as the topic for this month's T-SQL Tuesday, so I'll take this opportunity to post an overview of reporting with spatial data types. As part of my work with SQL Server 2008 R2 Reporting Services, I've been exploring the use of spatial data types in the new map data region. You can create a map using any of the following data sources: Map Gallery - a set of Shapefiles for the United States only that ships with Reporting Services ESRI Shapefile - a .shp file conforming to the Environmental Systems Research Institute, Inc. (ESRI) shapefile spatial data format SQL Server spatial data - a query that includes SQLGeography or SQLGeometry data types Rob Farley (blog|twitter) points out today in his T-SQL Tuesday post that using the SQL geography field is a preferable alternative to ESRI shapefiles for storing spatial data in SQL Server. So how do you get spatial data? If you don't already have a GIS application in-house, you can find a variety of sources. Here are a few to get you started: US Census Bureau Website, http://www.census.gov/geo/www/tiger/ Global Administrative Areas Spatial Database, http://biogeo.berkeley.edu/gadm/ Digital Chart of the World Data Server, http://www.maproom.psu.edu/dcw/ In a recent post by Pinal Dave (blog|twitter), you can find a link to free shapefiles for download and a tutorial for using Shape2SQL, a free tool to convert shapefiles into SQL Server data. In my post today, I'll show you how to use combine spatial data that describes boundaries with spatial data in AdventureWorks2008R2 that identifies stores locations to embed a map in a report. Preparing the spatial data First, I downloaded Shapefile data for the administrative boundaries in France and unzipped the data to a local folder. Then I used Shape2SQL to upload the data into a SQL Server database called Spatial. I'm not sure of the reason why, but I had to uncheck the option to create a spatial index to upload the data. Otherwise, the upload appeared to run successfully, but no table appeared in my database. The zip file that I downloaded contained three files, but I didn't know what was in them until I used Shape2SQL to upload the data into tables. Then I found that FRA_adm0 contains spatial data for the country of France, FRA_adm1 contains spatial data for each region, and FRA_adm2 contains spatial data for each department (a subdivision of region). Next I prepared my SQL query containing sales data for fictional stores selling Adventure Works products in France. The Person.Address table in the AdventureWorks2008R2 database (which you can download from Codeplex) contains a SpatialLocation column which I joined - along with several other tables - to the Sales.Customer and Sales.Store tables. I'll be able to superimpose this data on a map to see where these stores are located. I included the SQL script for this query (as well as the spatial data for France) in the downloadable project that I created for this post. Step 1: Using the Map Wizard to Create a Map of France You can build a map without using the wizard, but I find it's rather useful in this case. Whether you use Business Intelligence Development Studio (BIDS) or Report Builder 3.0, the map wizard is the same. I used BIDS so that I could create a project that includes all the files related to this post. To get started, I added an empty report template to the project and named it France Stores. 
Then I opened the Toolbox window and dragged the Map item to the report body, which starts the wizard. Here are the steps to perform to create a map of France: On the Choose a source of spatial data page of the wizard, select SQL Server spatial query, and click Next. On the Choose a dataset with SQL Server spatial data page, select Add a new dataset with SQL Server spatial data. On the Choose a connection to a SQL Server spatial data source page, select New. In the Data Source Properties dialog box, on the General page, add a connection string like this (changing your server name if necessary): Data Source=(local);Initial Catalog=Spatial Click OK and then click Next. On the Design a query page, add a query for the country shape, like this: select * from fra_adm1 Click Next. The map wizard reads the spatial data and renders it for you on the Choose spatial data and map view options page, as shown below. You have the option to add a Bing Maps layer which shows surrounding countries. Depending on the type of Bing Maps layer that you choose to add (from Road, Aerial, or Hybrid) and the zoom percentage you select, you can view city names and roads and various boundaries. To keep from cluttering my map, I'm going to omit the Bing Maps layer in this example, but I do recommend that you experiment with this feature. It's a nice integration feature. Use the + or - button to resize the map as needed. (I used the + button to increase the size of the map until its edges were just inside the boundaries of the visible map area, which is called the viewport.) You can eliminate the color scale and distance scale boxes that appear in the map area later. Select the Embed map data in this report for faster rendering. The spatial data won't be changing, so there's no need to leave it in the database. However, it does increase the size of the RDL. Click Next. On the Choose map visualization page, select Basic Map. We'll add data for visualization later. For now, we have just the outline of France to serve as the foundation layer for our map. Click Next, and then click Finish. Now click the color scale box in the lower left corner of the map, and press the Delete key to remove it. Then repeat to remove the distance scale box in the lower right corner of the map.
On the Choose map visualization page, select Basic Marker Map, and click Next. On the Choose color theme and data visualization page, in the Marker drop-down list, change the marker to diamond. There's no particular reason for a diamond; I think it stands out a little better than a circle on this map. Clear the Single color map checkbox as another way to distinguish the markers from the map. You can of course create an analytical map instead, which would change the size and/or color of the markers according to criteria that you specify, such as sales volume of each store, but I'll save that exploration for another post on another day. Click Finish and then click Preview to see the rendered report. Et voilà...c'est fini. Yes, it's a very simple map at this point, but there are many other things you can do to enhance the map. I'll create a series of posts to explore the possibilities.
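    For readers without the downloadable project, here is a hedged sketch of the kind of AdventureWorks2008R2 store-location query the post describes (the actual script in the project may join additional tables or differ in detail):

        -- Store names plus their geography points, limited to France
        SELECT s.Name AS StoreName,
               a.SpatialLocation
        FROM Sales.Store AS s
        JOIN Person.BusinessEntityAddress AS bea
          ON bea.BusinessEntityID = s.BusinessEntityID
        JOIN Person.Address AS a
          ON a.AddressID = bea.AddressID
        JOIN Person.StateProvince AS sp
          ON sp.StateProvinceID = a.StateProvinceID
        WHERE sp.CountryRegionCode = 'FR';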

    Read the article

  • Replacing jQuery.live() with jQuery.on()

    - by Rick Strahl
    jQuery 1.9 and 1.10 have introduced a host of changes, but for the most part these changes are mostly transparent to existing application usage of jQuery. After spending some time last week with a few of my projects and going through them with a specific eye for jQuery failures I found that for the most part there wasn't a big issue. The vast majority of code continues to run just fine with either 1.9 or 1.10 (which are supposed to be in sync but with 1.10 removing support for legacy Internet Explorer pre-9.0 versions). However, one particular change in the new versions has caused me quite a bit of update trouble, is the removal of the jQuery.live() function. This is my own fault I suppose - .live() has been deprecated for a while, but with 1.9 and later it was finally removed altogether from jQuery. In the past I had quite a bit of jQuery code that used .live() and it's one of the things that's holding back my upgrade process, although I'm slowly cleaning up my code and switching to the .on() function as the replacement. jQuery.live() jQuery.live() was introduced a long time ago to simplify handling events on matched elements that exist currently on the document and those that are are added in the future and also match the selector. jQuery uses event bubbling, special event binding, plus some magic using meta data attached to a parent level element to check and see if the original target event element matches the selected selected elements (for more info see Elijah Manor's comment below). An Example Assume a list of items like the following in HTML for example and further assume that the items in this list can be appended to at a later point. In this app there's a smallish initial list that loads to start, and as the user scrolls towards the end of the initial small list more items are loaded dynamically and added to the list.<div id="PostItemContainer" class="scrollbox"> <div class="postitem" data-id="4z6qhomm"> <div class="post-icon"></div> <div class="postitemheader"><a href="show/4z6qhomm" target="Content">1999 Buick Century For Sale!</a></div> <div class="postitemprice rightalign">$ 3,500 O.B.O.</div> <div class="smalltext leftalign">Jun. 07 @ 1:06am</div> <div class="post-byline">- Vehicles - Automobiles</div> </div> <div class="postitem" data-id="2jtvuu17"> <div class="postitemheader"><a href="show/2jtvuu17" target="Content">Toyota VAN 1987</a></div> <div class="postitemprice rightalign">$950</div> <div class="smalltext leftalign">Jun. 07 @ 12:29am</div> <div class="post-byline">- Vehicles - Automobiles</div> </div> … </div> With the jQuery.live() function you could easily select elements and hook up a click handler like this:$(".postitem").live("click", function() {...}); Simple and perfectly readable. The behavior of the .live handler generally was the same as the corresponding simple event handlers like .click(), except that you have to explicitly name the event instead of using one of the methods. Re-writing with jQuery.on() With .live() removed in 1.9 and later we have to re-write .live() code above with an alternative. The jQuery documentation points you at the .on() or .delegate() functions to update your code. jQuery.on() is a more generic event handler function, and it's what jQuery uses internally to map the high level event functions like .click(),.change() etc. that jQuery exposes. Using jQuery.on() however is not a one to one replacement of the .live() function. 
While .on() can handle events directly and use the same syntax as .live() did, you'll find if you simply switch out .live() with .on() that events on not-yet existing elements will not fire. IOW, the key feature of .live() is not working. You can use .on() to get the desired effect however, but you have to change the syntax to explicitly handle the event you're interested in on the container and then provide a filter selector to specify which elements you are actually interested in handling the event for. Sounds more complicated than it is and it's easier to see with an example. For the list above hooking .postitem clicks, using jQuery.on() looks like this:$("#PostItemContainer").on("click", ".postitem", function() {...}); You specify a container that can handle the .click event and then provide a filter selector to find the child elements that trigger the actual event. So here #PostItemContainer contains many .postitems, whose click events I want to handle. Any container will do including document, but I tend to use the container closest to the elements I actually want to handle the events on to minimize the event bubbling that occurs to capture the event. With this code I get the same behavior as with .live() and now as new .postitem elements are added the click events are always available. Sweet. Here's the full event signature for the .on() function: .on( events [, selector ] [, data ], handler(eventObject) ) Note that the selector is optional - if you omit it you essentially create a simple event handler that handles the event directly on the selected object. The filter/child selector is required if you want life-like - uh, .live()-like behavior to happen. While it's a bit more verbose than what .live() did, .on() provides the same functionality by being more explicit on what your parent container for trapping events is. .on() is good Practice even for ordinary static Element Lists. As a side note, it's a good practice to use jQuery.on() or jQuery.delegate() for events in most cases anyway, using this 'container event trapping' syntax. That's because rather than requiring lots of event handlers on each of the child elements (.postitem in the sample above), there's just one event handler on the container, and only when clicked does jQuery drill down to find the matching filter element and try to match it to the originating element. In the early days of jQuery I used to manually build handlers that did this and manually drilled from the event object into the originalTarget to determine if it's a matching element. With later versions of jQuery the various event functions in jQuery essentially provide this functionality out of the box with functions like .on() and .delegate(). All of this is nothing new, but I thought I'd write this up because I have on a few occasions forgotten what exactly was needed to replace the many .live() function calls that litter my code - especially older code. This will be a nice reminder next time I have a memory blank on this topic.
And maybe along the way I've helped one or two of you as well to clean up your .live() code… © Rick Strahl, West Wind Technologies, 2005-2013. Posted in jQuery.
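    As a quick reference, a hedged before/after sketch of the migration described above, using the post's own selectors:

        // jQuery 1.8 and earlier: delegated binding via .live()
        $(".postitem").live("click", function () {
            // handle the click
        });

        // jQuery 1.9+: the same behavior via .on(), with a container plus filter selector
        $("#PostItemContainer").on("click", ".postitem", function () {
            // handle the click
        });

        // document also works as a catch-all delegation target,
        // at the cost of letting every click bubble all the way up
        $(document).on("click", ".postitem", function () { /* ... */ });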

    Read the article

  • Granular Clipboard Control in Oracle IRM

    - by martin.abrahams
    One of the main leak prevention controls that customers are looking for is clipboard control. After all, there is little point in controlling access to a document if authorised users can simply make unprotected copies by use of the cut and paste mechanism. Oddly, for such a fundamental requirement, many solutions only offer very simplistic clipboard control - and require the customer to make an awkward choice between usability and security. In many cases, clipboard control is simply an ON-OFF option. By turning the clipboard OFF, you disable one of the most valuable edit functions known to man. Try working for any length of time without copying and pasting, and you'll soon appreciate how valuable that function is. Worse, some solutions disable the clipboard completely - not just for the protected document but for all of the various applications you have open at the time. Normal service is only resumed when you close the protected document. In this way, policy enforcement bleeds out of the particular assets you need to protect and interferes with the entire user experience. On the other hand, turning the clipboard ON satisfies a fundamental usability requirement - but also makes it really easy for users to create unprotected copies of sensitive information, maliciously or otherwise. All they need to do is paste into another document. If creating unprotected copies is this simple, you have to question how much you are really gaining by applying protection at all. You may not be allowed to edit, forward, or print the protected asset, but all you need to do is create a copy and work with that instead. And that activity would not be tracked in any way. So, a simple ON-OFF control creates a real tension between usability and security. If you are only using IRM on a small scale, perhaps security can outweigh usability - the business can put up with the restriction if it only applies to a handful of important documents. But try extending protection to large numbers of documents and large user communities, and the restriction rapidly becomes really unwelcome. I am aware of one solution that takes a different tack. Rather than disable the clipboard, pasting is always permitted, but protection is automatically applied to any document that you paste into. At first glance, this sounds great - protection travels with the content. However, at any scale this model may not be so appealing once you've had to deal with support calls from users who have accidentally applied protection to documents that really don't need it - which would be all too easily done. This may help control leakage, but it also pollutes the system with documents that have policies applied with no obvious rhyme or reason, and it can seriously inconvenience the business by making non-sensitive documents difficult to access. And what policy applies if you paste some protected content into an already protected document? Which policy applies? There are no prizes for guessing that Oracle IRM takes a rather different approach. The Oracle IRM Approach Oracle IRM offers a spectrum of clipboard controls between the extremes of ON and OFF, and it leverages the classification-based rights model to give granular control that satisfies both security and usability needs. Firstly, we take it for granted that if you have EDIT rights, of course you can use the clipboard within a given document. Why would we force you to retype a piece of content that you want to move from HERE... to HERE...? 
If the pasted content remains in the same document, it is equally well protected whether it be at the beginning, middle, or end - or all three. So, the first point is that Oracle IRM always enables the clipboard if you have the right to edit the file. Secondly, whether we enable or disable the clipboard, we only affect the protected document. That is, you can continue to use the clipboard in the usual way for unprotected documents and applications regardless of whether the clipboard is enabled or disabled for the protected document(s). And if you have multiple protected documents open, each may have the clipboard enabled or disabled independently, according to whether you have Edit rights for each. So, even for the simplest cases - the ON-OFF cases - Oracle IRM adds value by containing the effect to the protected documents rather than to the whole desktop environment. Now to the granular options between ON and OFF. Thanks to our classification model, we can define rights that enable pasting between documents in the same classification - ie. between documents that are protected by the same policy. So, if you are working on this month's financial report and you want to pull some data from last month's report, you can simply cut and paste between the two documents. The two documents are classified the same way, subject to the same policy, so the content is equally safe in both documents. However, if you try to paste the same data into an unprotected document or a document in a different classification, you can be prevented. Thus, the control balances legitimate user requirements to allow pasting with legitimate information security concerns to keep data protected. We can take this further. You may have the right to paste between related classifications of document. So, the CFO might want to copy some financial data into a board document, where the two documents are sealed to different classifications. The CFO's rights may well allow this, as it is a reasonable thing for a CFO to want to do. But policy might prevent the CFO from copying the same data into a classification that is accessible to external parties. The above option, to copy between classifications, may be for specific classifications or open-ended. That is, your rights might enable you to go from A to B but not to C, or you might be allowed to paste to any classification subject to your EDIT rights. As for so many features of Oracle IRM, our classification-based rights model makes this type of granular control really easy to manage - you simply define that pasting is permitted between classifications A and B, but omit C. Or you might define that pasting is permitted between all classifications, but not to unprotected locations. The classification model enables millions of documents to be controlled by a few such rules. Finally, you MIGHT have the option to paste anywhere - such that unprotected copies may be created. This is rare, but a legitimate configuration for some users, some use cases, and some classifications - but not something that you have to permit simply because the alternative is too restrictive. As always, these rights are defined in user roles - so different users are subject to different clipboard controls as required in different classifications. So, where most solutions offer just two clipboard options - ON-OFF or ON-but-encrypt-everything-you-touch - Oracle IRM offers real granularity that leverages our classification model. 
Indeed, I believe it is the lack of a classification model that makes such granularity impractical for other IRM solutions, because the matrix of rules for controlling pasting would be impossible to manage - there are so many documents to consider, and more are being created all the time.

    Read the article

  • Nvidia drivers don't work with mainline kernel

    - by dutchie
    I want to try some of the new features in the btrfs filesystem, and to do that I need to use a newer kernel than is included in Ubuntu 12.04. To do that, I have installed linux-headers-3.4.0-030400_3.4.0-030400.201205210521_all.deb, linux-headers-3.4.0-030400-generic_3.4.0-030400.201205210521_amd64.deb, and linux-image-3.4.0-030400-generic_3.4.0-030400.201205210521_amd64.deb from the mainline kernel download here. However, on rebooting into the 3.4 kernel, my desktop is stuck at a very low resolution and I cannot increase it to the full. This did happen when I first installed, but a simple install of the nvidia-current package got everything working nicely with my GTX570 card. There were appear to be some DKMS errors when I installed the kernel, and they indicated I should look at /var/lib/dkms/nvidia-current/295.40/build/make.log: josh@sirius:~/Downloads$ sudo dpkg -i linux-*.deb Selecting previously unselected package linux-headers-3.4.0-030400. (Reading database ... 309400 files and directories currently installed.) Unpacking linux-headers-3.4.0-030400 (from linux-headers-3.4.0-030400_3.4.0-030400.201205210521_all.deb) ... Selecting previously unselected package linux-headers-3.4.0-030400-generic. Unpacking linux-headers-3.4.0-030400-generic (from linux-headers-3.4.0-030400-generic_3.4.0-030400.201205210521_amd64.deb) ... Selecting previously unselected package linux-image-3.4.0-030400-generic. Unpacking linux-image-3.4.0-030400-generic (from linux-image-3.4.0-030400-generic_3.4.0-030400.201205210521_amd64.deb) ... Done. Setting up linux-headers-3.4.0-030400 (3.4.0-030400.201205210521) ... Setting up linux-headers-3.4.0-030400-generic (3.4.0-030400.201205210521) ... Examining /etc/kernel/header_postinst.d. run-parts: executing /etc/kernel/header_postinst.d/dkms 3.4.0-030400-generic /boot/vmlinuz-3.4.0-030400-generic ERROR (dkms apport): kernel package linux-headers-3.4.0-030400-generic is not supported Error! Bad return status for module build on kernel: 3.4.0-030400-generic (x86_64) Consult /var/lib/dkms/nvidia-current/295.40/build/make.log for more information. Setting up linux-image-3.4.0-030400-generic (3.4.0-030400.201205210521) ... Running depmod. update-initramfs: deferring update (hook will be called later) Examining /etc/kernel/postinst.d. run-parts: executing /etc/kernel/postinst.d/dkms 3.4.0-030400-generic /boot/vmlinuz-3.4.0-030400-generic ERROR (dkms apport): kernel package linux-headers-3.4.0-030400-generic is not supported Error! Bad return status for module build on kernel: 3.4.0-030400-generic (x86_64) Consult /var/lib/dkms/nvidia-current/295.40/build/make.log for more information. run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.4.0-030400-generic /boot/vmlinuz-3.4.0-030400-generic update-initramfs: Generating /boot/initrd.img-3.4.0-030400-generic run-parts: executing /etc/kernel/postinst.d/pm-utils 3.4.0-030400-generic /boot/vmlinuz-3.4.0-030400-generic run-parts: executing /etc/kernel/postinst.d/update-notifier 3.4.0-030400-generic /boot/vmlinuz-3.4.0-030400-generic run-parts: executing /etc/kernel/postinst.d/zz-update-grub 3.4.0-030400-generic /boot/vmlinuz-3.4.0-030400-generic Generating grub.cfg ... 
Found linux image: /boot/vmlinuz-3.4.0-030400-generic Found initrd image: /boot/initrd.img-3.4.0-030400-generic Found linux image: /boot/vmlinuz-3.2.0-24-generic Found initrd image: /boot/initrd.img-3.2.0-24-generic Found memtest86+ image: /memtest86+.bin Found Ubuntu 12.04 LTS (12.04) on /dev/sda1 Found Windows 7 (loader) on /dev/sda2 Found Windows 7 (loader) on /dev/sda3 done /var/lib/dkms/nvidia-current/295.40/build/make.log: DKMS make.log for nvidia-current-295.40 for kernel 3.4.0-030400-generic (x86_64) Thu Jun 7 00:58:39 BST 2012 NVIDIA: calling KBUILD... test -e include/generated/autoconf.h -a -e include/config/auto.conf || ( \ echo; \ echo " ERROR: Kernel configuration is invalid."; \ echo " include/generated/autoconf.h or include/config/auto.conf are missing.";\ echo " Run 'make oldconfig && make prepare' on kernel src to fix it."; \ echo; \ /bin/false) mkdir -p /var/lib/dkms/nvidia-current/295.40/build/.tmp_versions ; rm -f /var/lib/dkms/nvidia-current/295.40/build/.tmp_versions/* make -f scripts/Makefile.build obj=/var/lib/dkms/nvidia-current/295.40/build cc -Wp,-MD,/var/lib/dkms/nvidia-current/295.40/build/.nv.o.d -nostdinc -isystem /usr/lib/gcc/x86_64-linux-gnu/4.6/include -I/usr/src/linux-headers-3.4.0-030400-generic/arch/x86/include -Iarch/x86/include/generated -Iinclude -include /usr/src/linux-headers-3.4.0-030400-generic/include/linux/kconfig.h -D__KERNEL__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -Wno-format-security -fno-delete-null-pointer-checks -O2 -m64 -mtune=generic -mno-red-zone -mcmodel=kernel -funit-at-a-time -maccumulate-outgoing-args -fstack-protector -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -DCONFIG_AS_CFI_SECTIONS=1 -DCONFIG_AS_FXSAVEQ=1 -pipe -Wno-sign-compare -fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx -Wframe-larger-than=1024 -Wno-unused-but-set-variable -fno-omit-frame-pointer -fno-optimize-sibling-calls -pg -Wdeclaration-after-statement -Wno-pointer-sign -fno-strict-overflow -fconserve-stack -DCC_HAVE_ASM_GOTO -I/var/lib/dkms/nvidia-current/295.40/build -Wall -MD -Wsign-compare -Wno-cast-qual -Wno-error -D__KERNEL__ -DMODULE -DNVRM -DNV_VERSION_STRING=\"295.40\" -Wno-unused-function -Wuninitialized -mno-red-zone -mcmodel=kernel -UDEBUG -U_DEBUG -DNDEBUG -DMODULE -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(nv)" -D"KBUILD_MODNAME=KBUILD_STR(nvidia)" -c -o /var/lib/dkms/nvidia-current/295.40/build/.tmp_nv.o /var/lib/dkms/nvidia-current/295.40/build/nv.c In file included from include/linux/kernel.h:19:0, from include/linux/sched.h:55, from include/linux/utsname.h:35, from /var/lib/dkms/nvidia-current/295.40/build/nv-linux.h:38, from /var/lib/dkms/nvidia-current/295.40/build/nv.c:13: include/linux/bitops.h: In function ‘hweight_long’: include/linux/bitops.h:66:41: warning: signed and unsigned type in conditional expression [-Wsign-compare] In file included from /usr/src/linux-headers-3.4.0-030400-generic/arch/x86/include/asm/uaccess.h:577:0, from include/linux/poll.h:14, from /var/lib/dkms/nvidia-current/295.40/build/nv-linux.h:97, from /var/lib/dkms/nvidia-current/295.40/build/nv.c:13: /usr/src/linux-headers-3.4.0-030400-generic/arch/x86/include/asm/uaccess_64.h: In function ‘copy_from_user’: /usr/src/linux-headers-3.4.0-030400-generic/arch/x86/include/asm/uaccess_64.h:53:6: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] In file included from 
/var/lib/dkms/nvidia-current/295.40/build/nv.c:13:0: /var/lib/dkms/nvidia-current/295.40/build/nv-linux.h: At top level: /var/lib/dkms/nvidia-current/295.40/build/nv-linux.h:114:75: fatal error: asm/system.h: No such file or directory compilation terminated. make[3]: *** [/var/lib/dkms/nvidia-current/295.40/build/nv.o] Error 1 make[2]: *** [_module_/var/lib/dkms/nvidia-current/295.40/build] Error 2 NVIDIA: left KBUILD. nvidia.ko failed to build! make[1]: *** [module] Error 1 make: *** [module] Error 2
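    The build actually dies on the last error - fatal error: asm/system.h: No such file or directory. That header was removed from the kernel tree around 3.4, and the 295.40 driver still includes it. The cleaner fix is a newer NVIDIA driver release that supports 3.4 kernels; the commonly circulated workaround is to guard the include in the DKMS source, roughly like this hedged sketch (not an official NVIDIA patch):

        /* In nv-linux.h: only include asm/system.h on kernels that still ship it. */
        #include <linux/version.h>

        #if LINUX_VERSION_CODE < KERNEL_VERSION(3, 4, 0)
        #include <asm/system.h>
        #endif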

    Read the article

  • Loaded OBJ Model Will Not Display in OpenGL / C++ Project

    - by Drake Summers
    I have been experimenting with new effects in game development. The programs I have written have been using generic shapes for the visuals. I wanted to test the effects on something a bit more complex, and wrote a resource loader for Wavefront OBJ files. I started with a simple cube in blender, exported it to an OBJ file with just vertices and triangulated faces, and used it to test the resource loader. I could not get the mesh to show up in my application. The loader never gave me any errors, so I wrote a snippet to loop through my vertex and index arrays that were returned from the loader. The data is exactly the way it is supposed to be. So I simplified the OBJ file by editing it directly to just show a front facing square. Still, nothing is displayed in the application. And don't worry, I did check to make sure that I decreased the value of each index by one while importing the OBJ. - BEGIN EDIT I also tested using glDrawArrays(GL_TRIANGLES, 0, 3 ); to draw the first triangle and it worked! So the issue could be in the binding of the VBO/IBO items. END EDIT - INDEX/VERTEX ARRAY OUTPUT: GLOBALS AND INITIALIZATION FUNCTION: GLuint program; GLint attrib_coord3d; std::vector<GLfloat> vertices; std::vector<GLushort> indices; GLuint vertexbuffer, indexbuffer; GLint uniform_mvp; int initialize() { if (loadModel("test.obj", vertices, indices)) { GLfloat myverts[vertices.size()]; copy(vertices.begin(), vertices.end(), myverts); GLushort myinds[indices.size()]; copy(indices.begin(), indices.end(), myinds); glGenBuffers(1, &vertexbuffer); glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer); glBufferData(GL_ARRAY_BUFFER, sizeof(myverts), myverts, GL_STATIC_DRAW); glGenBuffers(1, &indexbuffer); glBindBuffer(GL_ARRAY_BUFFER, indexbuffer); glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(myinds), myinds, GL_STATIC_DRAW); // OUTPUT DATA FROM NEW ARRAYS TO CONSOLE // ERROR HANDLING OMITTED FOR BREVITY } GLint link_result = GL_FALSE; GLuint vert_shader, frag_shader; if ((vert_shader = create_shader("tri.v.glsl", GL_VERTEX_SHADER)) == 0) return 0; if ((frag_shader = create_shader("tri.f.glsl", GL_FRAGMENT_SHADER)) == 0) return 0; program = glCreateProgram(); glAttachShader(program, vert_shader); glAttachShader(program, frag_shader); glLinkProgram(program); glGetProgramiv(program, GL_LINK_STATUS, &link_result); // ERROR HANDLING OMITTED FOR BREVITY const char* attrib_name; attrib_name = "coord3d"; attrib_coord3d = glGetAttribLocation(program, attrib_name); // ERROR HANDLING OMITTED FOR BREVITY const char* uniform_name; uniform_name = "mvp"; uniform_mvp = glGetUniformLocation(program, uniform_name); // ERROR HANDLING OMITTED FOR BREVITY return 1; } RENDERING FUNCTION: glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(0.0, 0.0, -4.0)); glm::mat4 view = glm::lookAt(glm::vec3(0.0, 0.0, 4.0), glm::vec3(0.0, 0.0, 3.0), glm::vec3(0.0, 1.0, 0.0)); glm::mat4 projection = glm::perspective(45.0f, 1.0f*(screen_width/screen_height), 0.1f, 10.0f); glm::mat4 mvp = projection * view * model; int size; glUseProgram(program); glUniformMatrix4fv(uniform_mvp, 1, GL_FALSE, glm::value_ptr(mvp)); glClearColor(0.5, 0.5, 0.5, 1.0); glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT); glEnableVertexAttribArray(attrib_coord3d); glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer); glVertexAttribPointer(attrib_coord3d, 3, GL_FLOAT, GL_FALSE, 0, 0); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexbuffer); glGetBufferParameteriv(GL_ELEMENT_ARRAY_BUFFER, GL_BUFFER_SIZE, &size); glDrawElements(GL_TRIANGLES, size/sizeof(GLushort), GL_UNSIGNED_SHORT, 
0); glDisableVertexAttribArray(attrib_coord3d); VERTEX SHADER: attribute vec3 coord3d; uniform mat4 mvp; void main(void) { gl_Position = mvp * vec4(coord3d, 1.0); } FRAGMENT SHADER: void main(void) { gl_FragColor[0] = 0.0; gl_FragColor[1] = 0.0; gl_FragColor[2] = 1.0; gl_FragColor[3] = 1.0; } OBJ RESOURCE LOADER: bool loadModel(const char * path, std::vector<GLfloat> &out_vertices, std::vector<GLushort> &out_indices) { std::vector<GLfloat> temp_vertices; std::vector<GLushort> vertexIndices; FILE * file = fopen(path, "r"); // ERROR HANDLING OMITTED FOR BREVITY while(1) { char lineHeader[128]; int res = fscanf(file, "%s", lineHeader); if (res == EOF) { break; } if (strcmp(lineHeader, "v") == 0) { float _x, _y, _z; fscanf(file, "%f %f %f\n", &_x, &_y, &_z ); out_vertices.push_back(_x); out_vertices.push_back(_y); out_vertices.push_back(_z); } else if (strcmp(lineHeader, "f") == 0) { unsigned int vertexIndex[3]; int matches = fscanf(file, "%d %d %d\n", &vertexIndex[0], &vertexIndex[1], &vertexIndex[2]); out_indices.push_back(vertexIndex[0] - 1); out_indices.push_back(vertexIndex[1] - 1); out_indices.push_back(vertexIndex[2] - 1); } else { ... } } // ERROR HANDLING OMITTED FOR BREVITY return true; } I can edit the question to provide any further info you may need. I attempted to provide everything of relevance and omit what may have been unnecessary. I'm hoping this isn't some really poor mistake, because I have been at this for a few days now. If anyone has any suggestions or advice on the matter, I look forward to hearing it. As a final note: I added some arrays into the code with manually entered data, and was able to display meshes by using those arrays instead of the generated ones. I do not understand!
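    One likely culprit in the initialization code above: the index buffer is bound to GL_ARRAY_BUFFER, but glBufferData is then called on GL_ELEMENT_ARRAY_BUFFER, which has nothing bound at that point - so the index data never reaches the GPU, the buffer's reported size stays 0, and glDrawElements draws nothing (consistent with glDrawArrays working). A hedged sketch of the corrected upload, also avoiding the variable-length arrays:

        // Upload vertex data
        glGenBuffers(1, &vertexbuffer);
        glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
        glBufferData(GL_ARRAY_BUFFER,
                     vertices.size() * sizeof(GLfloat),
                     vertices.data(), GL_STATIC_DRAW);   // &vertices[0] on pre-C++11 compilers

        // Upload index data - bind and fill the SAME target
        glGenBuffers(1, &indexbuffer);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexbuffer);   // was GL_ARRAY_BUFFER
        glBufferData(GL_ELEMENT_ARRAY_BUFFER,
                     indices.size() * sizeof(GLushort),
                     indices.data(), GL_STATIC_DRAW);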

    Read the article

  • compiling openss7

    - by deddihp
    hello, i got an error while compiling openss7. Do you know what happen ? Thanks.... gcc -DHAVE_CONFIG_H -I. -I. -I. -DLFS=1 -imacros ./config.h -imacros ./include/sys/config.h -I. -I./include -I./include -nostdinc -iwithprefix include -DLINUX -D__KERNEL__ -I/usr/src/linux-headers-lbm-2.6.28-11-generic -I/lib/modules/2.6.28-11-generic/build/include -Iinclude2 -I/lib/modules/2.6.28-11-generic/build/include -I/lib/modules/2.6.28-11-generic/build/arch/x86/include -include /lib/modules/2.6.28-11-generic/build/include/linux/autoconf.h -Iubuntu/include -I/lib/modules/2.6.28-11-generic/build/ubuntu/include -I/lib/modules/2.6.28-11-generic/build/arch/x86/include/asm/mach-default '-DKBUILD_STR(s)=#s' '-DKBUILD_BASENAME=KBUILD_STR('`echo libLfS_specfs_a-specfs.o | sed -e 's,lib.*_a-,,;s,\.o,,;s,-,_,g'`')' -DMODULE -D__NO_VERSION__ -DEXPORT_SYMTAB -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -O2 -m32 -msoft-float -mregparm=3 -freg-struct-return -mpreferred-stack-boundary=2 -march=i586 -mtune=generic -Wa,-mtune=generic32 -pipe -Wno-sign-compare -fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -fno-stack-protector -fno-omit-frame-pointer -fno-optimize-sibling-calls -Wdeclaration-after-statement -Wno-pointer-sign -fwrapv -ffreestanding -c -o libLfS_specfs_a-specfs.o `test -f 'src/kernel/specfs.c' || echo './'`src/kernel/specfs.c In file included from src/kernel/specfs.c:123: src/kernel/strspecfs.c: In function ‘specfs_init_cache’: src/kernel/strspecfs.c:1406: warning: passing argument 5 of ‘kmem_cache_create’ from incompatible pointer type src/kernel/strspecfs.c:1406: error: too many arguments to function ‘kmem_cache_create’ In file included from src/kernel/specfs.c:126: src/kernel/strlookup.c: In function ‘cdev_lookup’: src/kernel/strlookup.c:508: warning: format not a string literal and no format arguments src/kernel/strlookup.c:514: warning: format not a string literal and no format arguments src/kernel/strlookup.c:521: warning: format not a string literal and no format arguments src/kernel/strlookup.c: In function ‘cdrv_lookup’: src/kernel/strlookup.c:562: warning: format not a string literal and no format arguments src/kernel/strlookup.c: In function ‘fmod_lookup’: src/kernel/strlookup.c:604: warning: format not a string literal and no format arguments src/kernel/strlookup.c: In function ‘cdev_search’: src/kernel/strlookup.c:709: warning: format not a string literal and no format arguments src/kernel/strlookup.c:716: warning: format not a string literal and no format arguments src/kernel/strlookup.c: In function ‘fmod_search’: src/kernel/strlookup.c:768: warning: format not a string literal and no format arguments src/kernel/strlookup.c: In function ‘cmin_search’: src/kernel/strlookup.c:823: warning: format not a string literal and no format arguments src/kernel/strlookup.c:830: warning: format not a string literal and no format arguments src/kernel/strlookup.c:840: warning: format not a string literal and no format arguments src/kernel/strlookup.c:848: warning: format not a string literal and no format arguments In file included from src/kernel/specfs.c:129: src/kernel/strattach.c: In function ‘check_mnt’: src/kernel/strattach.c:131: error: ‘struct vfsmount’ has no member named ‘mnt_namespace’ src/kernel/strattach.c:131: error: ‘struct task_struct’ has no member named ‘namespace’ src/kernel/strattach.c: In function ‘do_fattach’: src/kernel/strattach.c:200: error: ‘struct nameidata’ has no member 
named ‘dentry’ src/kernel/strattach.c:200: error: ‘struct nameidata’ has no member named ‘mnt’ src/kernel/strattach.c:200: error: ‘struct nameidata’ has no member named ‘dentry’ src/kernel/strattach.c:203: error: ‘struct nameidata’ has no member named ‘mnt’ src/kernel/strattach.c:208: error: ‘struct nameidata’ has no member named ‘mnt’ src/kernel/strattach.c:208: error: ‘struct nameidata’ has no member named ‘mnt’ src/kernel/strattach.c:208: error: ‘struct nameidata’ has no member named ‘dentry’ src/kernel/strattach.c:226: error: implicit declaration of function ‘path_release’ src/kernel/strattach.c: In function ‘do_fdetach’: src/kernel/strattach.c:253: error: ‘struct nameidata’ has no member named ‘dentry’ src/kernel/strattach.c:253: error: ‘struct nameidata’ has no member named ‘mnt’ src/kernel/strattach.c:255: error: ‘struct nameidata’ has no member named ‘mnt’ src/kernel/strattach.c:257: error: ‘struct nameidata’ has no member named ‘dentry’ src/kernel/strattach.c:262: error: ‘struct nameidata’ has no member named ‘mnt’ src/kernel/strattach.c:265: error: ‘struct nameidata’ has no member named ‘mnt’ In file included from src/kernel/specfs.c:132: src/kernel/strpipe.c: In function ‘do_spipe’: src/kernel/strpipe.c:372: warning: assignment discards qualifiers from pointer target type make[4]: *** [libLfS_specfs_a-specfs.o] Error 1 make[4]: Leaving directory `/home/deddihp/dev/source/openss7-0.9.2.G/streams-0.9.2.4' make[3]: *** [all-recursive] Error 1 make[3]: Leaving directory `/home/deddihp/dev/source/openss7-0.9.2.G/streams-0.9.2.4' make[2]: *** [all] Error 2 make[2]: Leaving directory `/home/deddihp/dev/source/openss7-0.9.2.G/streams-0.9.2.4' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/home/deddihp/dev/source/openss7-0.9.2.G' make: *** [all] Error 2

    Read the article

  • Firefox not running jQuery for XHTML output

    - by ScottSEA
    Okay, I did a crappy job of describing the issue in my previous post. I think the discussion got sidetracked from the core issue - so I'm going to try again. Mea Culpa to Elzo Valugi about the aforementioned thread. I have an XML file: <?xml version="1.0" encoding="utf-8"?> <?xml-stylesheet type="text/xsl" href="wtf.xsl"?> <Paragraphs> <Paragraph>Hi</Paragraph> </Paragraphs> Simple enough. I also have a stylesheet to create XHTML output: <?xml version="1.0" encoding="utf-8"?> <xsl:stylesheet version="1.0" xmlns="http://www.w3.org/1999/xhtml" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output method="xml" indent="yes" doctype-system="http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd" doctype-public="-//W3C//DTD XHTML 1.0 Transitional//EN" omit-xml-declaration="yes"/> <xsl:template match="/*"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <title>FF-JS-XHTML WTF</title> <script type="text/javascript" src="jquery.js"></script> <script type="text/javascript" src="wtf.js"></script> </head> <body> <xsl:apply-templates /> </body> </html> </xsl:template> <xsl:template match="Paragraph"> <p> <xsl:apply-templates /> </p> </xsl:template> </xsl:stylesheet> Last but not least, I have the following jQuery in toto (wtf.js, from the script tag in the stylesheet): $(function() { alert('Hiya!'); $('<p>Hello</p>').appendTo('body'); }); Extremely simple, but sufficient for the demonstration. When I run this in Internet Explorer, I get the alert 'Hiya!' as well as the expected: Hi Hello but when I run it in Firefox (3.0.1), I still get the alert, but the jQuery does not insert the paragraph into the DOM, and I just get this: Hi If I change the stylesheet to method="html" it works fine, and I get (along with the alert): Hi Hello Why doesn't Firefox run the jQuery with an XHTML document? Has anyone any experience with this problem? EDIT: ADDITIONAL INFO I can successfully insert elements into the documents this way in Firefox (method="xml"): var frag = document.createDocumentFragment(); var p = document.createElement('p'); p.appendChild(document.createTextNode('Ipsum Lorem')); frag.appendChild(p); $('body').append(frag); but I'm running into a similar problem with the .remove() method. It is looking more and more that Firefox doesn't construct a DOM from XML that jQuery can relate to, or somesuch.
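    That matches the workaround in the edit: with method="xml" Firefox builds an XML DOM, and jQuery's $('<p>Hello</p>') relies on HTML-string parsing (innerHTML) that Firefox 3.x does not support for XML documents, while method="html" gives an HTML DOM where it works. A hedged sketch that stays with method="xml" by creating the node through the namespace-aware DOM API instead of an HTML string:

        // Build the element explicitly in the XHTML namespace, then hand it to jQuery
        var XHTML_NS = "http://www.w3.org/1999/xhtml";

        $(function () {
            alert('Hiya!');
            var p = document.createElementNS(XHTML_NS, "p");
            p.appendChild(document.createTextNode("Hello"));
            $("body").append(p);
        });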

    Read the article

  • XSLT: a variation on the pagination problem

    - by MarcoS
    I must transform some XML data into a paginated list of fields. Here is an example. Input XML: <?xml version="1.0" encoding="UTF-8"?> <data> <books> <book title="t0"/> <book title="t1"/> <book title="t2"/> <book title="t3"/> <book title="t4"/> </books> <library name="my library"/> </data> Desired output: <?xml version="1.0" encoding="UTF-8"?> <pages> <page number="1"> <field name="library_name" value="my library"/> <field name="book_1" value="t0"/> <field name="book_2" value="t1"/> </page> <page number="2"> <field name="book_1" value="t2"/> <field name="book_2" value="t3"/> </page> <page number="3"> <field name="book_1" value="t4"/> </page> </pages> In the above example I assume that I want at most 2 fields named book_n (with n ranging between 1 and 2) per page. Tags <page> must have an attribute number. Finally, the field named library_name must appear only the first <page>. Here is my current solution using XSLT: <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0" exclude-result-prefixes="trx xs"> <xsl:output method="xml" indent="yes" omit-xml-declaration="no" /> <xsl:variable name="max" select="2"/> <xsl:template match="//books"> <xsl:for-each-group select="book" group-ending-with="*[position() mod $max = 0]"> <xsl:variable name="pageNum" select="position()"/> <page number="{$pageNum}"> <xsl:for-each select="current-group()"> <xsl:variable name="idx" select="if (position() mod $max = 0) then $max else position() mod $max"/> <field value="{@title}"> <xsl:attribute name="name">book_<xsl:value-of select="$idx"/> </xsl:attribute> </field> </xsl:for-each> <xsl:if test="$pageNum = 1"> <xsl:call-template name="templateFor_library"/> </xsl:if> </page> </xsl:for-each-group> </xsl:template> <xsl:template name="templateFor_library"> <xsl:for-each select="//library"> <field name="library_name" value="{@name}" /> </xsl:for-each> </xsl:template> </xsl:stylesheet> Is there a better/simpler way to perform this transformation?
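    One small simplification: inside <xsl:for-each select="current-group()">, position() already runs from 1 up to the size of the group (at most $max), so the $idx variable and the mod arithmetic are unnecessary. A hedged sketch of the inner loop:

        <xsl:for-each select="current-group()">
          <field name="book_{position()}" value="{@title}"/>
        </xsl:for-each>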

    Read the article

  • Converting XML to RSS

    - by hkw
    I've got an xml feed that will be published in one location of our website, and would like to repurpose that for an RSS feed. A few different pages on the website will also be referencing the same xml - all of those transformations are set up and working. The base xml file (XMLTEST.xml) is using this structure: <POST> <item> <POST_ID>80000852</POST_ID> <POST_TITLE>title</POST_TITLE> <POST_CHANNEL>I</POST_CHANNEL> <POST_DESC>description</POST_DESC> <LINK>http://www...</LINK> <STOC>N</STOC> </item> </POST> I'm trying to transform the xml into an RSS feed using the following setup (feed.xml + rss.xsl) feed.xml: <?xml-stylesheet href="rss.xsl" type="text/xsl"?> <RSSChannels> <RSSChannel src="XMLTEST.xml"/> </RSSChannels> rss.xsl: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output method="xml" omit-xml-declaration="yes" /> <xsl:template match="RSSChannels"> <rss version="2.0"> <channel> <title>site title</title> <link></link> <description>Site description...</description> <xsl:apply-templates /> </channel> </rss> </xsl:template> <xsl:template match="RSSChannel"> <xsl:apply-templates select="document(@src)" /> </xsl:template> <xsl:template match="item"> <xsl:choose> <xsl:when test="STOC = 'Y'"></xsl:when> <xsl:when test="POST_CHANNEL = 'I'"></xsl:when> <xsl:otherwise> <item> <title> <xsl:value-of select="*[local-name()='POST_TITLE']" /> </title> <link> <xsl:value-of select="*[local-name()='LINK']" /> </link> <description> <xsl:value-of select="*[local-name()='POST_DESC']" /> </description> </item> </xsl:otherwise> </xsl:choose> </xsl:template> </xsl:stylesheet> When I try to view the output of feed.xml in Firefox, all of the filtering is applied correctly (sorting out items that shouldn't be published in that channel), but the page outputs as plain text, rather than the usual feed-detection that takes place in Firefox. Any ideas on what I'm missing? Thanks for any suggestions you can provide.
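    One thing worth checking with browser feed handling is whether the browser actually receives a finished RSS document, rather than an XML file that still has to be transformed on the client; pre-generating the feed on the server sidesteps that question entirely. Below is a hedged sketch in Python that applies the same filtering rules (skip items with STOC = 'Y' or POST_CHANNEL = 'I') and writes a static RSS file. The output file name, site link, and description strings are placeholders, not values from the original setup.

        # Illustrative server-side build of the RSS feed from XMLTEST.xml,
        # mirroring the stylesheet's filtering. Placeholder names throughout.
        import xml.etree.ElementTree as ET

        source = ET.parse("XMLTEST.xml").getroot()  # the <POST> document

        rss = ET.Element("rss", version="2.0")
        channel = ET.SubElement(rss, "channel")
        ET.SubElement(channel, "title").text = "site title"
        ET.SubElement(channel, "link").text = "http://www.example.com/"
        ET.SubElement(channel, "description").text = "Site description..."

        for item in source.findall("item"):
            # Same filter as the XSLT: drop syndication-blocked items and
            # items assigned to the internal channel.
            if item.findtext("STOC") == "Y" or item.findtext("POST_CHANNEL") == "I":
                continue
            out = ET.SubElement(channel, "item")
            ET.SubElement(out, "title").text = item.findtext("POST_TITLE")
            ET.SubElement(out, "link").text = item.findtext("LINK")
            ET.SubElement(out, "description").text = item.findtext("POST_DESC")

        ET.ElementTree(rss).write("feed_out.xml", encoding="utf-8", xml_declaration=True)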

    Read the article

  • Iteration, or copying in XSL, based on a count. How do I do it?

    - by LOlliffe
    I'm trying to create an inventory list (tab-delimited text, for the output) from XML data. The catch is, I need to take the line that I created, and list it multiple times (iterate ???), based on a number found in the XML. So, from the XML below: <?xml version="1.0" encoding="UTF-8"?> <library> <aisle label="AA"> <row>bb</row> <shelf>a</shelf> <books>4</books> </aisle> <aisle label="BB"> <row>cc</row> <shelf>d</shelf> <books>3</books> </aisle> </library> I need to take the value found in "books", and then copy the text line that number of times. The result looking like this: Aisle Row Shelf Titles AA bb a AA bb a AA bb a AA bb a BB cc d BB cc d BB cc d So that the inventory taker, can then write in the titles found on each shelf. I have the basic structure of my XSL, but I'm not sure how to do the "iteration" portion. <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xs="http://www.w3.org/2001/XMLSchema" exclude-result-prefixes="xs" xmlns:xd="http://www.oxygenxml.com/ns/doc/xsl" version="2.0"> <xsl:output omit-xml-declaration="yes"/> <xsl:variable name="tab" select="'&#09;'"/> <xsl:variable name="newline" select="'&#10;'"/> <xsl:template match="/"> <!-- Start Spreadsheet header --> <xsl:text>Aisle</xsl:text> <xsl:value-of select="$tab"/> <xsl:text>Row</xsl:text> <xsl:value-of select="$tab"/> <xsl:text>Shelf</xsl:text> <xsl:value-of select="$tab"/> <xsl:text>Titles</xsl:text> <xsl:value-of select="$newline"/> <!-- End spreadsheet header --> <!-- Start entering values from XML --> <xsl:for-each select="library/aisle"> <xsl:value-of select="@label"/> <xsl:value-of select="$tab"/> <xsl:value-of select="row"/> <xsl:value-of select="$tab"/> <xsl:value-of select="shelf"/> <xsl:value-of select="$tab"/> <xsl:value-of select="$tab"/> <xsl:value-of select="$newline"/> </xsl:for-each> <!-- End of values from XML --> <!-- Iteration of the above needed, based on count value in "books" --> </xsl:template> </xsl:stylesheet> Any help would be greatly appreciated. For starters, is "iteration" the right term to be using for this? Thanks!
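    Yes, repeating output a given number of times is usually just called iteration. The logic itself (one tab-delimited row per book slot, repeated as many times as the <books> count says) can be prototyped outside the stylesheet to pin down the intended output; here is a hedged Python sketch that prints the table shown above, purely as an illustration rather than part of the XSL solution. Inside the stylesheet, XSLT 2.0 typically handles this with a for-each over a numeric range (1 to the book count), while XSLT 1.0 normally uses a recursive named template that decrements a counter.

        # Emit one blank-titled inventory row per book, per aisle.
        import xml.etree.ElementTree as ET

        library = ET.fromstring(
            '<library>'
            '<aisle label="AA"><row>bb</row><shelf>a</shelf><books>4</books></aisle>'
            '<aisle label="BB"><row>cc</row><shelf>d</shelf><books>3</books></aisle>'
            '</library>')

        lines = ["Aisle\tRow\tShelf\tTitles"]
        for aisle in library.findall("aisle"):
            count = int(aisle.findtext("books"))
            row = aisle.get("label") + "\t" + aisle.findtext("row") + "\t" + aisle.findtext("shelf") + "\t"
            # Repeat the same line `count` times; the Titles column stays
            # empty for the inventory taker to fill in by hand.
            lines.extend([row] * count)

        print("\n".join(lines))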

    Read the article

  • R: extracting "clean" UTF-8 text from a web page scraped with RCurl

    - by SlowLearner
    Using R, I am trying to scrape a web page and save the text, which is in Japanese, to a file. Ultimately this needs to be scaled to tackle hundreds of pages on a daily basis. I already have a workable solution in Perl, but I am trying to migrate the script to R to reduce the cognitive load of switching between multiple languages. So far I am not succeeding. Related questions seem to be this one on saving csv files and this one on writing Hebrew to an HTML file. However, I haven't been successful in cobbling together a solution based on the answers there. The pages are from Yahoo! Japan Finance and my Perl code looks like this.
    use strict; use HTML::Tree; use LWP::Simple; #use Encode; use utf8; binmode STDOUT, ":utf8"; my @arr_links = (); $arr_links[1] = "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7203"; $arr_links[2] = "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7201"; foreach my $link (@arr_links){ $link =~ s/"//gi; print("$link\n"); my $content = get($link); my $tree = HTML::Tree->new(); $tree->parse($content); my $bar = $tree->as_text; open OUTFILE, ">>:utf8", join("","c:/", substr($link, -4),"_perl.txt") || die; print OUTFILE $bar; }
    This Perl script produces a CSV file that looks like the screenshot below, with proper kanji and kana that can be mined and manipulated offline:
    My R code, such as it is, looks like the following. The R script is not an exact duplicate of the Perl solution just given, as it doesn't strip out the HTML and leave the text (this answer suggests an approach using R but it doesn't work for me in this case) and it doesn't have the loop and so on, but the intent is the same.
    require(RCurl) require(XML) links <- list() links[1] <- "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7203" links[2] <- "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7201" txt <- getURL(links, .encoding = "UTF-8") Encoding(txt) <- "bytes" write.table(txt, "c:/geturl_r.txt", quote = FALSE, row.names = FALSE, sep = "\t", fileEncoding = "UTF-8")
    This R script generates the output shown in the screenshot below. Basically rubbish. I assume that there is some combination of HTML, text and file encoding that will allow me to generate in R a similar result to that of the Perl solution, but I cannot find it. The header of the HTML page I'm trying to scrape says the charset is utf-8, and I have set the encoding in the getURL call and in the write.table function to utf-8, but this alone isn't enough. The question: How can I scrape the above web page using R and save the text as CSV in "well-formed" Japanese text rather than something that looks like line noise? Edit: I have added a further screenshot to show what happens when I omit the Encoding step. I get what look like Unicode codes, but not the graphical representation of the characters. So it may be some kind of locale-related issue, but in the exact same locale the Perl script does provide useful output. So this is still puzzling.
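    For comparison only, here is the same fetch, strip-HTML, and write-UTF-8 pipeline sketched in a third language (Python, a deliberately swapped-in illustration rather than R or Perl). The point it makes is the one at issue above: decode the HTTP body as text using the page's declared charset, then write the file out explicitly as UTF-8, instead of treating the payload as raw bytes. The requests and beautifulsoup4 packages are assumptions here.

        # Fetch, strip markup, and save as UTF-8; assumes `requests` and
        # `beautifulsoup4` are installed.
        import requests
        from bs4 import BeautifulSoup

        links = [
            "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7203",
            "http://stocks.finance.yahoo.co.jp/stocks/detail/?code=7201",
        ]

        for link in links:
            resp = requests.get(link)
            resp.encoding = "utf-8"  # the page header declares UTF-8
            text = BeautifulSoup(resp.text, "html.parser").get_text()
            out_name = link[-4:] + "_py.txt"  # e.g. "7203_py.txt", like the Perl substr
            with open(out_name, "w", encoding="utf-8") as fh:
                fh.write(text)  # re-encode explicitly as UTF-8 on disk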

    Read the article

  • Reading serialised object from file

    - by nico
    Hi everyone. I'm writing a little Java program (it's an ImageJ plugin, but the problem is not specifically ImageJ related) and I have a problem, most probably due to the fact that I never really programmed in Java before... So, I have a Vector of Vectors and I'm trying to save it to a file and read it back. The variable is defined as:
    Vector <Vector <myROI> > ROIs = new Vector <Vector <myROI> >();
    where myROI is a class that I previously defined. Now, to write the vector to a file I use:
    void saveROIs() { SaveDialog save = new SaveDialog("Save ROIs...", imp.getTitle(), ".xroi"); String name = save.getFileName(); if (name == null) return; String dir = save.getDirectory(); try { FileOutputStream fos = new FileOutputStream(dir+name); ObjectOutputStream oos = new ObjectOutputStream(fos); oos.writeObject(ROIs); oos.close(); } catch (Exception e) { IJ.log(e.toString()); } }
    This correctly generates a binary file containing (I suppose) the object ROIs. Now, I use very similar code to read the file:
    void loadROIs() { OpenDialog open = new OpenDialog("Load ROIs...", imp.getTitle(), ".xroi"); String name = open.getFileName(); if (name == null) return; String dir = open.getDirectory(); try { FileInputStream fin = new FileInputStream(dir+name); ObjectInputStream ois = new ObjectInputStream(fin); ROIs = (Vector <Vector <myROI> >) ois.readObject(); // This gives error ois.close(); } catch (Exception e) { IJ.log(e.toString()); } }
    But this function does not work. First, I get a warning:
    warning: [unchecked] unchecked cast found : java.lang.Object required: java.util.Vector<java.util.Vector<myROI>> ROIs = (Vector <Vector <myROI> >) ois.readObject(); ^
    I Googled for that and saw that I can suppress it by prepending @SuppressWarnings("unchecked"), but this just makes things worse, as I get an error:
    <identifier> expected ROIs = (Vector <Vector <myROI> >) ois.readObject(); ^
    In any case, if I omit @SuppressWarnings and ignore the warning, the object is not read and an exception is thrown:
    java.io.WriteAbortedException: writing aborted; java.io.NotSerializableException: myROI
    Again Google tells me myROI needs to implement Serializable. I tried just adding implements Serializable to the class definition, but it is not sufficient. Can anyone give me some hints on how to proceed in this case? Also, how do I get rid of the typecast warning?

    Read the article

  • SQL SERVER – Repair a SQL Server Database Using a Transaction Log Explorer

    - by Pinal Dave
    In this blog, I’ll show how to use ApexSQL Log, a SQL Server transaction log viewer. You can download it for free, install, and play along. But first, let’s describe some disaster recovery scenarios where it’s useful. About SQL Server disaster recovery Along with database development and administration, you must work on a good recovery plan. Disasters do happen and no one’s immune. What you can do is take all actions needed to be ready for a disaster and go through it with minimal data loss and downtime. Besides creating a recovery plan, it’s necessary to have a list of steps that will be executed when a disaster occurs and to test them before a disaster. This way, you’ll know that the plan is good and viable. Testing can also be used as training for all team members, so they can all understand and execute it when the time comes. It will show how much time is needed to have your servers fully functional again and how much data you can lose in a real-life situation. If these don’t meet recovery-time and recovery-point objectives, the plan needs to be improved. Keep in mind that all major changes in environment configuration, business strategy, and recovery objectives require a new recovery plan testing, as these changes most probably induce a recovery plan changing and tweaking. What is a good SQL Server disaster recovery plan? A good SQL Server disaster recovery strategy starts with planning SQL Server database backups. An efficient strategy is to create a full database backup periodically. Between two successive full database backups, you can create differential database backups. It is essential is to create transaction log backups regularly between full database backups. Keep in mind that transaction log backups can be created only on databases in the full recovery model. In other words, a simple, but efficient backup strategy would be a full database backup every night, a transaction log backup every hour, or every 15 minutes. The frequency depends on how much data you can afford to lose and how busy the database is. Another option, instead of creating a full database backup every night, is to create a full database backup once a week (e.g. on Friday at midnight) and differential database backup every night until next Friday when you will create a full database backup again. Once you create your SQL Server database backup strategy, schedule the backups. You can do that easily using SQL Server maintenance plans. Why are transaction logs important? Transaction log backups contain transactions executed on a SQL Server database. They provide enough information to undo and redo the transactions and roll back or forward the database to a point in time. In SQL Server disaster recovery situations, transaction logs enable to repair a SQL Server database and bring it to the state before the disaster. Be aware that even with regular backups, there will be some data missing. These are the transactions made between the last transaction log backup and the time of the disaster. In some situations, to repair your SQL Server database it’s not necessary to re-create the database from its last backup. The database might still be online and all you need to do is roll back several transactions, such as wrong update, insert, or delete. The restore to a point in time feature is available in SQL Server, but for large databases, it is very time-consuming, as SQL Server first restores a full database backup, and then restores transaction log backups, one after another, up to the recovery point. 
During that time, the database is unavailable. This is where a SQL Server transaction log viewer can help. For optimal recovery, besides having a database in the full recovery model, it’s important that you haven’t manually truncated the online transaction log. This ensures that all transactions made after the last transaction log backup are still in the online transaction log. All you have to do is read and replay them. How to read a SQL Server transaction log? SQL Server doesn’t provide an option to read transaction logs. There are several SQL Server commands and functions that read the content of a transaction log file (fn_dblog, fn_dump_dblog, and DBCC PAGE), but they are undocumented. They require T-SQL knowledge, return a large number of not easy to read and understand columns, sometimes in binary or hexadecimal format. Another challenge is reading UPDATE statements, as it’s necessary to match it to a value in the MDF file. When you finally read the transactions executed, you have to create a script for it. How to easily repair a SQL database? The easiest solution is to use a transaction log reader that will not only read the transactions in the transaction log files, but also automatically create scripts for the read transactions. In the following example, I will show how to use ApexSQL Log to repair a SQL database after a crash. If a database has crashed and both MDF and LDF files are lost, you have to rely on the full database backup and all subsequent transaction log backups. In another scenario, the MDF file is lost, but the LDF file is available. First, restore the last full database backup on SQL Server using SQL Server Management Studio. I’ll name it Restored_AW2014. Then, start ApexSQL Log It will automatically detect all local servers. If not, click the icon right to the Server drop-down list, or just type in the SQL Server instance name. Select the Windows or SQL Server authentication type and select the Restored_AW2014 database from the database drop-down list. When all options are set, click Next. ApexSQL Log will show the online transaction log file. Now, click Add and add all transaction log backups created after the full database backup I used to restore the database. In case you don’t have transaction log backups, but the LDF file hasn’t been lost during the SQL Server disaster, add it using Add.   To repair a SQL database to a point in time, ApexSQL Log needs to read and replay all the transactions in the transaction log backups (or the LDF file saved after the disaster). That’s why I selected the Whole transaction log option in the Filter setup. ApexSQL Log offers a range of various filters, which are useful when you need to read just specific transactions. You can filter transactions by the time of the transactions, operation type (e.g. to read only data inserts), table name, SQL Server login that made the transaction, etc. In this scenario, to repair a SQL database, I’ll check all filters and make sure that all transactions are included. In the Operations tab, select all schema operations (DDL). If you omit these, only the data changes will be read so if there were any schema changes, such as a new function created, or an existing table modified, they will be ignored and database will not be properly repaired. The data repair for modified tables will fail. In the Tables tab, I’ll make sure all tables are selected. I will uncheck the Show operations on dropped tables option, to reduce the number of transactions. Click Next. ApexSQL Log offers three options. 
Select Open results in grid, to get a user-friendly presentation of the transactions. As you can see, details are shown for every transaction, including the old and new values for updated columns, which are clearly highlighted. Now, select them all and then create a redo script by clicking the Create redo script icon in the menu.   For a large number of transactions and in a critical situation, when acting fast is a must, I recommend using the Export results to file option. It will save some time, as the transactions will be directly scripted into a redo file, without showing them in the grid first. Select Generate reconstruction (REDO) script , change the output path if you want, and click Finish. After the redo T-SQL script is created, ApexSQL Log shows the redo script summary: The third option will create a command line statement for a batch file that you can use to schedule execution, which is not really applicable when you repair a SQL database, but quite useful in daily auditing scenarios. To repair your SQL database, all you have to do is execute the generated redo script using an integrated developer environment tool such as SQL Server Management Studio or any other, against the restored database. You can find more information about how to read SQL Server transaction logs and repair a SQL database on ApexSQL Solution center. There are solutions for various situations when data needs to be recovered, restored, or transactions rolled back. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
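    The backup schedule described earlier in the article (a periodic full backup plus frequent transaction log backups, with the database in the full recovery model) is normally set up through SQL Server maintenance plans or Agent jobs. Purely as an illustration of the T-SQL statements involved, here is a hedged sketch that issues the two backups from Python via pyodbc; the driver name, server, database name, and file paths are placeholders, and a real deployment would schedule these rather than run them ad hoc.

        # Illustrative only: one full backup plus one transaction log backup.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=MYSERVER;DATABASE=master;Trusted_Connection=yes;",
            autocommit=True,  # BACKUP cannot run inside a user transaction
        )
        cur = conn.cursor()

        # Nightly (or weekly) full backup of the user database.
        cur.execute("BACKUP DATABASE [MyDatabase] TO DISK = N'D:\\Backups\\MyDatabase_full.bak' WITH INIT")
        while cur.nextset():  # drain informational messages so the backup finishes
            pass

        # Frequent transaction log backup; requires the FULL recovery model.
        cur.execute("BACKUP LOG [MyDatabase] TO DISK = N'D:\\Backups\\MyDatabase_log.trn' WITH INIT")
        while cur.nextset():
            pass

        conn.close()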

    Read the article

  • MVVM Light V4 preview (BL0014) release notes

    - by Laurent Bugnion
    I just pushed to Codeplex an update to the MVVM Light source code. This is an early preview containing some of the features that I want to release later under the version 4. If you find these features useful for your project, please download the source code and build the assemblies. I will appreciate greatly any issue report. This version is labeled “V4.0.0.0/BL0014”. The “BL” string is an old habit that we used in my days at Siemens Building Technologies, called a “base level”. Somehow I like this way of incrementing the “base level” independently of any other consideration (such as alpha, beta, CTP, RTM etc) and continue to use it to tag my software versions. In Microsoft parlance, you could say that this is an early CTP of MVVM Light V4. Caveat The code is unit tested, but as we all know this does not mean that there are no bugs This code has not yet been used in production. Again, your help in testing this is greatly appreciated, so please report all bugs to me! What’s new? The following features have been implemented: Misc Various “maintenance work”. All WPF assemblies (that is .NET35 and .NET4) now allow partially trusted callers. It means that you can use them in am XBAP in partial trust mode. Testing Various test updates Added Windows Phone 7 unit tests Note: For Windows Phone 7, due to an issue in the unit test framework, not all tests can be executed. I had to isolate those tests for the moment. The error was reported to Microsoft. ViewModelBase The constructor is now public to allow serialization (especially useful on the phone to tombstone the state). ViewModelBase.MessengerInstance now returns Messenger.Default unless it is set explicitly. Previously, MessengerInstance was returning null, which was complicating the code. Two new ways to raise the PropertyChanged event have been added. See below for details. Messenger Updated the IMessenger interface with all public members from the Messenger class. Previously some members were missing. A new Unregister method is now available, allowing to unregister a recipient for a given token. RelayCommand RaiseCanExecuteChanged now acts the same in Windows Presentation Foundation than in Silverlight. In previous versions, I was relying on the CommandManager to raise the CanExecuteChanged event in WPF. However, it was found to be too unreliable, and a more direct way of raising the event was found preferable. See below for details. Raising the PropertyChanged event A very much requested update is now included: the ability to raise the PropertyChanged event in a viewmodel without using “magic strings”. Personally, I don’t see strings as a major issue, thanks to two features of the MVVM Light Toolkit: In the DEBUG configuration, every time that the RaisePropertyChanged method is called, the name of the property is checked against all existing properties of the viewmodel. Should the property name be misspelled (because of a typo or refactoring), an exception is thrown, notifying the developer that something is wrong. To avoid impacting the performance, this check is only made in DEBUG configuration, but that should be enough to warn the developers in case they miss a rename. The property name is defined as a public constant in the “mvvminpc” code snippet. This allows checking the property name from another class (for example if the PropertyChanged event is handled in the view). It also allows changing the property name in one place only. 
However, these two safeguards didn’t satisfy some of the users, who requested another way to raise the PropertyChanged event. In V4, you can now do the following: Using lambdas private int _myProperty; public int MyProperty { get { return _myProperty; } set { if (_myProperty == value) { return; } _myProperty = value; RaisePropertyChanged(() => MyProperty); } } This raises the property changed event using a lambda expression instead of the property name. Light reflection is used to get the name. This supports Intellisense and can easily be refactored. You can also broadcast a PropertyChangedMessage using the Messenger.Default instance with: private int _myProperty; public int MyProperty { get { return _myProperty; } set { if (_myProperty == value) { return; } var oldValue = _myProperty; _myProperty = value; RaisePropertyChanged(() => MyProperty, oldValue, value, true); } } Using no arguments When the RaisePropertyChanged method is called within a setter, you can also omit the property name altogether. This will fail if executed outside of the setter however. Also, to avoid confusion, there is no way to broadcast the PropertyChangedMessage using this syntax. private int _myProperty; public int MyProperty { get { return _myProperty; } set { if (_myProperty == value) { return; } _myProperty = value; RaisePropertyChanged(); } } The old way Of course the “old” way is still supported, without broadcast: public const string MyPropertyName = "MyProperty"; private int _myProperty; public int MyProperty { get { return _myProperty; } set { if (_myProperty == value) { return; } _myProperty = value; RaisePropertyChanged(MyPropertyName); } } And with broadcast: public const string MyPropertyName = "MyProperty"; private int _myProperty; public int MyProperty { get { return _myProperty; } set { if (_myProperty == value) { return; } var oldValue = _myProperty; _myProperty = value; RaisePropertyChanged(MyPropertyName, oldValue, value, true); } } Performance considerations It is notorious that using reflection takes more time than using a string constant to get the property name. However, after measuring for all platforms, I found the differences to be very small. I will measure more and submit the results to the community for evaluation, because some of the results are actually surprising (for example, using the Messenger to broadcast a PropertyChangedMessage does not significantly increase the time taken to raise the PropertyChanged event and update the bindings). For now, I submit this code to you, and would be delighted to hear about your own results. Raising the CanExecuteChanged event manually In WPF, until now, the CanExecuteChanged event for a RelayCommand was raised automatically. Or rather, it was attempted to be raised, using a feature that is only available in WPF called the CommandManager. This class monitors the UI and when something occurs, it queries the state of the CanExecute delegate for all the commands. However, this proved unreliable for the purpose of MVVM: Since very often the value of the CanExecute delegate changes according to non-UI events (for example something changing in the viewmodel or in the model), raising the CanExecuteChanged event manually is necessary. In Silverlight, the CommandManager does not exist, so we had to raise the event manually from the start. This proved more reliable, and I now changed the WPF implementation of the RaiseCanExecuteChanged method to be the exact same in WPF than in Silverlight. 
For instance, if a command must be enabled when a string property is set to a value other than null or empty string, you can do: public MainViewModel() { MyTestCommand = new RelayCommand( () => DoSomething(), () => !string.IsNullOrEmpty(MyProperty)); } public const string MyPropertyName = "MyProperty"; private string _myProperty = string.Empty; public string MyProperty { get { return _myProperty; } set { if (_myProperty == value) { return; } _myProperty = value; RaisePropertyChanged(MyPropertyName); MyTestCommand.RaiseCanExecuteChanged(); } } Logo update I made a minor change to the logo: Some people found the lack of the word “light” (as in MVVM Light Toolkit) confusing. I thought it was cool, because the feather suggests the idea of lightness, however I can see the point. So I added the word “light” to the logo. Things should be quite clear now. What’s next? This is only the first of a series of releases that will bring MVVM Light to V4. In the next weeks, I will continue to add some very requested features and correct some issues in the code. I will probably continue this fashion of releasing the changes to the public as source code through Codeplex. I would be very interested to hear what you think of that, and to get feedback about the changes. Cheers, Laurent   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • What’s new in ASP.NET 4.0: Core Features

    - by Rick Strahl
    Microsoft released the .NET Runtime 4.0 and with it comes a brand spanking new version of ASP.NET – version 4.0 – which provides an incremental set of improvements to an already powerful platform. .NET 4.0 is a full release of the .NET Framework, unlike version 3.5, which was merely a set of library updates on top of the .NET Framework version 2.0. Because of this full framework revision, there has been a welcome bit of consolidation of assemblies and configuration settings. The full runtime version change to 4.0 also means that you have to explicitly pick version 4.0 of the runtime when you create a new Application Pool in IIS, unlike .NET 3.5, which actually requires version 2.0 of the runtime. In this first of two parts I'll take a look at some of the changes in the core ASP.NET runtime. In the next edition I'll go over improvements in Web Forms and Visual Studio. Core Engine Features Most of the high profile improvements in ASP.NET have to do with Web Forms, but there are a few gems in the core runtime that should make life easier for ASP.NET developers. The following list describes some of the things I've found useful among the new features. Clean web.config Files Are Back! If you've been using ASP.NET 3.5, you probably have noticed that the web.config file has turned into quite a mess of configuration settings between all the custom handler and module mappings for the various web server versions. Part of the reason for this mess is that .NET 3.5 is a collection of add-on components running on top of the .NET Runtime 2.0 and so almost all of the new features of .NET 3.5 where essentially introduced as custom modules and handlers that had to be explicitly configured in the config file. Because the core runtime didn't rev with 3.5, all those configuration options couldn't be moved up to other configuration files in the system chain. With version 4.0 a consolidation was possible, and the result is a much simpler web.config file by default. A default empty ASP.NET 4.0 Web Forms project looks like this: <?xml version="1.0"?> <configuration> <system.web> <compilation debug="true" targetFramework="4.0" /> </system.web> </configuration> Need I say more? Configuration Transformation Files to Manage Configurations and Application Packaging ASP.NET 4.0 introduces the ability to create multi-target configuration files. This means it's possible to create a single configuration file that can be transformed based on relatively simple replacement rules using a Visual Studio and WebDeploy provided XSLT syntax. The idea is that you can create a 'master' configuration file and then create customized versions of this master configuration file by applying some relatively simplistic search and replace, add or remove logic to specific elements and attributes in the original file. 
To give you an idea, here's the example code that Visual Studio creates for a default web.Release.config file, which replaces a connection string, removes the debug attribute and replaces the CustomErrors section: <?xml version="1.0"?> <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform"> <connectionStrings> <add name="MyDB" connectionString="Data Source=ReleaseSQLServer;Initial Catalog=MyReleaseDB;Integrated Security=True" xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/> </connectionStrings> <system.web> <compilation xdt:Transform="RemoveAttributes(debug)" /> <customErrors defaultRedirect="GenericError.htm" mode="RemoteOnly" xdt:Transform="Replace"> <error statusCode="500" redirect="InternalError.htm"/> </customErrors> </system.web> </configuration> You can see the XSL transform syntax that drives this functionality. Basically, only the elements listed in the override file are matched and updated – all the rest of the original web.config file stays intact. Visual Studio 2010 supports this functionality directly in the project system so it's easy to create and maintain these customized configurations in the project tree. Once you're ready to publish your application, you can then use the Publish <yourWebApplication> option on the Build menu which allows publishing to disk, via FTP or to a Web Server using Web Deploy. You can also create a deployment package as a .zip file which can be used by the WebDeploy tool to configure and install the application. You can manually run the Web Deploy tool or use the IIS Manager to install the package on the server or other machine. You can find out more about WebDeploy and Packaging here: http://tinyurl.com/2anxcje. Improved Routing Routing provides a relatively simple way to create clean URLs with ASP.NET by associating a template URL path and routing it to a specific ASP.NET HttpHandler. Microsoft first introduced routing with ASP.NET MVC and then they integrated routing with a basic implementation in the core ASP.NET engine via a separate ASP.NET routing assembly. In ASP.NET 4.0, the process of using routing functionality gets a bit easier. First, routing is now rolled directly into System.Web, so no extra assembly reference is required in your projects to use routing. The RouteCollection class now includes a MapPageRoute() method that makes it easy to route to any ASP.NET Page requests without first having to implement an IRouteHandler implementation. It would have been nice if this could have been extended to serve *any* handler implementation, but unfortunately for anything but a Page derived handlers you still will have to implement a custom IRouteHandler implementation. ASP.NET Pages now include a RouteData collection that will contain route information. Retrieving route data is now a lot easier by simply using this.RouteData.Values["routeKey"] where the routeKey is the value specified in the route template (i.e., "users/{userId}" would use Values["userId"]). 
The Page class also has a GetRouteUrl() method that you can use to create URLs with route data values rather than hardcoding the URL: <%= this.GetRouteUrl("users",new { userId="ricks" }) %> You can also use the new Expression syntax using <%$RouteUrl %> to accomplish something similar, which can be easier to embed into Page or MVC View code: <a runat="server" href='<%$RouteUrl:RouteName=user, id=ricks %>'>Visit User</a> Finally, the Response object also includes a new RedirectToRoute() method to build a route url for redirection without hardcoding the URL. Response.RedirectToRoute("users", new { userId = "ricks" }); All of these routines are helpers that have been integrated into the core ASP.NET engine to make it easier to create routes and retrieve route data, which hopefully will result in more people taking advantage of routing in ASP.NET. To find out more about the routing improvements you can check out Dan Maharry's blog which has a couple of nice blog entries on this subject: http://tinyurl.com/37trutj and http://tinyurl.com/39tt5w5. Session State Improvements Session state is an often used and abused feature in ASP.NET and version 4.0 introduces a few enhancements geared towards making session state more efficient and to minimize at least some of the ill effects of overuse. The first improvement affects out of process session state, which is typically used in web farm environments or for sites that store application sensitive data that must survive AppDomain restarts (which in my opinion is just about any application). When using OutOfProc session state, ASP.NET serializes all the data in the session statebag into a blob that gets carried over the network and stored either in the State server or SQL Server via the Session provider. Version 4.0 provides some improvement in this serialization of the session data by offering an enableCompression option on the web.Config <Session> section, which forces the serialized session state to be compressed. Depending on the type of data that is being serialized, this compression can reduce the size of the data travelling over the wire by as much as a third. It works best on string data, but can also reduce the size of binary data. In addition, ASP.NET 4.0 now offers a way to programmatically turn session state on or off as part of the request processing queue. In prior versions, the only way to specify whether session state is available is by implementing a marker interface on the HTTP handler implementation. In ASP.NET 4.0, you can now turn session state on and off programmatically via HttpContext.Current.SetSessionStateBehavior() as part of the ASP.NET module pipeline processing as long as it occurs before the AquireRequestState pipeline event. Output Cache Provider Output caching in ASP.NET has been a very useful but potentially memory intensive feature. The default OutputCache mechanism works through in-memory storage that persists generated output based on various lifetime related parameters. While this works well enough for many intended scenarios, it also can quickly cause runaway memory consumption as the cache fills up and serves many variations of pages on your site. ASP.NET 4.0 introduces a provider model for the OutputCache module so it becomes possible to plug-in custom storage strategies for cached pages. 
    One of the goals also appears to be to consolidate some of the different cache storage mechanisms used in .NET in general to a generic Windows AppFabric framework in the future, so various different mechanisms like OutputCache, the non-Page specific ASP.NET cache and possibly even session state eventually can use the same caching engine for storage of persisted data both in memory and out of process scenarios. For developers, the OutputCache provider feature means that you can now extend caching on your own by implementing a custom Cache provider based on the System.Web.Caching.OutputCacheProvider class. You can find more info on creating an Output Cache provider in Gunnar Peipman's blog at: http://tinyurl.com/2vt6g7l. Response.RedirectPermanent ASP.NET 4.0 includes features to issue a permanent redirect that is sent as an HTTP 301 Moved Permanently response rather than the standard 302 Redirect response. In pre-4.0 versions you had to manually create your permanent redirect by setting the Status and Status code properties – Response.RedirectPermanent() makes this operation more obvious and discoverable. There's also a Response.RedirectToRoutePermanent() which provides permanent redirection of route Urls. Preloading of Applications ASP.NET 4.0 provides a new feature to preload ASP.NET applications on startup, which is meant to provide a more consistent startup experience. If your application has a lengthy startup cycle it can appear very slow to serve data to clients while the application is warming up and loading initial resources. So rather than serve these startup requests slowly, in ASP.NET 4.0 you can force the application to initialize itself first before even accepting requests for processing. This feature works only on IIS 7.5 (Windows 7 and Windows Server 2008 R2) and works in combination with IIS. You can set up a worker process in IIS 7.5 to always be running, which starts the Application Pool worker process immediately. ASP.NET 4.0 then allows you to specify site-specific settings by setting the serviceAutoStartEnabled attribute on a particular site along with an optional serviceAutoStartProvider class that can be used to receive "startup events" when the application starts up. This event in turn can be used to configure the application and optionally pre-load cache data and other information required by the app on startup. The configuration settings need to be made in applicationhost.config:
    <sites> <site name="WebApplication2" id="1"> <application path="/" serviceAutoStartEnabled="true" serviceAutoStartProvider="PreWarmup" /> </site> </sites> <serviceAutoStartProviders> <add name="PreWarmup" type="PreWarmupProvider,MyAssembly" /> </serviceAutoStartProviders>
    Hooking up a warm up provider is optional so you can omit the provider definition and reference. If you do define it, here's what it looks like:
    public class PreWarmupProvider : System.Web.Hosting.IProcessHostPreloadClient { public void Preload(string[] parameters) { // initialization for app } }
    This code fires and while it's running, ASP.NET/IIS will hold requests from hitting the pipeline. So until this code completes the application will not start taking requests. The idea is that you can perform any pre-loading of resources and cache values so that the first request will be ready to perform at optimal performance level without lag. 
Runtime Performance Improvements According to Microsoft, there have also been a number of invisible performance improvements in the internals of the ASP.NET runtime that should make ASP.NET 4.0 applications run more efficiently and use less resources. These features come without any change requirements in applications and are virtually transparent, except that you get the benefits by updating to ASP.NET 4.0. Summary The core feature set changes are minimal which continues a tradition of small incremental changes to the ASP.NET runtime. ASP.NET has been proven as a solid platform and I'm actually rather happy to see that most of the effort in this release went into stability, performance and usability improvements rather than a massive amount of new features. The new functionality added in 4.0 is minimal but very useful. A lot of people are still running pure .NET 2.0 applications these days and have stayed off of .NET 3.5 for some time now. I think that version 4.0 with its full .NET runtime rev and assembly and configuration consolidation will make an attractive platform for developers to update to. If you're a Web Forms developer in particular, ASP.NET 4.0 includes a host of new features in the Web Forms engine that are significant enough to warrant a quick move to .NET 4.0. I'll cover those changes in my next column. Until then, I suggest you give ASP.NET 4.0 a spin and see for yourself how the new features can help you out. © Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET  

    Read the article

  • CodePlex Daily Summary for Friday, March 23, 2012

    CodePlex Daily Summary for Friday, March 23, 2012Popular ReleasesSSIS Multiple Hash: Multiple Hash V1.4.1: This is a feature release. It adds the ability to select multiple rows, and change the selection of these with a single click in the check box. All lists with check boxes have this ability. It is backwards compatible with previous versions. Please ensure that you select the appropriate download file based on the version of SQL Server that you will be using. If you have used the Denali installer from V1.4 for packages that you wish to now use the SQL 2012 release version, then you MUST ma...SQL Monitor - managing sql server performance: SQLMon 4.2 alpha 14: 1. improved accuracy of logic fault checking in analysisMetodología General Ajustada - MGA: 02.02.02: Cambios John: Cambios en los cálculos del Flujo de Caja, Flujo Económico y Resumen EF y ES. Se visualiza el reporte de Resumen EF y ES en Grilla. Se ajustan formularios de llamado. Cambios Parmenio: Cambios en el formularios de Programaciòn.MapWindow 6 Desktop GIS: MapWindow 6.1.1: MapWindow 6 Desktop GIS is an open source desktop GIS for Microsoft Windows that is built upon the DotSpatial Library. This release requires .Net 4 (Client Profile). Are you a software developer?Instead of downloading MapWindow for development purposes, get started with with the DotSpatial templateDotSpatial: DotSpatial 1.1: This is a Minor Release. See the changes in the issue tracker. Minimal -- includes DotSpatial core and essential extensions Extended -- includes debugging symbols and additional extensions Just want to run the software? End user (non-programmer) version available branded as MapWindow Want to add your own feature? Develop a plugin, using the template and contribute to the extension feed (you can also write extensions that you distribute in other ways). Components are available as NuGet pa...Telerik CAB Enabling Kit for RadControls for WinForms: TCEK 2012.1.321.20: major update, new Workspaces and UIAdapters Workspaces: - RadDockWorkspace - RadPageViewWorkspace - RadFormWorkspace - RadFormMdiWorkspace - RadTabbedMdiWorkspace UI Adapters: - RadCommandBarUIAdapter - RadRibbonBarUIAdapter - RadTreeNodeUiAdapter - RadTreeViewUIAdapter - RadItemCollectionUIAdapter - (RadMenu, RadStatusStrip, all controls that support RadItem collections)Microsoft All-In-One Code Framework - a centralized code sample library: C++, .NET Coding Guideline: Microsoft All-In-One Code Framework Coding Guideline This document describes the coding style guideline for native C++ and .NET (C# and VB.NET) programming used by the Microsoft All-In-One Code Framework project team.WebDAV for WHS: Version 1.0.67: - Added: Check whether the Remote Web Access is turned on or not; - Added: Check for Add-In updates;Phalanger - The PHP Language Compiler for the .NET Framework: 3.0 (March 2012) for .NET 4.0: March release of Phalanger 3.0 significantly enhances performance, adds new features and fixes many issues. 
See following for the list of main improvements: New features: Phalanger Tools installable for Visual Studio 2011 Beta "filter" extension with several most used filters implemented DomDocument HTML parser, loadHTML() method mail() PHP compatible function PHP 5.4 T_CALLABLE token PHP 5.4 "callable" type hint PCRE: UTF32 characters in range support configuration supports <c...Nearforums - ASP.NET MVC forum engine: Nearforums v8.0: Version 8.0 of Nearforums, the ASP.NET MVC Forum Engine, containing new features: Internationalization Custom authentication provider Access control list for forums and threads Webdeploy package checksum: abc62990189cf0d488ef915d4a55e4b14169bc01 Visit Roadmap for more details.BIDS Helper: BIDS Helper 1.6: This beta release is the first to support SQL Server 2012 (in addition to SQL Server 2005, 2008, and 2008 R2). Since it is marked as a beta release, we are looking for bug reports in the next few months as you use BIDS Helper on real projects. In addition to getting all existing BIDS Helper functionality working appropriately in SQL Server 2012 (SSDT), the following features are new... Analysis Services Tabular Smart Diff Tabular Actions Editor Tabular HideMemberIf Tabular Pre-Build ...Json.NET: Json.NET 4.5 Release 1: New feature - Windows 8 Metro build New feature - JsonTextReader automatically reads ISO strings as dates New feature - Added DateFormatHandling to control whether dates are written in the MS format or ISO format, with ISO as the default New feature - Added DateTimeZoneHandling to control reading and writing DateTime time zone details New feature - Added async serialize/deserialize methods to JsonConvert New feature - Added Path to JsonReader/JsonWriter/ErrorContext and exceptions w...SCCM Client Actions Tool: SCCM Client Actions Tool v1.11: SCCM Client Actions Tool v1.11 is the latest version. It comes with following changes since last version: Fixed a bug when ping and cmd.exe kept running in endless loop after action progress was finished. Fixed update checking from Codeplex RSS feed. The tool is downloadable as a ZIP file that contains four files: ClientActionsTool.hta – The tool itself. Cmdkey.exe – command line tool for managing cached credentials. 
This is needed for alternate credentials feature when running the HTA...WebSocket4Net: WebSocket4Net 0.5: Changes in this release fixed the wss's default port bug improved JsonWebSocket supported set client access policy protocol for silverlight fixed a handshake issue in Silverlight fixed a bug that "Host" field in handshake hadn't contained port if the port is not default supported passing in Origin parameter for handshaking supported reacting pings from server side fixed a bug in data sending fixed the bug sending a closing handshake with no message which would cause an excepti...SuperWebSocket, a .NET WebSocket Server: SuperWebSocket 0.5: Changes included in this release: supported closing handshake queue checking improved JSON subprotocol supported sending ping from server to client fixed a bug about sending a closing handshake with no message refactored the code to improve protocol compatibility fixed a bug about sub protocol configuration loading in Mono improved BasicSubProtocol added JsonWebSocketSessionSurvey™ - web survey & form engine: Survey™ 2.0: The new stable Survey™ Project 2.0.0.1 version contains many new features like: Technical changes: - Use of Jquery, ASTreeview, Tabs, Tooltips and new menuprovider Features & Bugfixes: Survey list and search function Folder structure for surveys New Menustructure Library list New Library fields User list and search functions Layout options for a survey with CSS, page header and footer New IP filter security feature Enhanced Token Management New Question fields as ID, Alias...Speed up Printer migration using PrintBrm and it's configuration files: BRMC.EXE: Run the tool from the extracted directory of the printbrm backup. You can use the following command to extract a backup file to a directory - PRINTBRM.EXE -R -D C:\TEMP\EXPAND -F C:\TEMP\PRINTERBACKUP.PRINTEREXPORTAppBarUtils for Windows Phone SDK 7.1: AppBarUtils 1.2: This release contains IconUri dependency property for both AppBarItemCommand and AppBarItemTrigger as requested by shawnoster at http://appbarutils.codeplex.com/discussions/321745. When using this IconUri dependency property, please be sure to set the Type property to AppBarItemType.Button or just omit this property entirely, because it is only for app bar icon button. The demo has been updated to show how to use this new IconUri dependency property with a new lock button on the app bar. Wh...Offline Navigation for Windows Phone 7: 0.1 Alpha: This is the 0.1 alpha release of source code.SmartNet: V1.0.0.0: DY SmartNet ?????? V1.0New Projects2Sexy Content for DotNetNuke - great looking and animated content: 2Sexy Content is a DotNetNuke Extension to create attractive and designed content. It solves the common problem, allowing the web designer to create designed templates for different content elements, so that the user must only fill in fields and receive a perfectly designed and animated output. AgileMapper: AgileMapper????????????????????,?????????dto?do???????AksiMata: Berita terbaru dan peristiwa terkini di sekitar Anda... Langsung dari TKP!Caprice: Engineering adaptive privacy: Caprice is a tool aimed at supporting software engineers in the design of applications that appropriately adapt their behaviour to mitigate privacy threats. 
This tool helps provide software engineers design-time insight on the functional behaviour a system and associated runtime context changes that can threaten privacy.Change default Share-site group SharePoint Online (Office 365): As default when we share a site collection or site with external users, SharePoint Online show default SharePoint groups which are Visitors and Members. By using this feature, you will get a link which you can use to customize the default groups to your custom groups and other default groups. CodePlex Test Project: This is just to test how well CodePlex handles Git.Find Work Items For Source Items (Visual Studio Extension): work4source is a Visual Studio 2010 tool window which finds and lists all TFS work items associated with a specific source control file or folder, avoiding the need to view the details for every changeset. It also includes "versioned item" links which otherwise cannot be found from the source side of the link using Visual Studio. To install run the VSIX package, restart Visual Studio, then select View --> Other Windows --> "Find Work Items for Source Items" and either leave the window ...FSProject: FSProjectGit c9 Test: testing git to c9 integration possibilitiesImage 3D Viewer: A Image 3D Viewer with WPF.Jakarta Guide: Jakarta news featuring up-to-date information on attractions, hotels, restaurants, nightlife, travel tips and more.LinqLucene: Due to the fact to the original LinqToLucene project seems to have died, I have created this new project to carry on it's workLuskyCode: Just some codemaouidatest: testMarktplace: NLocalize MarketplaceMemoryLifter: MemoryLifter - the fastest way to memorize * is a virtual flashcard system, scientifically based on the Leitner card box algorithm * enables the user to lift any kind of information into long term memory * maximizes study efficiency with automation, controlled repetitionMemoryTributary - A replacement for MemoryStream: MemoryTributary is a replacement for MemoryStream that uses multiple memory chunks as its backing store, as opposed to the single byte array used by MemoryStream. The result is it can handle much larger streams and the initial allocations are more efficient. It's developed in C#.my career muse: Project configured for simultaneous use with other developersMyMusicBox: Mini-Projet MBDS Web 2.0 Michel BuffaNucleo.NET ORM: These components provide a unit of work interface that works with LINQ to SQL, Entity Framework, and Entity Framework Code First, with more ORMs to come. The idea is to create one common wrapper and base framework to make it easier to work with the various ORM products.Orchard QnA: A lightweight discussions module for the Orchard CMS.Orchard Shoutbox: A lightweight shoutbox module for the Orchard CMS:PAIN: My projects from PAIN labs, semester 2012L, department of electronics of Warsaw University of Technology.Reactifier: This project is for the windows 8 Shoutcast MediaStreamSource: Shoutcast MediaStreamSource is a MediaStreamSource implementation of the Shoutcast protocol for Silverlight. This MediaStreamSource allows both Silverlight 4+ OOB and Windows Phone 7 applications to consume a Shoutcast stream using a MediaElement. Currently, Mp3 and AAC+ Shoutcast streams are supported on Windows Phone. However, ONLY Mp3 is supported on Desktop Silverlight. There is also limited (i.e. somewhat untested) M3u and PLS playlist support. 
Please report any issues playing...Simple Task Manager: Politechnika Wroclawska Team Project - 2012SkyGo Media Commander: SkyGo Media Commander permette di usare le frecce della tastiera per cambiare canale e il tasto invio per aprire il menu : "Telecomando". Utile se si usa un telecomando. Perfettamente funzionante con il telecomando CIR del notebook HP DV6 2137el.sunshine Design: sunshinetesting: testing projecttesttom032012git01: testtom032012git01testtom03222012hg01: testtom03222012hg01testtom03222012tfs01: testtom03222012tfs01testtom03222012tfs02: testtom03222012tfs02TileSet Map Editor: a small side project of a tileset map editorTraffic Light Simulation Application: Traffic Light Simulation application simulates traffic on a 2D plane under different traffic light control schemes. The interface will display a 10x10 grid where each street allows one-way traffic and there is one traffic light at each intersection.Type08ScreenCapture: Type08ScreenCapture brings Windows 8-like desktop capture function to Windows 7. If you type [Windows]+[PrintScreen], it capture a main display area and save the bitmap to your picture folder. It's developed in C#/WPF.WikiPlex – a Regex Wiki Engine: A regular expression based wiki engine allowing developers to integrate a wiki experience into an existing .NET applicationWindowsQR: Windows QR is a proof of concept about capturing a qr code using a webcam in a WPF application runinng in a desktop computer with Windows 7 or similars. It uses AForge.Vision to wrap the DirectShow complexity and zxing library to decode the qr code.Working with Social Data: 1)Customizing Tag Cloud By Accountname 2)RatingWPF 3D-Model Viewer: WPF 3D-Model Viewer

    Read the article

  • CodePlex Daily Summary for Thursday, March 22, 2012

    CodePlex Daily Summary for Thursday, March 22, 2012

    Popular Releases

    - Telerik CAB Enabling Kit for RadControls for WinForms: TCEK 2012.1.321.20: major update, new Workspaces and UIAdapters. Workspaces: RadDockWorkspace, RadPageViewWorkspace, RadFormWorkspace, RadFormMdiWorkspace, RadTabbedMdiWorkspace. UI Adapters: RadCommandBarUIAdapter, RadRibbonBarUIAdapter, RadTreeNodeUiAdapter, RadTreeViewUIAdapter, RadItemCollectionUIAdapter (RadMenu, RadStatusStrip, all controls that support RadItem collections).
    - People's Note: People's Note 0.40: Version 0.40 adds an option to compact the database from the profile screen. Compacting a database can make it smaller and faster by removing empty spaces left over by editing, moving, and deleting notes. To install: copy the appropriate CAB file onto your WM device and run it.
    - Microsoft All-In-One Code Framework - a centralized code sample library: C++, .NET Coding Guideline: This document describes the coding style guideline for native C++ and .NET (C# and VB.NET) programming used by the Microsoft All-In-One Code Framework project team.
    - SQL Monitor - managing sql server performance: SQLMon 4.2 alpha 13: added logic fault checking in analysis; automatically detects dead loops or memory leakage in stored procedures. For details please refer to http://sqlmon.codeplex.com/workitem/32469
    - WebDAV for WHS: Version 1.0.67: Added: check whether Remote Web Access is turned on or not; Added: check for Add-In updates.
    - Metodología General Ajustada - MGA: 02.02.01: John's changes: the six Identification forms are updated so that the grids refresh after saving, so that records are not duplicated on save. An installer is generated with the changes, and the database is updated with the latest changes to the Flujo de Caja stored procedure.
    - xyzzy+: xyzzy+ 0.2.2.235+0: SHA1: 4a0258736e7df52bb6e2304178b7fcf02414ae17. Prerequisites: Microsoft Visual C++ 2010 SP1 Redistributable Package (x86) (ja). Features: Unicode, Visual Style. Known problems: character encodings other than Shift_JIS and UTF-X may be broken; functions related to character encodings may not work (e.g. iso-code-char).
    - Phalanger - The PHP Language Compiler for the .NET Framework: 3.0 (March 2012) for .NET 4.0: March release of Phalanger 3.0 significantly enhances performance, adds new features and fixes many issues. Main improvements: Phalanger Tools installable for Visual Studio 2011 Beta; "filter" extension with several of the most used filters implemented; DomDocument HTML parser, loadHTML() method; mail() PHP compatible function; PHP 5.4 T_CALLABLE token; PHP 5.4 "callable" type hint; PCRE: UTF32 characters in range support; configuration supports <c...
    - Nearforums - ASP.NET MVC forum engine: Nearforums v8.0: Version 8.0 of Nearforums, the ASP.NET MVC Forum Engine, containing new features: internationalization, custom authentication provider, access control list for forums and threads. Webdeploy package checksum: abc62990189cf0d488ef915d4a55e4b14169bc01. Visit the Roadmap for more details.
    - BIDS Helper: BIDS Helper 1.6: This beta release is the first to support SQL Server 2012 (in addition to SQL Server 2005, 2008, and 2008 R2). Since it is marked as a beta release, we are looking for bug reports in the next few months as you use BIDS Helper on real projects. In addition to getting all existing BIDS Helper functionality working appropriately in SQL Server 2012 (SSDT), the following features are new: Analysis Services Tabular Smart Diff, Tabular Actions Editor, Tabular HideMemberIf, Tabular Pre-Build ...
    - Json.NET: Json.NET 4.5 Release 1: New feature - Windows 8 Metro build; new feature - JsonTextReader automatically reads ISO strings as dates; new feature - added DateFormatHandling to control whether dates are written in the MS format or ISO format, with ISO as the default; new feature - added DateTimeZoneHandling to control reading and writing DateTime time zone details; new feature - added async serialize/deserialize methods to JsonConvert; new feature - added Path to JsonReader/JsonWriter/ErrorContext and exceptions w...
    - SCCM Client Actions Tool: SCCM Client Actions Tool v1.11: SCCM Client Actions Tool v1.11 is the latest version. It comes with the following changes since the last version: fixed a bug where ping and cmd.exe kept running in an endless loop after action progress was finished; fixed update checking from the Codeplex RSS feed. The tool is downloadable as a ZIP file that contains four files: ClientActionsTool.hta – the tool itself. Cmdkey.exe – command line tool for managing cached credentials. This is needed for the alternate credentials feature when running the HTA...
    - WebSocket4Net: WebSocket4Net 0.5: Changes in this release: fixed the wss default port bug; improved JsonWebSocket; supported setting the client access policy protocol for Silverlight; fixed a handshake issue in Silverlight; fixed a bug where the "Host" field in the handshake didn't contain the port if the port is not the default; supported passing in an Origin parameter for handshaking; supported reacting to pings from the server side; fixed a bug in data sending; fixed the bug of sending a closing handshake with no message which would cause an excepti...
    - SuperWebSocket, a .NET WebSocket Server: SuperWebSocket 0.5: Changes included in this release: supported closing handshake queue checking; improved JSON subprotocol; supported sending ping from server to client; fixed a bug about sending a closing handshake with no message; refactored the code to improve protocol compatibility; fixed a bug about sub protocol configuration loading in Mono; improved BasicSubProtocol; added JsonWebSocketSession.
    - Survey™ - web survey & form engine: Survey™ 2.0: The new stable Survey™ Project 2.0.0.1 version contains many new features. Technical changes: use of jQuery, ASTreeview, tabs, tooltips and a new menu provider. Features & bugfixes: survey list and search function, folder structure for surveys, new menu structure, library list, new library fields, user list and search functions, layout options for a survey with CSS, page header and footer, new IP filter security feature, enhanced token management, new question fields such as ID, Alias...
    - AppBarUtils for Windows Phone SDK 7.1: AppBarUtils 1.2: This release contains the IconUri dependency property for both AppBarItemCommand and AppBarItemTrigger, as requested by shawnoster at http://appbarutils.codeplex.com/discussions/321745. When using this IconUri dependency property, please be sure to set the Type property to AppBarItemType.Button or just omit this property entirely, because it is only for the app bar icon button. The demo has been updated to show how to use this new IconUri dependency property with a new lock button on the app bar. Wh...
    - Offline Navigation for Windows Phone 7: 0.1 Alpha: This is the 0.1 alpha release of source code.
    - SmartNet: V1.0.0.0: DY SmartNet V1.0
    - Javascript .NET: Javascript .NET v0.6: Upgraded to the latest stable branch of v8 (/tags/3.9.18), and switched to using their scons build system. We no longer include v8 source code as part of this project's source code. Simultaneous multithreaded use of v8 is now supported (v8 Isolates), although different contexts may not share objects or call each other. 64-bit .Net 4.0 DLL now included. (Download now includes x86 and x64 for both .Net 3.5 and .Net 4.0.)
    - MyRouter (Virtual WiFi Router): MyRouter 1.0.7: This release should be more stable. There were a few bug fixes, including the x64 issue, as well as an error popping up when MyRouter started; this was caused by a NULL value.

    New Projects

    - Activities.WMI: Activities library related to WMI, available in WF4.
    - Append Customisation Service: A lightweight Windows service for applying customisations to enterprise web apps: it monitors a file and makes sure your code is always appended to the end of the file. Disable the service, and your customisations go away. C# .NET 4. Useful for customising branding/design, JavaScript, CSS and so on - in applications such as Dynamics CRM 2011.
    - ASP.NET Security Module: Security module for ASP.NET web applications.
    - avgdx: This project is just for DirectX 11 learning.
    - Daabli: A lightweight C# version of the Daabli serialization framework for C++. If your application needs to load objects and data from human readable text files, then Daabli could be useful to you. It is designed to be as easy to use as possible and works with a 'C' style human editable format. The original C++ version is available here: http://daabli.sourceforge.net/
    - DNN Simple Tweet: DNN Simple Tweet is a simple DotNetNuke module for display of Twitter feeds. Using the stock Twitter Profile Widget, its settings allow you to change the Twitter user name, tweet colors, etc. Developed in VB. DNN version 6 and higher required. *NUISANCE* When selecting between "Display All" and "Timed Interval" for Twitter Behavior, the iColorPicker for the colors will disappear. This is due to the post-back which is occurring and the use of Javascript by the color picker. Updat...
    - Don't We-KC and Sway: Don't We-KC and Sway
    - Eksponent CropUp: CropUp is a simple geometric algorithm for "weighted auto cropping". A focus point and optional "area of interest" are defined (e.g. a face in a group picture). These are shown instead of random stomachs when the picture needs to be sized to a specific format. Umbraco package.
    - Enterprise Modular Application: Guidelines to create enterprise modular applications in the .NET Framework, independent of any specific framework.
    - Escape From Canyon: Escape from Canyon is a little game project (for academic purposes only) developed using XNA 4.0 and F#.
    - FIX Sample code using QuickFIX to connect to TT FIX Adapter: Sample FIX client that provides a starting place for developers to connect to the TT FIX Adapter. It's developed in C# using QuickFIX as the FIX engine.
    - FSGreeNetWork: FSGreeNetWork
    - Gac Library -- C++ Utilities for GPU Accelerated GUI and Script: C++ utilities for GPU accelerated GUI and script.
    - JhVirtualKeyboard for WPF and Silverlight: JhVirtualKeyboard is a virtual keyboard for software developers to use with either WPF or Silverlight projects. With it, you now have a simple way to provide your users with the ability to enter characters in different languages and alphabets, or any Unicode character. Developed in C#, using Visual Studio 2010 and .NET 4. More information can be found in my blog article at: http://designforge.wordpress.com/2011/01/06/jhvirtualkeyboard/ by James W. Hurst
    - KrishaWeb: Krisha web
    - NetFrameworkExtensions: Simple plain framework to add a lot of features to the .NET Framework core. Most of the features are added in the form of extension methods.
    - pav2: PAV 2 project
    - SharePoint (2010) Connected Server: Display which web server a user is connected to in the Personal Actions drop down menu (user name in upper right). An extremely useful aid when troubleshooting issues in a multi-server SharePoint environment. I took the existing version which ran on MOSS 2007 and rebuilt it to run on SharePoint 2010. Acknowledgements: full credit goes to Nathan Yorke for the original project http://spconnectedserver.codeplex.com which worked on MOSS 2007.
    - SharePoint 2010 Gauge Web Part: The SharePoint 2010 Gauge Web Part can be connected to any column of the "number" type in a SharePoint list, including External Lists (farm solution version only), and shows calculations on column values. It supports two views: 1. the Gauge view; 2. the Simple Indicator view.
    - SMTP Test Suite: SMTP server/client that can be used to test other servers/clients. Purely a test tool, as most SMTP verbs are accepted without any checking.
    - StarterCSS: StarterCorev4.css is inspired by Starter.master. This CSS file gives you a detailed explanation of the different classes in corev4.css. StarterCorev4.css will help you understand the purpose of each class when you ctrl + click a CSS class on your master page.
    - Triangle.NET: Triangle.NET is 2D meshing software written in C#. It generates (constrained) Delaunay triangulations and quality meshes of point sets or planar straight line graphs. It is a port of Jonathan Shewchuk's Triangle software written in C.
    - WorkFile: workfile about codeio
    - XNA Electric Effect: An electric effect implemented using XNA 4 for Windows Phone 7. It provides an easy way to configure settings to create realistic electric effects, lightning effects, etc.


  • Securing an ADF Application using OES11g: Part 2

    - by user12587121
    To validate the integration with OES we need a sample ADF application that is rich enough to allow us to test securing the various ADF elements. To achieve this we can add some items, including bounded task flows, to the application developed in this tutorial. A sample JDeveloper 11.1.1.6 project is available here. It depends on the Fusion Order Demo (FOD) database schema, which is easily created using the FOD build scripts. In the deployment we have chosen to enable only ADF Authentication, as we will delegate Authorization, mostly, to OES.

    The welcome page of the application exposes all of the links. The Welcome, Browse Products, Browse Stock and System Administration links go to pages, while Supplier Registration and Update Stock are bounded task flows. The Login link goes to a basic login page and, once logged in, a link is presented that goes to a logout page. Only the Browse Products and Browse Stock pages are really connected to the database; the other pages and task flows do not perform any operations on the database.

    Required Security Policies

    We make use of a set of test users and roles as described on the welcome page of the application. In order to exercise the different authorization possibilities we would like to enforce the following sample policies:

    1. Anonymous users can see the Login, Welcome and Supplier Registration links. They can also see the Welcome page, the Login page and follow the Supplier Registration task flow. They can see the icon adjacent to the Login link indicating whether they have logged in or not.
    2. Authenticated users can see the Browse Products page.
    3. Only staff granted the right can see the Browse Products page cost price value returned from the database, and then only if the value is below a configurable limit.
    4. Suppliers and staff can see the Browse Stock links and pages. Customers cannot.
    5. Suppliers can see the Update Stock link, but only those with the update permission are allowed to follow the task flow that it launches. We could hide the link, but we leave it exposed here so we can easily demonstrate the method call activity protecting the task flow.
    6. Only staff granted the right can see the System Administration link and the System Administration page it accesses.

    Implementing the required policies

    In order to secure the application we will make use of the following techniques:

    EL Expressions and Java backing beans: JSF has the notion of EL expressions to reference data from backing Java classes. We use these to control the presentation of links on the navigation page so that they respect the security constraints; a user will not see links that he is not allowed to click into. These Java backing beans can call out to OES for an authorization decision. Important note: naturally we would configure the WLS domain where our ADF application is running as an OES WLS SM, which would allow us to efficiently query OES over the PEP API. However, versioning conflicts between OES 11.1.1.5 and ADF 11.1.1.6 mean that this is not possible. Nevertheless, we can make use of the OES RESTful gateway technique from this posting in order to call into OES. Backing beans are easily created and managed in JDeveloper.
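    As an illustration of this technique, here is a minimal, hypothetical sketch of such a backing bean. The gateway URL, its query parameters, the plain true/false response body and the transition outcome names are all assumptions made for this sketch; the OESBackingBean shipped with the sample project will differ in its details. The last method shows the transition-returning style used by the method call activity technique described below.

        // Hypothetical sketch only: the gateway contract (URL, parameters, response body)
        // is assumed here for illustration and is not the sample project's actual code.
        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.net.URLEncoder;

        public class OesBackingBean {

            // Assumed location of the OES RESTful gateway.
            private static final String GATEWAY_URL = "http://localhost:7001/oesgateway/authorize";

            // Backs EL such as #{oesBackingBean.UIAccessSysAdmin} on links and in the phase listener.
            public boolean isUIAccessSysAdmin() {
                return isAllowed("/MyADF/ui/sysadmin", "view");
            }

            // Backs #{oesBackingBean.UIAccessBrowseStock} on the Browse Stock and Update Stock links.
            public boolean isUIAccessBrowseStock() {
                return isAllowed("/MyADFApp/ui/stock", "view");
            }

            // Outcome method for the method call activity that guards the Update Stock task flow;
            // it returns the name of the transition to follow ("allowed"/"denied" are assumed names).
            public String isUIAccessSupplierUpdateTransition() {
                return isAllowed("/MyADFApp/ui/stock", "update") ? "allowed" : "denied";
            }

            // One round trip to the gateway; assumes the response body is the literal text "true" or "false".
            private boolean isAllowed(String resource, String action) {
                HttpURLConnection conn = null;
                try {
                    String query = "resource=" + URLEncoder.encode(resource, "UTF-8")
                                 + "&action=" + URLEncoder.encode(action, "UTF-8");
                    conn = (HttpURLConnection) new URL(GATEWAY_URL + "?" + query).openConnection();
                    conn.setRequestMethod("GET");
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(conn.getInputStream(), "UTF-8"));
                    String body = in.readLine();
                    in.close();
                    return body != null && Boolean.parseBoolean(body.trim());
                } catch (Exception e) {
                    // Fail closed: if the gateway cannot be reached, deny access.
                    return false;
                } finally {
                    if (conn != null) {
                        conn.disconnect();
                    }
                }
            }
        }

    Failing closed when the gateway is unreachable keeps the rendered links consistent with the authorization checks enforced elsewhere in the application.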
    Custom ADF phase listener: ADF extends the JSF page lifecycle and allows one to hook into the flow to intercept page rendering. We use this to put a check in place prior to rendering any protected page, again calling out to OES via the backing bean. Phase listeners are configured in the adf-settings.xml file. See the MyPageListener.java class in the project. Here, for example, is the code we use in the listener to check for allowed access to the sysadmin page, navigating back to the welcome page if authorization is not granted:

        if (page != null && (page.equals("/system.jspx") || page.equals("/system"))) {
            System.out.println("MyPageListener: Checking Authorization for /system");
            if (getValue("#{oesBackingBean.UIAccessSysAdmin}").toString().equals("false")) {
                System.out.println("MyPageListener: Forcing navigation away from system " +
                    "to welcome");
                NavigationHandler nh = fc.getApplication().getNavigationHandler();
                nh.handleNavigation(fc, null, "welcome");
            } else {
                System.out.println("MyPageListener: access allowed");
            }
        }

    Method call activity: our application makes use of bounded task flows to implement the sequence of pages that update the stock or allow suppliers to self-register. ADF takes care of ensuring that a bounded task flow can be entered by only one page. So a way to protect all those pages is to make a call to OES in the first activity and then either exit the task flow or continue, depending on the authorization decision. The method call returns a String which contains the name of the transition to effect (the backing-bean sketch above includes an example of such a transition-returning method). The method call activity is configured on the task flow diagram in JDeveloper.

    We implement each of the policies using the above techniques as follows:

    Policies 1 and 2: as these policies concern the coarse-grained notion of controlling access for anonymous and authenticated users, we can make use of the container's security constraints, which can be defined in the web.xml file. The allPages constraint is added automatically when we configure Authentication for the ADF application. We have added the "anonymousss" constraint to allow access to the required pages, task flows and icons:

        <security-constraint>
          <web-resource-collection>
            <web-resource-name>anonymousss</web-resource-name>
            <url-pattern>/faces/welcome</url-pattern>
            <url-pattern>/afr/*</url-pattern>
            <url-pattern>/adf/*</url-pattern>
            <url-pattern>/key.png</url-pattern>
            <url-pattern>/faces/supplier-reg-btf/*</url-pattern>
            <url-pattern>/faces/supplier_register_complete</url-pattern>
          </web-resource-collection>
        </security-constraint>

    Policy 3: we can place an EL expression on the element representing the cost price on the products.jspx page: #{oesBackingBean.dataAccessCostPrice}. This EL expression references a method in a Java backing bean that will call out to OES for an authorization decision. In OES we model the authorization requirement by requiring the view permission on the resource /MyADFApp/data/costprice and granting it only to the staff application role. We recover any obligations to determine the limit.

    Policy 4: is implemented by putting an EL expression on the Browse Stock link, #{oesBackingBean.UIAccessBrowseStock}, which checks for the view permission on the /MyADFApp/ui/stock resource. The stock.jspx page is protected by checking for the same permission in the custom phase listener; if the required permission is not satisfied, we force navigation back to the welcome page.
    Policy 5: the Update Stock link is protected with the same EL expression as the Browse Stock link: #{oesBackingBean.UIAccessBrowseStock}. However, the Update Stock link launches a bounded task flow, and to protect it the first activity in the flow is a method call activity which executes an EL expression, #{oesBackingBean.isUIAccessSupplierUpdateTransition}, to check for the update permission on the /MyADFApp/ui/stock resource and either transition to the next step in the flow or terminate the flow with an authorization error.

    Policy 6: the System Administration link is protected with an EL expression, #{oesBackingBean.UIAccessSysAdmin}, that checks for view access on the /MyADF/ui/sysadmin resource. The system page is protected in the same way as the stock page: the custom phase listener checks for the same permission that protects the link and, if it is not satisfied, we navigate back to the welcome page.

    Testing the Application

    To test the application:

    - Deploy the OES11g Admin to a WLS domain.
    - Deploy the OES gateway in another domain configured to be a WLS SM. You must ensure that the jps-config.xml file therein is configured to allow access to the identity store, otherwise the gateway will not be able to resolve the principals for the requested users. To do this, ensure that the following elements appear in the jps-config.xml file:

          <serviceProvider type="IDENTITY_STORE" name="idstore.ldap.provider"
                           class="oracle.security.jps.internal.idstore.ldap.LdapIdentityStoreProvider">
            <description>LDAP-based IdentityStore Provider</description>
          </serviceProvider>

          <serviceInstance name="idstore.ldap" provider="idstore.ldap.provider">
            <property name="idstore.config.provider"
                      value="oracle.security.jps.wls.internal.idstore.WlsLdapIdStoreConfigProvider"/>
            <property name="CONNECTION_POOL_CLASS"
                      value="oracle.security.idm.providers.stdldap.JNDIPool"/>
          </serviceInstance>

          <serviceInstanceRef ref="idstore.ldap"/>

    - Download the sample application, change the URL in the MyADFApp OESBackingBean code to point to the OES Gateway, and deploy the application to an 11.1.1.6 WLS domain that has been extended with the ADF JRF files. You will need to configure the FOD database connection to point to your database which contains the FOD schema.
    - Populate the OES Admin and OES Gateway WLS LDAP stores with the sample set of users and groups. If you have configured the WLS domains to point to the same LDAP then this only has to be done once. To help with this there is a directory called ldap_scripts in the sample project with ldif files for the test users and groups.
    - Start the OES Admin console, configure the required OES authorization policies for the MyADFApp application, and push them to the WLS SM containing the OES Gateway.
    - Log in to MyADFApp as each of the users described on the login page to test that the security policy is correct. You will see informative logging from the OES Gateway and the ADF application in their respective WLS consoles.

    Congratulations, you may now log in to the OES Admin console and change policies that will control the behaviour of your ADF application: change the limit value in the obligation for the cost price, for example, or define Role Mapping policies to determine staff access to the system administration page based on user profile attributes.
    ADF Development Notes

    Some notes on ADF development which are probably typical gotchas:

    - You may need the following on WLS startup in order to allow credentials for the database to be overwritten; the signal here is an error when trying to access the database: -Djps.app.credential.overwrite.allowed=true
    - It is best to call bounded task flows via a CommandLink (as opposed to a go link), as you cannot seem to start them again from a go link, even having completed the task flow correctly with a return activity.
    - Once a bounded task flow (BTF) is initiated it must complete correctly via a return activity; attempting to click on any other link whilst in the context of a BTF has no effect.
    - When using the ADF Authentication only security approach it seems to be awkward to allow anonymous access to the welcome and registration pages. We can achieve anonymous access using the web.xml security constraint shown above (where no auth-constraint is specified), however it is not clear exactly what needs to be listed there: for example, /afr/* and /adf/* are in there by trial and error, as sometimes the welcome page will not render if we omit those items. I was not able to use the default allPages constraint with, for example, the anonymous-role or the everyone WLS group in order to allow anonymous access to pages.
    - The ADF security best practice advises placing all pages under the public_html/WEB-INF folder, as then ADF will not allow any direct access to the .jspx pages but will only allow access via a link of the form /faces/welcome rather than /faces/welcome.jspx. This seems like a very good practice to follow, as having multiple entry points to data is a source of confusion in a web application (particularly from a security point of view).
    - In Authentication+Authorization mode only pages with a page definition file are protected. In order to add an empty one, right-click on the page and choose Go to Page Definition. This will create an empty page definition and now the page will require explicit permission to be seen.
    - It is advisable to give the application a unique context root via weblogic.xml, as otherwise the application will clash with any other application with the same context root and it will not deploy.


  • C# RegEx - find html tags (div and anchor)

    - by czesio
    Hi, I have to retrieve several div sections (with the specific class name "row ") along with their content, and additionally find all anchor tags (link URLs) with class "underline red bold". Briefly: get each of those sections (divs, tags ...) and a collection of URLs:

        string[] urls = { "/searchClickThru?pid=prod56534895&q=&rpos=109181&rpp=10&_dyncharset=UTF-8&sort=&url=/culture-and-gender-intimate-relation-ksiazka,prod56534895,p" };

    The entire page looks like this:

        <html>
        ... a lot of stuff
        <div class="row ">
          <div class="photo">
            <a rel="nofollow" href="/searchClickThru?pid=prod56534895&amp;q=&amp;rpos=109181&amp;rpp=10&amp;_dyncharset=UTF-8&amp;sort=&amp;url=/culture-and-gender-intimate-relation-ksiazka,prod56534895,p">
              <img alt="alt msg" src="/b/s/b9/03/b9038292d147a582add07ee1f0607827.jpg">
            </a>
          </div>
          <div class="desc">
            <div class="l1">
              <div class="icons"> </div>
              <table cellspacing="0" cellpadding="0" border="0">
                <tbody>
                  <tr>
                    <td>
                      <div class="fleft">
                        <a class="underline red bold" href="/searchClickThru?pid=prod56534895&amp;q=&amp;rpos=109181&amp;rpp=10&amp;_dyncharset=UTF-8&amp;sort=&amp;url=/culture-and-gender-intimate-relation-ksiazka,prod56534895,p">
                          Culture And Gender <br>Intimate Relation</a>
                      </div>
                      <div class="fleft"> </div>
                    </td>
                  </tr>
                </tbody>
              </table>
            </div>
            <div class="l2">
              <div> </div>
              <div>
                <div class="but"> </div>
              </div>
            </div>
            <div class="l3">
              Long description
              <a class="underlinepix_red no_wrap" rel="nofollow" href="/searchClickThru?pid=prod56534895&amp;q=&amp;rpos=109181&amp;rpp=10&amp;_dyncharset=UTF-8&amp;sort=&amp;url=/culture-and-gender-intimate-relation-ksiazka,prod56534895,p">
                more<img alt="" src="/b/img/arr_red_sm.gif">
              </a>
            </div>
          </div>
        </div>
        <div class="omit"></div>
        <div class="row ">
          <div class="photo">
            <a rel="nofollow" href="/searchClickThru?pid=prod56534895&amp;q=&amp;rpos=109181&amp;rpp=10&amp;_dyncharset=UTF-8&amp;sort=&amp;url=/culture-and-gender-intimate-relation-ksiazka,prod56534899,p">
              <img alt="alt msg" src="/b/s/b9/03/b9038292d147a582add07ee1f06078222.jpg">
            </a>
          </div>
          <div class="desc">
            <div class="l1">
              <div class="icons"> </div>
              <table cellspacing="0" cellpadding="0" border="0">
                <tbody>
                  <tr>
                    <td>
                      <div class="fleft">
                        <a class="underline red bold" href="/searchClickThru?pid=prod56534895&amp;q=&amp;rpos=109181&amp;rpp=10&amp;_dyncharset=UTF-8&amp;sort=&amp;url=/culture-and-gender-intimate-relation-ksiazka,prod5653489225,p">
                          Culture And Gender <br>Intimate Relation</a>
                      </div>
                      <div class="fleft"> </div>
                    </td>
                  </tr>
                </tbody>
              </table>
            </div>
            <div class="l2">
              <div> </div>
              <div>
                <div class="but"> </div>
              </div>
            </div>
            <div class="l3">
              Long description
              <a class="underlinepix_red no_wrap" rel="nofollow" href="/searchClickThru?pid=prod56534895&amp;q=&amp;rpos=109181&amp;rpp=10&amp;_dyncharset=UTF-8&amp;sort=&amp;url=/culture-and-gender-intimate-relation-ksiazka,prod56534895,p">
                more<img alt="" src="/b/img/arr_red_sm.gif">
              </a>
            </div>
          </div>
        </div>

    Can anybody help me create a suitable regex?
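    For what it's worth, a regex can pull out the anchors that carry a fixed class attribute, although for grabbing whole nested <div class="row "> blocks an HTML parser (for example HtmlAgilityPack in .NET, or jsoup on the JVM) is usually more robust than regular expressions. Below is a rough sketch of the anchor-matching idea, written in Java purely for illustration; the same pattern string can be used with .NET's Regex class. It assumes the class attribute appears before href, exactly as in the sample markup above.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class AnchorExtractor {

            // Matches <a ... class="underline red bold" ... href="...">, assuming the
            // class attribute comes before href as in the sample markup above.
            private static final Pattern ANCHOR = Pattern.compile(
                    "<a\\b[^>]*\\bclass=\"underline red bold\"[^>]*\\bhref=\"([^\"]*)\"",
                    Pattern.CASE_INSENSITIVE);

            public static List<String> extractUrls(String html) {
                List<String> urls = new ArrayList<String>();
                Matcher m = ANCHOR.matcher(html);
                while (m.find()) {
                    // group(1) is the raw href value; decode the &amp; entities it contains.
                    urls.add(m.group(1).replace("&amp;", "&"));
                }
                return urls;
            }
        }

    In .NET the equivalent would be constructing a Regex with the same pattern and RegexOptions.IgnoreCase and looping over Matches(html); if the attribute order can vary, an HTML parser is the safer choice.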


  • Cannot add namespace prefix to children using XSL

    - by Erdal
    I checked many answers here and I think I am almost there. One thing that is bugging me (and for some reason my peer needs it) follows. I have the following input XML:

        <?xml version="1.0" encoding="utf-8"?>
        <MyRoot>
          <MyRequest CompletionCode="0" CustomerID="9999999999"/>
          <List TotalList="1">
            <Order CustomerID="999999999" OrderNo="0000000001" Status="Shipped">
              <BillToAddress ZipCode="22221"/>
              <ShipToAddress ZipCode="22222"/>
              <Totals Tax="0.50" SubTotal="10.00" Shipping="4.95"/>
            </Order>
          </List>
          <Errors/>
        </MyRoot>

    I was asked to produce this:

        <ns:MyNewRoot xmlns:ns="http://schemas.foo.com/response"
                      xmlns:N1="http://schemas.foo.com/request"
                      xmlns:N2="http://schemas.foo.com/details">
          <N1:MyRequest CompletionCode="0" CustomerID="9999999999"/>
          <ns:List TotalList="1">
            <N2:Order CustomerID="999999999" Level="Preferred" Status="Shipped">
              <N2:BillToAddress ZipCode="22221"/>
              <N2:ShipToAddress ZipCode="22222"/>
              <N2:Totals Tax="0.50" SubTotal="10.00" Shipping="4.95"/>
            </N2:Order>
          </ns:List>
          <ns:Errors/>
        </ns:MyNewRoot>

    Note that the children of N2:Order also need the N2: prefix, as well as the ns: prefix for the rest of the elements. I use the XSL transformation below:

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output omit-xml-declaration="yes" indent="yes"/>

          <xsl:template match="@* | node()">
            <xsl:copy>
              <xsl:apply-templates select="@* | node()"/>
            </xsl:copy>
          </xsl:template>

          <xsl:template match="/MyRoot">
            <MyNewRoot xmlns="http://schemas.foo.com/response"
                       xmlns:N1="http://schemas.foo.com/request"
                       xmlns:N2="http://schemas.foo.com/details">
              <xsl:apply-templates/>
            </MyNewRoot>
          </xsl:template>

          <xsl:template match="/MyRoot/MyRequest">
            <xsl:element name="N1:{name()}" namespace="http://schemas.foo.com/request">
              <xsl:copy-of select="namespace::*"/>
              <xsl:apply-templates select="@* | node()"/>
            </xsl:element>
          </xsl:template>

          <xsl:template match="/MyRoot/List/Order">
            <xsl:element name="N2:{name()}" namespace="http://schemas.foo.com/details">
              <xsl:copy-of select="namespace::*"/>
              <xsl:apply-templates select="@* | node()"/>
            </xsl:element>
          </xsl:template>
        </xsl:stylesheet>

    This one doesn't process the ns prefix (I couldn't figure this out). When I process the input through the above XSL transformation with AltovaXML I end up with the following:

        <MyNewRoot xmlns="http://schemas.foo.com/response"
                   xmlns:N1="http://schemas.foo.com/request"
                   xmlns:N2="http://schemas.foo.com/details">
          <N1:MyRequest CompletionCode="0" CustomerID="9999999999"/>
          <List xmlns="" TotalList="1">
            <N2:Order CustomerID="999999999" Level="Preferred" Status="Shipped">
              <BillToAddress ZipCode="22221"/>
              <ShipToAddress ZipCode="22222"/>
              <Totals Tax="0.50" SubTotal="10.00" Shipping="4.95"/>
            </N2:Order>
          </List>
          <Errors/>
        </MyNewRoot>

    Note that the N2: prefix for the children of Order is not there after the XSL transformation. There is also an additional xmlns="" in the Order header (for some reason), and I couldn't figure out how to put the ns: prefix on the rest of the elements (like Errors and List). First of all, why would I need to put the prefix on the children if the parent already has it? Doesn't the parent's namespace dictate the namespaces of the child nodes and attributes? Secondly, I want to add the prefixes to the above XML as expected; how can I do that with XSL?

