Search Results

Search found 33382 results on 1336 pages for 'end tag'.


  • How to design data storage for partitioned tagging system?

    - by Morgan Cheng
    How should data storage be designed for a huge tagging system (like Digg or Delicious)? There is already discussion about this, but it assumes a centralized database. Since the data is expected to grow, we'll need to partition it into multiple shards sooner or later, so the question becomes: how do you design data storage for a partitioned tagging system? The tagging system basically has 3 tables: Item (item_id, item_content), Tag (tag_id, tag_title) and TagMapping (map_id, tag_id, item_id). That works fine for finding all items for a given tag and all tags for a given item, as long as the tables are stored in one database instance. If we need to partition the data across multiple database instances, it is not that easy. Table Item can be partitioned by its key item_id, and table Tag by its key tag_id; for example, to partition table Tag across K databases, we can simply store a given tag in database number (tag_id % K). But how do we partition table TagMapping? It represents a many-to-many relationship, and the only approach I can imagine is duplication: the same TagMapping content is kept in two copies, one partitioned by tag_id and the other by item_id. To find tags for a given item we use the copy partitioned by item_id; to find items for a given tag we use the copy partitioned by tag_id. As a result there is data redundancy, and the application level has to keep all the tables consistent, which looks hard. Is there any better solution to this many-to-many partitioning problem?
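    A minimal sketch of the duplicated-mapping idea, assuming MySQL-style DDL; the table names and the modulo routing convention are illustrative, not from the question:

        -- Copy 1: sharded by item_id, answers "all tags for a given item"
        -- (application routes to shard item_id % K)
        CREATE TABLE TagMappingByItem (
            map_id  BIGINT NOT NULL,
            item_id BIGINT NOT NULL,
            tag_id  BIGINT NOT NULL,
            PRIMARY KEY (item_id, tag_id)
        );

        -- Copy 2: sharded by tag_id, answers "all items for a given tag"
        -- (application routes to shard tag_id % K)
        CREATE TABLE TagMappingByTag (
            map_id  BIGINT NOT NULL,
            tag_id  BIGINT NOT NULL,
            item_id BIGINT NOT NULL,
            PRIMARY KEY (tag_id, item_id)
        );

        -- Every mapping is written twice, once to each copy; the application
        -- (or an async queue) is responsible for keeping the copies in sync.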

  • U1 music mp3 files not put into albums

    - by david
    Via the web page I can see that my files sync to the U1 cloud servers. For the MP3 files there seems to be a problem that several questions have already raised, but without a clear answer. If I use EasyTAG 2.1.6, I can see the ID3 tags on the local files, and they seem to correctly define the artist, album title and track name. I expect it is not relevant, but I am using 10.04 with several different clients to rip the CDs. However, some MP3 files do not appear in the cloud at all, and some others get assigned to Various Artist or Unknown artist. Does the music streaming (e.g. via iPad) use the tags or the directory/file structure to assign the artist or album, and how quickly should it be expected to work? :-) Which version of ID3 tags does U1 music streaming work best with or prefer? Thanks for any help, David

  • Xen VPS web front end manager

    - by JavaRocky
    Do any Xen VPS web front-end managers exist for provisioning and managing virtual private servers? I am comfortable creating new virtual machines on the command line, but it would be nice if I could use a web front end.

  • A music player that can handle multiple artist tags

    - by Keidax
    The mp3 format can handle multiple artists per track (in the form of "artist1\artist2"), and as far as I know other modern music formats can do the same thing. However, Rhythmbox (my default music player) seems to be capable of only reading the first artist. Are there any music players that can read and sort songs with multiple artists, or a plugin for Rhythmbox that can provide this functionality?

  • Mediamonkey ratings imported into Rhythmbox/Banshee/Quod Libet

    - by JoshK
    I recently made a complete shift from Windows 7 to Ubuntu 12.04. Everything went smoothly until I scanned my music files in some of the Ubuntu players. All of my ratings added in Windows using MediaMonkey were on a five-star scale (some with half-star precision), but when imported into Rhythmbox, Banshee and Quod Libet they all changed to an out-of-four-stars rating, with no five-star songs and no indication of how the old ratings were mapped into the new system. Does anyone know how to fix this? Even knowing how the ratings were mapped from the 5-star system (with half-star increments) to the 4-star basis would be very helpful. Thank you!

  • Will Google penalize my website if I hide the H1 tag?

    - by mickburkejnr
    I read an article today in which the author stated that if you put keywords onto your page but then hide them with CSS, Google will penalize your site. This makes sense. It got me thinking, though, about my own technique when I build a website. If, for example, the logo contains the name of the website, I tend to put the name of the website in an H1 tag and then hide this tag. I don't know why I do it; I've always done it. I also include any text held in an image in the alt attribute of the img tag. But because I am hiding the H1 tag, does this leave me open to Google penalizing the website for hiding this one tag?
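    For reference, a minimal sketch of the pattern in question - a site name in an H1 that is visually hidden behind a logo (the class name is illustrative). Off-screen positioning keeps the text available to screen readers, unlike display:none:

        <h1 class="site-title">Example Site Name</h1>

        <style>
          /* Move the heading off-screen instead of using display:none,
             so assistive technology can still read it. */
          .site-title {
            position: absolute;
            left: -9999px;
          }
        </style>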

  • Matlab Simulation: Point (symbol) Moving from start point to end point and back

    - by niko
    Hi, I would like to create an animation to demonstrate LDPC decoding, which is based on the Sum-Product Algorithm. So far I have created a graph which shows the connections between symbol nodes (left) and parity nodes (right), and I would like to animate points travelling from symbol to parity nodes and back. The figure is drawn by executing the following method:

        function drawVertices(H)
            hold on;
            nodesCount = size(H);
            parityNodesCount = nodesCount(1);
            symbolNodesCount = nodesCount(2);

            symbolPoints = zeros(symbolNodesCount, 2);
            symbolPoints(:, 1) = 0;
            for i = 0 : symbolNodesCount - 1
                ji = symbolNodesCount - i;
                scatter(0, ji)
                symbolPoints(i + 1, 2) = ji;
            end;

            parityPoints = zeros(parityNodesCount, 2);
            parityPoints(:, 1) = 10;
            for i = 0 : parityNodesCount - 1
                ji = parityNodesCount - i;
                y0 = symbolNodesCount/2 - parityNodesCount/2;
                scatter(10, y0 + ji)
                parityPoints(i + 1, 2) = y0 + ji;
            end;

            axis([-1 11 -1 symbolNodesCount + 2]);
            axis off

            % connect vertices
            d = size(H);
            for i = 1 : d(1)
                for j = 1 : d(2)
                    if (H(i, j) == 1)
                        plot([parityPoints(i, 1) symbolPoints(j, 1)], [parityPoints(i, 2) symbolPoints(j, 2)]);
                    end;
                end;
            end;

    What I would like to do is add another method which takes a start point (x and y) and an end point as arguments and animates a travelling circle (dot) from start to end and back along the displayed lines. I would appreciate it if anyone could show a solution or suggest a useful tutorial about MATLAB animations. Thank you!
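    A minimal sketch of such a method, assuming the points are given in the same coordinate system that drawVertices uses; the function name, step count and pause length are illustrative:

        function animateDot(startPoint, endPoint)
            % draw the moving dot once, then update its position in place
            h = plot(startPoint(1), startPoint(2), 'ro', 'MarkerFaceColor', 'r');
            steps = 50;
            % t runs 0 -> 1 (towards endPoint) and back 1 -> 0
            for t = [linspace(0, 1, steps), linspace(1, 0, steps)]
                p = (1 - t) * startPoint + t * endPoint;   % linear interpolation
                set(h, 'XData', p(1), 'YData', p(2));
                drawnow;
                pause(0.01);
            end
            delete(h);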

  • Grabbing rows from MySql where current date is in between start date and end date (Check if current date lies between start date and end date)

    - by Jordan Parker
    I'm trying to select from the database the "campaigns" whose dates fall into the current month. So far I've been successful in grabbing rows that start or end inside the current month. What I need to do now is select rows that start in one month and end a few months down the line (e.g. it's the 3rd month of the year and there's a campaign that runs from the 1st month until the 5th; another example: a campaign that runs from 2012 until 2013). I'm hoping there is some way to select via MySQL all rows in which a campaign may run. If not, should I grab all the data in the database and only show the rows that run through the current month? I have already made a function called "dateRange" that returns all the days between two dates in an array, and another called "runTime" that shows how many days a campaign runs for.

    Select all (obviously):

        $result = mysql_query("SELECT * FROM campaign");

    Select starting this month:

        $result = mysql_query("SELECT * FROM campaign WHERE YEAR(START) = YEAR(CURDATE()) AND MONTH(START) = MONTH(CURDATE())");

    Select ending this month:

        $result = mysql_query("SELECT * FROM campaign WHERE YEAR(END) = YEAR(CURDATE()) AND MONTH(END) = MONTH(CURDATE()) LIMIT 0, 30");

    Code sample:

        while ($row = mysql_fetch_array($result)) {
            $dateArray = dateRange($row['start'], $row['end']);
            echo "<h3>" . $row['campname'] . "</h3> Start " . $row['start'] . "<br /> End " . $row['end'];
            echo runTime($row['start'], $row['end']);
            print_r($dateArray);
        }

    As for the dates, the MySQL database only holds the start date and end date of each campaign.
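    A sketch of the overlap test this seems to call for: a campaign touches the current month when it starts no later than the last day of the month and ends no earlier than the first day. Column names follow the question; the backticks are defensive, in case the column names clash with keywords:

        // campaigns active at any point during the current month
        $result = mysql_query("SELECT * FROM campaign
                               WHERE `start` <= LAST_DAY(CURDATE())
                                 AND `end` >= DATE_FORMAT(CURDATE(), '%Y-%m-01')");

        // campaigns active today
        $result = mysql_query("SELECT * FROM campaign
                               WHERE CURDATE() BETWEEN `start` AND `end`");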

  • Stop writing blank line at the end of CSV file (using MATLAB)

    - by Grant M.
    Hello all. I'm using MATLAB to open a batch of CSV files containing column headers and data (using the importdata function); then I manipulate the data a bit and write the headers and data to new CSV files using the dlmwrite function. I'm using the '-append' and 'newline' attributes of dlmwrite to add each line of text/data on a new line. Each of my new CSV files has a blank line at the end, whereas this blank line was not there when I read the data in, and I'm not using 'newline' on my final call to dlmwrite. Does anyone know how I can keep from writing this blank line to the end of my CSV files? Thanks for your help, Grant

    EDITED 5/18/10 1:35PM CST - Added information about code and text file per request. You'll notice after performing the procedure below that there appears to be a carriage return at the end of the last line in the new text file. Consider a text file named 'textfile.txt' that looks like this:

        Column1, Column2, Column3, Column4, Column 5
        1, 2, 3, 4, 5
        1, 2, 3, 4, 5
        1, 2, 3, 4, 5
        1, 2, 3, 4, 5
        1, 2, 3, 4, 5

    Here's a sample of the code I am using:

        % import data
        importedData = importdata('textfile.txt');

        % manipulate data
        importedData.data(:,1) = 100;

        % store column headers into single comma-delimited
        % character array (for easy writing later)
        columnHeaders = importedData.textdata{1};
        for counter = 2:size(importedData.textdata,2)
            columnHeaders = horzcat(columnHeaders,',',importedData.textdata{counter});
        end

        % write column headers to new file
        dlmwrite('textfile_updated.txt',columnHeaders,'Delimiter','','newline','pc')

        % append all but the last row of data to the new file
        for dataCounter = 1:(size(importedData.data,1)-1)
            dlmwrite('textfile_updated.txt',importedData.data(dataCounter,:),'Delimiter',',','newline','pc','-append')
        end

        % append last row of data to the new file, not
        % creating a new line at the end
        dlmwrite('textfile_updated.txt',importedData.data(end,:),'Delimiter',',','-append')
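    One workaround, sketched here under the assumption that the data is numeric: write the last row with low-level file I/O instead of dlmwrite, since fprintf emits no newline unless the format string asks for one:

        % append the last row without any trailing newline
        fid = fopen('textfile_updated.txt', 'a');
        lastRow = importedData.data(end,:);
        fprintf(fid, '%g,', lastRow(1:end-1));   % all values but the final one
        fprintf(fid, '%g', lastRow(end));        % final value, no newline
        fclose(fid);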

  • Hyper-V File Server Clustering - at my wit’s end

    - by René Kåbis
    I am at my wit’s end with file server clustering under Hyper-V. I am hoping that someone might be able to help me figure out this Gordian knot of a technology that seems to have dead ends (like forcing cluster VMs to use iSCSI drives where normally-attached VHDX drives could suffice) where logic and reason would normally provide a solution. My hardware: I will be running three servers (in the end), but right now everything is taking place on one server. One of the secondary servers will exist purely as a witness/quorum, and another slightly more powerful one will be acting as an emergency backup (with additional storage, just not redundant) to hold the secondary AD VM and the other halves of a set of clustered VMs: the SQL VM and the file system VM. Please note, these are each the subordinate nodes of their clusters; the main nodes will be on the most powerful first machine. My heavy lifter is a machine that also contains all of the truly redundant storage on the network. If this gives anyone the heebie-jeebies, too bad. It has a 6TB (usable) RAID-10 array and will (in the end) hold the primary nodes of both aforementioned clusters, but right now it is holding all the VMs: DC01, DC02, SQL01, SQL02, FS01 & FS02. Eventually I will be adding additional VMs to handle Exchange, SharePoint and Lync, but only to this main server (the secondary server won't be able to handle more than three or four VMs, so why burden it? The AD, SQL & FS VMs are the most critical for the business). If anyone is now saying, “wait, what about a SAN or a NAS for the file servers?”, well, too bad. What exists on the main machine is what I have to deal with. I followed these instructions, but I seem to be unable to get things to work. In order to make the file server truly redundant, I cannot trust any one machine to hold the only data store on the network. Therefore, I have created a set of iSCSI drives on the VM host of the main machine and attached one to each file server VM. The end result is that I want my FS01 to sit on the heavy lifter, along with its iSCSI “drive”, and FS02 to sit on the secondary machine with its own iSCSI “drive” there as well. That is, neither iSCSI drive will end up sitting on the same machine as the other. The clustered FS will then fully duplicate the contents of the iSCSI drives between each other, so that if one physical machine (or the FS VM) goes toes-up, the other has a full copy of the data on its own iSCSI drive. My problem occurs when I try to apply the file server role within Failover Cluster Manager. Actually, it is even before that -- it occurs when adding the disks. Since I have added each disk preferentially to a specific VM (by limiting the initiator by DNS hostname, and by adding two-way CHAP authentication), each VM is forced to be in control of its own iSCSI disk. However, when I try to add the disks to the Disks section of Storage within Failover Cluster Manager, the process fails for a random disk of the pair. That is, one will come online, but the other will remain offline because it does not have the correct “owner node”. I mean, really -- WTF? Of course it doesn’t have the right owner node; both drives are showing the same node name! I cannot seem to have one drive show up with one node name as owner and the other drive show up with the other node name as owner. And because both drives are not “online”, I cannot create a pool to apply to a cluster role. Talk about getting stuck between a rock and a hard place!
I’ve got more to add, but my work is closing for the day and I have to wrap things up. I will try to add more tomorrow morning when I get in. My main objective is to have a file server VM on each machine, the storage on each machine, but a transparent failover in case one physical machine fails. Essentially, a failover FS that doesn’t care which machine fails -- the storage contents are replicated equally on each machine. Am I even heading in the right direction?

  • Microsoft SQL Server 2005 Function, passing list of start and end times

    - by Kevin
    I'd like to have a dynamic number of start/end time pairs passed to a function as an input parameter. The function would then use the whole list, instead of just one start time and one end time, in a select statement.

        CREATE FUNCTION [dbo].[GetData]
        (
            @StartTime datetime,
            @EndTime datetime
        )
        RETURNS int
        AS
        BEGIN
            SELECT @EndTime = CASE WHEN @EndTime > CURRENT_TIMESTAMP
                                   THEN CURRENT_TIMESTAMP
                                   ELSE @EndTime END

            DECLARE @TempStates TABLE
            (
                StartTime datetime NOT NULL,
                EndTime datetime NOT NULL,
                StateIdentity int NOT NULL
            )

            INSERT INTO @TempStates
            SELECT StartTime, EndTime, StateIdentity
            FROM State
            WHERE StartTime <= @EndTime
              AND EndTime >= @StartTime

            RETURN 0
        END
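    SQL Server 2005 has no table-valued parameters (those arrived in 2008), so one common workaround is to pass the pairs as XML and shred them with nodes(). A sketch, with the function name and the XML shape invented for illustration:

        -- caller passes e.g.
        -- N'<pair start="2010-01-01" end="2010-01-31"/><pair start="2010-03-01" end="2010-03-15"/>'
        CREATE FUNCTION [dbo].[GetDataForPairs]
        (
            @Pairs xml
        )
        RETURNS int
        AS
        BEGIN
            DECLARE @TempStates TABLE
            (
                StartTime datetime NOT NULL,
                EndTime datetime NOT NULL,
                StateIdentity int NOT NULL
            )

            -- one row per <pair/>; a State row qualifies if it overlaps any pair
            INSERT INTO @TempStates
            SELECT s.StartTime, s.EndTime, s.StateIdentity
            FROM @Pairs.nodes('/pair') AS px(p)
            JOIN State s
              ON s.StartTime <= px.p.value('@end', 'datetime')
             AND s.EndTime >= px.p.value('@start', 'datetime')

            RETURN 0
        END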

  • Sort Order With End Year and Start Year

    - by Maletor
    I'm looking to write a comparator to sort the items in a list. Items without an end year should be at the top; items with an end year come next; and for items with the same end year, the one with the lowest start year comes first. Something I have so far:

        [item.get('end_year'), item.get('start_year')]

    Test scenarios (the first value is the end year, the second is the start year; "" means the value is not present):

        "",   ""
        "",   2012
        "",   2011
        2012, 2005
        2012, 2008
        2011, 2011
        2010, 2005

  • Openconnect for Cisco VPN doesn't recognize private key file - asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag

    - by Alexander Skwar
    I'm trying to make my Synology DS212 NAS box also act as a VPN gateway to my company's VPN. Sadly, they only use Cisco ASA, and to complicate things even further we have to use personal certificates (which is of course more secure, but more complicated to get going). So I compiled OpenConnect v4.06 from http://www.infradead.org/openconnect/. As a very basic test, I tried to build a connection by manually invoking openconnect, passing along the key and cert files, like so:

        /lib/ld-linux.so.3 --library-path /opt/lib \
            /opt/openconnect/sbin/openconnect \
            --certificate=$VPN_CFG/alexander.crt \
            --sslkey=$VPN_CFG/alexander.key \
            --cafile=$VPN_CFG/Company_VPN_CA.crt \
            --user=alexander --verbose <ip>:443

    It fails :(

        Attempting to connect to <ip>:443
        Using certificate file $VPN_CFG/alexander.crt
        Using client certificate '/[email protected]/OU=Company VPN'
        5919:error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1315:
        Loading private key failed (see above errors)
        Loading certificate failed. Aborting.
        Failed to open HTTPS connection to <ip>
        Failed to obtain WebVPN cookie

    When I run the same command with the same cert/key files on an Ubuntu 12.04 box, it works:

        openconnect \
            --certificate=$VPN_CFG/alexander.crt \
            --sslkey=$VPN_CFG/alexander.key \
            --cafile=$VPN_CFG/Company_VPN_CA.crt \
            --user=alexander --verbose <ip>:443

        Attempting to connect to <ip>:443
        Using certificate file $VPN_CFG/alexander.crt
        Extra cert from cafile: '/CN=Company AG VPN CA/O=Company AG/L=Zurich/ST=ZH/C=CH'
        SSL negotiation with <ip>
        Server certificate verify failed: self signed certificate
        Certificate from VPN server "<ip>" failed verification.
        Reason: self signed certificate
        Enter 'yes' to accept, 'no' to abort; anything else to view: yes
        Connected to HTTPS on <ip>
        GET https://<ip>/
        […]

    So the error on the NAS is this:

        5919:error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1315:

    Any ideas what's causing this? On the Syno I use OpenConnect 4.06; on Ubuntu I also compiled OpenConnect 4.06 and installed it to a custom location. Thanks, Alexander
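    An ASN.1 "wrong tag" at key-loading time usually means the key file's encoding isn't what the crypto library on that box expects. A hedged first thing to try, assuming the key is an RSA key: re-encode it as a traditional PEM key with OpenSSL and point --sslkey at the converted file (the output file name is illustrative):

        # rewrite the private key in traditional PKCS#1 PEM encoding
        openssl rsa -in $VPN_CFG/alexander.key -out $VPN_CFG/alexander-rsa.key

        # then retry openconnect with:
        #   --sslkey=$VPN_CFG/alexander-rsa.key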

  • Deserialize complex JSON (VB.NET)

    - by Ssstefan
    I'm trying to deserialize JSON returned by a directions API similar to the Google Maps API. My JSON is as follows (I'm using VB.NET 2008):

        jsontext =
        {
            "version": 0.3,
            "status": 0,
            "route_summary":
            {
                "total_distance": 300,
                "total_time": 14,
                "start_point": "43",
                "end_point": "42"
            },
            "route_geometry": [[51.025421,18.647631],[51.026131,18.6471],[51.027802,18.645639]],
            "route_instructions": [["Head northwest on 43",88,0,4,"88 m","NW",334.8],["Continue on 42",212,1,10,"0.2 km","NW",331.1,"C",356.3]]
        }

    So far I have come up with the following code:

        Dim js As New System.Web.Script.Serialization.JavaScriptSerializer
        Dim lstTextAreas As Output_CloudMade() = js.Deserialize(Of Output_CloudMade())(jsontext)

    I'm not sure how to define the complex class, i.e. Output_CloudMade. I'm trying something like:

        Public Class RouteSummary
            Private mTotalDist As Long
            Private mTotalTime As Long
            Private mStartPoint As String
            Private mEndPoint As String

            Public Property TotalDist() As Long
                Get
                    Return mTotalDist
                End Get
                Set(ByVal value As Long)
                    mTotalDist = value
                End Set
            End Property

            Public Property TotalTime() As Long
                Get
                    Return mTotalTime
                End Get
                Set(ByVal value As Long)
                    mTotalTime = value
                End Set
            End Property

            Public Property StartPoint() As String
                Get
                    Return mStartPoint
                End Get
                Set(ByVal value As String)
                    mStartPoint = value
                End Set
            End Property

            Public Property EndPoint() As String
                Get
                    Return mEndPoint
                End Get
                Set(ByVal value As String)
                    mEndPoint = value
                End Set
            End Property
        End Class

        Public Class Output_CloudMade
            Private mVersion As Double
            Private mStatus As Long
            Private mRSummary As RouteSummary
            'Private mRGeometry As RouteGeometry
            'Private mRInstructions As RouteInstructions

            Public Property Version() As Double
                Get
                    Return mVersion
                End Get
                Set(ByVal value As Double)
                    mVersion = value
                End Set
            End Property

            Public Property Status() As Long
                Get
                    Return mStatus
                End Get
                Set(ByVal value As Long)
                    mStatus = value
                End Set
            End Property

            Public Property Summary() As RouteSummary
                Get
                    Return mRSummary
                End Get
                Set(ByVal value As RouteSummary)
                    mRSummary = value
                End Set
            End Property

            'Public Property Geometry() As String
            '    Get
            '    End Get
            '    Set(ByVal value As String)
            '    End Set
            'End Property

            'Public Property Instructions() As String
            '    Get
            '    End Get
            '    Set(ByVal value As String)
            '    End Set
            'End Property
        End Class

    ...but it does not work. The problem is with the complex properties, like route_summary: it comes back filled with Nothing. Other properties, like "status" or "version", are filled properly. Any ideas how to define a class for the above JSON? Can you share some working code for deserializing JSON in VB.NET? Thanks,
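    A likely cause, for what it's worth: JavaScriptSerializer maps JSON keys onto members by name, so "version" and "status" line up with Version and Status, but nothing on the class is called "route_summary", which would explain why Summary stays Nothing. The sample JSON is also a single object, not an array. A sketch of classes whose member names mirror the JSON keys (the class names are mine, the types are inferred from the sample, and route_instructions is omitted because its entries are mixed-typed):

        Public Class RouteSummaryDto
            Public total_distance As Long
            Public total_time As Long
            Public start_point As String
            Public end_point As String
        End Class

        Public Class OutputCloudMadeDto
            Public version As Double
            Public status As Long
            Public route_summary As RouteSummaryDto
            Public route_geometry As Double()()
        End Class

        ' deserialize a single object, not an array:
        Dim js As New System.Web.Script.Serialization.JavaScriptSerializer
        Dim result As OutputCloudMadeDto = js.Deserialize(Of OutputCloudMadeDto)(jsontext)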

  • link_to passing parameter and display problem - tag feature - Ruby on Rails

    - by bgadoci
    I have gotten a great deal of help from KandadaBoggu on my last question, and I'm very thankful for that. As we were getting buried in the comments, I wanted to break this part out. I am attempting to create a tag feature on the Rails blog I am developing. The relationship is Post has_many :tags and Tag belongs_to :post. Adding and deleting tags on posts works great. In my /views/posts/index.html.erb I have a section called tags where I successfully query the Tags table, group the tags, and display the count next to each tag name (as a side note, I mistakenly called the column containing the tag name 'tag_name' instead of just 'name', as I should have). Each of these groups is displayed as a link that references the index method in the PostsController. That is where the problem is. When you navigate to /posts without clicking a tag-group link, you get an error because no parameter is being passed. I have the .empty? check in there, so I'm not sure what is going wrong. Here is the error and the code:

    Error:

        You have a nil object when you didn't expect it!
        You might have expected an instance of Array.
        The error occurred while evaluating nil.empty?

    /views/posts/index.html.erb:

        <% @tag_counts.each do |tag_name, tag_count| %>
          <tr>
            <td><%= link_to(tag_name, posts_path(:tag_name => tag_name)) %></td>
            <td>(<%= tag_count %>)</td>
          </tr>
        <% end %>

    PostsController:

        def index
          @tag_counts = Tag.count(:group => :tag_name,
                                  :order => 'updated_at DESC', :limit => 10)
          @posts = Post.all(:joins => :tags,
                            :conditions => (params[:tag_name].empty? ? {} : { :tags => { :tag_name => params[:tag_name] } }))
          respond_to do |format|
            format.html # index.html.erb
            format.xml  { render :xml => @posts }
            format.json { render :json => @posts }
            format.atom
          end
        end
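    The error message itself points at the cause: when no tag link was clicked, params[:tag_name] is nil, and nil does not respond to .empty? (only strings and collections do). Rails' blank? handles both nil and the empty string. A sketch of the condition rewritten that way:

        @posts = Post.all(:joins => :tags,
                          :conditions => (params[:tag_name].blank? ? {} : { :tags => { :tag_name => params[:tag_name] } }))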

  • How can I transition from a front-end career to a back-end career?

    - by jexx2345
    Hello. In 2004 I received my B.S. in Computer Science, using mostly Java for programming. Since then I have been hired for purely front-end positions in large companies through recruiters, doing primarily HTML/CSS, JavaScript/jQuery, and OOP ActionScript 3. While I definitely have respect for the front end, and have learned much, I feel very unfulfilled and would love to move to the back end for the more complex programming I was doing in school. My target platform of choice is ASP.NET 3.5 (C#). With a resume that has absolutely no .NET or back-end experience, how can I transition into a junior back-end job? I am currently freelancing a .NET e-commerce site, and plan to build a portfolio showcasing some apps (e-commerce MVC, blog, etc.) while learning the technology. Is having a Bachelor's in CS enough, or should I look into getting .NET certified? Is showing coursework still relevant to an employer even though it is from 6 years ago? Thank you

  • Access DB with SQL Server Back End

    - by uyuni99
    I have an old Access application that has a lot of code in forms and reports. The database is getting too large, and I am thinking of moving the back end to SQL Server. My requirements are as follows: the DB needs to be multiuser, the users (3-5) will need to log in over the web, and I would prefer not to re-write the forms and reports in ASP or some other web front end. When I think about my choices, I see them as: (1) Have an Access ADP front end that allows remote log-in to the server where it is stored; I am not sure it is possible for 2 users to log in simultaneously. (2) Distribute an ADP front end to the users, but I am not sure it is possible to connect to a SQL Server back end over the internet, and the network traffic may be an issue. Any other solution? I appreciate all help. u

  • Adding RSS to tags in Orchard

    - by Bertrand Le Roy
    A year ago, I wrote a scary post about RSS in Orchard. RSS was one of the first features we implemented in our CMS, and it has stood the test of time rather well, but the post was explaining things at a level that was probably too abstract, whereas my readers were expecting something a little more practical. Well, this post is going to correct this by showing how I built a module that adds RSS feeds for each tag on the site. Hopefully it will show that it's not very complicated in practice, and also that the infrastructure is pretty well thought out. In order to provide RSS, we need to do two things: generate the XML for the feed, and inject the address of that feed into the existing tag listing page, in order to make the feed discoverable. Let's start with the discoverability part. One might be tempted to replace the controller or the view that are responsible for the listing of the items under a tag. Fortunately, there is no need to do any of that, and we can be a lot less obtrusive. Instead, we can implement a filter:

        public class TagRssFilter : FilterProvider, IResultFilter

    On this filter, we can implement the OnResultExecuting method and simply check whether the current request is targeting the list of items under a tag. If that is the case, we can just register our new feed:

        public void OnResultExecuting(ResultExecutingContext filterContext) {
            var routeValues = filterContext.RouteData.Values;
            if (routeValues["area"] as string == "Orchard.Tags" &&
                routeValues["controller"] as string == "Home" &&
                routeValues["action"] as string == "Search") {

                var tag = routeValues["tagName"] as string;
                if (!string.IsNullOrWhiteSpace(tag)) {
                    var workContext = _wca.GetContext();
                    _feedManager.Register(
                        workContext.CurrentSite + " – " + tag,
                        "rss",
                        new RouteValueDictionary { { "tag", tag } }
                    );
                }
            }
        }

    The registration of the new feed is just specifying the title of the feed, its format (RSS) and the parameters that it will need (the tag). _wca and _feedManager are just instances of IWorkContextAccessor and IFeedManager that Orchard injected for us. That is all that's needed to get the following tag added to the head of our page, without touching an existing controller or view:

        <link rel="alternate" type="application/rss+xml" title="VuLu - Science" href="/rss?tag=Science"/>

    Nifty. Of course, if we navigate to the URL of that feed, we'll get a 404. This is because no implementation of IFeedQueryProvider knows about the tag parameter yet. Let's build one that does:

        public class TagFeedQuery : IFeedQueryProvider, IFeedQuery

    IFeedQueryProvider has one method, Match, that we can implement to take over any feed request that has a tag parameter:

        public FeedQueryMatch Match(FeedContext context) {
            var tagName = context.ValueProvider.GetValue("tag");
            if (tagName == null) return null;

            return new FeedQueryMatch { FeedQuery = this, Priority = -5 };
        }

    This is just saying that if there is a tag parameter, we will handle it. All that remains to be done is the actual building of the feed now that we have accepted to handle it. This is done by implementing the Execute method of the IFeedQuery interface:

        public void Execute(FeedContext context) {
            var tagValue = context.ValueProvider.GetValue("tag");
            if (tagValue == null) return;

            var tagName = (string)tagValue.ConvertTo(typeof(string));
            var tag = _tagService.GetTagByName(tagName);
            if (tag == null) return;

            var site = _services.WorkContext.CurrentSite;
            var link = new XElement("link");
            context.Response.Element.SetElementValue("title", site.SiteName + " - " + tagName);
            context.Response.Element.Add(link);
            context.Response.Element.SetElementValue("description", site.SiteName + " - " + tagName);
            context.Response.Contextualize(requestContext =>
                link.Add(GetTagUrl(tagName, requestContext)));

            var items = _tagService.GetTaggedContentItems(tag.Id, 0, 20);
            foreach (var item in items) {
                context.Builder.AddItem(context, item.ContentItem);
            }
        }

    This code is resolving the tag content item from its name and then getting the content items tagged with it, using the tag services provided by the Orchard.Tags module. Then we add those items to the feed. And that is it. To summarize, we handled the request unobtrusively in order to inject the feed's link, then handled requests for feeds with a tag parameter and generated the list of items for that tag. It remains fairly simple and still it is able to handle arbitrary content types. That makes me quite happy about our little piece of over-engineered code from last year. The full code for this can be found in the Vandelay.TagCloud module: http://orchardproject.net/gallery/List/Modules/Orchard.Module.Vandelay.TagCloud/1.2

  • home, end, delete, pageup, pagedown with ksh

    - by Nicolas
    Hello. I want to use Home, End, Delete, Page Up and Page Down with ksh. My TERM is xterm-color. These keys work fine with tcsh and zsh, but not with ksh (they print a tilde, ~). I found this:

        bind '^[[3'=prefix-2
        bind '^[[3~'=delete-char-forward
        bind '^[[1'=prefix-2
        bind '^[[1~'=beginning-of-line
        bind '^[[4'=prefix-2
        bind '^[[4~'=end-of-line

    But when I set one binding, the previously set one stops working. How can I use these keys in ksh with a .kshrc? Thanks.
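    If this is ksh93 (which lacks a bind builtin), the usual alternative is the KEYBD trap, which lets .kshrc translate the xterm escape sequences into emacs-mode editing keys. A sketch, assuming emacs editing mode; the control characters are the standard Ctrl-A/Ctrl-E/Ctrl-D bindings:

        # in ~/.kshrc
        function _keybd_trap {
            case ${.sh.edchar} in
            $'\E[1~') .sh.edchar=$'\001' ;;  # Home   -> beginning-of-line
            $'\E[4~') .sh.edchar=$'\005' ;;  # End    -> end-of-line
            $'\E[3~') .sh.edchar=$'\004' ;;  # Delete -> delete-char
            esac
        }
        trap _keybd_trap KEYBD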

  • Fn + Right Arrow on MacBook Pro does not work as END Key for SuperUser web site

    - by George2
    Hello everyone. I am using a MacBook Pro running Mac OS X 10.5. I am new to this environment; I previously worked on Windows. Fn + Right Arrow on the Mac normally works like the END key on Windows -- it moves the cursor to the end of the line -- and I have confirmed that it works in most applications, such as Word for Mac. But Fn + Right Arrow does not work in the question input box on the SuperUser site. I am wondering how to solve this issue, and why it happens. Does anyone else have the same issue? Thanks in advance, George

  • How to make HOME or END keys work in mc running on OS X (ssh)

    - by Sorin Sbarnea
    I installed MacPorts on OS X 10.5 and found that when I connect to the computer using SSH and run mc (Midnight Commander), the HOME and END keys do not work. I should mention that I am connecting with PuTTY, and I am able to use the keyboard just fine on Linux machines like Fedora and Ubuntu. Here is my PuTTY keyboard configuration (a configuration I have found to be optimal over time):

        Backspace key: 127
        Home/End keys: Standard
        Function keys: Xterm R6
        Cursor keys: Normal
        Numpad: Normal
        Terminal type string: xterm-color

    I'm looking for a command-line solution/script that makes these changes; that would make it much easier to prepare a script for configuring a new OS.
