Search Results

Search found 7061 results on 283 pages for 'target'.

Page 32/283 | < Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >

  • Opening a new window from ASP.NET code behind

    - by TATWORTH
    At http://weblogs.asp.net/infinitiesloop/archive/2007/09/25/response-redirect-into-a-new-window-with-extension-methods.aspx there is an excellent post on how to open a new window from code behind. The purists may not like it, but it helped solve a problem for a client's client. Here is an update for VS2010 users:

      using System;
      using System.Web;
      using System.Web.UI;

      /// <summary>
      /// Response helper for opening a popup window from code behind.
      /// </summary>
      public static class ResponseHelper
      {
        /// <summary>
        /// Redirect to popup window
        /// </summary>
        /// <param name="response">The response.</param>
        /// <param name="url">URL to open to</param>
        /// <param name="target">Target of window _self or _blank</param>
        /// <param name="windowFeatures">Features such as window bar</param>
        /// <remarks>
        ///   <list type="bullet">
        ///     <item>
        ///     From http://weblogs.asp.net/infinitiesloop/archive/2007/09/25/response-redirect-into-a-new-window-with-extension-methods.aspx
        ///     </item>
        ///     <item>
        ///     Note: If you use it outside the context of a Page request, you can't redirect to a new window. The reason is the need to call the ResolveClientUrl method on Page, which I can't do if there is no Page. I could have just built my own version of that method, but it's more involved than you might think to do it right. So if you need to use this from an HttpHandler other than a Page, you are on your own.
        ///     </item>
        ///     <item>
        ///     Beware of popup blockers.
        ///     </item>
        ///     <item>
        ///     Note: Obviously when you are redirecting to a new window, the current window will still be hanging around. Normally redirects abort the current request -- no further processing occurs. But for these redirects, processing continues, since we still have to serve the response for the current window (which also happens to contain the script to open the new window, so it is important that it completes).
        ///     </item>
        ///     <item>
        ///     Sample call: Response.Redirect("popup.aspx", "_blank", "menubar=0,width=100,height=100");
        ///     </item>
        ///   </list>
        /// </remarks>
        public static void Redirect(this HttpResponse response, string url, string target, string windowFeatures)
        {
          if ((String.IsNullOrEmpty(target) || target.Equals("_self", StringComparison.OrdinalIgnoreCase)) && String.IsNullOrEmpty(windowFeatures))
          {
            response.Redirect(url);
          }
          else
          {
            Page page = (Page)HttpContext.Current.Handler;
            if (page == null)
            {
              throw new InvalidOperationException("Cannot redirect to new window outside Page context.");
            }
            url = page.ResolveClientUrl(url);
            string script;
            if (!String.IsNullOrEmpty(windowFeatures))
            {
              script = @"window.open(""{0}"", ""{1}"", ""{2}"");";
            }
            else
            {
              script = @"window.open(""{0}"", ""{1}"");";
            }
            script = String.Format(script, url, target, windowFeatures);
            ScriptManager.RegisterStartupScript(page, typeof(Page), "Redirect", script, true);
          }
        }
      }
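    A minimal, hypothetical call site for the extension method above (page and control names are made up; it assumes the ResponseHelper class is in scope in a page's code-behind):

      protected void ReportButton_Click(object sender, EventArgs e)
      {
          // Opens report.aspx in a new window with the given features; with "_self" and no
          // window features the method falls back to a normal Response.Redirect.
          Response.Redirect("report.aspx", "_blank", "menubar=0,width=800,height=600");
      }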

  • How would you gather client's data on Google App Engine without using Datastore/Backend Instances too much?

    - by ruslan
    I'm relatively new to StackExchange and not sure if this is an appropriate place to ask a design question. The site gives me a hint: "The question you're asking appears subjective and is likely to be closed". Please let me know. Anyway, one of the projects I'm working on is an online survey engine. It's my first big commercial project on Google App Engine. I need your advice on how to collect stats and efficiently record them in the Datastore without it bankrupting me. The initial requirements are: after a user finishes a survey, the client sends a list of pairs [ID (int) + PercentHit (double)]. This list shows how closely the answers of this user match the predefined answers of reference answerers (which are identified by IDs). I call them "target IDs". The creator of the survey wants to see the aggregated % for given IDs for the last hour, a particular timeframe, or from the beginning of the survey. Some surveys may have thousands of target/reference answerers. So I created this entity:

      public class HitsStatsDO implements Serializable {
        @Id transient private Long id;
        transient private Long version = (long) 0;
        transient private Long startDate;
        @Parent transient private Key parent; // fake parent which contains target id
        @Transient int targetId;
        private double avgPercent;
        private long hitCount;
      }

    But writing a HitsStatsDO for each target from each user would give a lot of data. For instance, I had a survey with 3000 targets which was answered by ~4 million people within one week, with 300K people taking the survey on the first day. Even if we assume they were answering it evenly over 24 hours, it would give us ~1040 writes/second. Obviously that hits the concurrent write limit of the Datastore. I decided I'll collect data for one hour and then save it; that's why there are avgPercent and hitCount in HitsStatsDO. GAE instances are stateless, so I had to use a dynamic backend instance. There I have something like this:

      // Contains stats for one hour
      private class Shard {
        ReadWriteLock lock = new ReentrantReadWriteLock();
        Map<Integer, HitsStatsDO> map = new HashMap<Integer, HitsStatsDO>(); // Key is target ID
        public void saveToDatastore();
        public void updateStats(Long startDate, Map<Integer, Double> hits);
      }

    and a map with the shard for the current hour and the previous hour (which doesn't stay here for long):

      private HashMap<Long, Shard> shards = new HashMap<Long, Shard>(); // Key is HitsStatsDO.startDate

    So once per hour I dump the Shard for the previous hour to the Datastore. Plus I have a class LifetimeStats which keeps a Map<Integer, HitsStatsDO> in memcached, where the map key is the target ID. Also, in my backend shutdown hook method I dump the stats for the unfinished hour to the Datastore. There is only one major issue here - I have only ONE backend instance :) It raises the following questions on which I'd like to hear your opinion: Can I do this without using a backend instance? What if one instance is not enough? How can I split data between multiple dynamic backend instances? It's hard because I don't know how many I have, since Google creates new ones as load increases. I know I can launch an exact number of resident backend instances. But how many? 2, 5, 10? What if I have no load at all for a week? Constantly running 10 backend instances is too expensive. What do I do with data from clients while a backend instance is dead/restarting? Thank you very much in advance for your thoughts.

  • How can I gather client's data on Google App Engine without using Datastore/Backend Instances too much?

    - by ruslan
    One of the projects I'm working on is an online survey engine. It's my first big commercial project on Google App Engine. I need your advice on how to collect stats and efficiently record them in the Datastore without it bankrupting me. The initial requirements are: after a user finishes a survey, the client sends a list of pairs [ID (int) + PercentHit (double)]. This list shows how closely the answers of this user match the predefined answers of reference answerers (which are identified by IDs). I call them "target IDs". The creator of the survey wants to see the aggregated % for given IDs for the last hour, a particular timeframe, or from the beginning of the survey. Some surveys may have thousands of target/reference answerers. So I created this entity:

      public class HitsStatsDO implements Serializable {
        @Id transient private Long id;
        transient private Long version = (long) 0;
        transient private Long startDate;
        @Parent transient private Key parent; // fake parent which contains target id
        @Transient int targetId;
        private double avgPercent;
        private long hitCount;
      }

    But writing a HitsStatsDO for each target from each user would give a lot of data. For instance, I had a survey with 3000 targets which was answered by ~4 million people within one week, with 300K people taking the survey on the first day. Even if we assume they were answering it evenly over 24 hours, it would give us ~1040 writes/second. Obviously that hits the concurrent write limit of the Datastore. I decided I'll collect data for one hour and then save it; that's why there are avgPercent and hitCount in HitsStatsDO. GAE instances are stateless, so I had to use a dynamic backend instance. There I have something like this:

      // Contains stats for one hour
      private class Shard {
        ReadWriteLock lock = new ReentrantReadWriteLock();
        Map<Integer, HitsStatsDO> map = new HashMap<Integer, HitsStatsDO>(); // Key is target ID
        public void saveToDatastore();
        public void updateStats(Long startDate, Map<Integer, Double> hits);
      }

    and a map with the shard for the current hour and the previous hour (which doesn't stay here for long):

      private HashMap<Long, Shard> shards = new HashMap<Long, Shard>(); // Key is HitsStatsDO.startDate

    So once per hour I dump the Shard for the previous hour to the Datastore. Plus I have a class LifetimeStats which keeps a Map<Integer, HitsStatsDO> in memcached, where the map key is the target ID. Also, in my backend shutdown hook method I dump the stats for the unfinished hour to the Datastore. There is only one major issue here - I have only ONE backend instance :) It raises the following questions on which I'd like to hear your opinion: Can I do this without using a backend instance? What if one instance is not enough? How can I split data between multiple dynamic backend instances? It's hard because I don't know how many I have, since Google creates new ones as load increases. I know I can launch an exact number of resident backend instances. But how many? 2, 5, 10? What if I have no load at all for a week? Constantly running 10 backend instances is too expensive. What do I do with data from clients while a backend instance is dead/restarting?

  • Social Targeting: This One's Just for You

    - by Mike Stiles
    Think of social targeting in terms of the archery competition we just saw in the Olympics. If someone loaded up 5 arrows and shot them straight up into the air all at once, hoping some would land near the target, the world would have united in laughter. But sadly for hysterical YouTube video viewing, that’s not what happened. The archers sought to maximize every arrow by zeroing in on the spot that would bring them the most points. Marketers have always sought to do the same. But they can only work with the tools that are available. A firm grasp of the desired target does little good if the ad products aren’t there to deliver that target. On the social side, both Facebook and Twitter have taken steps to enhance targeting for marketers. And why not? As the demand to monetize only goes up, they’re quite motivated to leverage and deliver their incredible user bases in ways that make economic sense for advertisers. You could target keywords on Twitter with promoted accounts, and get promoted tweets into search. They would surface for your followers and some users that Twitter thought were like them. Now you can go beyond keywords and target Twitter users based on 350 interests in 25 categories. How does a user wind up in one of these categories? Twitter looks at that user’s tweets, they look at whom they follow, and they run data through some sort of Twitter secret sauce. The result is, you have a much clearer shot at Twitter users who are most likely to welcome and be responsive to your tweets. And beyond the 350 interests, you can also create custom segments that find users who resemble followers of whatever Twitter handle you give it. That means you can now use boring tweets to sell like a madman, right? Not quite. This ad product is still quality-based, meaning if you’re not putting out tweets that lead to interest and thus, engagement, that tweet will earn a low quality score and wind up costing you more under Twitter’s auction system to maintain. That means, as the old knight in “Indiana Jones and the Last Crusade” cautions, “choose wisely” when targeting based on these interests and categories to make sure your interests truly do line up with theirs. On the Facebook side, they’re rolling out ad targeting that uses email addresses, phone numbers, game and app developers’ user ID’s, and eventually addresses for you bigger brands. Why? Because you marketers asked for it. Here you were with this amazing customer list but no way to reach those same customers should they be on Facebook. Now you can find and communicate with customers you gathered outside of social, and use Facebook to do it. Fair to say such users are a sensible target and will be responsive to your message since they’ve already bought something from you. And no you’re not giving your customer info to Facebook. They’ll use something called “hashing” to make sure you don’t see Facebook user data (beyond email, phone number, address, or user ID), and Facebook can’t see your customer data. The end result, social becomes far more workable and more valuable to marketers when it delivers on the promise that made it so exciting in the first place. That promise is the ability to move past casting wide nets to the masses and toward concentrating marketing dollars efficiently on the targets most likely to yield results.

  • A* algorithm very slow

    - by Amaranth
    I am programming an RTS game (I use XNA with C#). The pathfinding works fine, except that when it has a lot of nodes to search through there is a lag of one or two seconds. It happens mainly when there is no path to the target destination, since in that situation there are more nodes to explore. I have the same problem when the path is shorter but I have selected more than 3 units (they can't take the same path since the selected units can be in different parts of the map).

      private List<NodeInfo> FindPath(Unit u, NodeInfo start, NodeInfo end)
      {
          Map map = GameInfo.GetInstance().GameMap;
          _nearestToTarget = start;
          start.MoveCost = 0;
          Vector2 endPosition = map.getTileByPos(end.X, end.Y).Position; // getTileByPos simply gets the tile in a 2D array with the X and Y indexes
          start.EstimatedRemainingCost = (int)(endPosition - map.getTileByPos(start.X, start.Y).Position).Length();
          start.Parent = null;
          List<NodeInfo> openedNodes = new List<NodeInfo>();
          List<NodeInfo> closedNodes = new List<NodeInfo>();
          Point[] movements = GetMovements(u.UnitType);
          openedNodes.Add(start);
          while (!closedNodes.Contains(end) && openedNodes.Count > 0)
          {
              // Loop over the nodes to find the lowest cost
              NodeInfo currentNode = FindLowestCostOpenedNode(openedNodes);
              openedNodes.Remove(currentNode);
              closedNodes.Add(currentNode);
              Vector2 previousMouvement;
              if (currentNode.Parent == null)
              {
                  previousMouvement = ConvertRotationToDirectionVector(u.Rotation);
              }
              else
              {
                  previousMouvement = map.getTileByPos(currentNode.X, currentNode.Y).Position - map.getTileByPos(currentNode.Parent.X, currentNode.Parent.Y).Position;
                  previousMouvement.Normalize();
              }
              // For each neighbor
              foreach (Point movement in movements)
              {
                  Point exploredGridPos = new Point(currentNode.X + movement.X, currentNode.Y + movement.Y);
                  // Checks that the move is valid and that the node is not in the closed nodes list
                  if (ValidNavigableNode(u.UnitType, new Point(currentNode.X, currentNode.Y), exploredGridPos)
                      && !closedNodes.Contains(_gridMap[exploredGridPos.Y, exploredGridPos.X]))
                  {
                      NodeInfo exploredNode = _gridMap[exploredGridPos.Y, exploredGridPos.X];
                      Tile.TileType exploredTerrain = map.getTileByPos(exploredGridPos.X, exploredGridPos.Y).TerrainType;
                      if (openedNodes.Contains(exploredNode))
                      {
                          int newCost = currentNode.MoveCost + GetMoveCost(previousMouvement, movement, exploredTerrain);
                          if (newCost < exploredNode.MoveCost)
                          {
                              exploredNode.Parent = currentNode;
                              exploredNode.MoveCost = newCost;
                              // Find nearest tile to the target (in case it doesn't find a path to the target)
                              // Only compares the node to the current nearest
                              FindNearest(exploredNode);
                          }
                      }
                      else
                      {
                          exploredNode.Parent = currentNode;
                          exploredNode.MoveCost = currentNode.MoveCost + GetMoveCost(previousMouvement, movement, exploredTerrain);
                          Vector2 exploredNodeWorldPos = map.getTileByPos(exploredGridPos.X, exploredGridPos.Y).Position;
                          exploredNode.EstimatedRemainingCost = (int)(endPosition - exploredNodeWorldPos).Length();
                          // Find nearest tile to the target (in case it doesn't find a path to the target)
                          // Only compares the node to the current nearest
                          FindNearest(exploredNode);
                          openedNodes.Add(exploredNode);
                      }
                  }
              }
          }
          return closedNodes;
      }

    After that, I simply check if the end node is contained in the returned nodes. If so, I add the end node and each parent until I reach the start. If not, I add the nearestToTarget and each parent until I reach the start. I added a condition before calling FindPath so that only one unit can call FindPath each frame (60 frames per second), but it makes no difference.
    I thought maybe I could solve this by allowing the pathfinding to run in the background while the game continues to run correctly, even if it takes a few frames (it is currently sequential, since it is called in the update() of the unit when there's a target location but no path), but I don't really know how... I also thought about sorting my opened nodes list by cost so I don't have to loop over them, but I don't know if that would have an effect on the performance... Would there be other solutions? P.S. In the code, when I get the move cost, I check whether the unit has to turn to perform the move, and the terrain type - nothing hard to do.
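    A minimal sketch of the sorted-open-list idea mentioned above - a binary min-heap keyed on F = MoveCost + EstimatedRemainingCost, so the cheapest node pops in O(log n) instead of a full scan. It assumes NodeInfo exposes those two members as ints and is illustrative only, not the asker's code. One caveat: when a node already in the heap gets a lower MoveCost, a plain heap has no decrease-key, so the usual workaround is to push the node again and skip stale entries when popping.

      using System.Collections.Generic;

      // Illustrative open list for A*: a binary min-heap ordered by F cost.
      public class OpenList
      {
          private readonly List<NodeInfo> heap = new List<NodeInfo>();

          public int Count { get { return heap.Count; } }

          private static int F(NodeInfo n) { return n.MoveCost + n.EstimatedRemainingCost; }

          private void Swap(int a, int b) { NodeInfo t = heap[a]; heap[a] = heap[b]; heap[b] = t; }

          public void Push(NodeInfo node)
          {
              heap.Add(node);
              int i = heap.Count - 1;
              while (i > 0)   // sift up while the new node beats its parent
              {
                  int parent = (i - 1) / 2;
                  if (F(heap[parent]) <= F(heap[i])) break;
                  Swap(parent, i);
                  i = parent;
              }
          }

          public NodeInfo PopLowest()
          {
              NodeInfo lowest = heap[0];
              heap[0] = heap[heap.Count - 1];
              heap.RemoveAt(heap.Count - 1);
              int i = 0;
              while (true)    // sift the moved node down below any smaller child
              {
                  int left = 2 * i + 1, right = left + 1, smallest = i;
                  if (left < heap.Count && F(heap[left]) < F(heap[smallest])) smallest = left;
                  if (right < heap.Count && F(heap[right]) < F(heap[smallest])) smallest = right;
                  if (smallest == i) break;
                  Swap(i, smallest);
                  i = smallest;
              }
              return lowest;
          }
      }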

  • MSBuild Validating Properties

    - by Brian Gillespie
    I'm working on a reusable MSBuild target that will be consumed by several other tasks. This target requires that several properties be defined. What's the best way to validate that the properties are defined, throwing an Error if they are not? Two attempts that I almost like:

      <?xml version="1.0" encoding="utf-8" ?>
      <Project ToolsVersion="3.5" DefaultTarget="Release" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
        <Target Name="Release">
          <Error Text="Property PropA required" Condition="'$(PropA)' == ''"/>
          <Error Text="Property PropB required" Condition="'$(PropB)' == ''"/>
          <!-- The body of the task -->
        </Target>
      </Project>

    Here's an attempt at batching. It's ugly because of the extra "Name" parameter. Is it possible to use the Include attribute instead?

      <?xml version="1.0" encoding="utf-8" ?>
      <Project ToolsVersion="3.5" DefaultTarget="Release" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
        <Target Name="Release">
          <!-- MSBuild BuildInParallel="true" Projects="@(ProjectsToBuild)"/ -->
          <ItemGroup>
            <RequiredProperty Include="PropA"><Name>PropA</Name></RequiredProperty>
            <RequiredProperty Include="PropB"><Name>PropB</Name></RequiredProperty>
            <RequiredProperty Include="PropC"><Name>PropC</Name></RequiredProperty>
          </ItemGroup>
          <Error Text="Property %(RequiredProperty.Name) required" Condition="'$(%(RequiredProperty.Name))' == ''" />
        </Target>
      </Project>

  • AjaxControlToolkit Resource Files Not Copied To Output in MSBuild Script

    - by Dario Solera
    I'm new to MSBuild, but I managed to set up the following simple script:

      <Project ToolsVersion="3.5" DefaultTargets="Compile" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
        <PropertyGroup>
          <Configuration Condition="'$(Configuration)' == ''">Debug</Configuration>
        </PropertyGroup>
        <ItemGroup>
          <SolutionRoot Include=".." />
          <BuildArtifacts Include=".\Artifacts\" />
          <SolutionFile Include="..\SolutionName.sln" />
        </ItemGroup>
        <Target Name="Clean">
          <RemoveDir Directories="@(BuildArtifacts)" />
        </Target>
        <Target Name="Init" DependsOnTargets="Clean">
          <MakeDir Directories="@(BuildArtifacts)" />
        </Target>
        <Target Name="Compile" DependsOnTargets="Init">
          <MSBuild Projects="@(SolutionFile)" Properties="OutDir=%(BuildArtifacts.FullPath);Configuration=$(Configuration)" />
          <MakeDir Directories="%(BuildArtifacts.FullPath)\_PublishedWebsites\RDE.XAP.UnifiedGui.Web\Temp" />
        </Target>
      </Project>

    The solution has 23 projects, 4 of which are WebApps. Now, the script works fine and the output is generated correctly. The only problem I encounter is with two WebApp projects in the solution that use the AJAX Control Toolkit. The toolkit has a set of folders (e.g. ar, it, es, fr) that contain localized resources. These folders are not copied into the bin directory of the WebApps when the solution is built with MSBuild, but they are copied when it is built in Visual Studio. How can I solve this in a clean manner? I know I could write a (quite convoluted) task that copies the directories after the compile, but that does not seem like the right solution to me. Also, neither Google, SO nor MSDN could provide more details on this kind of issue.

  • Controlling which project header file Xcode will include

    - by jdmuys
    My Xcode project builds two variations of the same product using two targets. The only difference between the two is which version of an included library is used. For the .c source files it's easy to assign the correct version to the correct target using the target check box. However, including the header file always includes the same one. This is correct for one target, but wrong for the other one. Is there a way to control which header file is included by each target? Here is my project file hierarchy (which is replicated in Xcode):

      MyProject
        TheirOldLib
          theirLib.h
          theirLib.cpp
        TheirNewLib
          theirLib.h
          theirLib.cpp
        myCode.cpp

    and myCode.cpp does things such as:

      #include "theirLib.h"
      …
      somecode()
      {
      #if OLDVERSION
        theirOldLibCall(…);
      #else
        theirNewLibCall(…);
      #endif
      }

    And of course, I define OLDVERSION for one target and not for the other. So is there a way to tell Xcode which theirLib.h to include per target? Constraints:
      - The two header files have the same name. As a last resort, I could rename one of them, but I'd rather avoid that as this will lead to major hair pulling on the other platforms.
      - I'm free to tweak my project as I otherwise see fit.

    Thanks for any help.

  • Setting up a project for Eclipse using Maven

    - by egaga
    Hi, I'm trying to start modifying an existing application with Eclipse. Actually I had it working before, but I deleted the project, and now with "mvn eclipse:eclipse" I get the following: [INFO] Resource directory's path matches an existing source directory. Resources will be merged with the source directory src/main/resources [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] Request to merge when 'filtering' is not identical. Original=resource src/main/resources: output=target/classes, include=[atlassian-plugin.xml], exclude=[**/*.java], test=false, filtering=true, merging with=resource src/main/resources: output=target/classes, include=[], exclude=[atlassian-plugin.xml|**/*.java], test=false, filtering=false [INFO] ------------------------------------------------------------------------ [INFO] Trace org.apache.maven.lifecycle.LifecycleExecutionException: Request to merge when 'filtering' is not identical. Original=resource src/main/resources: output=target/classes, include=[atlassian-plugin.xml], exclude=[**/*.java], test=false, filtering=true, merging with=resource src/main/resources: output=target/classes, include=[], exclude=[atlassian-plugin.xml|**/*.java], test=false, filtering=false at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:583) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:512) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:482) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:330) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:291) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:142) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:336) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:129) at org.apache.maven.cli.MavenCli.main(MavenCli.java:287) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) Caused by: org.apache.maven.plugin.MojoExecutionException: Request to merge when 'filtering' is not identical. 
Original=resource src/main/resources: output=target/classes, include=[atlassian-plugin.xml], exclude=[**/*.java], test=false, filtering=true, merging with=resource src/main/resources: output=target/classes, include=[], exclude=[atlassian-plugin.xml|**/*.java], test=false, filtering=false at org.apache.maven.plugin.eclipse.EclipseSourceDir.merge(EclipseSourceDir.java:302) at org.apache.maven.plugin.eclipse.EclipsePlugin.extractResourceDirs(EclipsePlugin.java:1605) at org.apache.maven.plugin.eclipse.EclipsePlugin.buildDirectoryList(EclipsePlugin.java:1490) at org.apache.maven.plugin.eclipse.EclipsePlugin.createEclipseWriterConfig(EclipsePlugin.java:1180) at org.apache.maven.plugin.eclipse.EclipsePlugin.writeConfiguration(EclipsePlugin.java:1043) at org.apache.maven.plugin.ide.AbstractIdeSupportMojo.execute(AbstractIdeSupportMojo.java:511) at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:451) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:558) ... 16 more

  • Skip makefile dependency generation for certain targets (e.g. `clean`)

    - by Shtééf
    I have several C and C++ projects that all follow a basic structure I've been using for a while now. My source files go in src/*.c, intermediate files in obj/*.[do], and the actual executable in the top level directory. My makefiles follow roughly this template: # The final executable TARGET := something # Source files (without src/) INPUTS := foo.c bar.c baz.c # OBJECTS will contain: obj/foo.o obj/bar.o obj/baz.o OBJECTS := $(INPUTS:%.cpp=obj/%.o) # DEPFILES will contain: obj/foo.d obj/bar.d obj/baz.d DEPFILES := $(OBJECTS:%.o=%.d) all: $(TARGET) obj/%.o: src/%.cpp $(CC) $(CFLAGS) -c -o $@ $< obj/%.d: src/%.cpp $(CC) $(CFLAGS) -M -MF $@ -MT $(@:%.d=%.o) $< $(TARGET): $(OBJECTS) $(LD) $(LDFLAGS) -o $@ $(OBJECTS) .PHONY: clean clean: -rm -f $(OBJECTS) $(DEPFILES) $(RPOFILES) $(TARGET) -include $(DEPFILES) Now I'm at the point where I'm packaging this for a Debian system. I'm using debuild to build the Debian source package, and pbuilder to build the binary package. The debuild step only has to execute the clean target, but even this causes the dependency files to be generated and included. In short, my question is really: Can I somehow prevent make from generating dependencies when all I want is to run the clean target?

  • Maven String Replace of Text Web Resources

    - by Jaco van Niekerk
    I have a Maven web application with text files in src/main/webapp/textfilesdir. As I understand it, during the package phase this textfilesdir directory will be copied into the target/project-1.0-SNAPSHOT directory, which is then zipped up into target/project-1.0-SNAPSHOT.war.

    Problem: Now I need to do a string replacement on the contents of the text files in target/project-1.0-SNAPSHOT/textfilesdir. This must be done after textfilesdir is copied into target/project-1.0-SNAPSHOT, but prior to the target/project-1.0-SNAPSHOT.war file being created. I believe this is all done during the package phase. How can a plugin (potentially maven-antrun-plugin) plug into the package phase to do this? The text files don't contain properties, like ${property-name}, to filter on. String replacement is likely the only option.

    Options:
      - Modify the text files after the copy into the target/project-1.0-SNAPSHOT directory, yet prior to the WAR creation.
      - After packaging, extract the text files from the WAR, modify them, and add them back into the WAR.

    I'm thinking there is another option here I'm missing. Thoughts, anyone?

  • Printing out this week's dates in Perl

    - by ABach
    Hi folks, I have the following loop to calculate the dates of the current week and print them out. It works, but I am swimming in the amount of date/time possibilities in perl and want to get your opinion on whether there is a better way. Here's the code I've written: #!/usr/bin/env perl use warnings; use strict; use DateTime; # Calculate numeric value of today and the # target day (Monday = 1, Sunday = 7); the # target, in this case, is Monday, since that's # when I want the week to start my $today_dt = DateTime->now; my $today = $today_dt->day_of_week; my $target = 1; # Create DateTime copies to act as the "bookends" # for the date range my ($start, $end) = ($today_dt->clone(), $today_dt->clone()); if ($today == $target) { # If today is the target, "start" is already set; # we simply need to set the end date $end->add( days => 6 ); } else { # Otherwise, we calculate the Monday preceeding today # and the Sunday following today my $delta = ($target - $today + 7) % 7; $start->add( days => $delta - 7 ); $end->add( days => $delta - 1 ); } # I clone the DateTime object again because, for some reason, # I'm wary of using $start directly... my $cur_date = $start->clone(); while ($cur_date <= $end) { my $date_ymd = $cur_date->ymd; print "$date_ymd\n"; $cur_date->add( days => 1 ); } As mentioned, this works - but is it the quickest/most efficient/etc.? I'm guessing that quickness and efficiency may not necessarily go together, but your feedback is very appreciated.

  • Converting currencies via intermediate currencies.

    - by chillitom
    class FxRate { string Base { get; set; } string Target { get; set; } double Rate { get; set; } } private IList<FxRate> rates = new List<FxRate> { new FxRate {Base = "EUR", Target = "USD", Rate = 1.3668}, new FxRate {Base = "GBP", Target = "USD", Rate = 1.5039}, new FxRate {Base = "USD", Target = "CHF", Rate = 1.0694}, new FxRate {Base = "CHF", Target = "SEK", Rate = 8.12} // ... }; Given a large yet incomplete list of exchange rates where all currencies appear at least once (either as a target or base currency): What algorithm would I use to be able to derive rates for exchanges that aren't directly listed? I'm looking for a general purpose algorithm of the form: public double Rate(string baseCode, string targetCode, double currency) { return ... } In the example above a derived rate would be GBP-CHF or EUR-SEK (which would require using the conversions for EUR-USD, USD-CHF, CHF-SEK) Whilst I know how to do the conversions by hand I'm looking for a tidy way (perhaps using LINQ) to perform these derived conversions perhaps involving multiple currency hops, what's the nicest way to go about this?
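    For a concrete sense of what a derived rate is here, a minimal sketch using the quoted numbers (plain arithmetic; just a fragment that could sit in any method, with invented variable names):

      // GBP->USD and USD->CHF are listed, so GBP->CHF follows by multiplying through the USD leg,
      // and the opposite direction of any rate is its reciprocal.
      double gbpUsd = 1.5039;            // listed: GBP -> USD
      double usdChf = 1.0694;            // listed: USD -> CHF
      double gbpChf = gbpUsd * usdChf;   // derived: GBP -> CHF, about 1.6083
      double chfGbp = 1.0 / gbpChf;      // derived: CHF -> GBP, about 0.6218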

  • How can I get this week's dates in Perl?

    - by ABach
    I have the following loop to calculate the dates of the current week and print them out. It works, but I am swimming in the amount of date/time possibilities in Perl and want to get your opinion on whether there is a better way. Here's the code I've written: #!/usr/bin/env perl use warnings; use strict; use DateTime; # Calculate numeric value of today and the # target day (Monday = 1, Sunday = 7); the # target, in this case, is Monday, since that's # when I want the week to start my $today_dt = DateTime->now; my $today = $today_dt->day_of_week; my $target = 1; # Create DateTime copies to act as the "bookends" # for the date range my ($start, $end) = ($today_dt->clone(), $today_dt->clone()); if ($today == $target) { # If today is the target, "start" is already set; # we simply need to set the end date $end->add( days => 6 ); } else { # Otherwise, we calculate the Monday preceeding today # and the Sunday following today my $delta = ($target - $today + 7) % 7; $start->add( days => $delta - 7 ); $end->add( days => $delta - 1 ); } # I clone the DateTime object again because, for some reason, # I'm wary of using $start directly... my $cur_date = $start->clone(); while ($cur_date <= $end) { my $date_ymd = $cur_date->ymd; print "$date_ymd\n"; $cur_date->add( days => 1 ); } As mentioned, this works, but is it the quickest or most efficient? I'm guessing that quickness and efficiency may not necessarily go together, but your feedback is very appreciated.

  • How best to calculate derived currency rate conversions using C#/LINQ?

    - by chillitom
    class FxRate { string Base { get; set; } string Target { get; set; } double Rate { get; set; } } private IList<FxRate> rates = new List<FxRate> { new FxRate {Base = "EUR", Target = "USD", Rate = 1.3668}, new FxRate {Base = "GBP", Target = "USD", Rate = 1.5039}, new FxRate {Base = "USD", Target = "CHF", Rate = 1.0694}, new FxRate {Base = "CHF", Target = "SEK", Rate = 8.12} // ... }; Given a large yet incomplete list of exchange rates where all currencies appear at least once (either as a target or base currency): What algorithm would I use to be able to derive rates for exchanges that aren't directly listed? I'm looking for a general purpose algorithm of the form: public double Rate(string baseCode, string targetCode, double currency) { return ... } In the example above a derived rate would be GBP-CHF or EUR-SEK (which would require using the conversions for EUR-USD, USD-CHF, CHF-SEK) Whilst I know how to do the conversions by hand I'm looking for a tidy way (perhaps using LINQ) to perform these derived conversions perhaps involving multiple currency hops, what's the nicest way to go about this?
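    One possible shape for the general case - a sketch only, not a definitive implementation: treat every quoted rate (and its reciprocal) as a directed edge and breadth-first search from the base currency to the target, multiplying rates along the way. It assumes the FxRate properties are public and that any path gives an acceptable rate.

      using System;
      using System.Collections.Generic;
      using System.Linq;

      public static class FxDerivation
      {
          // Derives baseCode -> targetCode by chaining quoted rates (and their inverses) hop by hop.
          public static double Rate(IEnumerable<FxRate> rates, string baseCode, string targetCode)
          {
              if (baseCode == targetCode) return 1.0;

              // Adjacency list containing each quoted rate plus its reciprocal.
              var edges = rates
                  .SelectMany(r => new[]
                  {
                      new { From = r.Base, To = r.Target, Rate = r.Rate },
                      new { From = r.Target, To = r.Base, Rate = 1.0 / r.Rate }
                  })
                  .ToLookup(e => e.From);

              var best = new Dictionary<string, double> { { baseCode, 1.0 } };
              var queue = new Queue<string>();
              queue.Enqueue(baseCode);

              while (queue.Count > 0)
              {
                  string current = queue.Dequeue();
                  foreach (var edge in edges[current])
                  {
                      if (best.ContainsKey(edge.To)) continue;   // first (fewest-hop) path wins
                      best[edge.To] = best[current] * edge.Rate; // chain the conversions
                      if (edge.To == targetCode) return best[edge.To];
                      queue.Enqueue(edge.To);
                  }
              }
              throw new ArgumentException("No conversion path from " + baseCode + " to " + targetCode);
          }
      }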

  • How to create copying items from property values?

    - by Nam Gi VU
    Let's say I have a list of sub paths such as:

      <PropertyGroup>
        <subPaths>$(path1)\**\*;
                  $(path2)\**\*;
                  $(path3)\file3.txt;
        </subPaths>
      </PropertyGroup>

    I want to copy these files from folder A to folder B (surely we already have all the sub folders/files in A). What I tried was:

      <Target Name="Replace" DependsOnTargets="Replace_Init; Replace_Copy1Path">
      </Target>

      <Target Name="Replace_Init">
        <PropertyGroup>
          <subPaths>$(path1)\**\*;
                    $(path2)\**\*;
                    $(path3)\file3.txt;
          </subPaths>
        </PropertyGroup>
        <ItemGroup>
          <subPathItems Include="$(subPathFiles.Split(';'))" />
        </ItemGroup>
      </Target>

      <Target Name="Replace_Copy1Path" Outputs="%(subPathItems.Identity)">
        <PropertyGroup>
          <src>$(folderA)\%(subPathItems.Identity)</src>
          <dest>$(folderB)\%(subPathItems.Identity)</dest>
        </PropertyGroup>
        <Copy SourceFiles="$(src)" DestinationFiles="$(dest)" />
      </Target>

    But the Copy task didn't work. It doesn't translate the \**\* wildcards to files. What did I do wrong? Please help!

  • Compiling GWT 2.6.1 at Java 7 source level

    - by Neeko
    I've recently updated my GWT project to 2.6.1, and started to make use of Java 7 syntax since 2.6 now supports Java 7. However, when I attempt to compile, I'm receiving compiler errors such as [ERROR] Line 42: '<>' operator is not allowed for source level below 1.7 How do I specify the GWT compiler to target 1.7? I was under the impression that it would do that by default, but I guess not. I've attempted cleaning the project, including deleting the gwt-unitCache directory but to no avail. Here is my Ant compile target. <target name="compile" depends="prepare"> <javac includeantruntime="false" debug="on" debuglevel="lines,vars,source" srcdir="${src.dir}" destdir="${build.dir}"> <classpath refid="project.classpath"/> </javac> </target> <target name="gwt-compile" depends="compile"> <java failonerror="true" fork="true" classname="com.google.gwt.dev.Compiler"> <classpath> <!-- src dir is added to ensure the module.xml file(s) are on the classpath --> <pathelement location="${src.dir}"/> <pathelement location="${build.dir}"/> <path refid="project.classpath"/> </classpath> <jvmarg value="-Xmx256M"/> <arg value="${gwt.module.name}"/> </java> </target>

  • Been asked a dozen times, but no luck from what I've read. Prevent Anchor Jumping on page load

    - by jasenmp
    I'm currently working with a WP theme that can be found here: sanjay.dmediastudios.com. I'm using 'smooth scroll' on my page, and I'm attempting to have the page smoothly scroll to the requested section when coming from an external link (for instance, coming from the blog page takes you to sanjay.dmediastudios.com/#portfolio); from there I want the page to start at the top and THEN scroll to the portfolio section. What's happening is that it briefly displays the portfolio section (anchor jump) and THEN resets to the top and scrolls down. It's driving me nuts :(. Here is the code I'm using:

    Click function for smooth scroll:

      $(function() {
        $('.menu li a').click(function() {
          if (location.pathname.replace(/^\//, '') == this.pathname.replace(/^\//, '') && location.hostname == this.hostname) {
            var target = $(this.hash);
            target = target.length ? target : $('[name=' + this.hash.slice(1) + ']');
            if (target.length) {
              $root.animate({
                scrollTop: target.offset().top - 75
              }, 800, 'swing');
              return false;
            }
          }
        }); //end of click function
      });

    The page load function:

      $(window).on("load", function() {
        if (location.hash) { // do the test straight away
          window.scrollTo(0, 0); // execute it straight away
          setTimeout(function() {
            window.scrollTo(0, 0); // run it a bit later also for browser compatibility
          }, 1);
        }
        var urlHash = window.location.href.split("#")[1];
        if (urlHash && $('#' + urlHash).length)
          $('html,body').animate({
            scrollTop: $('#' + urlHash).offset().top - 75
          }, 800, 'swing');
      });

    Any help would be MUCH appreciated.

  • Opera bug with JS autoselecting text (if more than 1 div)

    - by E L
    Here is the HTML code. It is supposed to select all the text in the "Container" div:

      <B onclick="SelectText(document.getElementById('Container'));">select all text</B>
      <Div id="Container">
        <Div>123456</Div>
        <Div>123456</Div>
        <Div onclick="SelectText();">123456</Div>
      </Div>

    And here is the JS code of the SelectText() function:

      function SelectText(target){
        if(target==null){
          var e = window.event || e;
          if (!e) var e = window.event;
          var target=e.target || e.srcElement;
        }
        var rng, sel;
        if ( document.createRange ) {
          rng = document.createRange();
          rng.selectNode( target );
          sel = window.getSelection();
          sel.removeAllRanges();
          sel.addRange( rng );
        } else {
          var rng = document.body.createTextRange();
          rng.moveToElementText( target );
          rng.select();
        }
      }

    The problem is that in Opera 12.02, when "select all text" is clicked, all the text appears selected, but it isn't actually selected (I can't right-click it and copy). (Terrific - and IE works fine with it.) Why not in Opera?!!! And what can I do to make Opera 12.02 believe that all the text in "Container" is selected?

  • Role based access control in Oracle VM using Enterprise Manager 12c

    - by Ronen Kofman
    Enterprise Managers let’s you control any element in the environment and define which users can do what on each element. We will show here an example on how to set up RBAC (Role Base Access Control) for Oracle VM using Enterprise Manager, this will be a very simplified explanation  to help you get going. For more comprehensive explanations please refer to the Enterprise Manager User Guide. OK, first some basic Enterprise Manager terminology: Target – any element in the environment is a target – server, pool, zone, VM etc. Administrators – these are the Enterprise Manager users who can login to the platform. Roles – roles are privilege profiles which could be applied to Administrators. The first step will be to discover the virtual environment and bring it in to Enterprise Manager, this process is simple and can be done in two ways: Work on your Oracle VM manager, set it up until you feel comfortable and then register it in Enterprise Manager Use Enterprise Manager and build it all from there. In both cases we will be able to see the same picture from Oracle VM and from Enterprise Manager, any change made in one will be reflected in the other. Oracle VM Manager: Enterprise Manager: Once you have your virtual environment set up in Enterprise Manager it is time to start associating VMs with users (or Administrators as they are called in Enterprise Manager). Enterprise Manager allows us to connect to multiple different identity services and import users from them but the simplest way to add Administrators is just go to setup->security->Administrators and create new Administrator. The creation wizard will walk you through several stages and allow you to assign role(s) to your newly created Administrator, using roles can really shorten the process if done multiple times. When you get to “Target Privileges” stage, scroll down to the bottom to the “Target Privileges” section. In this section you can add targets (virtual machine in our case) and define the type of privileges you would like to assign to the Administrator which you are creating. In this example I chose one of the VMs and granted full privileges to the newly created Administrator. Administrator creation wizard "Target Privileges": Now when you login as the newly created administrator, you will only see the VM that was assign to you and will be able to have full control over it. That’s it, simple and straight forward, Enterprise Manager offers many more things which I skipped here but the point is that if you need role based access control Enterprise Manager can give it to you in a very easy way. Oh and one more thing, virtualization management in Enterprise Manager has no license cost, sweet.

  • Designing Mobile SMS text advertising system

    - by Ramraj Edagutti
    Currently, I am working on a product with an SMS text advertising system. Using it, we set up advertising campaigns for clients, and these campaigns are later sent to the end users. It is very similar to Google AdWords, but targeted at mobile users via SMS. Just to give an overview of the system:
      - Each campaign is mapped to an advertiser.
      - A campaign has a start date and an end date.
      - A campaign has filter condition(s) or a query to select the target user base from our database (to whom we send the campaign).
      - The target user base can be fixed, e.g. send the campaign to 10,000 users.
      - The target user base can also be dynamic, based on a query condition, e.g. send the campaign to users who are active and from a particular state, district, town etc. (this way the user base keeps changing on a daily basis).
      - A campaign can have multiple campaign messages.
      - Each campaign message has a start date and an end date.
      - Each campaign message can have multiple message texts for different locales, e.g. English, Hindi, Telugu etc.

    After creating an advertisement campaign, we run a nightly job to provision the target user base for that particular campaign in a separate table; another daily job runs in the morning, checks the provisioned table for campaigns and targeted users, and sends the campaign to the users via SMS. The problem is that the current UI for creating advertising campaigns is designed in a very technical manner; a normal user, business owner or client cannot use the UI to create a campaign. Below are the reasons why the UI is so technical in nature:
      - The filter condition(s) or query input field takes user IDs, mobile numbers or SQL queries. Most of the time, or almost every time, we use big SQL queries, so we end up storing SQL queries in a database for a campaign and later use them to fetch the targeted user base.
      - For scheduling these campaigns, we have an input field on the UI which takes Quartz cron expression(s) (e.g. send the campaign on "0 0 9 1-10 MAR 2012"), again very technical in nature.
      - A normal user or business owner cannot use the UI to create campaigns for the reasons mentioned above; currently, we ourselves (the developers) help clients to set up/create campaigns.

    We are trying to redesign the UI to make it more user friendly, so that any user can go to the UI and create an advertisement campaign by himself. I am thinking of redesigning the current UI to be similar to the Google AdWords interface, especially for selecting target users based on user geography like country, state, city etc. I also need to select users based on user subscription(s), which might make the system even more complex. And also, for campaign scheduling, I am thinking of using weekdays with hours. For example, I will show Monday to Sunday on the UI, and the user can select the from hours, to hours etc. Any better ideas or suggestions on how to design the UI in a very user-friendly manner, and what design should be followed in the server-side code (we write backend code in Java/JPA/Spring/Quartz)? I am also looking for ideas or design patterns on how to build SQL queries (using JPA/Hibernate) programmatically on the server side, based on various conditions like country, state, town, village, and user subscriptions.

  • Rotate camera around player and set new forward directions

    - by Samurai Fox
    I have a 3rd-person camera which can rotate around the player. When I look at the back of the player and press forward, the player goes forward. But if I then rotate 360° around the player, the "forward direction" is tilted by 90 degrees; every 360° turn adds another 90 degrees of direction change. What I want instead: for example, when the camera is facing the right side of the player and I press the button to move forward, the player should turn to the left and make that the "new forward". I have a Player object with a Camera object as a child. The Camera object has the Camera script; inside that script there are Player and Camera classes. The Player object itself has the Input Controller. Also, I'm making this script primarily for a joystick/controller. My camera script so far:

      using UnityEngine;
      using System.Collections;

      public class CameraScript : MonoBehaviour
      {
          public GameObject Target;
          public float RotateSpeed = 10, FollowDistance = 20, FollowHeight = 10;

          float RotateSpeedPerTime, DesiredRotationAngle, DesiredHeight, CurrentRotationAngle, CurrentHeight, Yaw, Pitch;
          Quaternion CurrentRotation;

          void LateUpdate()
          {
              RotateSpeedPerTime = RotateSpeed * Time.deltaTime;
              DesiredRotationAngle = Target.transform.eulerAngles.y;
              DesiredHeight = Target.transform.position.y + FollowHeight;
              CurrentRotationAngle = transform.eulerAngles.y;
              CurrentHeight = transform.position.y;
              CurrentRotationAngle = Mathf.LerpAngle(CurrentRotationAngle, DesiredRotationAngle, 0);
              CurrentHeight = Mathf.Lerp(CurrentHeight, DesiredHeight, 0);
              CurrentRotation = Quaternion.Euler(0, CurrentRotationAngle, 0);
              transform.position = Target.transform.position;
              transform.position -= CurrentRotation * Vector3.forward * FollowDistance;
              transform.position = new Vector3(transform.position.x, CurrentHeight, transform.position.z);
              Yaw = Input.GetAxis("Right Horizontal") * RotateSpeedPerTime;
              Pitch = Input.GetAxis("Right Vertical") * RotateSpeedPerTime;
              transform.Translate(new Vector3(Yaw, -Pitch, 0));
              transform.position = new Vector3(transform.position.x, transform.position.y, transform.position.z);
              transform.LookAt(Target.transform);
          }
      }
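    One common way to get the behaviour described above is to build the movement direction from the camera's yaw in the player's movement code, rather than from the player's own rotation. A rough sketch, assuming Unity's standard API, a hypothetical moveSpeed field, left-stick axes named "Horizontal"/"Vertical", and that it runs in a script where transform is the camera and Target is the player (as in the script above):

      // Flatten the camera's forward/right onto the ground plane so only its yaw matters.
      Vector3 camForward = transform.forward;
      camForward.y = 0f;
      camForward.Normalize();

      Vector3 camRight = transform.right;
      camRight.y = 0f;
      camRight.Normalize();

      float h = Input.GetAxis("Horizontal");   // left stick X (assumed axis name)
      float v = Input.GetAxis("Vertical");     // left stick Y (assumed axis name)

      // "Forward" now means "away from the camera", whatever the camera's current orbit angle is.
      Vector3 moveDirection = camForward * v + camRight * h;
      if (moveDirection.sqrMagnitude > 0.01f)
      {
          Target.transform.rotation = Quaternion.LookRotation(moveDirection); // player faces the new forward
          Target.transform.position += moveDirection.normalized * moveSpeed * Time.deltaTime;
      }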

  • MSBuild: convert relative path in imported project to absolute path.

    - by Ergwun
    Short version: I have an MSBuild project that imports another project. There is a property holding a relative path in the imported project that is relative to the location of the imported project. How do I convert this relative path to be absolute? I've tried the ConvertToAbsolutePath task, but this makes it relative to the importing project's location). Long version: I'm trying out Robert Koritnik's MSBuild task for integrating nunit output into Visual Studio (see this other SO question for a link). Since I like to have all my tools under version control, I want the target file with the custom task in it to point to the nunit console application using a relative path. My problem is that this relative path ends up being made relative to the importing project. E.g. (in ... MyRepository\Third Party\NUnit\MSBuild.NUnit.Task.Source\bin\Release\MSBuild.NUnit.Task.Targets): ... <PropertyGroup Condition="'$(NUnitConsoleToolPath)' == ''"> <NUnitConsoleToolPath>..\..\..\NUnit 2.5.5\bin\net-2.0</> </PropertyGroup> ... <Target Name="IntegratedTest"> <NUnitIntegrated TreatFailedTestsAsErrors="$(NUnitTreatFailedTestsAsErrors)" AssemblyName="$(AssemblyName)" OutputPath="$(OutputPath)" ConsoleToolPath="$(NUnitConsoleToolPath)" ConsoleTool="$(NUnitConsoleTool)" /> </Target> ... The above target fails with the error that the file cannot be found (that is the nunit-console.exe file). Inside the NUnitIntegrated MSBuild task, when the the execute() method is called, the current directory is the directory of the importing project, so relative paths will point to the wrong location. I tried to convert the relative path to absolute by adding these tasks to the IntegratedTest target: <ConvertToAbsolutePath Paths="$(NUnitConsoleToolPath)"> <Output TaskParameter="AbsolutePaths" PropertyName="AbsoluteNUnitConsoleToolPath"/> </ConvertToAbsolutePath> but this just converted it to be relative to the directory of the project file that imports this target file. I know I can use the property $(MSBuildProjectDirectory) to get the directory of the importing project, but can't find any equivalent for directory of the imported target file. Can anyone tell me how a path in an imported file that is supposed to be relative to the directory that the imported file is in can be made absolute? Thanks!

  • XCode linking error when targeting armv7.

    - by Tom
    I've already spent countless hours puzzling over this, utilizing Google searches and other Stack Overflow questions to no avail. I have an iPhone/iPad universal application, which seems to compile fine when the target is armv6. However, when the device is iPad, I get this warning: warning: building for SDK 'Device - iPhone OS 3.2' requires an armv7 architecture. Oddly enough, the app still runs great on iPad in spite of this warning. However, I do want to do things the "right way" what ever that means in this case. When I switch the target architecture to armv7, I get linking errors: "___restore_vfp_d8_d15_regs", referenced from: *redacted* "___save_vfp_d8_d15_regs", referenced from: *redacted* ld: symbol(s) not found collect2: ld returned 1 exit status The "redacted" portions of the errors are references to the static library to which I'm trying to link. Here's what I've tried from the many suggestions online. Each of these were suggested more than once without any explanation, which leads me to believe nobody quite understands this problem: "Never use the drop down menu in the upper left of the XCode window to choose the target. Instead, set this to Base SDK and then the Base SDK to iPhone OS 3.0 in the target configuration. Set the target device to your preferred target (iPad, iPhone OS 3.2 in my situation.)" This yields the error "Library not found for -lcrt1.3.1.o" "Make sure that GCC isn't linking against the wrong version of the standard library. (You'll have to make sure the LIBRARY_SEARCH_PATH doesn't have the wrong path in it.)" My LIBRARY_SEARCH_PATH is already empty, so this doesn't seem relevant. "Try compiling with GCC 4.0 rather than GCC 4.2." I get a syntax error inside a UIKit header file. The error is "Syntax error before 'AT_NAME' token." The line is "UIKIT_EXTERN @interface UILocalizedIndexedCollation : NSObject." Another project compiles just fine with the same target settings, which is really making me question my sanity. Could I be dealing with a corrupt XCode project? If anyone knows what's actually happening and has a reference or doesn't mind explaining it, I would be so very grateful. Cheers!

  • MSBuild command-line error - Silverlight 4 SDK is not installed

    - by Ned
    My MSBuild command line is as follows: msbuild e:\code\myProject.csproj /p:Configuration=Debug /p:OutputPath=bin/Debug /p:Platform=x86 /p:PlatformTarget=x86 The project builds fine on my development machine in VS2010 but not with the command above. I am running Win 7 64 - Bit. I'm getting an error that says I don't have the Silverlight 4 SDK installed but I do. I"ve read some posts that you have to set the Platform=x86 but to no avail. Here is the error message in full: Microsoft (R) Build Engine Version 4.0.30319.1 [Microsoft .NET Framework, Version 4.0.30319.1] Copyright (C) Microsoft Corporation 2007. All rights reserved. Build started 6/8/2010 4:03:38 PM. Project "E:\code\dashboards\MyProject2010\MyProject2010.Web\MyProject2010 .web.csproj" on node 1 (default targets). GenerateTargetFrameworkMonikerAttribute: Skipping target "GenerateTargetFrameworkMonikerAttribute" because all output fi les are up-to-date with respect to the input files. CoreCompile: Skipping target "CoreCompile" because all output files are up-to-date with resp ect to the input files. CopyFilesToOutputDirectory: Copying file from "obj\Debug\MyProject.Web.dll" to "bin\Debug\MyProject.Web .dll". MyProject2010.web - E:\code\dashboards\MyProject2010\MyProject2010.Web \bin\Debug\MyProject.Web.dll Copying file from "obj\Debug\MyProject.Web.pdb" to "bin\Debug\MyProject.Web .pdb". Project "E:\code\dashboards\MyProject2010\MyProject2010.Web\MyProject2010 .web.csproj" (1) is building "E:\code\dashboards\MyProject2010\MyProject20 10.Client\MyProject2010.Client.csproj" (2) on node 1 (GetXapOutputFile target( s)). C:\Program Files (x86)\MSBuild\Microsoft\Silverlight\v4.0\Microsoft.Silverlight .Common.targets(104,9): error : The Silverlight 4 SDK is not installed. [E:\cod e\dashboards\MyProject2010\MyProject2010.Client\MyProject2010.Client.cspr oj] Done Building Project "E:\code\dashboards\MyProject2010\MyProject2010.Clie nt\MyProject2010.Client.csproj" (GetXapOutputFile target(s)) -- FAILED. Done Building Project "E:\code\dashboards\MyProject2010\MyProject2010.Web\ MyProject2010.web.csproj" (default targets) -- FAILED. Build FAILED. "E:\code\dashboards\MyProject2010\MyProject2010.Web\MyProject2010.web.csp roj" (default target) (1) - "E:\code\dashboards\MyProject2010\MyProject2010.Client\MyProject2010.Clie nt.csproj" (GetXapOutputFile target) (2) - (GetFrameworkPaths target) - C:\Program Files (x86)\MSBuild\Microsoft\Silverlight\v4.0\Microsoft.Silverlig ht.Common.targets(104,9): error : The Silverlight 4 SDK is not installed. [E:\c ode\dashboards\MyProject2010\MyProject2010.Client\MyProject2010.Client.cs proj] 0 Warning(s) 1 Error(s) Time Elapsed 00:00:00.39 I appreciate anyone's help. Thanks.
