Search Results

Search found 2860 results on 115 pages for 'javaone feed'.


  • Redirect Google crawler to different robots.txt via .htaccess

    - by user3474818
    I have googled for the answer all day and still couldn't find one. I have a virtual subdomain www.static.example.com which is a mirror of www.example.com, meaning the subdomain and the domain share a single root folder. I want to redirect crawlers to a different robots.txt file - robots_static.txt - whenever they see .static in the URL; in that file I will forbid indexing with a Disallow rule. I want to do this because I have duplicated content in Google search results: the subdomain shows exactly the same content as the main domain. Does anyone know how I can make crawlers see robots_static.txt instead of robots.txt? What I have managed to find so far is this:

        RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
        RewriteRule ^robots\.txt /robots_static.txt [NC,L]

    but when I check in Webmaster Tools, it still sees robots.txt as my robots file instead of robots_static.txt, so it crawls and indexes everything twice. What did I do wrong? Thanks

    EDIT: This is my .htaccess file:

        ##
        # @package Joomla
        # @copyright Copyright (C) 2005 - 2013 Open Source Matters. All rights reserved.
        # @license GNU General Public License version 2 or later; see LICENSE.txt
        ##
        ##
        # READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE!
        #
        # The line just below this section: 'Options +FollowSymLinks' may cause problems
        # with some server configurations. It is required for use of mod_rewrite, but may already
        # be set by your server administrator in a way that disallows changing it in
        # your .htaccess file. If using it causes your server to error out, comment it out (add # to
        # beginning of line), reload your site in your browser and test your sef url's. If they work,
        # it has been set by your server administrator and you do not need it set here.
        ##

        ## Can be commented out if causes errors, see notes above.
        Options +FollowSymLinks

        ## Mod_rewrite in use.
        RewriteEngine On
        RewriteEngine On

        RewriteCond %{HTTP_HOST} !^www\.
        RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]

        RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
        RewriteRule ^robots\.txt /robots_static.txt [NC,L]

        ## Begin - Rewrite rules to block out some common exploits.
        # If you experience problems on your site block out the operations listed below
        # This attempts to block the most common type of exploit `attempts` to Joomla!
        #
        # Block out any script trying to base64_encode data within the URL.
        RewriteCond %{QUERY_STRING} base64_encode[^(]*\([^)]*\) [OR]
        # Block out any script that includes a <script> tag in URL.
        RewriteCond %{QUERY_STRING} (<|%3C)([^s]*s)+cript.*(>|%3E) [NC,OR]
        # Block out any script trying to set a PHP GLOBALS variable via URL.
        RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR]
        # Block out any script trying to modify a _REQUEST variable via URL.
        RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2})
        # Return 403 Forbidden header and show the content of the root homepage
        RewriteRule .* index.php [F]
        #
        ## End - Rewrite rules to block out some common exploits.

        ## Begin - Custom redirects
        #
        # If you need to redirect some pages, or set a canonical non-www to
        # www redirect (or vice versa), place that code here. Ensure those
        # redirects use the correct RewriteRule syntax and the [R=301,L] flags.
        #
        ## End - Custom redirects

        ##
        # Uncomment following line if your webserver's URL
        # is not directly related to physical file paths.
        # Update Your Joomla! Directory (just / for root).
        ##
        # RewriteBase /

        RewriteCond %{THE_REQUEST} ^GET.*index\.php [NC]
        RewriteCond %{THE_REQUEST} !/system/.*
        RewriteRule (.*?)index\.php/*(.*) /$1$2 [R=301,L]
        RewriteCond %{THE_REQUEST} ^GET

        ## Begin - Joomla! core SEF Section.
        #
        RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
        #
        # If the requested path and file is not /index.php and the request
        # has not already been internally rewritten to the index.php script
        RewriteCond %{REQUEST_URI} !^/index\.php
        # and the request is for something within the component folder,
        # or for the site root, or for an extensionless URL, or the
        # requested URL ends with one of the listed extensions
        RewriteCond %{REQUEST_URI} /component/|(/[^.]*|\.(php|html?|feed|pdf|vcf|raw))$ [NC]
        # and the requested path and file doesn't directly match a physical file
        RewriteCond %{REQUEST_FILENAME} !-f
        # and the requested path and file doesn't directly match a physical folder
        RewriteCond %{REQUEST_FILENAME} !-d
        # internally rewrite the request to the index.php script
        RewriteRule .* index.php [L]
        #
        ## End - Joomla! core SEF Section.

        <FilesMatch "\.(ico|pdf|flv|jpg|ttf|jpg|jpeg|png|gif|js|css|swf)$">
        Header set Expires "Wed, 15 Apr 2020 20:00:00 GMT"
        Header set Cache-Control "public"
        </FilesMatch>

        <ifModule mod_headers.c>
        Header set Connection keep-alive
        </ifModule>

        ########## Begin - Remove Etags
        # FileETag none
        #
        ########## End - Remove Etags
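    [Editor's note: a hedged sketch, not part of the original question. Because mod_rewrite rules run in file order, the non-www redirect above fires before the robots rules ever match. One alternative is to route robots.txt through a small PHP script that picks the policy by hostname; the file names below are hypothetical.]

        <?php
        // robots.php - hypothetical router, a minimal sketch.
        // Assumes an .htaccess rule near the top rewrites robots.txt here, e.g.:
        //   RewriteRule ^robots\.txt$ /robots.php [L]
        header('Content-Type: text/plain');

        // Serve the restrictive policy only to the mirror subdomain.
        if (stripos($_SERVER['HTTP_HOST'], 'static.') !== false) {
            readfile(__DIR__ . '/robots_static.txt');
        } else {
            readfile(__DIR__ . '/robots_main.txt'); // hypothetical name for the normal policy
        }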


  • What Counts for A DBA: Observant

    - by drsql
    When walking up to the building where I work, I can see CCTV cameras placed here and there to monitor access to the building. We are required to wear authorization badges which can be checked at any time. Do we have enemies? Of course! No one is 100% safe; even if your life is a fairy tale, there is always a witch with an apple waiting to snack you into a thousand years of slumber (or at least so I recollect from elementary school). Even Little Bo Peep had to keep a wary lookout.

    We nerdy types (or maybe it was just me?) generally learned on the school playground to keep an eye open for unprovoked attack from simpler, but more muscular souls, and to take steps to avoid messy confrontations well in advance. After we'd apprehensively negotiated adulthood with varying degrees of success, these skills of watching for danger, and avoiding it, translated quite well to the technical careers so many of us were destined for. And nowhere is this talent for watching out for irrational malevolence more appropriate than in a career as a production DBA.

    It isn't always active malevolence that the DBA needs to watch out for, but the even scarier quirks of common humanity. A large number of the issues that occur in the enterprise happen randomly, or even just once ever, in a spurious manner - as in the case where a person decided to download the entire MSDN library of software, cross join every non-indexed billion-row table together, and simultaneously stream the HD feeds of 5 different sporting events, making network access slow just as the corporate online sale started. A decent DBA team gets tough when the going does. They spring into action, check all of the sources of active information, observe that the issue is no longer happening, and figure that either it wasn't the database's fault or that the reboot of whatever device on the network fixed the problem. This sort of reactive support is good, and will be the initial reaction of even excellent DBAs, but it is not the end of the story if you really want to know what happened and avoid getting called again when it isn't even your fault.

    When fires start raging within the corporate software forest, the DBA's instinct is to actively find a way to douse the flames and get back to having no one in the company have any idea who they are. Even better is to find a way of killing a potential problem while the fires are small, long before they can be classified as raging. The observant DBA will have already been monitoring the server environment for months in advance. Most troubles, such as disk space and security intrusions, can be predicted and dealt with by alerting systems, whereas other trouble can come out of the blue and requires a skill of observing ongoing conditions and noticing inexplicable changes that could signal an emerging problem. You can't automate the DBA, because the bankable skill of a DBA is in detecting the early signs of unexpected problems, and working out how to deal with them before anyone else notices them.

    To achieve this, the DBA will check the situation as it is currently happening, and in many cases is likely to have been the person who submitted the problem to the level 1 support person in the first place, just to let the support team know of impending issues (always well received, I tell you what!). Database and host computer settings, configurations, and even critical data might be profiled and captured for later comparisons. He'll use monitoring tools - built-in, commercial (not to be too crassly commercial or anything, but one such tool is SQL Monitor) and lots of homebrew tools - to watch for problems and changes in the server environment. You will know that you have it right when a support call comes in and you can look at your monitoring tools and quickly respond that "response time is well within the normal range, the query that supports the failing interface works perfectly and has actually only been called 67% as often as normal, so I am more than willing to help diagnose the problem, but it isn't the database server's fault and is probably a client or networking slowdown causing the interface to be used less frequently than normal." And that is the best thing for any DBA to observe…


  • If I were in a Silverlight focus group, here are ten things I would say.

    - by mbcrump
    Silverlight is a great product right off the shelf. I use it, love it, and spend a lot of time helping the community understand it. This, however, doesn't mean that I don't think it can get better. If I were invited to a Microsoft focus group about Silverlight, here are ten things I would say:

    1. We need more navigation templates. I've found four templates that Microsoft has released (Cosmo, Windows 7, Accent and JetPack). This number needs to be around 16. To get more people developing for Silverlight, we need to give them a variety of templates to get them off the ground quickly.
    2. Silverlight needs to ship with the next version of Windows. At least version 4 needs to be pre-installed on Windows going forward. It's small, runs in its own sandbox, and I cannot find a reason for it not to be included.
    3. Silverlight needs to run on more platforms. iOS and Android are the key here. I think Microsoft should shoot for Android first, since I believe Android will take the lead in the mobile market (at least for the short term). It would also be great to see Microsoft use Silverlight as the focus of their new tablets / "AppleTV". I would even invest in getting it working with Kinect.
    4. When creating a new project in Silverlight, we should have the option to create a unit test. Most Silverlight developers are not unit testing. If this is surprising to you, then you need to get out and talk to more developers. I partially blame this on Microsoft: when you create a new ASP.NET MVC application, you simply tick a check box to create a unit test project. We need the same thing for Silverlight. We should steer the developer in the right direction.
    5. Design patterns such as MVVM need to be easier to implement in Silverlight solutions. I'd go so far as to say that MVVM Light should ship with Visual Studio. With the project/item templates and code snippets, Laurent points you in the right direction. This is the way it should have been: easy for the 9-to-5 developer to grasp. I believe the majority of developers use code-behind because that's what is in all the demos provided by Microsoft. It isn't that they are trying to write sucky code; they simply don't know a better way.
    6. XAP files should be obfuscated and unused references deleted by default when in "Release" mode. A better Silverlight experience starts with a smaller XAP file: the less a user has to download, the better, even with the majority of people on broadband. I would also recommend built-in obfuscation by Microsoft. People are paranoid that someone can rename the .xap to .zip and run it through Reflector.
    7. Get rid of the boring install experiences. There is a great write-up on what I'm talking about. The default "Install Silverlight" and "Loading" screens suck. They suck bad. We need a choice of templates that a professional designer has created.
    8. Silverlight needs to support more image formats. For example, it would be great to use .gifs without converting them to .png.
    9. Switching between Blend 4 and VS2010 to develop a Silverlight application is a pain. This is probably one of the biggest issues, and I can't think of a good solution for it. It would be nice if VS2012 had the best of both worlds and you never had to leave VS.
    10. We need reporting controls, with SSRS support, included with the Silverlight Toolkit. I can't think of another control that we need built into the toolkit. It would also be helpful to have export to .xls, .pdf and .doc included with the control.

    I hope that this post will at least get a few people talking. Who knows, Microsoft could be working on these things right now. Thanks for reading! Subscribe to my feed | CodeProject


  • Part 4 of 4 : Tips/Tricks for Silverlight Developers.

    - by mbcrump
    Part 1 | Part 2 | Part 3 | Part 4

    I wanted to create a series of blog posts that gets right to the point and is aimed specifically at Silverlight developers. The most important things I want this series to answer are: What is it? Why do I care? How do I do it? I hope that you enjoy this series. Let's get started:

    Tip/Trick #16)
    What is it? Find out version information about Silverlight and which WebKit it is using by going to http://issilverlightinstalled.com/scriptverify/.
    Why do I care? I've had those users where it's just easier to give them a site and say "copy/paste the line that says User Agent" in order to troubleshoot a Silverlight problem. I've also been debugging my own Silverlight applications and needed an easy way to determine if the plugin is disabled or not.
    How do I do it: Simply navigate to http://issilverlightinstalled.com/scriptverify/ and hit the Verify button. (Example screenshots: results from Chrome 7; results from Internet Explorer 8 with Silverlight disabled.)

    Tip/Trick #17)
    What is it? Use lambdas whenever you can.
    Why do I care? It is my personal opinion that code is easier to read using lambdas, after you get past the syntax.
    How do I do it: For example, you may write code like the following:

        void MainPage_Loaded(object sender, RoutedEventArgs e)
        {
            // Check and see if we have a newer .XAP file on the server
            Application.Current.CheckAndDownloadUpdateAsync();
            Application.Current.CheckAndDownloadUpdateCompleted +=
                new CheckAndDownloadUpdateCompletedEventHandler(Current_CheckAndDownloadUpdateCompleted);
        }

        void Current_CheckAndDownloadUpdateCompleted(object sender, CheckAndDownloadUpdateCompletedEventArgs e)
        {
            if (e.UpdateAvailable)
            {
                MessageBox.Show(
                    "An update has been installed. To see the updates please exit and restart the application");
            }
        }

    To me this style forces me to look for the other method to see what the code is actually doing. The style below is much easier to read, in my opinion, and does the exact same thing.

        void MainPage_Loaded(object sender, RoutedEventArgs e)
        {
            // Check and see if we have a newer .XAP file on the server
            Application.Current.CheckAndDownloadUpdateAsync();
            // (lambda parameters renamed so they don't collide with the outer 'e')
            Application.Current.CheckAndDownloadUpdateCompleted += (s, args) =>
            {
                if (args.UpdateAvailable)
                {
                    MessageBox.Show(
                        "An update has been installed. To see the updates please exit and restart the application");
                }
            };
        }

    Tip/Trick #18)
    What is it? Prevent development web service references from breaking when Visual Studio auto-generates a new port number.
    Why do I care? We have all been there: we are developing a Silverlight application and all of a sudden our development web services break. We check and find out that the local port number that Visual Studio assigned has changed, and now we need to update all of our service references. We need a way to stop this.
    How do I do it: This can actually be prevented with just a few mouse clicks. Right-click on your web solution, go to Properties, and click the tab that says Web. You just need to click the radio button and specify a port number. Now you won't be bothered with that anymore.

    Tip/Trick #19)
    What is it? You can disable the close button on a ChildWindow.
    Why do I care? I wouldn't blog about it if I hadn't seen it: devs trying to override keystrokes to prevent users from closing a ChildWindow.
    How do I do it: A property exists on the ChildWindow called "HasCloseButton"; you simply change that to false and your close button is gone. You can delete the "Cancel" button and add some logic to the OK button if you want the user to respond before proceeding.

    Tip/Trick #20)
    What is it? Clean up your XAML.
    Why do I care? By removing unneeded namespaces, not naming all of your controls, and getting rid of designer markup, you can improve code quality and readability.
    How do I do it: (This is a 3-in-one tip.)
    1) Remove unused designer markup. Have you ever wondered what the following code snippet does?

        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480"

    This code is telling the designer to do something special with this page in "design mode" - specifically, the width and the height of the page. When it's running in the browser it will not use this information; it is actually ignored by the XAML parser. In other words, if you don't need it then delete it.
    2) If you are not using a namespace then remove it. In the code sample below, I am using ReSharper, which tells me the ones that I'm not using by graying them out. If you don't have ReSharper you can look in your XAML and manually remove the unneeded namespaces.
    3) Don't name a control unless you actually need to refer to it in procedural code. If you name a control you will take a slight performance hit that is totally unnecessary if it's not being called.

        <TextBlock Height="23" Text="TextBlock" />

    That is the end of the series. I hope that you enjoyed it, and please check out Part 1 | Part 2 | Part 3 if you're hungry for more. Subscribe to my feed | CodeProject


  • Using Subjects to Deploy Queries Dynamically

    - by Roman Schindlauer
    In the previous blog posting, we showed how to construct and deploy query fragments to a StreamInsight server, and how to re-use them later. In today's posting we'll integrate this pattern into a method of dynamically composing a new query with an existing one.

    The construct that enables this scenario in StreamInsight V2.1 is a Subject. A Subject lets me create a junction element in an existing query that I can tap into while the query is running. To set this up as an end-to-end example, let's first define a stream simulator as our data source:

        var generator = myApp.DefineObservable(
            (TimeSpan t) => Observable.Interval(t).Select(_ => new SourcePayload()));

    This 'generator' produces a new instance of SourcePayload with a period of t (system time) as an IObservable. SourcePayload happens to have a property of type double as its payload data. Let's also define a sink for our example - an IObserver of double values that writes to the console:

        var console = myApp.DefineObserver(
            (string label) => Observer.Create<double>(e => Console.WriteLine("{0}: {1}", label, e)))
            .Deploy("ConsoleSink");

    The observer takes a string as parameter which is used as a label on the console, so that we can distinguish the output of different sink instances. Note that we also deploy this observer, so that we can retrieve it later from the server from a different process.

    Remember how we defined the aggregation as an IQStreamable function in the previous article? We will use that as well:

        var avg = myApp
            .DefineStreamable((IQStreamable<SourcePayload> s, TimeSpan w) =>
                from win in s.TumblingWindow(w)
                select win.Avg(e => e.Value))
            .Deploy("AverageQuery");

    Then we define the Subject, which acts as an observable sequence as well as an observer. Thus, we can feed a single source into the Subject and have multiple consumers - that can come and go at runtime - on the other side:

        var subject = myApp.CreateSubject("Subject", () => new Subject<SourcePayload>());

    Subjects are always deployed automatically. Their name is used to retrieve them from a (potentially) different process (see below).

    Note that the Subject as we defined it here doesn't know anything about temporal streams. It is merely a sequence of SourcePayloads, without any notion of StreamInsight point events or CTIs. So in order to compose a temporal query on top of the Subject, we need to 'promote' the sequence of SourcePayloads into an IQStreamable of point events, including CTIs:

        var stream = subject.ToPointStreamable(
            e => PointEvent.CreateInsert<SourcePayload>(e.Timestamp, e),
            AdvanceTimeSettings.StrictlyIncreasingStartTime);

    In a later posting we will show how to use Subjects that have more awareness of time and can be used as a junction between QStreamables instead of IQbservables.

    Having turned the Subject into a temporal stream, we can now define the aggregate on this stream. We will use the IQStreamable entity avg that we defined above:

        var longAverages = avg(stream, TimeSpan.FromSeconds(5));

    In order to run the query, we need to bind it to a sink, and bind the subject to the source:

        var standardQuery = longAverages
            .Bind(console("5sec average"))
            .With(generator(TimeSpan.FromMilliseconds(300)).Bind(subject));

    Lastly, we start the process:

        standardQuery.Run("StandardProcess");

    Now we have a simple query running end-to-end, producing results.

    What follows next is the crucial part of tapping into the Subject and adding another query that runs in parallel, using the same query definition (the "AverageQuery") but with a different window length. We are assuming that we connected to the same StreamInsight server from a different process or even client, and thus have to retrieve the previously deployed entities through their names:

        // simulate the addition of a 'fast' query from a separate server connection,
        // by retrieving the aggregation query fragment
        // (instead of simply using the 'avg' object)
        var averageQuery = myApp
            .GetStreamable<IQStreamable<SourcePayload>, TimeSpan, double>("AverageQuery");

        // retrieve the input sequence as a subject
        var inputSequence = myApp
            .GetSubject<SourcePayload, SourcePayload>("Subject");

        // retrieve the registered sink
        var sink = myApp.GetObserver<string, double>("ConsoleSink");

        // turn the sequence into a temporal stream
        var stream2 = inputSequence.ToPointStreamable(
            e => PointEvent.CreateInsert<SourcePayload>(e.Timestamp, e),
            AdvanceTimeSettings.StrictlyIncreasingStartTime);

        // apply the query, now with a different window length
        var shortAverages = averageQuery(stream2, TimeSpan.FromSeconds(1));

        // bind new sink to query and run it
        var fastQuery = shortAverages
            .Bind(sink("1sec average"))
            .Run("FastProcess");

    The attached solution demonstrates the sample end-to-end.

    Regards,
    The StreamInsight Team


  • C++ strongly typed typedef

    - by Kian
    I've been trying to think of a way of declaring strongly typed typedefs, to catch a certain class of bugs in the compilation stage. It's often the case that I'll typedef an int into several types of ids, or a vector to position or velocity:

        typedef int EntityID;
        typedef int ModelID;
        typedef Vector3 Position;
        typedef Vector3 Velocity;

    This can make the intent of code more clear, but after a long night of coding one might make silly mistakes like comparing different kinds of ids, or adding a position to a velocity perhaps.

        EntityID eID;
        ModelID mID;
        if ( eID == mID ) // <- Compiler sees nothing wrong
        { /*bug*/ }

        Position p;
        Velocity v;
        Position newP = p + v; // bug, meant p + v*s but compiler sees nothing wrong

    Unfortunately, suggestions I've found for strongly typed typedefs include using Boost, which at least for me isn't a possibility (I do have C++11 at least). So after a bit of thinking, I came upon this idea, and wanted to run it by someone.

    First, you declare the base type as a template. The template parameter isn't used for anything in the definition, however:

        template < typename T >
        class IDType
        {
            unsigned int m_id;
        public:
            IDType( unsigned int const& i_id ): m_id {i_id} {};
            friend bool operator==<T>( IDType<T> const& i_lhs, IDType<T> const& i_rhs );
        };

    Friend functions actually need to be forward declared before the class definition, which requires a forward declaration of the template class. We then define all the members for the base type, just remembering that it's a template class. Finally, when we want to use it, we typedef it as:

        class EntityT;
        typedef IDType<EntityT> EntityID;
        class ModelT;
        typedef IDType<ModelT> ModelID;

    The types are now entirely separate. Functions that take an EntityID will throw a compiler error if you try to feed them a ModelID instead, for example. Aside from having to declare the base types as templates, with the issues that entails, it's also fairly compact.

    I was hoping anyone had comments or critiques about this idea? One issue that came to mind while writing this, in the case of positions and velocities for example, is that I can't convert between types as freely as before. Where before multiplying a vector by a scalar would give another vector, so I could do:

        typedef float Time;
        typedef Vector3 Position;
        typedef Vector3 Velocity;

        Time t = 1.0f;
        Position p = { 0.0f };
        Velocity v = { 1.0f, 0.0f, 0.0f };
        Position newP = p + v*t;

    With my strongly typed typedef I'd have to tell the compiler that multiplying a Velocity by a Time results in a Position.

        class TimeT;
        typedef Float<TimeT> Time;
        class PositionT;
        typedef Vector3<PositionT> Position;
        class VelocityT;
        typedef Vector3<VelocityT> Velocity;

        Time t = 1.0f;
        Position p = { 0.0f };
        Velocity v = { 1.0f, 0.0f, 0.0f };
        Position newP = p + v*t; // Compiler error

    To solve this, I think I'd have to specialize every conversion explicitly, which can be kind of a bother. On the other hand, this limitation can help prevent other kinds of errors (say, multiplying a Velocity by a Distance, perhaps, which wouldn't make sense in this domain). So I'm torn, and wondering if people have any opinions on my original issue, or my approach to solving it.


  • OpenLayers - Redraw() layer. Add / Remove layer.

    - by Ozaki
    TLDR: I have an OpenLayers map with a layer called 'track'. I want to remove 'track' and add it back in. I have an image 'imageFeature' on a layer that rotates on load to the direction being set. I want it to update this rotation that is set in 'styleMap' on a layer called 'tracking'.

    I set the var 'styleMap' to apply the external image & rotation. The 'imageFeature' is added to the layer at the coords specified. 'imageFeature' is removed. 'imageFeature' is added again in its new location. Rotation is not applied. As the 'styleMap' applies to the layer, I think that I have to remove the layer and add it again rather than just the 'imageFeature'.

    Layer:

        var tracking = new OpenLayers.Layer.GML("Tracking", "coordinates.json", {
            format: OpenLayers.Format.GeoJSON,
            styleMap: styleMap
        });

    styleMap:

        var styleMap = new OpenLayers.StyleMap({
            fillOpacity: 1,
            pointRadius: 10,
            rotation: heading,
        });

    Now, wrapped in a timed function, the imageFeature:

        map.layers[3].addFeatures(new OpenLayers.Feature.Vector(
            new OpenLayers.Geometry.Point(longitude, latitude),
            {rotation: heading, type: parseInt(Math.random() * 3)}
        ));

    Type refers to a lookup of 1 of 3 images:

        styleMap.addUniqueValueRules("default", "type", lookup);
        var lookup = {
            0: {externalGraphic: "Image1.png", rotation: heading},
            1: {externalGraphic: "Image2.png", rotation: heading},
            2: {externalGraphic: "Image3.png", rotation: heading}
        }

    I have tried the 'redraw()' function, but it returns "tracking is undefined" or "map.layers[2] is undefined":

        tracking.redraw(true);
        map.layers[2].redraw(true);

    Heading is a variable from a JSON feed:

        var heading = 13.542;

    But so far I can't get anything to work; it will only rotate the image on load. The image will move in coordinates as it should, though. So what am I doing wrong with the redraw function, and how can I get this image to rotate live? Thanks in advance -Ozaki

    Add: I managed to get map.layers[2].redraw(true); to successfully redraw layer 2, but it still does not update the rotation. I am thinking it is because the styleMap is not updating: it runs through the styleMap every n seconds, but there are no updates to the rotation, even though the heading variable is updating correctly if I put a watch on it in Firebug.


  • Deploying MVC2 application to IIS7.5 - Ninject asked to provide controllers for content files

    - by Rune Jacobsen
    I have an application that started life as an MVC (1.0) app in Visual Studio 2008 SP1, with a bunch of Silverlight 3 projects as part of the site. Nothing fancy at all. Using Ninject for dependency injection (first version 2 beta, now the released version 2 with the MVC extensions). With the release of .NET 4.0, VS2010, MVC2 etc., we decided to move the application to the newest platform. The conversion wizard in VS2010 apparently took care of everything, with one exception: it didn't change references to MVC1 to point to MVC2, so I had to do that manually. Of course, this makes me think about other MVC2 things that could be missing from my app that would be there if I did File - New Project... But that is not the focus of this question.

    When I deploy this application to the IIS 7.5 server (running on Win2008 R2 x64), the application as such works. However, images, scripts and other static content don't seem to exist. Of course they are there on disk on the server, but they don't show up in the client web browser. I am fairly new to IIS, so the only trick I knew was to open the web page in a browser on the server, as that could give me more information. And here, finally, we meet our enemy. If I try to go directly to the URL of one of the images (http://server/Content/someimage.jpg for instance), I get the following error in the browser:

        The IControllerFactory 'Ninject.Web.Mvc.NinjectControllerFactory' did not
        return a controller for a controller named 'Content'.

    Aha. The web server tries to feed this request to MVC, which, with its default routing setup, assumes Content to be a controller and fails. How can I get it to treat Content/ and Scripts/ (among others) as non-controllers and just pass the static content through? This of course works with Cassini on my developer machine, but as soon as I deploy, this problem hits. I am using the last version of Ninject MVC 2, where the IoC tool should pass missing controllers to the base controller factory, but this has apparently not helped. I have also tried to add ignore routes for Content etc., but this apparently has no effect either. I am not even sure I am addressing the problem on the right level. Does anyone know where to look to get this app going? I have full control of the web server, so I can more or less do whatever I want to it, as long as it starts working. Thanks!


  • Long-running ASP.NET tasks

    - by John Leidegren
    I know there's a bunch of APIs out there that do this, but I also know that the hosting environment (being ASP.NET) puts restrictions on what you can reliably do in a separate thread. I could be completely wrong, so please correct me if I am; this is, however, what I think I know:

    - A request typically times out after 120 seconds (this is configurable), and eventually the ASP.NET runtime will kill a request that's taking too long to complete.
    - The hosting environment, typically IIS, employs process recycling and can at any point decide to recycle your app. When this happens all threads are aborted and the app restarts. I'm however not sure how aggressive this is; it would be kind of stupid to assume that it would abort a normal ongoing HTTP request, but I would expect it to abort a thread, because it doesn't know anything about the unit of work of a thread.

    If you had to create a programming model that easily and reliably ran a long-running task - one that might have to run for days - how would you accomplish this from within an ASP.NET application?

    The following are my thoughts on the issue. I've been thinking along the lines of hosting a WCF service in a Win32 service, and talking to the service through WCF. This is however not very practical, because the only reason I would choose to do so is to send tasks (units of work) from several different web apps. I'd then eventually ask the service for status updates and act accordingly. My biggest concern with this is that it would NOT be a particularly great experience if I had to deploy every task to the service for it to be able to execute some instructions. There's also the issue of input: how would I feed this service with data if I had a large data set and needed to chew through it?

    What I typically do right now is this:

        SELECT TOP 10 *
        FROM WorkItem WITH (ROWLOCK, UPDLOCK, READPAST)
        WHERE WorkCompleted IS NULL

    It allows me to use a SQL Server database as a work queue and periodically poll the database with this query for work. If the work item completed with success, I mark it as done and proceed until there's nothing more to do. What I don't like is that I could theoretically be interrupted at any point, and if I'm in between success and marking it as done, I could end up processing the same work item twice. I might be a bit paranoid and this might all be fine, but as I understand it there's no guarantee that that won't happen...

    I know there have been similar questions on SO before, but none really answers with a definitive answer. This is a really common thing, yet the ASP.NET hosting environment is ill equipped to handle long-running work. Please share your thoughts.


  • WPF RowDetailsTemplate width issue

    - by Ed Courtenay
    Apologies if this is a dupe, but I can't seem to find a rational solution for what must be a fairly simple issue.

        <Window x:Class="FeedTest.MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="MainWindow" Height="350" Width="525">
            <Window.Resources>
                <XmlNamespaceMappingCollection x:Key="map">
                    <XmlNamespaceMapping Prefix="media" Uri="http://search.yahoo.com/mrss/" />
                </XmlNamespaceMappingCollection>
                <XmlDataProvider x:Key="newsFeed"
                                 XPath="//item[string-length(title)>0]"
                                 Source="http://newsrss.bbc.co.uk/rss/newsonline_uk_edition/uk/rss.xml" />
                <DataTemplate x:Key="rowDetailTemplate">
                    <Border BorderThickness="2">
                        <StackPanel Orientation="Horizontal">
                            <Image Source="{Binding XPath=media:thumbnail/@url}" Width="66" Height="49" />
                            <StackPanel Orientation="Vertical" Margin="5">
                                <TextBlock Text="{Binding XPath=description}" TextWrapping="Wrap" />
                            </StackPanel>
                        </StackPanel>
                    </Border>
                </DataTemplate>
                <Style TargetType="{x:Type DataGrid}">
                    <Setter Property="GridLinesVisibility" Value="None" />
                </Style>
            </Window.Resources>
            <Grid Binding.XmlNamespaceManager="{StaticResource map}">
                <DataGrid AutoGenerateColumns="False"
                          ItemsSource="{Binding Source={StaticResource newsFeed}}"
                          RowDetailsVisibilityMode="VisibleWhenSelected"
                          RowDetailsTemplate="{StaticResource rowDetailTemplate}">
                    <DataGrid.Columns>
                        <DataGridTextColumn Header="Title" Binding="{Binding XPath=title}" MinWidth="150" Width="*" />
                    </DataGrid.Columns>
                </DataGrid>
            </Grid>
        </Window>

    The attached XAML gets a news feed and displays the title of each item in a DataGrid. Selecting an item shows the RowDetailsTemplate, which is where my problem lies: why does the RowDetailsTemplate expand beyond the width of the containing DataGrid (thus forcing a horizontal scrollbar), and more importantly, how do I stop it doing this? Many thanks.


  • PostgreSQL to Data-Warehouse: Best approach for near-real-time ETL / extraction of data

    - by belvoir
    Background: I have a PostgreSQL (v8.3) database that is heavily optimized for OLTP. I need to extract data from it on a semi-real-time basis (someone is bound to ask what semi-real-time means, and the answer is: as frequently as I reasonably can, but I will be pragmatic; as a benchmark let's say we are hoping for every 15 min) and feed it into a data warehouse.

    How much data? At peak times we are talking approx 80-100k rows per min hitting the OLTP side; off-peak this will drop significantly to 15-20k. The most frequently updated rows are ~64 bytes each, but there are various tables etc., so the data is quite diverse and can range up to 4000 bytes per row. The OLTP is active 24x5.5.

    Best solution? From what I can piece together, the most practical solution is as follows:

    - Create a TRIGGER to write all DML activity to a rotating CSV log file
    - Perform whatever transformations are required
    - Use the native DW data pump tool to efficiently pump the transformed CSV into the DW

    Why this approach? TRIGGERS allow selective tables to be targeted rather than being system wide, their output is configurable (i.e. into a CSV), and they are relatively easy to write and deploy. SLONY uses a similar approach and the overhead is acceptable. CSV is easy and fast to transform, and it is easy to pump CSV into the DW.

    Alternatives considered:

    - Using native logging (http://www.postgresql.org/docs/8.3/static/runtime-config-logging.html). The problem with this is that it looked very verbose relative to what I needed, and was a little trickier to parse and transform. However it could be faster, as I presume there is less overhead compared to a TRIGGER. Certainly it would make the admin easier, as it is system wide, but again, I don't need some of the tables (some are used for persistent storage of JMS messages which I do not want to log).
    - Querying the data directly via an ETL tool such as Talend and pumping it into the DW. The problem is the OLTP schema would need to be tweaked to support this, and that has many negative side-effects.
    - Using a tweaked/hacked SLONY. SLONY does a good job of logging and migrating changes to a slave, so the conceptual framework is there, but the proposed solution just seems easier and cleaner.
    - Using the WAL.

    Has anyone done this before? Want to share your thoughts?


  • Core Data NSPredicate for relationships.

    - by Mugunth Kumar
    My object graph is simple. I have a FeedEntry object that stores info about RSS feeds, and a relationship called Tag that links to a TagValues object. Both the relationship and its inverse are to-many; i.e., a feed can have multiple tags and a tag can be associated with multiple feeds. I referred to http://stackoverflow.com/questions/844162/how-to-do-core-data-queries-through-a-relationship and created an NSFetchRequest. But when I fetch data, I get an exception stating:

        NSInvalidArgumentException: unimplemented SQL generation for predicate

    What should I do? I'm a newbie to Core Data :( I know I've done something terribly wrong... Please help... Thanks

        NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
        // Edit the entity name as appropriate.
        NSEntityDescription *entity = [NSEntityDescription entityForName:@"FeedEntry"
                                                  inManagedObjectContext:managedObjectContext];
        [fetchRequest setEntity:entity];
        // Edit the sort key as appropriate.
        NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"authorname" ascending:NO];
        NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
        [fetchRequest setSortDescriptors:sortDescriptors];

        NSEntityDescription *tagEntity = [NSEntityDescription entityForName:@"TagValues"
                                                     inManagedObjectContext:self.managedObjectContext];
        NSPredicate *tagPredicate = [NSPredicate predicateWithFormat:@"tagName LIKE[c] 'nyt'"];
        NSFetchRequest *tagRequest = [[NSFetchRequest alloc] init];
        [tagRequest setEntity:tagEntity];
        [tagRequest setPredicate:tagPredicate];

        NSError *error = nil;
        NSArray* predicates = [self.managedObjectContext executeFetchRequest:tagRequest error:&error];
        TagValues *tv = (TagValues*) [predicates objectAtIndex:0];
        NSLog(tv.tagName); // it is nyt here...

        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"tag IN %@", predicates];
        [fetchRequest setPredicate:predicate];

        // Edit the section name key path and cache name if appropriate.
        // nil for section name key path means "no sections".
        NSFetchedResultsController *aFetchedResultsController =
            [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest
                                                managedObjectContext:managedObjectContext
                                                  sectionNameKeyPath:nil
                                                           cacheName:@"Root"];
        aFetchedResultsController.delegate = self;
        self.fetchedResultsController = aFetchedResultsController;


  • ContentType Issue -- Human is an idiot - Can't figure out how to tie the original model to a Content

    - by bmelton
    Originally started here: http://stackoverflow.com/questions/2650181/django-in-query-as-a-string-result-invalid-literal-for-int-with-base-10

    I have a number of apps within my site, and am currently working with a simple "Blog" app. I have developed a 'Favorite' app, easily enough, that leverages the ContentType framework in Django to allow me to have a 'favorite' of any type. Trying to go the other way, however, I don't know what I'm doing, and can't find any examples for it. I'll start off with the favorite model:

    favorite/models.py

        from django.db import models
        from django.contrib.contenttypes.models import ContentType
        from django.contrib.contenttypes import generic
        from django.contrib.auth.models import User

        class Favorite(models.Model):
            content_type = models.ForeignKey(ContentType)
            object_id = models.PositiveIntegerField()
            user = models.ForeignKey(User)
            content_object = generic.GenericForeignKey()

            class Admin:
                list_display = ('key', 'id', 'user')

            class Meta:
                unique_together = ("content_type", "object_id", "user")

    Now, that allows me to loop through the favorites (on a user's "favorites" page, for example) and get the associated blog objects via {{ favorite.content_object.title }}. What I want now, and can't figure out, is what I need to do to the Blog model to have some tether back to the favorite (so that when it is displayed in a list it can be highlighted, for example). Here is the blog model:

    blog/models.py

        from django.db import models
        from django.db.models import permalink
        from django.template.defaultfilters import slugify
        from category.models import Category
        from section.models import Section
        from favorite.models import Favorite
        from django.contrib.auth.models import User
        from django.contrib.contenttypes.models import ContentType
        from django.contrib.contenttypes import generic

        class Blog(models.Model):
            title = models.CharField(max_length=200, unique=True)
            slug = models.SlugField(max_length=140, editable=False)
            author = models.ForeignKey(User)
            homepage = models.URLField()
            feed = models.URLField()
            description = models.TextField()
            page_views = models.IntegerField(null=True, blank=True, default=0)
            created_on = models.DateTimeField(auto_now_add = True)
            updated_on = models.DateTimeField(auto_now = True)

            def __unicode__(self):
                return self.title

            @models.permalink
            def get_absolute_url(self):
                return ('blog.views.show', [str(self.slug)])

            def save(self, *args, **kwargs):
                if not self.slug:
                    slug = slugify(self.title)
                    duplicate_count = Blog.objects.filter(slug__startswith = slug).count()
                    if duplicate_count:
                        slug = slug + str(duplicate_count)
                    self.slug = slug
                super(Blog, self).save(*args, **kwargs)

        class Entry(models.Model):
            blog = models.ForeignKey('Blog')
            title = models.CharField(max_length=200)
            slug = models.SlugField(max_length=140, editable=False)
            description = models.TextField()
            url = models.URLField(unique=True)
            image = models.URLField(blank=True, null=True)
            created_on = models.DateTimeField(auto_now_add = True)

            def __unicode__(self):
                return self.title

            def save(self, *args, **kwargs):
                if not self.slug:
                    slug = slugify(self.title)
                    duplicate_count = Entry.objects.filter(slug__startswith = slug).count()
                    if duplicate_count:
                        slug = slug + str(duplicate_count)
                    self.slug = slug
                super(Entry, self).save(*args, **kwargs)

            class Meta:
                verbose_name = "Entry"
                verbose_name_plural = "Entries"

    Any guidance?


  • realtime logging

    - by Ion Todirel
    I have an application which has a loop, part of a "Scheduler", which runs at all times and is the heart of the application. Pretty much like a game loop, except that my application is a WPF application and it's not a game. Naturally the application does logging at many points, but the Scheduler does some sensitive monitoring, and sometimes it's impossible to tell from the logs alone what may have gone wrong (and by wrong I don't mean exceptions) or what the current status is.

    Because the Scheduler's inner loop runs at short intervals, you can't do file-I/O-based logging (or use the Event Viewer) in there. First, you need to watch it in real time, and secondly the log file would grow in size very fast. So I was thinking of ways to show this data to the user in real time. Some things I considered:

    1. Display the data in real time in the UI
    2. Use AllocConsole/WriteConsole to display this information in a console
    3. Use a different console application which would display this information, communicating between the Scheduler and the console app using pipes or other IPC techniques
    4. Use Windows' Performance Monitor and somehow feed it with this information
    5. ETW

    Displaying in the UI would have its issues. First, it doesn't integrate with the UI I had in mind for my application, and I don't want to complicate the UI just for this; this diagnostic view would only be needed rarely. Secondly, there is going to be some non-trivial data protection to deal with, as the Scheduler has its own thread.

    A separate console window would probably work, but I'm still worried it may be too heavyweight. Allocating my own console, as this is a Windows app, would probably be better than a different console application (3), as I don't need to worry about IPC or non-blocking communication. However, a user could close the console I allocated, which would be problematic; with a separate process you don't have to worry about that.

    Assuming there is an API for Performance Monitor, it wouldn't be integrated too well with my app or apparent to the users. Using ETW also doesn't solve anything by itself - just a random idea - since I still need to display this information somehow.

    What do others think? Are there other ways I missed?


  • Can't check in to Facebook Places by POST to the API?

    - by MarcusJoe
    Hey everybody, I am trying to build an app where my registered users can check in to places on Facebook Places. However, for some reason I can't seem to make this work. I assumed this is possible with the API, as write functionality has been added to it, but I couldn't find a clear explanation on the web. This is what I currently have, after I have asked the user for permission to publish checkins and for user_checkins:

        <?php
        require("src/facebook.php");

        $facebook = new Facebook(array(
            'appId'  => 'xxxxxxxxx',
            'secret' => 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
            'cookie' => true
        ));

        # see if active session
        $session = $facebook->getSession();

        if(!empty($session)) {
            try{
                $uid = $facebook->getUser();
                $api_call = array(
                    'method' => 'users.hasAppPermission',
                    'uid' => $uid,
                    'ext_perm' => 'publish_checkins'
                );
                $can_post = $facebook->api($api_call);
                if($can_post){
                    $facebook->api('/'.$uid.'/checkins', 'POST', array(
                        'access_token' => $facebook->getAccessToken(),
                        'place' => 'place_id',
                        'message' =>'I went to placename today',
                        'picture' => 'http://www.place.com/logo.jpg',
                        'coordinates' => array(
                            'latitude' => 'lattiude',
                            'longitude' => 'lattitude',
                            'tags' => $uid,
                        )
                    ));
                    echo 'You were checked in';
                } else {
                    die('Permissions required!');
                }
            } catch (Exception $e){}
        } else {
            # There's no active session, generate one
            $login_url = $facebook->getLoginUrl();
            header("Location: ".$login_url);
        }
        ?>

    The code works when I change 'checkins' to 'feed'. Is there something wrong with my code, or am I trying to do something that isn't possible (or doing it the wrong way)? Any help will be greatly appreciated, as I have already spent quite a significant amount of time trying to fix this, but I just can't seem to make it work. Best regards, Marcus Joe
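    [Editor's note, hedged: at the time, the Graph API checkins endpoint expected 'place' to be a numeric page ID and 'coordinates' to be a JSON-encoded string, and 'tags' was its own parameter rather than a key inside coordinates (as it is in the code above). A minimal sketch of that shape; the place ID and coordinates below are invented for illustration.]

        <?php
        // Sketch only - the place ID and coordinates are made up.
        $facebook->api('/' . $uid . '/checkins', 'POST', array(
            'access_token' => $facebook->getAccessToken(),
            'place'        => '110506962309835',   // numeric page ID of the place, not a name
            'message'      => 'I went to placename today',
            'coordinates'  => json_encode(array(   // JSON string, not a nested PHP array
                'latitude'  => 51.5033,
                'longitude' => -0.1197,
            )),
            'tags'         => $uid,                // its own parameter, not inside coordinates
        ));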


  • flickr phpflickr api

    - by sea_1987
    Overview: I am trying to get a photo feed onto my site using Flickr's API and the phpFlickr library. I can successfully get the photoset onto my site, but it shows all the photos from every photoset. What I was hoping to achieve was to show the primary photo from each photoset, and then, if the user clicked on the image, to show the full photoset in a lightbox/shadowbox.

    My code:

        <div id="images" class="tabnav">
            <ul class="items">
            <?php $count = 1; ?>
            <?php foreach ($photosets['photoset'] as $ph_set): ?>
                <?php $parentID = $ph_set['parent']; ?>
                <?php
                $photoset_id = $ph_set['id'];
                $photos = $f->photosets_getPhotos($photoset_id);
                foreach ($photos['photoset']['photo'] as $photo):
                ?>
                <li>
                    <a rel="shadowbox['<?=$count;?>']" href="<?= $f->buildPhotoURL($photo, 'medium') ?>" title="<?= $photo['title'] ?>">
                        <img src="<?= $f->buildPhotoURL($photo, 'rectangle') ?>" alt="<?= $photo['title'] ?>" width="210" height="160" title="<?= $photo['title'] ?>" />
                        <h3><?=$ph_set['title']?></h3>
                        <p><?=$ph_set['description'];?></p>
                    </a>
                </li>
                <?php endforeach; ?>
                <?php $count++; ?>
            <?php endforeach; ?>
            </ul>
        </div>

    Another attempt: I have also tried calling the getPhotos function with parameters instead of without:

        $photos = $f->photosets_getPhotos($photoset_id, NULL, NULL, 1, NULL);

    The above code stopped the showing of all the photos from each photoset and started showing just the primary image, but it also stopped making the rest of the photos accessible to me. Is there something I can do to make this work? I am totally out of ideas. Regards and thanks
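    [Editor's sketch, hedged: phpFlickr's photosets_getList() already returns each set's primary photo fields (primary, secret, server, farm), which is what buildPhotoURL() needs, so the thumbnails don't require a photosets_getPhotos() call per set. $user_id below is assumed to be the account's NSID.]

        <?php
        // Sketch, assuming $f is the phpFlickr instance used above.
        $sets = $f->photosets_getList($user_id);
        foreach ($sets['photoset'] as $ph_set) {
            // Assemble a pseudo-photo array describing the set's primary photo.
            $primary = array(
                'id'     => $ph_set['primary'],
                'secret' => $ph_set['secret'],
                'server' => $ph_set['server'],
                'farm'   => $ph_set['farm'],
            );
            echo '<img src="' . $f->buildPhotoURL($primary, 'rectangle') . '" />';
            // A click-through could then call photosets_getPhotos($ph_set['id'])
            // to fill the lightbox with the full set.
        }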


  • echo XML values

    - by danit
    Here is my XML:

        object(SimpleXMLElement)#1 (2) {
          ["@attributes"]=> array(1) { ["type"]=> string(5) "array" }
          ["feed"]=> array(3) {
            [0]=> object(SimpleXMLElement)#2 (6) {
              ["title"]=> string(34) "Twitter / Favorites from bob"
              ["id"]=> string(27) "tag:twitter.com,2007:Status"
              ["link"]=> array(2) {
                [0]=> object(SimpleXMLElement)#5 (1) { ["@attributes"]=> array(3) {
                  ["type"]=> string(9) "text/html"
                  ["href"]=> string(38) "http://twitter.com/bob/favorites"
                  ["rel"]=> string(9) "alternate" } }
                [1]=> object(SimpleXMLElement)#6 (1) { ["@attributes"]=> array(3) {
                  ["type"]=> string(20) "application/atom+xml"
                  ["href"]=> string(40) "http://twitter.com/favorites.atom?page=1"
                  ["rel"]=> string(4) "self" } }
              }
              ["updated"]=> string(25) "2010-04-01T10:44:19+00:00"
              ["subtitle"]=> string(56) "Twitter updates favorited by Dan Humpherson / MoodleDan."
              ["entry"]=> array(20) {
                [0]=> object(SimpleXMLElement)#7 (7) {
                  ["title"]=> string(104) "smashingmag: Sikuli: a visual technology to search and automate GUIs using images - http://bit.ly/6ArwzP"
                  ["content"]=> string(104) "smashingmag: Sikuli: a visual technology to search and automate GUIs using images - http://bit.ly/6ArwzP"
                  ["id"]=> string(72) "tag:twitter.com,2007:http://twitter.com/smashingmag/statuses/11389386545"
                  ["published"]=> string(25) "2010-03-31T22:06:41+00:00"
                  ["updated"]=> string(25) "2010-03-31T22:06:41+00:00"
                  ["link"]=> array(2) {
                    [0]=> object(SimpleXMLElement)#27 (1) { ["@attributes"]=> array(3) {
                      ["type"]=> string(9) "text/html"
                      ["href"]=> string(51) "http://twitter.com/smashingmag/statuses/11389386545"
                      ["rel"]=> string(9) "alternate" } }
                    [1]=> object(SimpleXMLElement)#28 (1) { ["@attributes"]=> array(3) {
                      ["type"]=> string(10) "image/jpeg"
                      ["href"]=> string(64) "http://a3.twimg.com/profile_images/572829723/original_normal.jpg"
                      ["rel"]=> string(5) "image" } }
                  }
                  ["author"]=> object(SimpleXMLElement)#29 (2) {
                    ["name"]=> string(17) "Smashing Magazine"
                    ["uri"]=> string(31) "http://www.smashingmagazine.com"
                  }
                }

    For the life of me I cannot get my code to work; all I want to do is echo the entry->content string, but no matter what I try I get nothing. Can anyone assist an inept PHP n00b?
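    [Editor's sketch, hedged: judging from the dump above, the entries sit under ->feed->entry. Assuming $xml holds the SimpleXMLElement shown:]

        <?php
        // Each <entry> lives under <feed>; casting to string pulls the node's text.
        foreach ($xml->feed->entry as $entry) {
            echo (string) $entry->content, "\n";
        }
        // Or just the first one:
        echo (string) $xml->feed->entry[0]->content;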


  • Creating an XML sitemap with PHP

    - by iMaster
    I'm trying to create a sitemap that will automatically update. I've done something similar with my RSS feed, but this sitemap refuses to work. You can view it live at http://designdeluge.com/sitemap.xml I think the main problem is that it's not recognizing the PHP code. Here's the full source:

        <?php header('Content-type: text/xml'); ?>
        <?xml version="1.0" encoding="UTF-8" ?>
        <urlset xmlns="http://www.google.com/schemas/sitemap/0.84"
                xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                xsi:schemaLocation="http://www.google.com/schemas/sitemap/0.84 http://www.google.com/schemas/sitemap/0.84/sitemap.xsd">
        <url>
          <loc>http://www.designdeluge.com/</loc>
          <lastmod>2010-04-29</lastmod>
          <changefreq>daily</changefreq>
          <priority>1.00</priority>
        </url>
        <url>
          <loc>http://www.designdeluge.com/archives</loc>
          <lastmod>2010-04-29</lastmod>
          <changefreq>daily</changefreq>
          <priority>0.5</priority>
        </url>
        <url>
          <loc>http://www.designdeluge.com/about.php</loc>
          <lastmod>2010-04-29</lastmod>
          <changefreq>daily</changefreq>
          <priority>0.5</priority>
        </url>
        <?php
        include 'connection.php';
        $entries = mysql_query("SELECT * FROM Entries ORDER BY timestamp DESC");
        while($row = mysql_fetch_array($entries)) {
            $title = stripslashes($row['title']);
            $date = date("Y-m-d", strtotime($row['timestamp']));
        ?>
        <url>
          <loc>http://www.designdeluge.com/<?php echo $row['title']; ?></loc>
          <lastmod><?php echo $date; ?></lastmod>
          <changefreq>none</changefreq>
          <priority>0.5</priority>
        </url>
        <?php } ?>
        </urlset>

    The problem is that the dynamic URLs (i.e. the ones pulled from the DB) aren't being generated, and the sitemap won't validate. Thanks!
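    [Editor's aside, hedged: a URL ending in .xml is normally served as a static file, so the PHP in it never runs; and when short_open_tag is enabled, the literal <?xml ... ?> line is parsed as PHP. A common sketch is to move the code into a .php file and emit the declaration with echo; the file name is hypothetical.]

        <?php
        // sitemap.php - hypothetical replacement for sitemap.xml. Either link to it
        // directly or rewrite sitemap.xml to it so the PHP actually executes.
        header('Content-Type: text/xml; charset=utf-8');
        // Emitting the declaration from PHP sidesteps the short_open_tag clash.
        echo '<?xml version="1.0" encoding="UTF-8"?>' . "\n";

        include 'connection.php';
        $entries = mysql_query("SELECT * FROM Entries ORDER BY timestamp DESC");

        echo '<urlset xmlns="http://www.google.com/schemas/sitemap/0.84">' . "\n";
        while ($row = mysql_fetch_array($entries)) {
            $date = date('Y-m-d', strtotime($row['timestamp']));
            echo '<url><loc>http://www.designdeluge.com/' . htmlspecialchars($row['title']) . '</loc>';
            echo '<lastmod>' . $date . '</lastmod></url>' . "\n";
        }
        echo '</urlset>';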


  • Releasing an OLE IStorage file handle in C#

    - by Bernard Darnton
    I'm trying to embed a PDF file into a Word document using the OLE technique described here: http://blogs.msdn.com/brian_jones/archive/2009/07/21/embedding-any-file-type-like-pdf-in-an-open-xml-file.aspx

    I've tried to implement the C++ code provided in C#, so that the whole project's in one place, and am almost there except for one roadblock. When I try to feed the generated OLE object binary data into the Word document I get an IOException:

        IOException: The process cannot access the file 'C:\Wherever\Whatever.pdf.bin'
        because it is being used by another process.

    There is a file handle open on the .bin file and I don't know how to get rid of it. I don't know a huge amount about COM - I'm winging it here - and I don't know where the file handle is or how to release it. Here's what my C#-ised code looks like. What am I missing?

        public void ExportOleFile(string oleOutputFileName, string emfOutputFileName)
        {
            OLE32.IStorage storage;
            var result = OLE32.StgCreateStorageEx(
                oleOutputFileName,
                OLE32.STGM.STGM_READWRITE | OLE32.STGM.STGM_SHARE_EXCLUSIVE |
                    OLE32.STGM.STGM_CREATE | OLE32.STGM.STGM_TRANSACTED,
                OLE32.STGFMT.STGFMT_DOCFILE,
                0,
                IntPtr.Zero,
                IntPtr.Zero,
                ref OLE32.IID_IStorage,
                out storage
            );

            var CLSID_NULL = Guid.Empty;
            OLE32.IOleObject pOle;
            result = OLE32.OleCreateFromFile(
                ref CLSID_NULL,
                _inputFileName,
                ref OLE32.IID_IOleObject,
                OLE32.OLERENDER.OLERENDER_NONE,
                IntPtr.Zero,
                null,
                storage,
                out pOle
            );

            result = OLE32.OleRun(pOle);

            IntPtr unknownFromOle = Marshal.GetIUnknownForObject(pOle);
            IntPtr unknownForDataObj;
            Marshal.QueryInterface(unknownFromOle, ref OLE32.IID_IDataObject, out unknownForDataObj);
            var pdo = Marshal.GetObjectForIUnknown(unknownForDataObj) as IDataObject;

            var fetc = new FORMATETC();
            fetc.cfFormat = (short)OLE32.CLIPFORMAT.CF_ENHMETAFILE;
            fetc.dwAspect = DVASPECT.DVASPECT_CONTENT;
            fetc.lindex = -1;
            fetc.ptd = IntPtr.Zero;
            fetc.tymed = TYMED.TYMED_ENHMF;

            var stgm = new STGMEDIUM();
            stgm.unionmember = IntPtr.Zero;
            stgm.tymed = TYMED.TYMED_ENHMF;
            pdo.GetData(ref fetc, out stgm);

            var hemf = GDI32.CopyEnhMetaFile(stgm.unionmember, emfOutputFileName);

            storage.Commit((int)OLE32.STGC.STGC_DEFAULT);
            pOle.Close(0);

            GDI32.DeleteEnhMetaFile(stgm.unionmember);
            GDI32.DeleteEnhMetaFile(hemf);
        }


  • Add rss xmlns namespace definition to a php simplexml document?

    - by talkingnews
    I'm trying to create an iTunes-valid podcast feed using PHP 5's SimpleXML:

        <?php
        $xml_string = <<<XML
        <?xml version="1.0" encoding="UTF-8"?>
        <channel>
        </channel>
        XML;

        $xml_generator = new SimpleXMLElement($xml_string);
        $tnsoundfile = $xml_generator->addChild('title', 'Main Title');
        $tnsoundfile->addChild('itunes:author', "Author", ' ');
        $tnsoundfile->addChild('category', 'Audio Podcasts');
        $tnsoundfile = $xml_generator->addChild('item');
        $tnsoundfile->addChild('title', 'The track title');
        $enclosure = $tnsoundfile->addChild('enclosure');
        $enclosure->addAttribute('url', 'http://test.com');
        $enclosure->addAttribute('length', 'filelength');
        $enclosure->addAttribute('type', 'audio/mpeg');
        $tnsoundfile->addChild('itunes:author', "Author", ' ');
        header("Content-Type: text/xml");
        echo $xml_generator->asXML();
        ?>

    It doesn't validate, because I've got to include the line:

        <rss xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" version="2.0">

    as per http://www.apple.com/itunes/podcasts/specs.html. So the output SHOULD be:

        <?xml version="1.0" encoding="UTF-8"?>
        <rss xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" version="2.0">
        <channel>

    etc. I've been over and over the manual and forums, and just can't get it right. If I put, near the footer:

        header("Content-Type: text/xml");
        echo '<rss xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" version="2.0">';
        echo $xml_generator->asXML();
        ?>

    then it sort of looks right in Firefox and it doesn't complain about undefined namespaces anymore, but feedvalidator complains:

        line 1, column 77: XML parsing error: :1:77: xml declaration not at start of external entity [help]

    because the document now starts:

        <rss xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" version="2.0"><?xml version="1.0" encoding="UTF-8"?>

    and not:

        <?xml version="1.0" encoding="UTF-8"?><rss xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" version="2.0">

    Thank you in advance.
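    [Editor's sketch, hedged: one fix is to make <rss> the root of the seed string handed to SimpleXMLElement, so the itunes namespace is declared on the document itself, then add everything under <channel>.]

        <?php
        // Sketch: the namespace is declared once on the root, and asXML() then
        // emits the XML declaration and the <rss> wrapper in the right order.
        $seed = '<?xml version="1.0" encoding="UTF-8"?>'
              . '<rss xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" version="2.0">'
              . '<channel/></rss>';
        $rss = new SimpleXMLElement($seed);
        $channel = $rss->channel;
        $channel->addChild('title', 'Main Title');
        // For itunes:* children, pass the namespace URI as the third argument.
        $channel->addChild('itunes:author', 'Author', 'http://www.itunes.com/dtds/podcast-1.0.dtd');
        header('Content-Type: text/xml');
        echo $rss->asXML();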

    Read the article

  • dynamic char array sizing

    - by droseman
    Hello,

    In my application, I have a char array defined which can take one of three
    options - "okay", "high", "low" - which are then sent down a serial port to
    a remote device. I currently have the arrays sized to take the four-character
    words plus carriage return and line feed, but when I have to send "low" I get
    a null character in the strings, which I am concerned would confuse the host
    terminal.

    Array definitions:

        char mod1_status_char[6] = {'0','0','0','0','0','0'};
        char mod2_status_char[6] = {'0','0','0','0','0','0'};
        char mod3_status_char[6] = {'0','0','0','0','0','0'};

    Sample of the switch-case statement:

        void DCOKStatus(uint8_t *ptr_status)
        {
            uint8_t status = *ptr_status;
            switch (status)
            {
            case 0x00:
                strcpy(mod1_status_char, "okay");
                strcpy(mod2_status_char, "okay");
                strcpy(mod3_status_char, "okay");
                break;
            case 0x10:
                strcpy(mod1_status_char, "okay");
                strcpy(mod2_status_char, "okay");
                strcpy(mod3_status_char, "low");
                break;
            }
        }

    This is the struct which builds the message string to send:

        strcpy(MsgStatus_on.descriptor_msg, "$psu_");
        MsgStatus_on.address01 = hex_addr[0];
        MsgStatus_on.address02 = hex_addr[1];
        MsgStatus_on.space01 = 0x20;
        strcpy(MsgStatus_on.cmdmsg01, "op_en op1_");
        strcpy(MsgStatus_on.statusmsg01, mod1_status_char);
        MsgStatus_on.space02 = 0x20;
        strcpy(MsgStatus_on.cmdmsg02, "op2_");
        strcpy(MsgStatus_on.statusmsg02, mod2_status_char);
        MsgStatus_on.space03 = 0x20;
        strcpy(MsgStatus_on.cmdmsg03, "op3_");
        strcpy(MsgStatus_on.statusmsg03, mod3_status_char);
        MsgStatus_on.CR = 0x0D;
        MsgStatus_on.LF = 0x0A;

    And this sends the message:

        void USARTWrite(char *object, uint32_t size)
        {
            GPIO_SetBits(GPIOB, GPIO_Pin_1);
            char *byte;
            for (byte = object; size--; ++byte)
            {
                USART_SendData(USART1, *byte);
            }
        }

    Would anyone be able to suggest a good approach to dynamically size the
    array one character shorter when I need to send "low"? Thanks
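
    Rather than resizing the array per word, one alternative (a sketch only;
    the address bytes and the exact message framing of the real struct are
    omitted here) is to build each outgoing line with snprintf and send exactly
    the number of bytes it produced, so "low" and "okay" both go out with no
    embedded NULs or stale padding:

        #include <stdio.h>
        #include <stdint.h>

        extern void USARTWrite(char *object, uint32_t size);  /* from the question */

        /* Hypothetical helper: formats one status line and sends only the
         * bytes snprintf actually wrote, whatever the word lengths are. */
        void SendStatusLine(const char *s1, const char *s2, const char *s3)
        {
            char line[64];
            int len = snprintf(line, sizeof line,
                               "$psu_ op_en op1_%s op2_%s op3_%s\r\n",
                               s1, s2, s3);
            if (len > 0 && len < (int)sizeof line)
            {
                USARTWrite(line, (uint32_t)len);
            }
        }

    Usage would then be SendStatusLine("okay", "okay", "low"); the variable
    word length stops being a sizing problem because the length sent always
    follows the formatted string.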

    Read the article

  • Can I retain a Google apps session token permanently for a specific user who logs into my google app

    - by Ali
    Hi guys, is it possible to retain, upon authorization, a single session
    token for a user who signs into my Google application? Currently my
    application seems to require the user to authenticate into Google Apps
    every now and then. I think it has to do with the session dying out or so.
    I have the following code:

        function getCurrentUrl()
        {
            global $_SERVER;
            $php_request_uri = htmlentities(substr($_SERVER['REQUEST_URI'], 0,
                strcspn($_SERVER['REQUEST_URI'], "\n\r")), ENT_QUOTES);
            if (isset($_SERVER['HTTPS']) && strtolower($_SERVER['HTTPS']) == 'on') {
                $protocol = 'https://';
            } else {
                $protocol = 'http://';
            }
            $host = $_SERVER['HTTP_HOST'];
            if ($_SERVER['SERVER_PORT'] != '' &&
                (($protocol == 'http://' && $_SERVER['SERVER_PORT'] != '80') ||
                 ($protocol == 'https://' && $_SERVER['SERVER_PORT'] != '443'))) {
                $port = ':' . $_SERVER['SERVER_PORT'];
            } else {
                $port = '';
            }
            return $protocol . $host . $port . $php_request_uri;
        }

        function getAuthSubUrl($n = false)
        {
            $next = $n ? $n : getCurrentUrl();
            $scope = 'http://docs.google.com/feeds/documents https://www.google.com/calendar/feeds/ https://spreadsheets.google.com/feeds/ https://www.google.com/m8/feeds/ https://mail.google.com/mail/feed/atom/';
            $secure = false;
            $session = true;
            //echo Zend_Gdata_AuthSub::getAuthSubTokenUri($next, $scope, $secure, $session);
            return Zend_Gdata_AuthSub::getAuthSubTokenUri($next, $scope, $secure, $session)
                . (isset($_SESSION['domain']) ? '&hd=' . $_SESSION['domain'] : '');
        }

        function _regenerate_token()
        {
            global $BASE_URL;
            if (!$_SESSION['token']) {
                if (isset($_GET['token'])):
                    $_SESSION['token'] = Zend_Gdata_AuthSub::getAuthSubSessionToken($_GET['token']);
                    return;
                else:
                    _regenerate_sessions();
                    _redirect(getAuthSubUrl($BASE_URL . '/index.php?' . $_SERVER['QUERY_STRING']));
                endif;
            }
        }

        _regenerate_token();

    I know I'm doing it all wrong here and I don't know why :( I have a
    CONSUMER SECRET code but only use it wherever I need to access a Google
    service. However, something is wrong with my authentication, as the user
    has to periodically 'grant access to my application' and reauthorise
    himself... help please
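
    For what it's worth, an AuthSub session token stays valid until revoked, so
    one common pattern (a sketch only; the storage helpers here are
    hypothetical) is to exchange the single-use token exactly once and persist
    the resulting session token per user, rather than keeping it only in
    $_SESSION where it is lost when the PHP session expires:

        function get_session_token($userId)
        {
            $token = load_token_from_db($userId);       // hypothetical lookup
            if ($token) {
                return $token;                          // reuse the long-lived token
            }
            if (isset($_GET['token'])) {
                // one-time upgrade of the single-use token to a session token
                $token = Zend_Gdata_AuthSub::getAuthSubSessionToken($_GET['token']);
                save_token_to_db($userId, $token);      // hypothetical persist
                return $token;
            }
            _redirect(getAuthSubUrl());                 // fall back to the grant flow
            return null;
        }

    With the token persisted, the user should only see the 'grant access'
    screen once, not every time the PHP session dies.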

    Read the article

  • Binary socket and policy file in Flex

    - by Daniil
    Hi, I'm trying to evaluate whether Flex can access binary sockets. There
    seems to be a Socket class for this (in the flash.net package). The
    requirement is that Flex will connect to a server serving binary data. It
    will then subscribe to the data and receive the feed, which it will
    interpret and display as a chart. I've never worked with Flex - my
    experience lies with Java - so everything is new to me, and I'm trying to
    quickly set something simple up. The Java server expects the following:

        DataInputStream in = ...
        byte cmd = in.readByte();
        int size = in.readByte();
        byte[] buf = new byte[size];
        in.readFully(buf);
        // ... do some stuff and send binary data, something like:
        out.writeByte(1);
        out.writeInt(10000);
        // ... etc.

    Flex needs to connect to localhost:6666, do the handshake and read data.
    I've got something like this:

        try {
            var socket:Socket = new Socket();
            socket.connect('192.168.110.1', 9999);
            Alert.show('Connected.');
            socket.writeByte(108); // 'l'
            socket.writeByte(115); // 's'
            socket.writeByte(4);
            socket.writeMultiByte('HHHH', 'ISO-8859-1');
            socket.flush();
        } catch (err:Error) {
            Alert.show(err.message + ": " + err.toString());
        }

    The first thing that happens is that Flex sends a <policy-file-request/>.
    I've modified the server to respond with:

        <?xml version="1.0"?>
        <!DOCTYPE cross-domain-policy SYSTEM "/xml/dtds/cross-domain-policy.dtd">
        <cross-domain-policy>
            <site-control permitted-cross-domain-policies="master-only"/>
            <allow-access-from domain="192.168.110.1" to-ports="*" />
        </cross-domain-policy>

    After that an EOFException happens on the server, and that's it. So the
    question is: am I approaching the whole streaming-data issue wrong when it
    comes to Flex? Am I sending the policy file wrong? Unfortunately, I can't
    seem to find a good, solid example of how to do it. It seems to me that
    Flex can do binary client-server applications, but I personally lack some
    basic knowledge here. I'm using Flex 3.5 in the IntelliJ IDEA IDE. Any help
    is appreciated. Thank you!
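
    A point worth knowing about the policy handshake: the Flash runtime sends
    "<policy-file-request/>" terminated by a null byte, expects the policy XML
    followed by a null byte in reply, and then closes that connection and
    reconnects for the real protocol - so an EOFException on the first
    connection is expected, not an error. A minimal Java-side sketch (assuming
    a blocking, one-connection-at-a-time server on port 9999):

        import java.io.*;
        import java.net.*;

        public class PolicyServer {
            public static void main(String[] args) throws IOException {
                ServerSocket server = new ServerSocket(9999);
                while (true) {
                    try (Socket s = server.accept()) {
                        // read the request up to the terminating '\0'
                        ByteArrayOutputStream req = new ByteArrayOutputStream();
                        InputStream in = s.getInputStream();
                        int b;
                        while ((b = in.read()) > 0) {
                            req.write(b);
                        }
                        if (req.toString("ISO-8859-1").contains("<policy-file-request/>")) {
                            OutputStream out = s.getOutputStream();
                            out.write(("<?xml version=\"1.0\"?>"
                                + "<cross-domain-policy>"
                                + "<allow-access-from domain=\"*\" to-ports=\"*\"/>"
                                + "</cross-domain-policy>").getBytes("ISO-8859-1"));
                            out.write(0);   // Flash requires a trailing null byte
                            out.flush();
                        }
                        // Flash closes here and reconnects with the real handshake
                    }
                }
            }
        }

    In a real server the policy branch would sit in front of the existing
    DataInputStream handling, with the binary protocol served on subsequent
    connections.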

    Read the article

  • realtime diagnostics

    - by Ion Todirel
    I have an application with a loop, part of a "Scheduler", which runs at all
    times and is the heart of the application - pretty much like a game loop,
    except that my application is a WPF application, not a game. Naturally the
    application logs at many points, but the Scheduler does some sensitive
    monitoring, and sometimes it's impossible to tell from the logs alone what
    may have gone wrong (and by wrong I don't mean exceptions) or what the
    current status is. Because the Scheduler's inner loop runs at short
    intervals, you can't do file-I/O-based logging (or use the Event Viewer) in
    there: first, you need to watch it in real time, and secondly the log file
    would grow in size very fast. So I was thinking of ways to show this data
    to the user in real time, and considered the following:

        1. Display the data in realtime in the UI
        2. Use AllocConsole/WriteConsole to display this information in a console
        3. Use a different console application which would display this
           information, communicating between the Scheduler and the console app
           using pipes or other IPC techniques
        4. Use Windows' Performance Monitor and somehow feed it with this
           information
        5. ETW

    Displaying the data in the UI (1) would have its issues. First, it doesn't
    integrate with the UI I had in mind for my application, and I don't want to
    complicate the UI just for this; these diagnostics would only be needed
    rarely. Secondly, there is going to be some non-trivial data protection, as
    the Scheduler has its own thread. A separate console window (2) would
    probably work, but I'm still worried it may be too much overhead.
    Allocating my own console, as this is a Windows app, would probably be
    better than a different console application (3), as I don't need to worry
    about IPC and non-blocking communication. However, a user could close the
    console I allocated, which would be problematic; with a separate process
    you don't have to worry about that. Assuming there is an API for
    Performance Monitor (4), it wouldn't integrate well with my app or be
    apparent to the users. Using ETW (5) also doesn't solve anything on its
    own - just a random idea - as I still need to display this information
    somehow. What do others think? Are there other ways I've missed?
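
    For scale, option 2 is only a few lines of P/Invoke in a WPF process (a
    sketch, assuming .NET Framework's console streams bind to the newly
    allocated console, which they normally do when no console was used before):

        using System;
        using System.Runtime.InteropServices;

        static class DiagnosticsConsole
        {
            [DllImport("kernel32.dll", SetLastError = true)]
            private static extern bool AllocConsole();

            // Attach a console window to this otherwise windowless WPF process
            public static void Open()
            {
                if (AllocConsole())
                {
                    Console.WriteLine("Scheduler diagnostics attached.");
                }
            }
        }

    One caveat that supports the concern above: closing an attached console
    window terminates the owning process by default, which weighs in favour of
    the separate-process option when the console must be user-closable.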

    Read the article

  • Connected host failed to respond (internal NAT address)

    - by MostRandom
    I'm writing my first C# web application, one that connects to an XML-based
    service. It requires that I present a certificate and feed it the XML
    stream. It seems to authenticate properly but then gives the following
    error:

        A connection attempt failed because the connected party did not
        properly respond after a period of time, or established connection
        failed because connected host has failed to respond 10.1.10.4:3128

    The funny thing is that I'm not behind a proxy or anything like that; I'm
    connecting directly to the internet. At one point we did use a proxy with
    an internal NAT address. So my question is: does Visual Studio have some
    sort of default proxy setting that I need to change? This IP is no longer
    used for anything, so I know I don't need any proxy authentication code.

        using System;
        using System.Data;
        using System.Configuration;
        using System.Collections;
        using System.Web;
        using System.Net;
        using System.Security.Cryptography.X509Certificates;
        using System.Web.Security;
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using System.Web.UI.WebControls.WebParts;
        using System.Web.UI.HtmlControls;

        namespace WebApplication1
        {
            public partial class _Default : System.Web.UI.Page
            {
                protected void Page_Load(object sender, EventArgs e)
                {
                    Uri requestURI = new Uri("*site omitted*");

                    // Create the request object
                    HttpWebRequest pageRequest = (HttpWebRequest)WebRequest.Create(requestURI);

                    // After installing the cert on the server, export a client cert
                    // to the working directory as Deluxe.cer
                    string certFile = "*certificate omitted*";
                    X509Certificate cert = X509Certificate.CreateFromCertFile(certFile);

                    // Pull in your data, if it is from an external xml as below, or
                    // create an xml string with variables if a dynamic post is required
                    string xmlPath = "*XML omitted*";
                    System.Xml.XmlDocument passXML = new System.Xml.XmlDocument();
                    passXML.Load(xmlPath);

                    // XML string with the data needed to pass
                    string postData = passXML.OuterXml;

                    // Set the request object parameters
                    pageRequest.ContentType = "application/x-www-form-urlencoded";
                    pageRequest.Method = "POST";
                    pageRequest.AllowWriteStreamBuffering = false;
                    pageRequest.AllowAutoRedirect = false;
                    pageRequest.ClientCertificates.Add(cert);
                    postData = "xml_data=" + Server.UrlEncode(postData);
                    pageRequest.ContentLength = postData.Length;

                    // Create the post stream object
                    System.IO.StreamWriter postStream = new System.IO.StreamWriter(pageRequest.GetRequestStream());

                    // Write the data to the post stream
                    postStream.Write(postData);
                    postStream.Flush();
                    postStream.Close();

                    // Set the response object
                    HttpWebResponse postResponse = (HttpWebResponse)pageRequest.GetResponse();
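
    One thing worth ruling out (a guess, not a confirmed diagnosis):
    HttpWebRequest picks up the machine-wide proxy settings (IE/system proxy
    and any defaultProxy entry in web.config or machine.config) by default, so
    a stale entry pointing at the old NAT address would produce exactly this
    timeout on port 3128. It can be bypassed for a single request before the
    request is sent:

        // Bypass any inherited system/IE proxy for this request
        pageRequest.Proxy = null;

    or disabled application-wide in web.config:

        <system.net>
          <defaultProxy enabled="false" />
        </system.net>

    If the error disappears with the proxy bypassed, the fix is to clean up
    whichever machine or config setting still references 10.1.10.4:3128.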

    Read the article

< Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >