Search Results

Search found 22992 results on 920 pages for 'custom pages'.

Page 267 of 920

  • How to Automate Checking for Stolen Content?

    - by Hisoka
    So I know about tools like Copyscape and Google Alerts. They're great tools, but it's quite tedious for me to copy and paste a URL or phrase for every one of the pages in my sites. Is there any tool out there that monitors your website and emails you or alerts you whenever someone has stolen content from your site? The only service I know of is CopySentry, and honestly it's too expensive for me, since I have thousands of pages I want to monitor. Does anyone else have this problem, or is it just me? Thanks for any help.

    Read the article

  • RIA Services and Authorization

    This post digs deeper into the Book Club application from the perspective of the authorization feature of RIA Services. You can find more information about the application via its associated table-of-contents post. The post covers how the out-of-box authorization rules can be applied, how custom rules can be implemented, how custom rules can use additional bits of information in their implementation, and how the client-side UI can be customized to account for authorization. The sample application...

    Read the article

  • Legally, can I reuse code for different customers?

    - by canice
    The company I work for develops custom factory automation applications for multiple customers. Even though each application is custom, they contain common code which is reused across projects. One of the customers is now asking for the source code to their application, which has caused a major storm in the company. Management have decided that we can't give them the source to the shared components, as those are used by other customers. I've been asked to modify the shared components 'enough' so that they aren't common. So my questions are: 1) Legally, should there be any problem with reusing common components for different customers? 2) If I really do need to modify the common components, how much is 'enough'? (I know this sucks, but I either do this or hand in my notice.) Oh yeah, and my company has no license in place with any of these customers.

    Read the article

  • TestRail 1.3 Test Management Software released

    Hello, Gurock Software just announced version 1.3 of its test management software TestRail. TestRail is a web-based test case management tool that helps software development teams and QA departments efficiently manage, track and organize their software testing efforts. TestRail 1.3 comes with various new features and improvements and introduces custom fields, which allow teams to customize TestRail for their needs and add new fields to TestRail's user interface. TestRail 1.3...

    Read the article

  • Does a large (hidden) submenu count towards site content in terms of determining page similarities?

    - by Name
    Basically, I have this site that recently lost a lot of traffic after I optimized the HTML; the exact reasons are uncertain. The graph of impressions (the number of times a page appears in search listings) keeps going down like an e^-x function. Because the content, which previously occupied five pages of tables, now fits within a few paragraph tags, the menu now occupies about 80% of the live HTML code, and I am starting to have doubts about whether this affects the "similar pages" factor that Google punishes. Questions: As far as I know, Google ignores invisible material, and the submenus are only visible when hovered over. Has anything at all changed in this area? If I AJAX in the submenus, leaving only the main eight menu items to load, will I be punished for "hiding" information? Is the idea worth testing, or is it frankly a bad one?

    Read the article

  • Will Google follow HTML refresh?

    - by yasar11732
    I want to move my current Tumblr blog to a static HTML blog. I am currently using a custom domain, and I am planning on making the move when Google sees the domain name change. I am considering two options: buying a hosting service, or using GitHub Pages. Buying a hosting service would probably mean paying for lots of things that I don't need, like PHP, MySQL, e-mail service, etc. On the other hand, if I use GitHub Pages, I can't use an .htaccess file to make 301 redirects. I want to change my URL structure, and this is important to me. I was wondering: if I use <meta http-equiv="refresh" content="0; url=http://example.com/newurl" />, would Google see it as a 301 redirect, so that I won't lose my search engine value?
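    For what it's worth, a common pattern for static hosts that can't send real 301s is to pair the zero-delay meta refresh with a rel="canonical" link pointing at the new URL. Below is a minimal sketch of such a redirect stub; the URLs are placeholders, and whether Google treats this exactly like a 301 is not guaranteed:

        <!DOCTYPE html>
        <html>
        <head>
            <!-- hint to crawlers that the new URL is the authoritative one -->
            <link rel="canonical" href="http://example.com/newurl" />
            <!-- zero-delay refresh: commonly interpreted as a permanent move -->
            <meta http-equiv="refresh" content="0; url=http://example.com/newurl" />
            <title>This page has moved</title>
        </head>
        <body>
            <p>This page has moved to <a href="http://example.com/newurl">example.com/newurl</a>.</p>
        </body>
        </html>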

    Read the article

  • In addition to Google's First Click Free, should you whitelist search engine bots past a paywall?

    - by tobek
    Our site has subscription-only pages; non-subscribed visitors see a snippet preview. As per Google's FCF requirements, for your first 5 hits to subscriber-only pages with .google. as the referrer, you see the full page. In addition to this, should we whitelist search engine bots so that they can index the full content? I assume this is not required for Google, which can use FCF to index our content, but what about other search engines? Is this considered cloaking? My gut says that whitelisting bots past the paywall is bad practice, but I wanted to confirm; any evidence or references would be amazing.

    Read the article

  • DirtyPageReminder

    Use a DirtyPageReminder to alert users to unsaved changes when they navigate away from the current page or close the browser.
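    The browser mechanism that typically backs this kind of reminder is the beforeunload event. A minimal, framework-free sketch (the isDirty flag and how it gets set are assumptions for illustration):

        var isDirty = false;

        // mark the page dirty whenever any form field changes
        document.addEventListener('change', function () {
            isDirty = true;
        });

        // ask the browser to warn before the user leaves with unsaved changes
        window.addEventListener('beforeunload', function (e) {
            if (!isDirty) return;
            var message = 'You have unsaved changes.';
            e.returnValue = message; // most browsers
            return message;          // some older ones
        });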

    Read the article

  • .NET Declarative Security: Why is SecurityAction.Deny impossible to work with?

    - by rally25rs
    I've been messing with this for about a day and a half now, sifting through .NET Reflector and the MSDN docs, and can't figure anything out. As it stands in the .NET Framework, you can demand that the current Principal belong to a role in order to execute a method by marking the method like this:

        [PrincipalPermission(SecurityAction.Demand, Role = "CanEdit")]
        public void Save() { ... }

    I am working with an existing security model that already has a "ReadOnly" role defined, so I need to do exactly the opposite of the above: block the Save() method if a user is in the "ReadOnly" role. No problem, right? Just flip the SecurityAction to .Deny:

        [PrincipalPermission(SecurityAction.Deny, Role = "ReadOnly")]
        public void Save() { ... }

    Well, it turns out that this does nothing at all. The method still runs fine. It seems that PrincipalPermissionAttribute defines:

        public override IPermission CreatePermission()

    but when the attribute is set to SecurityAction.Deny, this method is never called, so no IPermission object is ever created. Does anyone know of a way to get .Deny to work? I've been trying to make a custom security attribute, but even that doesn't work. I tried to get tricky and do:

        public class MyPermissionAttribute : CodeAccessSecurityAttribute
        {
            private SecurityAction securityAction;

            public MyPermissionAttribute(SecurityAction action)
                : base(SecurityAction.Demand)
            {
                if (action != SecurityAction.Demand && action != SecurityAction.Deny)
                    throw new ArgumentException("Unsupported SecurityAction. Only Demand and Deny are supported.");
                this.securityAction = action;
            }

            public override IPermission CreatePermission()
            {
                // do something based on the SecurityAction...
            }
        }

    Notice that my attribute constructor always passes SecurityAction.Demand, which is the one action that would work previously. However, even in this case, the CreatePermission() method is still only called when the attribute is set to .Demand, and not .Deny! Maybe the runtime is actually checking the attribute instead of the SecurityAction passed to the CodeAccessSecurityAttribute constructor? I'm not sure what else to try here; anyone have any ideas? You wouldn't think it would be that hard to deny method access based on a role, instead of only demanding it. It really disturbed me that the default PrincipalPermission looks, from within an IDE, like it would be just fine doing a .Deny, while there is only a one-liner in the MSDN docs hinting that it won't work. You would think the PrincipalPermissionAttribute constructor would throw an exception immediately if anything other than .Demand is specified, since that could create a big security hole! I never would have realized that .Deny does nothing at all if I hadn't been unit testing! Again, all this stems from having to deal with an existing security model that has a "ReadOnly" role that needs to be denied access, instead of doing it the other way around, where I can just grant access to a role. Thanks for any help!

    Quick follow-up: I can actually make my custom attribute work by doing this:

        public class MyPermissionAttribute : CodeAccessSecurityAttribute
        {
            public SecurityAction SecurityAction { get; set; }

            public MyPermissionAttribute(SecurityAction action)
                : base(action)
            {
            }

            public override IPermission CreatePermission()
            {
                switch (this.SecurityAction) { ... } // check Demand or Deny
            }
        }

    and decorating the method:

        [MyPermission(SecurityAction.Demand, SecurityAction = SecurityAction.Deny, Role = "ReadOnly")]
        public void Save() { ... }

    But that is terribly ugly, since I'm specifying both Demand and Deny in the same attribute. It does work, though... Another interesting note: my custom class extends CodeAccessSecurityAttribute, which in turn only extends SecurityAttribute. If I change my custom class to directly extend SecurityAttribute, then nothing at all works. So it seems the runtime is definitely looking only for CodeAccessSecurityAttribute instances in the metadata, and does something funny with the SecurityAction specified, even if a custom constructor overrides it.
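    As an aside, when the declarative .Deny route is a dead end, an imperative check at the top of the method achieves the same effect. A minimal sketch, not the poster's code; the DocumentService class is a hypothetical host for the method:

        using System;
        using System.Security;
        using System.Threading;

        public class DocumentService
        {
            public void Save()
            {
                // imperative equivalent of a declarative Deny on the "ReadOnly" role
                var principal = Thread.CurrentPrincipal;
                if (principal != null && principal.IsInRole("ReadOnly"))
                    throw new SecurityException("Users in the ReadOnly role may not save.");

                // ... actual save logic ...
            }
        }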

    Read the article

  • Building a modular website with Zend Framework: am I on the right track?

    - by Oliver
    Hi, I'm a little confused after reading all these posts and tutorials about starting with Zend, because there are so many different ways to solve a problem. I just need feedback on my code, to know whether I am on the right track. To get a simple (hard-coded) navigation for my site (depending on who is logged in), I built a controller plugin with a postDispatch method that holds the following code:

        public function postDispatch(Zend_Controller_Request_Abstract $request)
        {
            $menu = new Menu(); // rendered in menu.phtml
            $view = new Zend_View(); // NEW view -> add view helper path
            $prefix = 'My_View_Helper';
            $dir = dirname(__FILE__).'/../../View/Helper/';
            $view->addHelperPath($dir, $prefix);
            $view->setScriptPath('../application/default/views/scripts/menu');
            $view->menu = $menu->getMenu();
            $this->getResponse()->insert('menu', $view->render('menu.phtml'));
        }

    Is it right that I need to set the helper path once again? I already did this in a controller plugin named ViewSetup, where I do some setup for the view like the doctype, head links and helper paths (this step is from the book Zend Framework in Action). The Menu class which is instantiated looks like this:

        class Menu
        {
            protected $_menu = array();

            /**
             * Menu for not-logged-in and logged-in users
             */
            public function getMenu()
            {
                $auth = Zend_Auth::getInstance();
                $view = new Zend_View();
                // check if user is logged in
                if (!$auth->hasIdentity()) {
                    $this->_menu = array(
                        'page1' => array(
                            'label' => 'page1',
                            'title' => 'page1',
                            'url' => $view->url(array('module' => 'pages', 'controller' => 'my', 'action' => 'page1'))
                        ),
                        'page2' => array(
                            'label' => 'page2',
                            'title' => 'page2',
                            'url' => $view->url(array('module' => 'pages', 'controller' => 'my', 'action' => 'page2'))
                        ),
                        'page3' => array(
                            'label' => 'page3',
                            'title' => 'page3',
                            'url' => $view->url(array('module' => 'pages', 'controller' => 'my', 'action' => 'page3'))
                        ),
                        'page4' => array(
                            'label' => 'page4',
                            'title' => 'page4',
                            'url' => $view->url(array('module' => 'pages', 'controller' => 'my', 'action' => 'page4'))
                        ),
                        'page5' => array(
                            'label' => 'page5',
                            'title' => 'page5',
                            'url' => $view->url(array('module' => 'pages', 'controller' => 'my', 'action' => 'page5'))
                        )
                    );
                } else {
                    // user is of type 'client'
                    // ...
                }
                return $this->_menu;
            }
        }

    Here's my view script:

        <ul id="mainmenu">
            <?php echo $this->partialLoop('menuItem.phtml', $this->menu) ?>
        </ul>

    This is working so far. My question is: is it usual to do it this way, and is there anything to improve? I'm new to Zend, and the web is full of deprecated tutorials, which often isn't obvious. Even the book is already outdated where the autoloader is mentioned. Many thanks in advance.
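    One likely improvement, since the question is about setting the helper path twice: instead of constructing a fresh Zend_View in the plugin (and again in the Menu class), the view that the ViewRenderer action helper already configured can usually be reused, so the paths registered by the ViewSetup plugin still apply. A sketch of the plugin's postDispatch under that assumption (the 'menu/menu.phtml' script path is a placeholder):

        public function postDispatch(Zend_Controller_Request_Abstract $request)
        {
            // reuse the view the ViewRenderer already set up, rather than
            // building a new Zend_View and re-registering helper paths
            $viewRenderer = Zend_Controller_Action_HelperBroker::getStaticHelper('viewRenderer');
            $viewRenderer->initView();
            $view = $viewRenderer->view;

            $menu = new Menu();
            $view->menu = $menu->getMenu();
            $this->getResponse()->insert('menu', $view->render('menu/menu.phtml'));
        }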

    Read the article

  • Dot Net Nuke module works in "Edit" mode but not for "View": cache problem?

    - by Godeke
    I have a DNN module that simply runs some JavaScript to compute a price based on a few input fields. This module works fine on our production site, but we had a company create a skin for us to improve the look of the site, and the module fails under this new system. (DNN 05.06.00 (459), although it was 5.5 prior; I updated in the futile hope that it was a bug in the old revision.)

    What is incredibly odd about this is that the module works fine when I'm logged in to DNN and using Edit mode as an administrator. In this case the small snippet of JavaScript loads fine and filling the fields results in a price. On the other hand, if I click "View" (or, more importantly, if I'm not logged in at all) the page loads a cached copy. Even odder, I have found that the cache files in \Portals\2\Cache\Pages are generated and then only the cached data is used. When the cached copy is loaded, the JavaScript doesn't appear (it is normally created via Page.ClientScript.RegisterClientScriptBlock()). Additionally, the button which posts the data to the server doesn't execute any of the server-side code (confirmed with a debugger) but instead just reloads the cached copy.

    If I manually delete the files in \Portals\2\Cache\Pages then everything works properly, but I have to do so after every page load: failing to do so simply loads the page as it was last generated, repeatedly. Resetting the application (either via the UI or by editing web.config) doesn't change this, and clearing the cache from the Host Settings page doesn't actually clear these cached pages. I'm guessing that Edit mode bypasses the cache in some way, but I have gone as far as turning off all caching on the site (which is horrible for performance) and the cached version is still loaded.

    Has anyone seen anything like this? Shouldn't clearing the cache clear the files (I'm using the File provider for caching)? Shouldn't even a cached page go back to the server if the user posts back?

    EDIT: I should point out that permissions don't appear to be a problem on the cache directory; other pages' cached output is deleted from this folder, just this page has this issue.

    EDIT 2: Clarifying some settings and conditions which I didn't provide. First, this module works fine in production under DNN 5.6.0. In our test environment with the consulting company's changes it fails (the changes are skin and page layout only, in theory: the module source itself verifies as unchanged). All cache settings and the like have been verified to be the same between the two, and we only resorted to setting the module cache to 0 and -1 (and disabling the test site's cache entirely) when we couldn't find another cause for the problem. I have watched the cache work correctly on many other pages in test: there is something about this page that is causing the problem. We have punted and are creating an installable skin based on the consultant's work, as I suspect they have somehow corrupted the DNN install (database side, I think).

    Read the article

  • jQuery embedding YouTube IE issue

    - by webmonkey237
    I have been working on a custom image slider, featured here:

    jQuery:

        $(function(){
            $('.cont:gt(0)').hide();
            $("#parent").on("mouseenter", ".extraContent div", function(){
                var ind = $(this).index();
                $("#parent").find(".cont").stop().fadeTo(600, 0, function(){
                    $('#parent').find('.cont').eq(ind).stop().fadeTo(300, 1);
                });
            });
            $('#parent .extraContent').on('click', function(){
                window.location = $(this).find("a").attr("href");
                return false;
            });
        });

    CSS:

        #parent { width:400px; margin:auto }
        .mainContent { width:430px; height:300px; border:1px solid #000; padding:5px; }
        .extraContent { overflow:auto; width:450px; }
        .extraContent div { float:left; width:90px; height:90px; border:1px solid #00F; margin:5px; padding:5px }
        .extraContent div:hover { border:1px solid #0F0; cursor:pointer }
        .cont { position:absolute; }

    HTML:

        <div id="parent">
            <div class="mainContent">
                <div class="cont">Content 1....</div>
                <div class="cont">Content 2....</div>
                <div class="cont">Content 3...<br /><iframe width="267" height="200" src="http://www.youtube.com/embed/6tlQn7iePV4?rel=0" frameborder="0" allowfullscreen></iframe></div>
                <div class="cont">Content 4....</div>
            </div>
            <div class="extraContent">
                <div><p>1 Custom content here <br /> <a href="">Some link</a></p></div>
                <div><p>2 Custom content here <br /> <a href="">Some link</a></p></div>
                <div><p>3 Custom content here <br /> <a href="">Some link</a></p></div>
                <div><p>4 Custom content here <br /> <a href="">Some link</a></p></div>
            </div>
        </div>

    My problem is that if I embed a YouTube video straight from the site using their iframe code, it transitions fine in Chrome, but Firefox and IE just display the video straight away, and each slide/div appears under the video. Is this a known problem, and does anyone know a way I can get IE and FF to behave? P.S. Because this is going to be in a content management system, the only way the user can embed the video is using the default code from YouTube. FIDDLE HERE
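    A frequent culprit with embedded YouTube players and fading/stacking glitches in IE and Firefox of this era is Flash windowing mode. Appending wmode=opaque (or wmode=transparent) to the embed URL asks the player to render in the page's normal stacking context. A hedged sketch; whether it cures this particular slider needs testing:

        <!-- wmode=opaque keeps the (Flash-based) player from painting
             on top of everything regardless of z-index -->
        <iframe width="267" height="200"
                src="http://www.youtube.com/embed/6tlQn7iePV4?rel=0&wmode=opaque"
                frameborder="0" allowfullscreen></iframe>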

    Read the article

  • Vista Power Management GPO

    - by Matt
    Hi, I've created a loopback GPO that has several settings (both computer and user), including a Custom User Interface (an Access 2007 application) and Power Management (the computer sleeps after being idle for 2 minutes). I'm also filtering so that this policy does not apply to "Admins", only to "Users". The problem I'm having is that when the "Users" log in, the Power Management settings don't work, but they do for "Admins". For testing I'm allowing the "Users" to launch Task Manager and use the Run line, so I'll run Explorer, look at Power Management, and it shows the settings from my GPO. So I created a test OU with copies of the aforementioned GPO, but removed the Custom User Interface, and found that the Power Management settings do work for both "Users" and "Admins". When I add the Custom UI, the Power Management settings break for "Users" but continue to work for "Admins". Do the Power Management options require the user interface to be "Explorer.exe"? Is this a bug, or am I doing this the wrong way? BTW, the tablets are using Vista SP2. Any insight or advice would be greatly appreciated. Thanks, Matt

    Read the article

  • Crazy problem with Nginx, PHP5-FPM on Ubuntu

    - by Emmanuel
    I've been trying to move a domain from shared hosting to my new VPS. Everything was working just fine, 100%, and then all of a sudden rewrites stopped working and pictures that should work started returning 404s. I've got no idea why, but for some reason on my site, http://www.onlythebible.com/, only the home page works; all the other pages depend on rewrites, which were working perfectly fine at one stage but all of a sudden stopped working. Some of the pictures, like http://www.onlythebible.com/bgsPreview/Matthew-8.10.jpg, which doesn't use a rewrite, throw a 404. I'm almost certain it has nothing to do with the nginx configuration. I have a suspicion that it could be something to do with php5-fpm. The funny thing is, all of a sudden it started working again. And then an hour or so later it broke again and has now gone back to only displaying the home page, and all of the links (and some of the pictures) are just showing 404s. Does anyone have an idea of what the problem might be? I'm pretty new to the whole Linux VPS thing, but this just seems very strange.

    Edit: here's a line from the error log which might shed some light on the problem:

        2011/02/06 03:04:59 [error] 2873#0: *220 open() "/usr/local/nginx/html/bgsPreview/Matthew-8.10.jpg" failed (2: No such file or directory), client: 114.77.115.211, server: onlythebible.com, request: "GET /bgsPreview/Matthew-8.10.jpg HTTP/1.1", host: "www.onlythebible.com", referrer: "http://www.onlythebible.com/"

    I wonder why it's trying to find the file in /usr/local/nginx/html instead of the proper root, which is /var/www/, etc. Oh, and for some reason it's just started working again... for how long, I don't know. Another thing that was a bit weird is that the pages on my website are pulled from a database, but when I edited the database, the pages didn't change. It's almost like they've been cached or something.
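    The error-log line is telling: /usr/local/nginx/html is nginx's compiled-in default document root, which nginx falls back to for any location block that doesn't inherit a root directive. A common fix, sketched here with placeholder paths, is to declare root once at the server level rather than only inside location /:

        server {
            listen 80;
            server_name onlythebible.com www.onlythebible.com;

            # declare the root here so every location inherits it,
            # instead of only inside "location / { ... }"
            root /var/www/onlythebible.com;  # placeholder: use the real path

            location / {
                try_files $uri $uri/ /index.php?$args;
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;  # or the php5-fpm socket
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }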

    Read the article

  • How to enable IIS 7 dynamic content compression?

    - by davidcl
    I've turned on dynamic content compression in IIS 7, but Fiddler is showing that my dynamic pages are still being served without Content-Encoding: gzip. Static content compression is working fine on the same servers. Not sure if it matters, but most of the dynamic pages are ColdFusion pages, and we're also using the IIS URL Rewrite module. This is from my applicationHost.config:

        <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
            <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" />
            <dynamicTypes>
                <add mimeType="text/*" enabled="true" />
                <add mimeType="message/*" enabled="true" />
                <add mimeType="application/javascript" enabled="true" />
                <add mimeType="*/*" enabled="false" />
            </dynamicTypes>
            <staticTypes>
                <add mimeType="text/*" enabled="true" />
                <add mimeType="message/*" enabled="true" />
                <add mimeType="application/javascript" enabled="true" />
                <add mimeType="*/*" enabled="false" />
            </staticTypes>
        </httpCompression>
        ...
        <urlCompression doDynamicCompression="true" />

    Here's a sample request:

        GET / HTTP/1.1
        Host: web5.example.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6 (.NET CLR 3.5.30729)
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive

    and response header:

        HTTP/1.1 200 OK
        Transfer-Encoding: chunked
        Content-Type: text/html; charset=UTF-8
        Server: Microsoft-IIS/7.0
        ...
        Date: Mon, 22 Feb 2010 20:59:36 GMT
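    One thing worth ruling out first: the Dynamic Content Compression module is a separate IIS feature that is not installed by default, and the dynamicTypes configuration is silently ignored without it. A hedged check/enable sketch from an elevated prompt (the feature and module names are the standard ones, but verify on your edition):

        rem list installed native modules; DynamicCompressionModule should appear
        %windir%\system32\inetsrv\appcmd.exe list modules | findstr /i compression

        rem install the feature if it is missing
        dism /online /enable-feature /featurename:IIS-HttpCompressionDynamic

        rem confirm dynamic compression is switched on
        %windir%\system32\inetsrv\appcmd.exe list config -section:system.webServer/urlCompression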

    Read the article

  • Web Deploy 3.0 Installation Fails

    - by jkarpilo
    I am having difficulty installing Microsoft Web Deploy 3.0 on a Windows Server 2008 R2 box. I have tried installing with both the Web Platform Installer and the MSI package, but installation fails while trying to execute the MSI custom action ExecuteRegisterUIModuleCA. This server is a VM and a member of a farm, but shared config is disabled while I'm installing. Here's the point at which it fails in the MSI log (starting at line 1875):

        MSI (s) (80:FC) [15:29:01:358]: Executing op: ActionStart(Name=IISBeginTransactionCA,,)
        MSI (s) (80:FC) [15:29:01:374]: Executing op: CustomActionSchedule(Action=IISBeginTransactionCA,ActionType=3073,Source=BinaryData,Target=IISBeginTransactionCA,)
        MSI (s) (80:A8) [15:29:01:374]: Invoking remote custom action. DLL: C:\Windows\Installer\MSI6C6A.tmp, Entrypoint: IISBeginTransactionCA
        MSI (s) (80:FC) [15:29:01:436]: Executing op: ActionStart(Name=IISRollbackTransactionCA,,)
        MSI (s) (80:FC) [15:29:01:436]: Executing op: CustomActionSchedule(Action=IISRollbackTransactionCA,ActionType=3329,Source=BinaryData,Target=IISRollbackTransactionCA,)
        MSI (s) (80:FC) [15:29:01:436]: Executing op: ActionStart(Name=IISCommitTransactionCA,,)
        MSI (s) (80:FC) [15:29:01:436]: Executing op: CustomActionSchedule(Action=IISCommitTransactionCA,ActionType=3585,Source=BinaryData,Target=IISCommitTransactionCA,)
        MSI (s) (80:FC) [15:29:01:436]: Executing op: ActionStart(Name=IISExecuteCA,,)
        MSI (s) (80:FC) [15:29:01:452]: Executing op: CustomActionSchedule(Action=IISExecuteCA,ActionType=3073,Source=BinaryData,Target=IISExecuteCA,CustomActionData=1^3^21^WebDeployment_Current^154^Microsoft.Web.Deployment.UI.PackagingModuleProvider, Microsoft.Web.Deployment.UI.Server, Version=9.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35^1^1^0^^1^3^28^DelegationManagement_Current^171^Microsoft.Web.Management.Delegation.DelegationModuleProvider, Microsoft.Web.Management.Delegation.Server, Version=9.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35^1^1^0^^1^7^38^system.webServer/management/delegation^4^Deny^16^MachineToWebRoot^0^^3^yes^1^7^31^system.webServer/wdeploy/backup^4^Deny^20^MachineToApplication^0^^2^no^)
        MSI (s) (80:84) [15:29:01:452]: Invoking remote custom action. DLL: C:\Windows\Installer\MSI6CB9.tmp, Entrypoint: IISExecuteCA
        1: IISCA IISExecuteCA : Begin CA Setup
        1: IISCA IISExecuteCA : CA 'ExecuteRegisterUIModuleCA' completed with return code hr=0x8007000d
        1: IISCA IISExecuteCA : CA 'IISExecuteCA' completed with return code hr=0x8007000d
        1: IISCA IISExecuteCA : End CA Setup
        CustomAction IISExecuteCA returned actual error code 1603 (note this may not be 100% accurate if translation happened inside sandbox)
        Action ended 15:29:05: InstallFinalize. Return value 3.

    I can't seem to find any information regarding this particular issue; can someone help point me in the right direction?

    Read the article

  • How can I set up a 404 error page when people access http://ftp.mydomain.com?

    - by Tim B.
    I am a freelance videographer/developer, and part of my job involves transferring large files over FTP to production houses/television stations. While the majority of people in my industry understand the difference between FTP and HTTP, in the past couple of months I've had several interactions with people who still open Internet Explorer, try to access http://ftp.mydomain.com, receive an error page served by HostGator, and tell me that they cannot access my FTP server. Instead of spending time delivering instructions via e-mail, I'd much prefer to serve a custom error page in this case that instructs them how to download and use an FTP client. I tried setting up a sub-domain in cPanel, hoping I could simply drop in an .htaccess file with the error page, but I got this error: ftp.mydomain.com domainadmin-domainexistsglobal. I also tried creating a custom error page in PHP which reads the site URL and serves the custom content only when http://ftp.mydomain.com is accessed. Unfortunately, the error page works for every subdomain except that one. I'm not entirely sure this is even technically possible, which is why I bring it to the good people of StackOverflow for help. Thanks!
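    For the PHP route, the usual trick is to branch on the Host header inside the shared error document. A sketch, assuming the main domain's 404 handler also receives requests for the ftp. host (the file names are placeholders):

        <?php
        // serve FTP-client instructions only when the request came in via
        // the ftp. host name; otherwise show the normal 404 page
        $host = isset($_SERVER['HTTP_HOST']) ? strtolower($_SERVER['HTTP_HOST']) : '';

        header('HTTP/1.1 404 Not Found');
        if ($host === 'ftp.mydomain.com') {
            readfile('ftp-instructions.html'); // placeholder: how to use an FTP client
        } else {
            readfile('standard-404.html');     // placeholder: generic error page
        }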

    Read the article

  • How to prevent blocking HTTP auth popups on Firefox restart with many tabs open

    - by Glen S. Dalton
    I am using the latest Firefox with Tab Mix Plus and TabGroups Manager. I have maybe 50 or 100 tabs open in different tab groups. When I shut down Firefox and start it again, all tabs and tab groups are perfectly rebuilt. But I also have many pages open that are behind standard HTTP auth, and these pages all request their usernames and passwords. So during startup Firefox pops up all these pages' HTTP auth windows, and they block everything else in Firefox; they are like modal windows. (I am involved in website development, and the beta versions are behind Apache HTTP auth.) I have to click the OK button in the popups many times before I can do anything. All the usernames and passwords are already filled in. (The Firefox taskbar entry blinks, the Firefox window heading also blinks, and focus switches back and forth, which also annoys me. And sometimes the popups do not react to my clicks, because Firefox is maybe just switching focus somewhere else. This is the worst.) I want a plugin or some way to skip those popups. There are some plugins I tried some time ago, but they did not do what I need, because they require a mouse click for each login, which is no improvement over the situation as it is. This is not about password storage (Firefox already stores them), but of course, if some password-storing plugin could heal this, it would be great.

    Read the article

  • BizTalk configuration broken following WCF hotfix installation

    - by Sir Crispalot
    I usually post over on StackOverflow, but thought this was probably better suited to ServerFault. Please migrate it if I'm wrong! I am developing a WCF service and a BizTalk application on my workstation at the moment. As part of the WCF service, I had to install hotfix 971493 from Microsoft, which updates some core WCF assemblies. Following installation of that hotfix, I am now experiencing severe issues in my existing BizTalk application. When I attempt to configure the properties of an existing WCF-Custom receive location, I get this error: Error loading properties (System.IO.FileLoadException): The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040). If I click OK (the same error repeats four times), I eventually see the WCF-Custom properties dialog. However, if I click on the various tabs, I continue to receive errors: The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040) (Microsoft.BizTalk.Adapter.Wcf.Admin). The WCF-Custom receive location was working yesterday, and I installed the hotfix this morning. I'm guessing these two are related, and that BizTalk somehow has a reference to the old WCF assemblies. Does anyone know how I can fix this?
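    If the diagnosis is right (BizTalk holding a reference to the pre-hotfix assembly version), the standard .NET escape hatch is an assembly binding redirect in the affected host's config file. A hedged sketch only: the assembly identity and version numbers below are placeholders that would have to come from the actual FileLoadException details (fuslogvw.exe can show the failing load):

        <configuration>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <dependentAssembly>
                <!-- placeholder identity: take the real name, token and
                     versions from the Fusion log of the failing load -->
                <assemblyIdentity name="System.ServiceModel"
                                  publicKeyToken="b77a5c561934e089"
                                  culture="neutral" />
                <bindingRedirect oldVersion="0.0.0.0-3.0.9999.0" newVersion="3.0.0.0" />
              </dependentAssembly>
            </assemblyBinding>
          </runtime>
        </configuration>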

    Read the article

  • Portable, battery-powered, wireless access point, ethernet adapter

    - by Jed
    I am in need of an adapter that will convert an Ethernet port into a wireless access point. I have found a handful of devices, but I'm unable to find one that is battery-powered. Does a self-powered wireless access point even exist? The particular scenario I will be using the device for is not your typical computer/PC scenario; for the curious, here's a bit of background on the problem I'm trying to solve. I make devices (controllers) that monitor water systems. Our controllers have a web server that serves out web pages so that users can configure the controller's settings. Typically, the user connects a laptop directly to the controller's Ethernet port with a cross-over cable to reach the controller's web pages. Now that tablets (devices that don't have an Ethernet port; the iPad, for example) are becoming more common, I need to find a device that will convert the controller's Ethernet port into a wireless access point so that the user can connect to the controller's web pages via Wi-Fi or Bluetooth. It's worth noting that this wireless device will NOT be permanently installed on the controller; it will be a portable device that the user will use on any of his controllers when he needs to make a connection. If you know of a device that solves the scenario described above, please share your info.

    Read the article

  • Reverse proxy Apache to WebLogic problem

    - by Zlatoroh
    Hello, I have an Apache 2.2 server and WebLogic 11g running on a web server. Apache is set up as a reverse proxy on port 8080; WebLogic serves two web pages on port 7001: the first page at localhost:7001/e-SPP/app and the second page at localhost:7001/e-sprejem/app. I would like to access these two pages through Apache like so: localhost:8080/e-SPP/app and localhost:8080/e-sprejem/app.

        Listen 8080
        ServerName localhost:8080
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyRequests Off
        ProxyPreserveHost On
        RewriteEngine On
        <Location /e-SPP/app>
            ProxyPass http://localhost:7001/e-SPP/app
            ProxyPassReverse http://localhost:7001/e-SPP/app
        </Location>
        <Location /e-sprejem/app>
            ProxyPass http://localhost:7001/e-sprejem/app
            ProxyPassReverse http://localhost:7001/e-sprejem/app
        </Location>

    This configuration opens my pages, but they are black and white because the CSS and JS aren't loaded! The path to the CSS over the proxy looks like this: localhost:8080/e-SPP/css/style.css, which doesn't open the CSS; if I change the port to 7001 then it works: localhost:7001/e-SPP/css/style.css. What should I do so that the CSS and JS are loaded? Interestingly, the favicon is loaded: http://localhost:8080/e-SPP/images/new/favicon.gif. Thanks for your help!
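    For what it's worth, the usual explanation for this shape of problem: the <Location /e-SPP/app> blocks only cover the /e-SPP/app path, so requests like /e-SPP/css/style.css are never proxied. Proxying the whole context root is one way out; a sketch, assuming both apps live under those context roots on WebLogic:

        # proxy everything under each context root, not just the /app page,
        # so /e-SPP/css/... and /e-SPP/js/... are forwarded to WebLogic too
        ProxyPass        /e-SPP      http://localhost:7001/e-SPP
        ProxyPassReverse /e-SPP      http://localhost:7001/e-SPP
        ProxyPass        /e-sprejem  http://localhost:7001/e-sprejem
        ProxyPassReverse /e-sprejem  http://localhost:7001/e-sprejem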

    Read the article

  • Apache rewrite rules and special characters

    - by Massimo
    I have a server where some files have an actual %20 in their name (they are generated by an automated tool which handles spaces this way, and I can't do anything about it); this is not a space: it's a "%" followed by a "2" followed by a "0". On this server there is an Apache web server, and there are some web pages which link to those files, using their names in URLs like http://servername/file%20with%20a%20name%20like%20this.html; those pages are also generated by the same tool, so (again!) I can't do anything about that. A full search-and-replace on all files, pages and URLs is out of the question here. The problem: when Apache gets called with a URL like the one above, it (correctly) translates the "%20"s into spaces, and then of course it can't find the files, because they don't have actual spaces in their names. How can I solve this? I discovered that by using a URL like http://servername/file%2520name.html it works nicely, because then Apache translates "%25" into a "%" sign, and thus the correct filename gets built. I tried using an Apache rewrite rule, and I can successfully replace spaces with hyphens with a syntax like this:

        RewriteRule (.*)\ (.*) $1-$2

    The problem: when I try to replace them with a "%2520" sequence, this just doesn't happen. If I use

        RewriteRule (.*)\ (.*) $1%2520$2

    then the resulting URL is http://servername/file520name.html; I've tried "%25" too, but then I only get a "5"; it just looks like the initial "%2" gets discarded somewhere. The questions: How can I build such a regexp to replace spaces with "%2520"? Is this the only way I can deal with this issue (other than a full search-and-replace which, as I said, can't be done), or do you have any better ideas?
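    A hedged hint on the mechanics: in a RewriteRule substitution, % followed by a digit is parsed as a RewriteCond backreference (%1..%9), which is why the "%2" vanishes. Escaping the percent sign, and adding the NE (noescape) flag so Apache doesn't re-encode the result, is the usual way out; worth testing rather than taking as given:

        # backslash keeps %2 from being read as a RewriteCond backreference;
        # [NE] stops Apache from escaping the % again on output
        RewriteRule (.*)\ (.*) $1\%2520$2 [L,NE]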

    Read the article
