Search Results

Search found 4705 results on 189 pages for 'permission denied'.


  • Can I use an open source rich text editor in my site? What does this GNU license have to do with that?

    - by Shyju
    I want to use a rich text editor on one of my ASP pages. I searched the internet and found that there are a lot of open source options available, such as TinyMCE, FCKeditor, NicEdit, etc. Can I use these samples in my website? There is a GNU license associated with them. Can somebody interpret it for me and answer these questions?
    1. Can I use it in my website without getting permission from anyone?
    2. Do I need to keep the same files always, or can I make some customizations and use it? The only customization is going to be CSS changes.
    3. Do I need to include all the files exactly as they are in the downloaded package? My download has samples that support all languages. Do I need to include all those folders too?
    From the GNU license: "if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights."

    Read the article

  • Forms/AD Authentication with Sharepoint

    - by David Lively
    All, I'm configuring SharePoint to use forms authentication with LDAP/Active Directory. I'm new to SharePoint, so if this is obvious, please point me in the right direction. Whenever I attempt to log in with a bad account or password, I get the very friendly (and correct) error message, "The server could not sign you in. Make sure your user name and password are correct, and then try again." ... which implies that SharePoint is able to communicate with AD. If I log in with a valid account, I instead get an "Error: Access Denied" page (screenshot omitted; I added the grey bar to cover up the login name). Any suggestions? The account I'm logging in with is an administrator and has been granted full control in central administration. Also, an interesting note: if I click the "sign in as a different user" link and attempt to sign in with the same credentials I just used, the site just redirects back to the login page, with no error or status message. If I then manually enter the site URL, it again shows the "Error: Access Denied" page. Argh.

    Read the article

  • Thin permissions in etc folder (Ubuntu)

    - by Apollo
    I am working on a RoR server setup that uses Thin and Nginx. It works fine, but only if I manually add the folder /etc/thin and set its permissions to 777 in order to use the command below:

        thin config -C /etc/thin/testapp.yml -c /var/www/testapp --servers 1 -e production

    If I don't set it to 777, I get this error:

        me@UbuntuRails:/etc$ thin config -C /etc/thin/testapp.yml -c /var/www/testapp --servers 1 -e production
        /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/lib/thin/controllers/controller.rb:115:in `initialize': Permission denied - /etc/thin/testapp.yml (Errno::EACCES)
        from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/lib/thin/controllers/controller.rb:115:in `open'
        from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/lib/thin/controllers/controller.rb:115:in `config'
        from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/lib/thin/runner.rb:187:in `run_command'
        from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/lib/thin/runner.rb:152:in `run!'
        from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/bin/thin:6:in `'
        from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/bin/thin:19:in `load'
        from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/bin/thin:19:in `'
        from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/bin/ruby_noexec_wrapper:14:in `eval'
        from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/bin/ruby_noexec_wrapper:14:in `'

    I don't like setting this folder to 777; it sounds like a rubbish workaround. I run everything from an admin user account, not root. RVM runs from my admin user and gem only works under my admin account as well. If I sudo that command, nothing happens because my root doesn't "know" thin. What is the correct way to handle this? Thanks!

    Read the article

  • Mixed Mode C++ DLL function call failure when app launched from network share. Called from unmanaged code

    - by Steve
    A mixed-mode DLL called from a native C application fails to load:

        An unhandled exception of type 'System.IO.FileLoadException' occurred in Unknown Module.
        Additional information: Could not load file or assembly 'XXSharePoint, Version=0.0.0.0, Culture=neutral, PublicKeyToken=e0fbc95fd73fff47' or one of its dependencies. Failed to grant minimum permission requests. (Exception from HRESULT: 0x80131417)

    My environment is: a native C application calling a mixed-mode C++ DLL, which then loads a C# DLL. This works correctly when loaded from a local drive, but when launched from a network drive, it fails with the above message. The call to LoadLibrary succeeds, as does the GetProcAddress. The load error happens when I call the function. I have digitally signed the C application, and I've performed strong-name signing on the two DLLs. The PublicKeyToken in the message above does match the named DLL. I have also issued the CASPOL commands on my client to grant FullTrust to that strong-name key token. When that failed to work, I tried the CASPOL command to grant FullTrust to the URL of the network drive (including the path to my application's directory); no change in results. I tried removing all dependencies, so that there was just the initial mixed-mode DLL... I replaced the bodies of all the functions with just a return of a "success" integer value. Results unchanged. Only when I changed it from Mixed Mode to Win32, and changed Configuration Properties > General > Common Language Runtime Support from "Common Language Runtime Support" to "No Common Language Runtime Support", did calling the DLL produce the expected result (it just returned the "success" integer return value).

    Read the article

  • Global.asax Event: Application_OnPostAuthenticateRequest

    - by Hemant Kothiyal
    Hi, I am using the Application_OnPostAuthenticateRequest event in global.asax to get the roles and permissions of the authenticated user. I have also made a custom principal class to get the user details, roles and permissions, and to hold some information that stays the same for that user. Following is the code:

        void Application_OnPostAuthenticateRequest(object sender, EventArgs e)
        {
            // Get a reference to the current User
            IPrincipal objIPrincipal = HttpContext.Current.User;

            // If we are dealing with an authenticated forms authentication request
            if ((objIPrincipal.Identity.IsAuthenticated) && (objIPrincipal.Identity.AuthenticationType == "Forms"))
            {
                CustomPrincipal objCustomPrincipal = new CustomPrincipal();
                objCustomPrincipal = objCustomPrincipal.GetCustomPrincipalObject(objIPrincipal.Identity.Name);
                HttpContext.Current.User = objCustomPrincipal;
                CustomIdentity ci = (CustomIdentity)objCustomPrincipal.Identity;
                HttpContext.Current.Cache["CountryID"] = FatchMasterInfo.GetCountryID(ci.CultureId);
                HttpContext.Current.Cache["WeatherLocationID"] = FatchMasterInfo.GetWeatherLocationId(ci.UserId);
                Thread.CurrentPrincipal = objCustomPrincipal;
            }
        }

    My questions are the following: this event fires for every request, so this code executes on each request. Is my approach right or not? Is it right to add values to HttpContext.Current.Cache in this event, or should we move it to session start?
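
    Since the question is about where to stash these per-user lookups, here is a minimal, hedged sketch of keeping them per user rather than in one shared slot. HttpContext.Current.Cache is application-wide, so a single "CountryID" entry is overwritten by whichever user authenticated last; the helper below (PerUserCacheSketch and GetOrAdd are hypothetical names, not from the question) simply prefixes the cache key with the user id. It illustrates the trade-off being asked about and is not a recommendation over Session.

        using System;
        using System.Web;
        using System.Web.Caching;

        public static class PerUserCacheSketch
        {
            // Hypothetical helper: one cached value per user instead of one shared entry.
            public static object GetOrAdd(string keyPrefix, string userId, Func<object> load)
            {
                string key = keyPrefix + "_" + userId;            // e.g. "CountryID_42"
                object value = HttpContext.Current.Cache[key];
                if (value == null)
                {
                    value = load();
                    HttpContext.Current.Cache.Insert(
                        key, value,
                        null,                                     // no cache dependency
                        DateTime.UtcNow.AddMinutes(30),           // absolute expiration (arbitrary)
                        Cache.NoSlidingExpiration);
                }
                return value;
            }
        }

        // Possible usage inside the event above (ci is the CustomIdentity from the question):
        // var countryId = PerUserCacheSketch.GetOrAdd("CountryID", ci.UserId.ToString(),
        //     () => FatchMasterInfo.GetCountryID(ci.CultureId));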

    Read the article

  • How to configure outgoing connections from an SQL stored procedure?

    - by Peter Vestberg
    I am working on a .NET project which uses Microsoft SQL Server. In this project, I need a CLR stored procedure (written in C#) that uses a remote web service. So, when the stored procedure is executed on the SQL server, it makes web service calls and thus sends packets to a remote location. The problem is that when executing the SP I get: "System.Net.WebException: The request failed with HTTP status 403: Forbidden." The database user has full permission, and the deployed CLR assembly and SP are even marked "unsafe"; I tried signing it, etc., so none of that is causing the problem. When I execute the very same C# code from a simple console application instead of as an SP, it all works fine. So I started to suspect a network-related problem and had a packet sniffer running while executing both the SP and the console app version. What I realized was that the packets sent out had different destination IP addresses: the console app sent the packets directly to the web service IP, while the SP sent the packets to a proxy server we use in our company. Due to network policies the latter is not allowed, and that explains the "403 Forbidden" exception. So my question boils down to this: how can I configure the SP/MS SQL Server to NOT use that proxy? I want it to send the packets directly to the web service IP, just like the test console app (again, the C# code is the same, so it's not a programming matter). I've disabled all proxy settings in Internet Explorer in case the SQL server inherits these settings or something. However, no luck. Any help would be greatly appreciated! Best regards, Peter
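
    For reference, this is what opting out of the default proxy looks like at the request level in C#. It is only a sketch: it assumes the call goes through HttpWebRequest (a generated SOAP proxy class exposes the same Proxy property), and whether the SQL CLR host picks up the machine/IE proxy defaults is exactly the open question here, so this illustrates the knob rather than confirming the fix.

        using System.IO;
        using System.Net;

        public static class NoProxyCallSketch
        {
            // url is a placeholder; the real service address comes from the stored procedure.
            public static string Fetch(string url)
            {
                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
                request.Proxy = null;   // do not route through the default (machine/IE) proxy
                using (WebResponse response = request.GetResponse())
                using (StreamReader reader = new StreamReader(response.GetResponseStream()))
                {
                    return reader.ReadToEnd();
                }
            }
        }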

    Read the article

  • EC2 SSH access from fedora

    - by Randika Rathugama
    I'm trying to connect to an existing EC2 instance with a new PEM, but I get this error when I try to connect. Here is what I did so far: I created the PEM on EC2, saved it to ~/.ssh/my-fedora.pem, and ran this command; is there anything else I should do?

        [randika@localhost ~]$ ssh -v -i ~/.ssh/my-fedora.pem [email protected]
        OpenSSH_5.3p1, OpenSSL 1.0.0-fips-beta4 10 Nov 2009
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: Connecting to ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com [xx-xx-xx-xx] port 22.
        debug1: Connection established.
        debug1: identity file /home/randika/.ssh/saberion-fedora.pem type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_4.7
        debug1: match: OpenSSH_4.7 pat OpenSSH_4*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.3
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com' is known and matches the RSA host key.
        debug1: Found key in /home/randika/.ssh/known_hosts:5
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,gssapi-with-mic
        debug1: Next authentication method: gssapi-with-mic
        debug1: Unspecified GSS failure. Minor code may provide more information
        Credentials cache file '/tmp/krb5cc_500' not found
        debug1: Unspecified GSS failure. Minor code may provide more information
        Credentials cache file '/tmp/krb5cc_500' not found
        debug1: Unspecified GSS failure. Minor code may provide more information
        debug1: Next authentication method: publickey
        debug1: Offering public key: [email protected]
        debug1: Authentications that can continue: publickey,gssapi-with-mic
        debug1: Offering public key: [email protected]
        debug1: Authentications that can continue: publickey,gssapi-with-mic
        debug1: Trying private key: /home/randika/.ssh/saberion-fedora.pem
        debug1: read PEM private key done: type RSA
        debug1: Authentications that can continue: publickey,gssapi-with-mic
        debug1: No more authentication methods to try.
        Permission denied (publickey,gssapi-with-mic).

    Read the article

  • Unit testing aspect-oriented features

    - by Tomas Brambora
    Hi, I'd like to know what you would propose as the best way to unit test aspect-oriented application features (well, perhaps that's not the best name, but it's the best I was able to come up with :-) ) such as logging or security. These things are sort of omnipresent in the application, so how do you test them properly? E.g. say that I'm writing a CherryPy web server in Python. I can use a decorator to check whether the logged-in user has permission to access a given page. But then I'd need to write a test for every page to see whether it works OK (or, more precisely, to see that I had not forgotten to check security permissions for that page). This could maybe (emphasis on maybe) be bearable if logging and/or security were implemented as part of the web server's normal business implementation. However, security and logging usually tend to be added to the app as an afterthought (or maybe that's just my experience; I'm usually given a server and then asked to implement the security model :-) ). Any thoughts on this are very welcome. I have currently 'solved' this issue by, well - not testing this at all. Thanks.

    Read the article

  • Issues with securing ELMAH file

    - by Kumar
    I am trying to allow access to the elmah.axd file for only a particular login; all others should be denied. I have followed Phil Haack's tutorial for securing the file:

        <httpHandlers>
          <remove verb="*" path="*.asmx"/>
          <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
          <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
          <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validate="false"/>
          <add verb="POST,GET,HEAD" path="admin/elmah.axd" type="Elmah.ErrorLogPageFactory, Elmah" />
        </httpHandlers>

        <location path="admin">
          <system.web>
            <authorization>
              <allow users="[email protected]"/>
              <deny users="*"/>
            </authorization>
          </system.web>
        </location>

    First I logged in as [email protected] and tried to access http://localhost:58961/admin/elmah.axd, and I was rightly redirected to the Login.aspx page. Next I logged in as [email protected] and was able to access the elmah file at http://localhost:58961/admin/elmah.axd. Then I logged in again as [email protected], and this time I was able to access the elmah file. What is the reason for this behavior?
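
    When the <location> rules look right but the observed behaviour doesn't match, it can help to ask ASP.NET directly what it would decide for the current user. The sketch below is a diagnostic idea only (the class name and where you call it from are up to you); UrlAuthorizationModule.CheckUrlAccessForPrincipal evaluates the same <authorization> configuration that protects the admin folder, which makes it easier to tell a rule problem apart from a cached page or a stale login.

        using System.Security.Principal;
        using System.Web;
        using System.Web.Security;

        public static class ElmahAccessCheck
        {
            // Returns a one-line description of what URL authorization decides for this request.
            public static string Describe(HttpContext context)
            {
                IPrincipal user = context.User
                    ?? new GenericPrincipal(new GenericIdentity(string.Empty), new string[0]);
                bool allowed = UrlAuthorizationModule.CheckUrlAccessForPrincipal(
                    "~/admin/elmah.axd", user, "GET");
                string name = user.Identity.IsAuthenticated ? user.Identity.Name : "(anonymous)";
                return name + " -> " + (allowed ? "allowed" : "denied");
            }
        }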

    Read the article

  • Component returned failure code: 0x80600011 [nsIXSLTProcessorObsolete.transformDocument]

    - by Sean Ochoa
    So, I'm using the XSLT plugin for jQuery, and here's my code:

        function AddPlotcardEventHandlers() {
            // some code
        }

        function reportError(exception) {
            alert(exception.constructor.name + " Exception: " +
                ((exception.name) ? exception.name : "[unknown name]") +
                " - " + exception.message);
        }

        function GetPlotcards() {
            $("#content").xslt("../xml/plotcards.xml", "../xslt/plotcards.xsl", AddPlotcardEventHandlers, reportError);
        }

    Here's the modified jQuery plugin. I say that it's modified because I've added callbacks for success and error handling.

        /*
         * jquery.xslt.js
         *
         * Copyright (c) 2005-2008 Johann Burkard (<mailto:[email protected]>)
         * <http://eaio.com>
         *
         * Permission is hereby granted, free of charge, to any person obtaining a
         * copy of this software and associated documentation files (the "Software"),
         * to deal in the Software without restriction, including without limitation
         * the rights to use, copy, modify, merge, publish, distribute, sublicense,
         * and/or sell copies of the Software, and to permit persons to whom the
         * Software is furnished to do so, subject to the following conditions:
         *
         * The above copyright notice and this permission notice shall be included
         * in all copies or substantial portions of the Software.
         *
         * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
         * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
         * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN
         * NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
         * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
         * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
         * USE OR OTHER DEALINGS IN THE SOFTWARE.
         */

        /**
         * jQuery client-side XSLT plugins.
         *
         * @author <a href="mailto:[email protected]">Johann Burkard</a>
         * @version $Id: jquery.xslt.js,v 1.10 2008/08/29 21:34:24 Johann Exp $
         */
        (function($) {

            $.fn.xslt = function() {
                return this;
            }

            var str = /^\s*</;

            if (document.recalc) { // IE 5+
                $.fn.xslt = function(xml, xslt, onSuccess, onError) {
                    try {
                        var target = $(this);
                        var change = function() {
                            try {
                                var c = 'complete';
                                if (xm.readyState == c && xs.readyState == c) {
                                    window.setTimeout(function() {
                                        target.html(xm.transformNode(xs.XMLDocument));
                                        if (onSuccess) onSuccess();
                                    }, 50);
                                }
                            } catch (exception) {
                                if (onError) onError(exception);
                            }
                        };
                        var xm = document.createElement('xml');
                        xm.onreadystatechange = change;
                        xm[str.test(xml) ? "innerHTML" : "src"] = xml;
                        var xs = document.createElement('xml');
                        xs.onreadystatechange = change;
                        xs[str.test(xslt) ? "innerHTML" : "src"] = xslt;
                        $('body').append(xm).append(xs);
                        return this;
                    } catch (exception) {
                        if (onError) onError(exception);
                    }
                };
            } else if (window.DOMParser != undefined && window.XMLHttpRequest != undefined && window.XSLTProcessor != undefined) { // Mozilla 0.9.4+, Opera 9+
                var processor = new XSLTProcessor();
                var support = false;
                if ($.isFunction(processor.transformDocument)) {
                    support = window.XMLSerializer != undefined;
                } else {
                    support = true;
                }
                if (support) {
                    $.fn.xslt = function(xml, xslt, onSuccess, onError) {
                        try {
                            var target = $(this);
                            var transformed = false;
                            var xm = { readyState: 4 };
                            var xs = { readyState: 4 };
                            var change = function() {
                                try {
                                    if (xm.readyState == 4 && xs.readyState == 4 && !transformed) {
                                        var processor = new XSLTProcessor();
                                        if ($.isFunction(processor.transformDocument)) {
                                            // obsolete Mozilla interface
                                            resultDoc = document.implementation.createDocument("", "", null);
                                            processor.transformDocument(xm.responseXML, xs.responseXML, resultDoc, null);
                                            target.html(new XMLSerializer().serializeToString(resultDoc));
                                        } else {
                                            processor.importStylesheet(xs.responseXML);
                                            resultDoc = processor.transformToFragment(xm.responseXML, document);
                                            target.empty().append(resultDoc);
                                        }
                                        transformed = true;
                                        if (onSuccess) onSuccess();
                                    }
                                } catch (exception) {
                                    if (onError) onError(exception);
                                }
                            };
                            if (str.test(xml)) {
                                xm.responseXML = new DOMParser().parseFromString(xml, "text/xml");
                            } else {
                                xm = $.ajax({ dataType: "xml", url: xml });
                                xm.onreadystatechange = change;
                            }
                            if (str.test(xslt)) {
                                xs.responseXML = new DOMParser().parseFromString(xslt, "text/xml");
                                change();
                            } else {
                                xs = $.ajax({ dataType: "xml", url: xslt });
                                xs.onreadystatechange = change;
                            }
                        } catch (exception) {
                            if (onError) onError(exception);
                        } finally {
                            return this;
                        }
                    };
                }
            }
        })(jQuery);

    And here's my error message:

        Object Exception: [unknown name] - Component returned failure code: 0x80600011 [nsIXSLTProcessorObsolete.transformDocument]

    Here's the info on the browser that I'm using for testing (with the Firebug v1.5.4 add-on installed):

        Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3

    Here's my XML:

        <?xml version="1.0" encoding="ISO-8859-1"?>
        <plotcardCollection sortby="order">
          <plotcard order="2" id="1378">
            <name><![CDATA[[placeholder for name of plotcard 1378]]]></name>
            <content><![CDATA[[placeholder for content of plotcard 1378]]]></content>
            <tagCollection>
              <tag id="3"><![CDATA[[placeholder for tag with id=3]]]></tag>
              <tag id="7"><![CDATA[[placeholder for tag with id=7]]]></tag>
            </tagCollection>
          </plotcard>
          <plotcard order="1" id="2156">
            <name><![CDATA[[placeholder for name of plotcard 2156]]]></name>
            <content><![CDATA[[placeholder for content of plotcard 2156]]]></content>
            <tagCollection>
              <tag id="2"><![CDATA[[placeholder for tag with id=2]]]></tag>
              <tag id="9"><![CDATA[[placeholder for tag with id=9]]]></tag>
            </tagCollection>
          </plotcard>
        </plotcardCollection>

    Here's my XSLT:

        <?xml version="1.0" encoding="UTF-8"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="/plotcardCollection">
            <xsl:variable name="sortby" select="@sortby" />
            <xsl:for-each select="plotcard">
              <xsl:sort select="$sortby" data-type="number" order="ascending"/>
              <div>
                <!-- Start Plotcard -->
                <xsl:attribute name="class">Plotcard</xsl:attribute>
                <xsl:for-each select="@">
                  <xsl:value-of select="name()"/>
                  <xsl:text>='</xsl:text>
                  <xsl:if test="name() = 'id'">
                    <xsl:text>Plotcard-</xsl:text>
                  </xsl:if>
                  <xsl:value-of select="." />
                  <xsl:text>'</xsl:text>
                </xsl:for-each>
                <!-- Start Plotcard Name Section -->
                <div>
                  <xsl:attribute name="class">
                    <xsl:text disable-output-escaping="yes">PlotcardName</xsl:text>
                  </xsl:attribute>
                  <xsl:value-of select="name/text()"/>
                </div>
                <!-- Start Plotcard Content Section -->
                <div>
                  <xsl:attribute name="class">
                    <xsl:text disable-output-escaping="yes">PlotcardContent</xsl:text>
                  </xsl:attribute>
                  <xsl:value-of select="content/text()"/>
                </div>
              </div>
            </xsl:for-each>
          </xsl:template>
        </xsl:stylesheet>

    I'm really not sure what to do about this... any thoughts?

    Read the article

  • Authenticating GTK app to run with root permissions

    - by Thomas Tempelmann
    I have a UI app (using GTK) for Linux that needs to be run as root (it reads and writes /dev/disk*). Instead of requiring the user to open a root shell or use "sudo" manually every time he launches my app, I wonder if the app can use some OS-provided API to ask the user to relaunch the app with root permissions. (Note: GTK apps can't use the "setuid" mode, so that's not an option here.) The advantage would be an easier workflow: the user could, from his default user account, double-click my app on the desktop, and the app would then relaunch itself with root permission after the user has been authenticated by the API/OS. I ask this because OS X offers exactly this: an app can ask the OS to launch an executable with root permissions - the OS (and not the app) then asks the user to input his credentials, verifies them, and then launches the target as desired. I wonder if there's something similar for Linux (e.g. Ubuntu). Update: The app is a remote-operated disk repair tool for the unsavvy Linux user, and those Linux noobs won't have much understanding of using sudo or even changing their user's group memberships, especially if their disk just started acting up and they're freaking out. That's why I'm looking for a solution that avoids technicalities like this.

    Read the article

  • tinymce and jquery

    - by tirso
    Hi to all. I am using TinyMCE and SimpleModal (a jQuery plugin). Initially it did not work (meaning I could not type in the textarea) for the second or later opening of the modal dialog unless I refreshed the page. Now I have changed my code and I can type, but the problem that persists is that the submit button doesn't work anymore. I tried to trace it in Firebug and I found some errors like this:

        Permission denied to get property XULElement.accessibleType
        [Break on this error] var tinymce={majorVersion:"3",minorVersi...hanged();return true}return false})})();

    Here is the revised code. Thanks in advance.

        $(document).ready(function() {
            $('.basic').click(function (e) {
                e.preventDefault();
                $('#basic-modal-content').modal({onShow: function (dialog) {
                    tinyMCE.init({
                        // General options
                        mode : "textareas",
                        theme : "advanced",
                        setup : function (ed) {
                            ed.onKeyPress.add(
                                function (ed, evt) {
                                    var y = tinyMCE.get('test').getContent();
                                    $('#rem_char').html(100 - y.length);
                                    if (y.length == 100){
                                        //ed.getWin().document.body.innerHTML = y.substring(0,100);
                                        alert("Your element has exceeded the 100 character limit. If you add anymore text it may be truncated when saved.")
                                        return;
                                    }
                                }
                            );
                        },
                        plugins : "safari,pagebreak,style,layer,table,save,advhr,advimage,advlink,emotions,iespell,inlinepopups,insertdatetime,preview,media,searchreplace,print,contextmenu,paste,directionality,fullscreen,noneditable,visualchars,nonbreaking,xhtmlxtras,template,wordcount",
                        // Theme options
                        theme_advanced_buttons1 : "bold,italic,underline,strikethrough,|,justifyleft,justifycenter,justifyright,justifyfull",
                        theme_advanced_buttons2 : "fontselect,fontsizeselect,bullist,numlist,forecolor,backcolor",
                        theme_advanced_buttons3 : "",
                        theme_advanced_toolbar_location : "top",
                        theme_advanced_toolbar_align : "left",
                        dialog_type : "modal"
                    });
                    return false;
                }});
            });

            $('.close').click(function (e) {
                e.preventDefault();
                $.modal.close();
            });
        });

    Read the article

  • Shadowbox.js and dailymotion videos

    - by Greenie
    Hi there. I have a site set up with some thumbs and links to videos hosted on Vimeo. I show them in an overlay using shadowbox.js. This works perfectly. Now I want to add a video hosted on Dailymotion, but it doesn't work. The working link to the Vimeo video:

        <a rel="shadowbox;height=636;width=956" href="http://vimeo.com/moogaloop.swf?clip_id=11377863&server=vimeo.com&show_title=1&show_byline=1&show_portrait=1&color=00ADEF&fullscreen=1" class="player_text">thumbnail here</a>

    The non-working link to the Dailymotion video:

        <a rel="shadowbox;height=636;width=956" href="http://www.dailymotion.com/swf/xd6g8y?related=0&autoplay=1" class="player_text">thumbnail here</a>

    When the href is pasted into a browser, both links work fine. Both are played in an SWF as far as I can see, so I can't see why Shadowbox won't show it, unless it's a permission problem on the Dailymotion video. Does anyone know what I'm doing wrong?

    Read the article

  • File upload working in one version but not the other

    - by rod
    Hi All, I have a web application which has two different versions deployed: one is an ASP.NET Web Forms version and the other is an ASP.NET MVC version. I have a file upload page which dynamically creates a directory folder for the target location of the file to be uploaded to. The application uses forms authentication for outside users and integrated Windows authentication for users inside the network. I have an issue where a user can upload a file in the ASP.NET Web Forms version fine, but when the same user tries to upload the file in the MVC version, the user gets a File.IO permission error. Here's the kicker: I can upload the same file in both versions. The user is in a remote location, but I believe they're still inside the network because they can work on the other parts of the application just fine. Possible clues: in the event log there's an info entry that says "Event code: 4005 Forms authentication failed for the request. Reason: Ticket supplied was invalid." What would be your initial thoughts on why this is happening? Thanks, Rod.
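
    For context, here is a minimal sketch of the kind of MVC upload action described (the folder name, route and parameter are hypothetical, not taken from the question). Whatever identity executes the write (the application pool account, or the calling Windows user if impersonation is enabled) is the identity that needs NTFS write access to the target folder, which is one reason the same upload can behave differently for different users.

        using System.IO;
        using System.Web;
        using System.Web.Mvc;

        public class UploadController : Controller
        {
            [AcceptVerbs(HttpVerbs.Post)]
            public ActionResult Upload(string caseNumber)
            {
                HttpPostedFileBase file = Request.Files["file"];
                if (file == null || file.ContentLength == 0)
                    return RedirectToAction("Index");

                // The directory is created on the fly; this is where a file I/O permission
                // error surfaces if the executing identity lacks write rights.
                string folder = Path.Combine(Server.MapPath("~/Uploads"), caseNumber);
                Directory.CreateDirectory(folder);
                file.SaveAs(Path.Combine(folder, Path.GetFileName(file.FileName)));

                return RedirectToAction("Index");
            }
        }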

    Read the article

  • Getting a user's Facebook profile URL

    - by Greg Pabst
    I am creating a registry site so similar people can find each other easily. I don't want to use Facebook Connect as the primary log-in method or use Facebook to store their information; I'll be creating a database on my end to store that info. For security reasons I won't be displaying the user's address, phone number or email address, so I wanted to provide the next best way for people to connect with each other, and this is where Facebook comes in. Normally I would just ask them to type their Facebook URL into a text box, but I don't think most people know what their URL is, which is why I think I need to use Facebook Connect. So here is my idea: when the user signs up there is a check box that, when checked, signifies they are allowing people to find them on Facebook. I assume once they click the register button that a Facebook Connect popup will show up asking for permission to access their Facebook account. When they "allow" it, then I can get their profile URL. All I need is their Facebook profile URL; I don't want any other Facebook features or information. Is Facebook Connect the best thing to use for this scenario? Is there an easier way? Several months ago the Facebook Connect site used to have examples of doing this, but all the documentation has been rearranged and changed and I can't seem to find the information. Any help you can provide would be great!

    Read the article

  • How do I run a command as a different user from a root cronjob?

    - by rob
    I seem to be stuck between an NFS limitation and a cron limitation. I've got root's cron (on RHEL5) running a shell script that, among other things, needs to rsync some files over an NFS mount. The files on the NFS mount are owned by the apache user with mode 700, so only the apache user can run the rsync command - running as root yields a permission error (NFS being a rare case, apparently, where the root user is not all-powerful?). When I just want to run the rsync by hand, I can use "sudo -u apache rsync ...". But sudo doesn't work in cron - it says "sudo: sorry, you must have a tty to run sudo". I don't want to run the whole script as apache (i.e. from apache's crontab) because other parts of the script do require root; it's just that one command that needs to run as apache. And I would really prefer not to change the mode on the files, as that would involve significant changes to other applications. There's gotta be a way to accomplish "sudo -u apache" from cron?? Thanks! rob

    Read the article

  • How to upload files?

    - by Brian Roisentul
    I just wanted to know how to configure FCKeditor to upload files and images to the server where the website is hosted. The relevant part of its config file (I think) looks like this:

        FCKConfig.LinkUpload = true ;
        FCKConfig.LinkUploadURL = FCKConfig.BasePath + 'filemanager/connectors/' + _QuickUploadLanguage + '/upload.' + _QuickUploadExtension ;
        FCKConfig.LinkUploadAllowedExtensions = ".(7z|aiff|asf|avi|bmp|csv|doc|fla|flv|gif|gz|gzip|jpeg|jpg|mid|mov|mp3|mp4|mpc|mpeg|mpg|ods|odt|pdf|png|ppt|pxd|qt|ram|rar|rm|rmi|rmvb|rtf|sdc|sitd|swf|sxc|sxw|tar|tgz|tif|tiff|txt|vsd|wav|wma|wmv|xls|xml|zip)$" ; // empty for all
        FCKConfig.LinkUploadDeniedExtensions = "" ; // empty for no one

        FCKConfig.ImageUpload = true ;
        FCKConfig.ImageUploadURL = FCKConfig.BasePath + 'filemanager/connectors/' + _QuickUploadLanguage + '/upload.' + _QuickUploadExtension + '?Type=Image' ;
        FCKConfig.ImageUploadAllowedExtensions = ".(jpg|gif|jpeg|png|bmp)$" ; // empty for all
        FCKConfig.ImageUploadDeniedExtensions = "" ; // empty for no one

    Could it be a folder permission problem? Is this part of the config.js alright?

    Read the article

  • Security implications of writing files using PHP

    - by susmits
    I'm currently trying to create a CMS using PHP, purely in the interest of education. I want the administrators to be able to create content, which will be parsed and saved on the server storage in pure HTML form to avoid the overhead that executing PHP script would incur. Unfortunately, I could only think of a few ways of doing so:
    1. Setting write permission on every directory where the CMS should want to write a file. This sounds like quite a bad idea.
    2. Setting write permissions on a single cached directory. A PHP script could then include or fopen/fread/echo the content from a file in the cached directory at request-time. This could perhaps be carried out in a Mediawiki-esque fashion: something like index.php?page=xyz could read and echo content from cached/xyz.html at runtime. However, I'll need to ensure the sanity of $_GET['page'] to prevent nasty variations like index.php?page=http://www.bad-site.org/malicious-script.js.
    I'm personally not too thrilled by the second idea, but the first one sounds very insecure. Could someone please suggest a good way of getting this done?

    Read the article

  • facebook: can't send email from app to user

    - by flybywire
    I can't send email to my app users, even though I have the permissions. I am working with the Java library, although I don't think it is related to that.

        long uid = ...;
        Collection<Long> uids = new ArrayList<Long>();
        uids.add(uid);
        FacebookXmlRestClient client = new FacebookXmlRestClient(api, secret);
        boolean sendEmailPerm = client.users_hasAppPermission(Permission.EMAIL, uid);
        System.out.println("Can send email: " + sendEmailPerm);
        Collection<String> sent = client.notifications_sendTextEmail(uids, "subject", "body");
        System.out.println("Succesfully sent email to: " + sent);
        sent = client.notifications_sendFbmlEmail(uids, "subject", "body");
        System.out.println("Succesfully sent email to: " + sent);

    I am trying both with FBML and text email. I can also obtain the user's proxied_email property, but when I send email to that address with my regular mail client it doesn't arrive. The output is:

        Can send email: true
        Succesfully sent email to: []
        Succesfully sent email to: []

    Read the article

  • Custom basic authentication fails in IIS7

    - by manu08
    I have an ASP.NET MVC application, with some RESTful services that I'm trying to secure using custom basic authentication (they are authenticated against my own database). I have implemented this by writing an HTTPModule. I have one method attached to the HttpApplication.AuthenticateRequest event, which calls this method in the case of authentication failure:

        private static void RejectWith401(HttpApplication app)
        {
            app.Response.StatusCode = 401;
            app.Response.StatusDescription = "Access Denied";
            app.CompleteRequest();
        }

    This method is attached to the HttpApplication.EndRequest event:

        public void OnEndRequest(object source, EventArgs eventArgs)
        {
            var app = (HttpApplication) source;
            if (app.Response.StatusCode == 401)
            {
                string val = String.Format("Basic Realm=\"{0}\"", "MyCustomBasicAuthentication");
                app.Response.AppendHeader("WWW-Authenticate", val);
            }
        }

    This code adds the "WWW-Authenticate" header which tells the browser to throw up the login dialog. This works perfectly when I debug locally using Visual Studio's web server. But it fails when I run it in IIS7. For IIS7 I have the built-in authentication modules all turned off, except anonymous. It still returns an HTTP 401 response, but it appears to be removing the WWW-Authenticate header. Any ideas?
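
    For completeness, here is a hedged sketch of what the AuthenticateRequest side mentioned above typically looks like for custom basic authentication. It is not the question's code: the class and method names are made up, and ValidateAgainstDatabase is a placeholder for the custom database check. It only illustrates the header parsing that pairs with the 401/WWW-Authenticate handling shown in the question.

        using System;
        using System.Security.Principal;
        using System.Text;
        using System.Web;

        public class BasicAuthSketchModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.AuthenticateRequest += OnAuthenticateRequest;
            }

            private static void OnAuthenticateRequest(object sender, EventArgs e)
            {
                var app = (HttpApplication)sender;
                string header = app.Request.Headers["Authorization"];
                if (header != null && header.StartsWith("Basic ", StringComparison.OrdinalIgnoreCase))
                {
                    // Decode "user:password" from the Base64 payload of the header.
                    string[] parts = Encoding.UTF8.GetString(
                        Convert.FromBase64String(header.Substring(6))).Split(new[] { ':' }, 2);
                    if (parts.Length == 2 && ValidateAgainstDatabase(parts[0], parts[1]))
                    {
                        app.Context.User = new GenericPrincipal(
                            new GenericIdentity(parts[0], "Basic"), new string[0]);
                        return;
                    }
                }
                // Missing or bad credentials: a 401 set here is what later triggers the
                // WWW-Authenticate header in the EndRequest handler from the question.
            }

            private static bool ValidateAgainstDatabase(string user, string password)
            {
                return false; // placeholder for the real lookup
            }

            public void Dispose() { }
        }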

    Read the article

  • Trying to attach a file from SD Card to email

    - by Chrispix
    I am trying to launch an Intent to send an email. All of that works, but when I try to actually send the email, a couple of 'weird' things happen. Here is the code:

        Intent sendIntent = new Intent(Intent.ACTION_SEND);
        sendIntent.setType("image/jpeg");
        sendIntent.putExtra(Intent.EXTRA_SUBJECT, "Photo");
        sendIntent.putExtra(Intent.EXTRA_STREAM, Uri.parse("file://sdcard/dcim/Camera/filename.jpg"));
        sendIntent.putExtra(Intent.EXTRA_TEXT, "Enjoy the photo");
        startActivity(Intent.createChooser(sendIntent, "Email:"));

    If I launch it using the Gmail menu context, it shows the attachment, lets me type who the email is to, and lets me edit the body and subject. No big deal. I hit send, and it sends. The only thing is the attachment does NOT get sent. So I figured, why not try it with the Email menu context (for my backup email account on my phone)? It shows the attachment, but no text at all in the body or subject. When I send it, the attachment sends correctly. That would lead me to believe something is quite wrong. Do I need a new permission in the manifest to launch an intent to send email with an attachment? What am I doing wrong?

    Read the article

  • Accessing Virtual Host from outside LAN

    - by Ray
    I'm setting up a web development platform that makes it as easy as possible to write and test all code on my local machine, and sync this with my web server. I set up several virtual hosts so that I can access my projects by typing in "project" instead of "localhost/project" as the URL. I also want to set this up so that I can access my projects from any network. I signed up for a DynDNS URL that points to my computer's IP address. This worked great from anywhere before I set up the virtual hosts. Now when I try to access my projects by typing in my DynDNS URL, I get the 403 Forbidden error message, "You don't have permission to access / on this server." To set up my virtual hosts, I edited two files: hosts in the system32/drivers/etc folder, and httpd-vhosts.conf in the Apache folder of my WAMP installation. In the hosts file, I simply added the server name to associate with 127.0.0.1. I added the following to the httpd-vhosts.conf file:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot "c:/wamp/www/ladybug"
            ServerName ladybug
            ErrorLog "logs/your_own-error.log"
            CustomLog "logs/your_own-access.log" common
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot "c:/wamp/www"
            ServerName localhost
            ErrorLog "logs/localhost-error.log"
            CustomLog "logs/localhost-access.log" common
        </VirtualHost>

    Any idea why I can't access my projects by typing in my DynDNS URL? Also, is it possible to set up virtual hosts so that when I type http://projects from a random computer outside of my network, I access url.dyndns.info/projects (a.k.a. my WAMP projects on my home computer)? Help is much appreciated, thanks!

    Read the article

  • How can I fetch Google static maps with TIdHTTP?

    - by cloudstrif3
    I'm trying to return content from maps.google.com from within Delphi 2006 using the TIdHTTP component. My code is as follows:

        procedure TForm1.GetGoogleMap();
        var
          t_GetRequest: String;
          t_Source: TStringList;
          t_Stream: TMemoryStream;
        begin
          t_Source := TStringList.Create;
          try
            t_Stream := TMemoryStream.Create;
            try
              t_GetRequest := 'http://maps.google.com/maps/api/staticmap?' +
                'center=Brooklyn+Bridge,New+York,NY' +
                '&zoom=14' +
                '&size=512x512' +
                '&maptype=roadmap' +
                '&markers=color:blue|label:S|40.702147,-74.015794' +
                '&markers=color:green|label:G|40.711614,-74.012318' +
                '&markers=color:red|color:red|label:C|40.718217,-73.998284' +
                '&sensor=false';
              IdHTTP1.Post(t_GetRequest, t_Source, t_Stream);
              t_Stream.SaveToFile('google.html');
            finally
              t_Stream.Free;
            end;
          finally
            t_Source.Free;
          end;
        end;

    However, I keep getting the response HTTP/1.0 403 Forbidden. I assume this means that I don't have permission to make this request, but if I copy the URL into my web browser (IE 8), it works fine. Is there some header information that I need, or something else?
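
    One thing worth noting from the code above: the Static Maps URL is an ordinary HTTP GET of a URL whose query string carries all the parameters, while the snippet issues a POST. As an illustration only (in C#, not Delphi, and making no claims about TIdHTTP's API), fetching the same map as a GET looks like the sketch below; whether that is actually the cause of the 403 here is not confirmed by the question.

        using System.Net;

        class StaticMapFetchSketch
        {
            static void Main()
            {
                // Same parameters as in the question; the response is a PNG image, not HTML.
                string url = "http://maps.google.com/maps/api/staticmap?"
                    + "center=Brooklyn+Bridge,New+York,NY&zoom=14&size=512x512&maptype=roadmap"
                    + "&markers=color:blue|label:S|40.702147,-74.015794"
                    + "&markers=color:green|label:G|40.711614,-74.012318"
                    + "&markers=color:red|color:red|label:C|40.718217,-73.998284"
                    + "&sensor=false";

                using (var client = new WebClient())
                {
                    client.DownloadFile(url, "map.png");   // plain GET of the URL
                }
            }
        }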

    Read the article

  • How to disable mod_security2 rule (false positive) for one domain on centos 5

    - by nicholas.alipaz
    Hi, I have mod_security enabled on a CentOS 5 server, and one of the rules is keeping a user from posting some text on a form. The text is legitimate, but it has the word 'create' and an html <table> tag later in it, so it is causing a false positive. The error I am receiving is below:

        [Sun Apr 25 20:36:53 2010] [error] [client 76.171.171.xxx] ModSecurity: Access denied with code 500 (phase 2). Pattern match "((alter|create|drop)[[:space:]]+(column|database|procedure|table)|delete[[:space:]]+from|update.+set.+=)" at ARGS:body. [file "/usr/local/apache/conf/modsec2.user.conf"] [line "352"] [id "300015"] [rev "1"] [msg "Generic SQL injection protection"] [severity "CRITICAL"] [hostname "www.mysite.com"] [uri "/node/181/edit"] [unique_id "@TaVDEWnlusAABQv9@oAAAAD"]

    and here is /usr/local/apache/conf/modsec2.user.conf (line 352):

        #Generic SQL sigs
        SecRule ARGS "((alter|create|drop)[[:space:]]+(column|database|procedure|table)|delete[[:space:]]+from|update.+set.+=)" "id:1,rev:1,severity:2,msg:'Generic SQL injection protection'"

    The questions I have are:
    1. What should I do to "whitelist" or allow this rule to get through? What file do I create and where?
    2. How should I alter this rule? Can I set it to only be allowed for the one domain, since it is the only one having the issue on this dedicated server, or is there a better way to exclude table tags perhaps?
    Thanks guys

    Read the article

  • Just messed up a server misusing chown, how to execute it correctly?

    - by Jack Webb-Heller
    Hi! I'm moving from an old shared host to a dedicated server at MediaTemple. The server is running the Plesk control panel but, as far as I can tell, there's no way via the interface to do what I want to do. On the old shared host, running cPanel, I created a .zip archive of all the website's files. I downloaded this to my computer, then uploaded it with FTP to the new host account I'd set up. Finally, I logged in via SSH, navigated to the directory the zip was stored in (something like var/www/vhosts/mysite.com/httpdocs/) and ran the unzip command on the file sitearchive.zip. This extracted everything just fine, and the site appeared to work. The problem: when I tried to edit a file through FTP, I got "Error - 160: Permission Denied". When I Get Info for the file I'm trying to edit, it says the owner and group is swimwir1. I attempted to use chown at this point to change the owner - and yes, as you may be able to tell, I'm a little inexperienced in SSH ;) - luckily the server was new, since the command I ran, chown -R newuser /, appeared to mess a load of stuff up. The reason I used / on the end rather than /var/www/vhosts/mysite.com/httpdocs/ was because I'd already cd'ed into there, so I presumed the / was relative to where I was working. This may or may not be the case, I have no idea; either way, Plesk was no longer accessible, although Apache and other things continued to work. I realised my mistake, and deciding it wasn't worth the hassle of 1) being an amateur and 2) trying to fix it, I just reprovisioned the server to start afresh. So - what do I do to change the owner of these files correctly? Thanks for helping out a confused beginner! Jack

    Read the article
