Search Results

Search found 19074 results on 763 pages for 'secure government government cloud security'.

Page 227 of 763

  • What should a developer know before building a public web site?

    - by Joel Coehoorn
    What things should a programmer implementing the technical details of a web site address before making the site public? If Jeff Atwood can forget about HttpOnly cookies, sitemaps, and cross-site request forgeries all in the same site, what important thing could I be forgetting as well? I'm thinking about this from a web developer's perspective, such that someone else is creating the actual design and content for the site. So while usability and content may be more important than the platform, you the programmer have little say in that. What you do need to worry about is that your implementation of the platform is stable, performs well, is secure, and meets any other business goals (like not costing too much, not taking too long to build, and ranking as well with Google as the content supports). Think of this from the perspective of a developer who's done some work for intranet-type applications in a fairly trusted environment, and is about to have his first shot at putting out a potentially popular site for the entire big bad world wide web. Also: I'm looking for something more specific than just a vague "web standards" response. I mean, HTML, JavaScript, and CSS over HTTP are pretty much a given, especially when I've already specified that you're a professional web developer. So going beyond that, which standards? In what circumstances, and why? Provide a link to the standard's specification. This question is community wiki, so please feel free to edit the answer to add links to good articles that will help explain or teach each particular point.
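
    On the HttpOnly-cookie point in particular, a minimal illustration of what the hardened defaults look like; Python/Flask is used purely as an example here, and the handler and cookie names are assumptions, not anything from the question:

        # Hypothetical Flask handler showing hardened cookie and header settings.
        from flask import Flask, make_response

        app = Flask(__name__)

        @app.route("/login", methods=["POST"])
        def login():
            resp = make_response("ok")
            # HttpOnly keeps the cookie away from JavaScript; Secure restricts it to HTTPS.
            resp.set_cookie("session", "opaque-session-id",
                            httponly=True, secure=True, samesite="Lax")
            # A restrictive Content-Security-Policy reduces the impact of injected markup.
            resp.headers["Content-Security-Policy"] = "default-src 'self'"
            return resp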

    Read the article

  • Should I Use Anchor, Button Or Form Submit For "Follow" Feature In Rails

    - by James
    I am developing an application in Rails 3 using a NoSQL database. I am trying to add a "Follow" feature similar to Twitter or GitHub. In terms of markup, I have determined that there are three ways to do this.

    1) Use a regular anchor. (GitHub uses this method.)

        <a href="/users/follow?target=Joe">Follow</a>

    2) Use a button. (Twitter uses this method.)

        <button href="/friendships/create/">Follow</button>

    3) Use a form with a submit button. (Has some advantages for me, but I haven't seen anyone do it yet.)

        <form method="post" id="connection_new" class="connection_new" action="/users/follow">
          <input type="hidden" value="60d7b563355243796dd8496e17d36329" name="target" id="target">
          <input type="submit" value="Follow" name="commit" id="connection_submit">
        </form>

    Since I want to store the user_id in the database and not the username, options 1 and 2 will force me to do a database query to get the actual user_id, whereas option 3 will allow me to store the user_id in a hidden form field so that I don't have to do any database lookups. I can just get the id from the params hash on form submission. I have successfully got each of these methods working, but I would like to know which is the best way to do this. Which way is more semantic, secure, better for spiders, etc.? Is there a reason both Twitter and GitHub don't use forms to do this? Any guidance would be appreciated. I am leaning towards the form method, since then I don't have to query the db to get the id of the user, but I am worried that there must be a reason the big guys are just using anchors or buttons for this. I am a newb, so go easy on me if I am totally missing something. Thanks!

    Read the article

  • Building a system that allows users to see a video only once

    - by Bart van Heukelom
    My client wants to distribute a video to some people, specifically car dealers, but he doesn't want the video to end up on Youtube or something like that. Therefore he wants the recipients of the video to be able to see it only once. My idea to implement this is:

    - Generate a unique key per viewer
    - Send each viewer a link to a page with a Flash based video player, with their key in the URL
    - Have Flash get the video from the server. On the server the key is checked and the file sent (using php's readfile or something equivalent). Then the key is invalidated.

    I was thinking this wouldn't take more than a day to build. I know that if you want somebody to be able to play something, you implicitly give them the power to record it as well, but the client just wants me to make it as hard as possible. Do you think this is secure enough for the intended audience? Anything else I can do to add some security without exceeding the development time of 1 day? I'm also interested in ready made solutions, if they exist.
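
    A minimal sketch of the key-check-and-invalidate step described above, in Python; the in-memory store and function names are assumptions standing in for whatever the real server would use:

        import secrets

        # One-time keys mapped to the video each recipient may watch (illustrative store).
        valid_keys = {}

        def issue_key(video_path):
            key = secrets.token_urlsafe(32)   # unique, hard-to-guess key per viewer
            valid_keys[key] = video_path
            return key

        def serve_video(key):
            # pop() both checks and invalidates the key in one step,
            # so a second request with the same key is refused.
            video_path = valid_keys.pop(key, None)
            if video_path is None:
                raise PermissionError("key unknown or already used")
            with open(video_path, "rb") as f:
                return f.read()   # in PHP terms, the readfile() step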

    Read the article

  • Firing through HTTP a Perl script for sending signals to daemons

    - by Eric Fortis
    Hello guys, I'm using apache2 on Ubuntu. I have a Perl script which basically reads the file names of a directory, then rewrites a text file, then sends a signal to a daemon. How can this be done as securely as possible through a web page? Actually I can run the code below, but not if I remove the comments. I'm looking for advice considering:

    - Using HTTP requests?
    - How about Apache file permissions on the directory shown in the code?
    - Is htaccess enough to enable user/pass access to the CGI?
    - Should I use a database instead of writing to a file, and run a cron job querying the db, with permission granted to write and send the signal?
    - Granting as few permissions as possible to the webserver. Should I set up a VPN?

        #!/usr/bin/perl -wT
        use strict;
        use CGI;

        #@fileList = </home/user/*>;    # read a directory listing
        my $query = CGI->new();
        print $query->header( "text/html" ),
              $query->p( "FirstFileNameInArray" ),
              #$query->p( $fileList[0] ),    # output the first file in the directory
              $query->end_html;

    Read the article

  • How would a user stay logged in to a REST-based website?

    - by unforgiven3
    A year or so ago I asked this question: Can you help me understand this? “Common REST Mistakes: Sessions are irrelevant”. My question was essentially this: Okay, I get that HTTP authentication is done automatically on every message - but how? Is the username/password sent with every request? Doesn't that just increase attack surface area? I feel like I'm missing part of the puzzle. The answers I received made perfect sense in the context of a mobile (iPhone, Android, WP7) app - when talking to a REST service, the app would just send user credentials along with each request. That worked great for me. But now, I would like to better understand how one would secure a REST-like website, like StackOverflow itself or something like Reddit. How would things work if it was a user logged in via a web browser instead of logged in via an iPhone app? What happens when a user logs in? Are the credentials saved in the browser somehow? How would the browser know what credentials to send with subsequent REST requests? What if it's a JavaScript call to a webservice? How would the JavaScript call include user credentials? I'll be quite frank: my understanding of security when it comes to websites is pretty limited. I enjoyed working with REST services from an app perspective, but now I want to try and build a website that is based on REST principles, and I'm finding myself to be pretty lost. If there is anything in the above question that is unclear that you'd like me to clarify, please leave a comment and I'll address it.
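
    To make the "credentials on every request" idea concrete, a small illustration with Python's requests library (the URL and credentials are made up): with HTTP Basic auth there is no server-side session; the client simply attaches an Authorization header to each call, which is what a browser also does once it has prompted for credentials, and what a JavaScript call would have to do as well.

        import requests

        # Each call carries the credentials; the server authenticates it independently.
        resp = requests.get("https://api.example.com/questions/42",
                            auth=("alice", "s3cret"))   # sends an Authorization: Basic header
        print(resp.status_code)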

    Read the article

  • Bruteforcing Blackberry PersistentStore?

    - by Haoest
    Hello, I am experimenting with Blackberry's Persistent Store, but I have gotten nowhere so far, which is good, I guess. So I have written a short program that attempts to iterate from 0 up to a specific upper bound to search for persisted objects. Blackberry seems to intentionally slow the loop. Check this out:

        String result = "result: \n";
        int ub = 3000;
        Date start = Calendar.getInstance().getTime();
        for (int i = 0; i < ub; i++) {
            PersistentObject o = PersistentStore.getPersistentObject(i);
            if (o.getContents() != null) {
                result += (String) o.getContents() + "\n";
            }
        }
        result += "end result\n";
        result += "from 0 to " + ub + " took "
                + (Calendar.getInstance().getTime().getTime() - start.getTime()) / 1000
                + " seconds";

    From 0 to 3000 took 20 seconds. Is this enough to conclude that brute-forcing is not a practical method to breach the Blackberry? In general, how secure is the BB Persistent Store?

    Read the article

  • How to control access to third party HTML pages

    - by Wylie
    Hello, We have a Learning Management System (LMS) that runs on its own server (IIS/Server 2003). Students must log in with Forms authentication to gain access to the content. We want to offer access to third-party Flash and audio that is embedded in HTML pages hosted on the third-party server (IIS/Server 2003). Currently we use a frame in a pop-up window that is populated via a simple URL to the third-party HTML pages. How can the third party control access to their content, so that only students who launch the pop-up windows from our site can access it? Since the content is mostly video and Flash, we would prefer not to stream all of their content through our server to the student. We have a programming staff, so we could maybe:

    - use either POST or GET for our HTTP request to the third-party server
    - use SSL
    - programmatically assign a global NT user account to all of our users and then do some kind of Active Directory login from the LMS server to the third-party server
    - host the third-party content at Amazon S3? Would this allow for secure access/download?

    These are just ideas. We really have no idea. Any suggestions would be greatly appreciated. TIA, Wylie
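
    On the Amazon S3 idea at the end of the question: S3 can serve private objects through expiring, signed URLs, so each student could be handed a short-lived link without the LMS proxying the video itself. A rough sketch with boto3; the bucket and key names are made up:

        import boto3

        s3 = boto3.client("s3")

        # URL is only valid for 15 minutes; after that the link stops working.
        url = s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": "third-party-content", "Key": "course1/video.flv"},
            ExpiresIn=900,
        )
        print(url)  # embed this in the pop-up page served to the authenticated student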

    Read the article

  • PHP Mailer Class - Securing Email Credentials

    - by Alan A
    I am using the PHP mailer class to send email via my scripts. The structure is as follows:

        $mail = new PHPMailer;
        $mail->IsSMTP();                          // Set mailer to use SMTP
        $mail->Host = 'myserver.com';             // Specify main and backup server
        $mail->SMTPAuth = true;                   // Enable SMTP authentication
        $mail->Username = '[email protected]';    // SMTP username
        $mail->Password = 'user123';              // SMTP password
        $mail->SMTPSecure = 'pass123';

    It seems to me to be a bit of a security hole having the mailbox credentials in plain view. So I thought I might put these in an external file outside of the web root. My question is how I would then assign these values to the $mail object. I of course know how to use include and/or require... would it simply be a case of:

        $mail->IsSMTP();                          // Set mailer to use SMTP
        $mail->Host = 'myserver.com';             // Specify main and backup server
        $mail->SMTPAuth = true;                   // Enable SMTP authentication
        include '../locationOutsideWebroot/emailCredentials.php';
        $mail->SMTPSecure = 'pass123';

    Then emailCredentials.php:

        <?php
        $mail->Username = '[email protected]';
        $mail->Password = 'user123';
        ?>

    Would this be sufficient and secure enough? Thanks, Alan.

    Read the article

  • Hide form if javascript disabled

    - by Kero
    I need to check whether the user has disabled JavaScript, whether from the browser, a firewall, or any other place, and if so never show the form. I have done lots of searching and found solutions, but unfortunately I didn't get the right one.

    - Using a style inside a noscript tag. This one can be broken by removing the style:

        <noscript>
          <style type="text/css">
            .HideClass { display: none; }
          </style>
        </noscript>

      The code above will work just fine, but there are lots of problems with the noscript tag, as here. Besides that, I don't want to redirect the user with a noscript tag either... and besides that, I can quickly stop the page from loading to break this meta, or disable the meta tag from IE:

        <meta http-equiv="refresh" content="0; URL=Frm_JavaScriptDisable.aspx" />

    - Another way is to redirect the user with JavaScript, but this will only work for, let's say, 99% of users; it isn't a lovely way and will slow down the website:

        window.location = "http://www.location.com/page.aspx";

    Are there any other ideas or suggestions to secure working with JavaScript, and prevent the user from entering the website or seeing my form unless JavaScript is enabled?

    Read the article

  • Can I rely on S3 to keep my data secure?

    - by Jamie Hale
    I want to back up sensitive personal data to S3 via an rsync-style interface. I'm currently using s3cmd - a great tool - but it doesn't yet support encrypted syncs. This means that while my data is encrypted (via SSL) during transfer, it's stored on their end unencrypted. I want to know if this is a big deal. The S3 FAQ says "Amazon S3 uses proven cryptographic methods to authenticate users... If you would like extra security, there is no restriction on encrypting your data before storing it in Amazon S3." Why would I like extra security? Is there some way my buckets could be opened to prying eyes without my knowing? Or are they just trying to save you when you accidentally change your ACLs and make your buckets world-readable?
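
    If the "extra security" route is taken, the usual answer is client-side encryption before the sync ever runs, so S3 only ever stores ciphertext. A minimal sketch, assuming Python's cryptography package; the file names are placeholders:

        from cryptography.fernet import Fernet

        # Generate once and keep this key offline; losing it means losing the backups.
        key = Fernet.generate_key()
        fernet = Fernet(key)

        with open("documents.tar", "rb") as f:
            ciphertext = fernet.encrypt(f.read())

        with open("documents.tar.enc", "wb") as f:
            f.write(ciphertext)   # this encrypted file is what gets synced to S3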

    Read the article

  • Apache and fastcgi - How to secure an Apache server with fastcgi enabled?

    - by skyeagle
    I am running a headless server on Ubuntu 10.x with Apache 2.2, and I am writing a FastCGI application for deployment on the server. I remember reading a while back (I could be wrong) that running CGI (and by implication FastCGI) on a server can provide 'backdoors' for potential attackers, or at the very least could compromise the server if certain security measures are not taken. My questions are: what are the security gotchas that I have to be aware of if I am enabling mod_fastcgi on my Apache server? And I want to run the FastCGI application as a specific user (with restricted access); how do I do this?

    Read the article

  • Flexible cloud file storage for a web.py app?

    - by benwad
    I'm creating a web app using web.py (although I may later rewrite it for Tornado) which involves a lot of file manipulation. For example, the app will have a git-style 'commit' operation, in which some files are sent to the server and placed in a new folder along with the unchanged files from the last version. This will involve copying the old folder to the new folder, replacing/adding/deleting the files in the commit in the new folder, then deleting all unchanged files in the old folder (as they are now in the new folder). I've decided on Heroku for the app hosting environment, and I am currently looking at cloud storage options that are built with these kinds of operations in mind. I was thinking of Amazon S3; however, I'm not sure if it lets you carry out these kinds of file operations in place. I was thinking I may have to load these files into the server's RAM and then re-insert them into the bucket, costing me a fortune. I was also thinking of Progstr Filer (http://filer.progstr.com/index.html), but that seems to only integrate with Rails apps. Can anyone help with this? Basically I want file operations to be as cheap as possible.
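
    On the in-place worry: S3 has no true rename-in-place, but it can copy objects server side, so the commit flow described above need not pull file bodies through the web server's RAM. A rough illustration with boto3; the bucket, prefixes, and function names are assumptions:

        import boto3

        s3 = boto3.client("s3")

        def copy_unchanged(bucket, old_prefix, new_prefix, filename):
            # The copy happens entirely inside S3; only metadata crosses the wire.
            s3.copy_object(
                Bucket=bucket,
                CopySource={"Bucket": bucket, "Key": f"{old_prefix}/{filename}"},
                Key=f"{new_prefix}/{filename}",
            )

        def delete_old(bucket, old_prefix, filename):
            s3.delete_object(Bucket=bucket, Key=f"{old_prefix}/{filename}")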

    Read the article

  • What is Social Cloud, or social cloud computing?

    - by RED League Heroes-Oracle
    Cloud computing is creating new possibilities for companies in their business, such as getting closer to customers through digital tools. Cross-referencing customer record information stored on the company's servers with social information, that is, with information available on the internet (social networks, blogs, geolocation), can certainly help you better understand the behavior of your consumers and, through these analyses, carry out different actions to get ever closer to them or to understand new needs. Consumer behavior has been changing with the advance of the internet and new technologies. Integrating these new technologies into the company's business is a great opportunity to keep up with consumers and observe new behavioral patterns. These new patterns can present new opportunities. Using cloud computing to add knowledge to what the company already has can be one of the keys to transforming its reality. Currently, how do your consumers behave? What do they usually do? Do they travel monthly? Do they have children? Are they looking for new products? Which products are they looking for? Where are most of your consumers during their leisure time? Knowing where they are during their leisure time can be a great opportunity for your brand to be seen! How do you address these questions today? Oracle's Social Cloud solutions can help you! Take the opportunity to download the free e-book "Simplifique sua mobilidade empresarial" (Simplify Your Enterprise Mobility) and learn about the transformational power of mobility in your business. DOWNLOAD LINK: http://bit.ly/e-bookmobilidade

    Read the article

  • Platform for Efficiency: Boeing Defense, Space & Security integrates supply chain processes using Oracle Business Process Management solutions. by Fred Sandsmark

    - by JuergenKress
    Like most companies, aerospace giant Boeing has its jargon - words and phrases that uniquely define its products and processes. Take the word platform. It is used at Boeing to mean a family of aircraft - the F/A-18 fighter, for example, or the 777 jetliner. Since August 2009, employees in the Global Services & Support (GS&S) division of Boeing Defense, Space & Security have been talking about a different sort of platform: a supply chain technology platform, based on Oracle Business Process Management (Oracle BPM) solutions and Oracle SOA Suite. That platform, built with the assistance of Oracle Diamond Partner Capgemini, is serving as a jumping-off point for Boeing's GS&S staff to deploy radically improved business processes supported by Oracle Fusion Applications to build a high-visibility, end-to-end supply chain. This business process-driven technology platform has ambitious goals: to help GS&S respond more quickly and accurately to its customers' needs, to make business processes at all GS&S sites more consistent and less expensive, and to create a foundation for further improvement and efficiency. Read the full article here. Want to publish your BPM11g success story or request a partner/customer reference? The BPM Center of Excellence and First 100 Days of BPM documents (MWD_bpm_si_Centre_of_Excellence_0811.pdf, First 100 Days of BPM whitepaper.pdf) are in our SOA Community Workspace (SOA Community membership required). For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • How to sync Ubuntu/software/configurations between N computers with free software and/or without a cloud?

    - by skanatek
    Note: this question is not about syncing data in a Dropbox-like way (files, folders); it is more about syncing configurations. I would like to have exactly the same version of Ubuntu, with all the software installed and configured, both on my Desktop PC and on my Laptop PC (and maybe on my small netbook PC), without using Ubuntu Sync and with minimal maintenance effort (set up once, run for a long time). The use case is the following:

    1. I work on my Laptop PC and make some changes to software configuration, for example:
       - configure vim to have a new plugin
       - update the Search Tracker / Recoll file search index
       - configure Thunderbird to have an additional IMAP account ('remember password')
       - add some new bookmarks in Firefox/Chrome
       - change the desktop background image
       - install new software with apt-get install
       - build and install new software with checkinstall
       - etc.
    2. I do some 'sync' operation.
    3. I switch to my Desktop PC and get all the changes from (1) working on the Desktop PC.
    4. I work on my Desktop PC and make some changes to software configuration, for example:
       - add a new directory to the list of directories backed up by DejaDup
       - add a new spell-checking dictionary to Libreoffice Writer
       - configure the Terminator software to have colored fonts
       - install a new font into the Ubuntu system
       - configure Ekiga to make phone calls
       - etc.
    5. I do some 'sync' operation.
    6. I switch to my Laptop PC and get all the changes from (1) and (4) working on the Laptop PC.

    Question: What free/open-source software can I use to sync both machines' Ubuntu systems, installed software and configurations? Is it possible to do that without any cloud services?

    Complementary question: It is obvious that the Desktop PC and the Laptop PC have different hardware configurations. How does the 'sync software' in question deal with video drivers, wlan drivers and their configurations?

    Note: I do not need all the PCs to be synced at the same time, because I work with only one single machine at once. Note: I considered using Chef to solve the problem, but it seems that it might be really cumbersome to maintain such a setup. Note: I also considered using a bootable USB with Ubuntu installed (portable Linux), but I am not sure that the video drivers will work then.

    Read the article

  • How Security Products Are Made; An Interview with BitDefender

    - by Jason Fitzpatrick
    Most of us use anti-virus and malware scanners without giving the processes behind their construction and deployment much of a thought. Get an inside look at security product development with this BitDefender interview. Over at 7Tutorials they took a trip to the home offices of BitDefender for an interview with Catalin Coșoi, BitDefender's Chief Security Researcher. While it's notably BitDefender-centric, it's also an interesting look at the methodology employed by a company specializing in virus/malware protection. Here's an excerpt from the discussion about data gathering techniques: "Honeypots are systems we distributed across our network that act as victims. Their role is to look like vulnerable targets which have valuable data on them. We monitor these honeypots continuously and collect all kinds of malware and information about black hat activities. Another thing we do is broadcast fake e-mail addresses that are automatically collected by spammers from the Internet. Then, they use these addresses to distribute spam, malware or phishing e-mails. We collect all the messages we receive at these addresses, analyze them and extract the required data to update our products and keep our users secure and spam free." Hit up the link below for the full interview.

    Read the article

  • Unable to get HTTPS MEX endpoint to work

    - by Rahul
    I have been trying to configure WCF to work with Azure ACS. This WCF configuration has 2 bugs: It does not publish MEX end point. It does not invoke custom behaviour extension. (It just stopped doing that after I made some changes which I can't remember) What could be possibly wrong here? <configuration> <configSections> <section name="microsoft.identityModel" type="Microsoft.IdentityModel.Configuration.MicrosoftIdentityModelSection, Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> </configSections> <location path="FederationMetadata"> <system.web> <authorization> <allow users="*" /> </authorization> </system.web> </location> <system.web> <compilation debug="true" targetFramework="4.0"> <assemblies> <add assembly="Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </assemblies> </compilation> </system.web> <system.serviceModel> <services> <service name="production" behaviorConfiguration="AccessServiceBehavior"> <endpoint contract="IMetadataExchange" binding="mexHttpsBinding" address="mex" /> <endpoint address="" binding="customBinding" contract="Samples.RoleBasedAccessControl.Service.IService1" bindingConfiguration="serviceBinding" /> </service> </services> <behaviors> <serviceBehaviors> <behavior name="AccessServiceBehavior"> <federatedServiceHostConfiguration /> <sessionExtension/> <useRequestHeadersForMetadataAddress> <defaultPorts> <add scheme="http" port="8000" /> <add scheme="https" port="8443" /> </defaultPorts> </useRequestHeadersForMetadataAddress> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpsGetEnabled="true" /> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="true" /> <serviceCredentials> <!--Certificate added by FedUtil. 
Subject='CN=DefaultApplicationCertificate', Issuer='CN=DefaultApplicationCertificate'.--> <serviceCertificate findValue="XXXXXXXXXXXXXXX" storeLocation="LocalMachine" storeName="My" x509FindType="FindByThumbprint" /> </serviceCredentials> </behavior> </serviceBehaviors> </behaviors> <serviceHostingEnvironment multipleSiteBindingsEnabled="true" /> <extensions> <behaviorExtensions> <add name="sessionExtension" type="Samples.RoleBasedAccessControl.Service.RsaSessionServiceBehaviorExtension, Samples.RoleBasedAccessControl.Service, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" /> <add name="federatedServiceHostConfiguration" type="Microsoft.IdentityModel.Configuration.ConfigureServiceHostBehaviorExtensionElement, Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> </behaviorExtensions> </extensions> <protocolMapping> <add scheme="http" binding="customBinding" bindingConfiguration="serviceBinding" /> <add scheme="https" binding="customBinding" bindingConfiguration="serviceBinding"/> </protocolMapping> <bindings> <customBinding> <binding name="serviceBinding"> <security authenticationMode="SecureConversation" messageSecurityVersion="WSSecurity11WSTrust13WSSecureConversation13WSSecurityPolicy12BasicSecurityProfile10" requireSecurityContextCancellation="false"> <secureConversationBootstrap authenticationMode="IssuedTokenOverTransport" messageSecurityVersion="WSSecurity11WSTrust13WSSecureConversation13WSSecurityPolicy12BasicSecurityProfile10"> <issuedTokenParameters> <additionalRequestParameters> <AppliesTo xmlns="http://schemas.xmlsoap.org/ws/2004/09/policy"> <EndpointReference xmlns="http://www.w3.org/2005/08/addressing"> <Address>https://127.0.0.1:81/</Address> </EndpointReference> </AppliesTo> </additionalRequestParameters> <claimTypeRequirements> <add claimType="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name" isOptional="true" /> <add claimType="http://schemas.microsoft.com/ws/2008/06/identity/claims/role" isOptional="true" /> <add claimType="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" isOptional="true" /> <add claimType="http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider" isOptional="true" /> </claimTypeRequirements> <issuerMetadata address="https://XXXXYYYY.accesscontrol.windows.net/v2/wstrust/mex" /> </issuedTokenParameters> </secureConversationBootstrap> </security> <httpsTransport /> </binding> </customBinding> </bindings> </system.serviceModel> <system.webServer> <modules runAllManagedModulesForAllRequests="true" /> </system.webServer> <microsoft.identityModel> <service> <audienceUris> <add value="http://127.0.0.1:81/" /> </audienceUris> <issuerNameRegistry type="Microsoft.IdentityModel.Tokens.ConfigurationBasedIssuerNameRegistry, Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"> <trustedIssuers> <add thumbprint="THUMBPRINT HERE" name="https://XXXYYYY.accesscontrol.windows.net/" /> </trustedIssuers> </issuerNameRegistry> <certificateValidation certificateValidationMode="None" /> </service> </microsoft.identityModel> <appSettings> <add key="FederationMetadataLocation" value="https://XXXYYYY.accesscontrol.windows.net/FederationMetadata/2007-06/FederationMetadata.xml " /> </appSettings> </configuration> Edit: Further implementation details I have the following Behaviour Extension Element (which is not getting invoked currently) public class RsaSessionServiceBehaviorExtension : BehaviorExtensionElement { public override Type 
BehaviorType { get { return typeof(RsaSessionServiceBehavior); } } protected override object CreateBehavior() { return new RsaSessionServiceBehavior(); } } The namespaces and assemblies are correct in the config. There is more code involved for checking token validation, but in my opinion at least MEX should get published and CreateBehavior() should get invoked in order for me to proceed further.

    Read the article

  • Week in Geek: Windows 8 Security Flaw – Passwords Stored in Plain Text When Using Picture or PIN Login

    - by Asian Angel
    This week's edition of WIG is filled with news link coverage on topics such as new malware that seeks to lock Skype users out of their PCs, Dell sticking with Windows 7 after the Windows 8 debut, Mozilla Thunderbird users now getting 25 GB of cloud storage for free, and more.

    Read the article

  • Write-only collections in MongoDB

    - by rcoder
    I'm currently using MongoDB to record application logs, and while I'm quite happy with both the performance and with being able to dump arbitrary structured data into log records, I'm troubled by the mutability of log records once stored. In a traditional database, I would structure the grants for my log tables such that the application user had INSERT and SELECT privileges, but not UPDATE or DELETE. Similarly, in CouchDB, I could write a update validator function that rejected all attempts to modify an existing document. However, I've been unable to find a way to restrict operations on a MongoDB database or collection beyond the three access levels (no access, read-only, "god mode") documented in the security topic on the MongoDB wiki. Has anyone else deployed MongoDB as a document store in a setting where immutability (or at least change tracking) for documents was a requirement? What tricks or techniques did you use to ensure that poorly-written or malicious application code could not modify or destroy existing log records? Do I need to wrap my MongoDB logging in a service layer that enforces the write-only policy, or can I use some combination of configuration, query hacking, and replication to ensure a consistent, audit-able record is maintained?

    Read the article

  • input type file alternative and file upload best practice

    - by Ioxp
    Background: I am working on a file upload page that will extend an existing web portal. This page will allow an end user to upload files from their local computer to our network (the files will not be stored on the web server, but rather on a remote workstation). The end user will have the ability to view the data that they have submitted via hyperlinks to the files that have been uploaded on this page.

    Question 1: Is there an ASP.NET alternative to the <input type="file" runat="server" /> HTML tag? The reason for asking is that I would rather use an image button and display the file as an ASP label on the portal to keep a consistent style.

    Question 2: I understand that giving the end user the ability to upload files to the server, and then turning around and showing them the data that they posted, poses a security threat. So far I am using the id.PostedFile.ContentType and the file extension to reject the data if it's not an accepted format (i.e. "text/plain", "application/pdf", "application/vnd.ms-excel", or "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"). Also, the location the files are uploaded to has a sufficient amount of virus and malware protection, so this is not a concern. From the C# point of view, what additional steps should I take to ensure that the end user can't take advantage and compromise the system through the files they are allowed to upload?
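
    One concrete extra step is to check the file's leading bytes ("magic numbers") instead of trusting ContentType or the extension, since both are client-controlled. The idea is sketched below in Python rather than C#, purely for illustration; the signatures listed are the common ones for the binary formats named in the question (plain text has no magic number):

        # First bytes of the accepted formats; xlsx is a ZIP container, like all OOXML files.
        MAGIC = {
            b"%PDF-": "application/pdf",
            b"PK\x03\x04": "xlsx (zip container)",
            b"\xd0\xcf\x11\xe0": "legacy xls (OLE compound file)",
        }

        def sniff(uploaded_bytes):
            for prefix, label in MAGIC.items():
                if uploaded_bytes.startswith(prefix):
                    return label
            return None   # reject anything that does not match an allowed signature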

    Read the article

  • How to authenticate a Windows Mobile client calling web services in a Web App

    - by cdonner
    I have a fairly complex business application written in ASP.NET that is deployed on a hosted server. The site uses Forms Authentication, and there are about a dozen different roles defined. Employees and customers are both users of the application. Now I have the requirement to develop a Windows Mobile client for the application that allows a very specialized set of tasks to be performed from a device, as opposed to a browser on a laptop. The client wants to increase productivity with this measure. Only employees will use this application. I feel that it would make sense to re-use the security infrastructure that is already in place. The client does not need offline capability. My thought is to deploy a set of web services to a folder of the existing site that only the new role "web service" has access to, and to use Forms Authentication (from a Windows Mobile 5/.Net 3.5 client). Can I do that, is that a good idea, and are there any code examples/references that you can point me to?

    Read the article

  • Applying business logic to form elements in ASP.NET MVC

    - by Brettski
    I am looking for best practices in applying business logic to form elements in an ASP.NET MVC application. I assume the concepts would apply to most MVC patterns. The goal is to have all the business logic stem from the same place. I have a basic form with four elements:

    - Textbox: for entering data
    - Checkbox: for staff approval
    - Checkbox: for client approval
    - Button: for submitting the form

    The textbox and two checkboxes are fields in a database accessed using LINQ to SQL. What I want to do is put logic around the checkboxes governing who can check them and when. Truth table (a little silly, but it's an example):

        when checked        may check Staff     may check Client
        Staff | Client      Staff | Client      Staff | Client
          0   |   0           1   |   0           0   |   1
          0   |   1           0   |   0           0   |   1
          1   |   0           1   |   0           0   |   1
          1   |   1           0   |   0           0   |   1

    There are two security roles, staff and client; a person's role determines who they are, and the roles are maintained in the database along with the current state of the checkboxes. So I can simply store the user's role in the view class and enable and disable checkboxes based on that role, but this doesn't seem proper: that is putting logic in the UI to control which actions can be taken. How do I get most of this control down into the model? I mean, I need to control which checkboxes are enabled and then check the results in the model when the form is posted, so the model seems the best place for it to originate. I am looking for a good approach to constructing this, something to follow as I build the application. If you know of some great references which explain these best practices, that is really appreciated too.
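
    One way to keep that rule in the model is to express the truth table as a single pure function that both the controller and the view consult when deciding which checkboxes to enable. A sketch of that function, in Python for brevity rather than C#; all names are made up:

        def may_check(role, staff_checked, client_checked):
            """Who may tick which box, given the current checkbox state (mirrors the table)."""
            if role == "staff":
                # Staff may tick the staff box only while the client has not approved yet.
                return {"staff": not client_checked, "client": False}
            if role == "client":
                # The client may always tick the client box, never the staff box.
                return {"staff": False, "client": True}
            return {"staff": False, "client": False}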

    Read the article

  • WCF Service Impersonation

    - by robalot
    Good Day Everyone... Apparently, I'm not setting up impersonation correctly for my WCF service. I do NOT want to set security on a method-by-method basis (in the actual code-behind). The service (at the moment) is open to be called by everyone on the intranet. So my questions are…

    Q: What web.config tags am I missing?
    Q: What do I need to change in the web.config to make impersonation work?

    The service web.config looks like...

        <configuration>
          <system.web>
            <authorization>
              <allow users="?"/>
            </authorization>
            <authentication mode="Windows"/>
            <identity impersonate="true" userName="MyDomain\MyUser" password="MyPassword"/>
          </system.web>
          <system.serviceModel>
            <services>
              <service behaviorConfiguration="wcfFISH.DataServiceBehavior" name="wcfFISH.DataService">
                <endpoint address="" binding="wsHttpBinding" contract="wcfFISH.IFishData">
                  <identity>
                    <dns value="localhost"/>
                  </identity>
                </endpoint>
                <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
              </service>
            </services>
            <behaviors>
              <serviceBehaviors>
                <behavior name="wcfFISH.DataServiceBehavior">
                  <serviceMetadata httpGetEnabled="false"/>
                  <serviceDebug includeExceptionDetailInFaults="false"/>
                </behavior>
              </serviceBehaviors>
            </behaviors>
          </system.serviceModel>
        </configuration>

    Read the article
